https://matpitka.blogspot.com/2017/06/
## Sunday, June 25, 2017

### About McKay and Langlands correspondences in TGD framework

In adelic TGD Galois groups for extensions of rationals become discrete symmetry groups acting on dark matter, identified as h_eff/h = n phases of ordinary matter, where n gives the number of sheets of the covering assignable to the space-time surface. Since the Galois group acts on the cognitive representation defined by a discrete set of points of the space-time surface with coordinates having values in the extension of rationals, the action of the Galois group defines an n-sheeted covering, where n is the order of the Galois group, thus identifiable in terms of Planck constant.

Adelic TGD inspires the question whether the representations of Galois groups could correspond to representations of the Lie groups defining the ground states of Kac-Moody representations, which emerge in TGD in two manners: as representations of the Kac-Moody algebra assignable to the Poincare, color, and electroweak symmetries on one hand, and on the other hand as dynamical symmetries generated from the supersymplectic symmetry assignable to the boundaries of the causal diamond (CD) and from the extended Kac-Moody symmetries assignable to the light-like orbits of partonic 2-surfaces defining boundaries between space-time regions with Minkowskian and Euclidian signatures of the induced metric.

McKay correspondence states that the finite discrete subgroups of SU(2) can be characterized by McKay graphs encoding the fusion rules for the tensor products of the representations of these groups. These graphs correspond to the Dynkin diagrams of Kac-Moody algebras of ADE type (all roots have the same unit length in the Dynkin diagram). This inspires the conjecture that finite subgroups of SU(2) indeed correspond to Kac-Moody algebras. Could the representations of the discrete subgroups appearing in the McKay graph define also representations for the ground states of the corresponding ADE type Kac-Moody algebra? More generally, could the McKay graphs of Galois groups play a similar role?

Number theoretic Langlands correspondence in turn states roughly that the representations of the Galois group for extensions of rationals correspond to the so called automorphic representations of algebraic variants of reductive Lie groups. This is not totally surprising since the matrices defining an algebraic matrix group have matrix elements in the extension of rationals. This raises the question how closely the number theoretic Langlands correspondence corresponds to the basic physical picture of TGD.

1. Could normal subgroups of the symplectic group and of Galois groups correspond to each other?

Measurement resolution realized in terms of various inclusions is the key principle of quantum TGD. There is an analogy between the hierarchies of Galois groups, of fractal sub-algebras of the supersymplectic algebra (SSA), and of inclusions of hyperfinite factors of type II_1 (HFFs). The inclusion hierarchies of isomorphic sub-algebras of SSA and of Galois groups for sequences of extensions of extensions should define hierarchies of measurement resolution. Also the inclusion hierarchies of HFFs are proposed to define hierarchies of measurement resolutions. How closely are these hierarchies related, and could the notion of measurement resolution allow one to gain new insights about these hierarchies and even about the mathematics needed to realize them?

1. As noticed, SSA and its isomorphic sub-algebras are in a relation analogous to that between a normal subgroup H of a group Gal (the analog of the isomorphic sub-algebra) and the quotient group Gal/H.
One can assign to a given Galois extension a hierarchy of intermediate extensions such that one proceeds from a given number field (say rationals) to its extension step by step. The Galois group H for a given extension is a normal subgroup of the Galois group of its extension (this holds when the intermediate extension is itself Galois). Hence Gal/H is a group. The physical interpretation is the following. Finite measurement resolution, defined by the condition that H acts trivially on the representations of Gal, implies that they are representations of Gal/H. Thus Gal/H is completely analogous to the Kac-Moody type algebra conjectured to result from the analogous pair for SSA.

2. How does this relate to McKay correspondence, which states that inclusions of HFFs correspond to finite discrete subgroups of SU(2) acting as isometries of regular n-polygons and Platonic solids, which in turn correspond to the Dynkin diagrams of ADE type Super Kac-Moody algebras (SKMAs) determined by an ADE Lie group G? Could one identify the discrete groups as Galois groups represented geometrically as subgroups of SU(2), and perhaps also as subgroups of the corresponding Lie group? Could the representations of the Galois group correspond to a subset of the representations of G defining the ground states of Kac-Moody representations? This might be possible. The subgroups of SU(2) can however correspond only to a very small fraction of Galois groups. Can one imagine a generalization of the ADE correspondence? What would be required is that the representations of Galois groups relate in some natural manner to the representations of the corresponding Kac-Moody groups.

1.1 Some basic facts about Galois groups and finite groups

Some basic facts about Galois groups must be listed before continuing. Any finite group can appear as a Galois group for an extension of some number field. It is not known whether this is true for rationals (see this). Simple groups appear as building bricks of finite groups and are rather well understood. One can even speak about a periodic table for simple finite groups (see this). Any finite group can be regarded as a subgroup of the permutation group S_n for some n. Simple finite groups can be classified into cyclic, alternating, and Lie type groups. Note that the alternating group A_n is the subgroup of the permutation group S_n consisting of even permutations. There are also 26 sporadic groups and the Tits group. Most simple finite groups are groups of Lie type, that is rational subgroups of Lie groups; rational means ordinary rational numbers or their extension. The groups of Lie type (see this) can be characterized by the analogs of the Dynkin diagrams characterizing Lie algebras. For finite groups of Lie type the McKay correspondence could generalize.

1.2 Representations of Lie groups defining Kac-Moody ground states as irreps of Galois group?

The goal is to generalize the McKay correspondence. Consider an extension of rationals with Galois group Gal. The ground states of KMA representations are irreps of the Lie group G defining the KMA. Could the allowed ground states for a given Gal be irreps also of Gal? This constraint would determine which group representations are possible as ground states of SKMA representations for a given Gal. The better the resolution, the larger the dimensions of the allowed representations for a given G. This would apply both to the representations of the SKMA associated with the dynamical symmetries and maybe also to those associated with the standard model symmetries. The idea would be quantum classical correspondence (QCC): space-time sheets as coverings would realize the ground states of the SKMA representations assignable to the various SKMAs.
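As a concrete toy illustration of the Gal/H structure discussed above, here is a minimal SymPy sketch; the choice Gal = S4 (the Galois group of a generic quartic) with H = A4 (fixing the intermediate quadratic extension) is purely illustrative and not TGD-specific.

```python
# A minimal sketch of the normal-subgroup structure behind Gal/H,
# using the illustrative example Gal = S4, H = A4.
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

Gal = SymmetricGroup(4)   # e.g. Galois group of a generic quartic polynomial
H = AlternatingGroup(4)   # subgroup fixing an intermediate quadratic extension

# Gal/H is a group only if H is normal in Gal; representations of Gal on
# which H acts trivially are exactly representations of the quotient Gal/H.
print(H.is_normal(Gal))          # True
print(Gal.order() // H.order())  # index |Gal/H| = 2: a quadratic sub-extension
```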
This option could also generalize the McKay correspondence, since one can assign to finite groups of Lie type an analog of the Dynkin diagram (see this). For Galois groups which are finite discrete subgroups of SU(2), the hypothesis would state that the Kac-Moody algebra has the same Dynkin diagram as the finite group in question.

To get some perspective one can ask what kind of algebraic extensions one can assign to the ADE groups appearing in the McKay correspondence. One can get some idea about this by studying the geometry of Platonic solids (see this). Also the geometry of the Dynkin diagrams, telling about the geometry of the root system, gives some idea about the extension involved.

1. A Platonic solid [p,q] has p-gonal faces with q of them meeting at each vertex. One has [p,q] ∈ {[3,3], [4,3], [3,4], [5,3], [3,5]}. Tetrahedron is a self-dual object (see this), whereas cube and octahedron, and also dodecahedron and icosahedron, are duals of each other. From the table of the Wikipedia article one finds that the cosines and sines of the angles between the vectors to the vertices of tetrahedron, cube, and octahedron are rational numbers. For icosahedron and dodecahedron the coordinates of the vertices and the angles between these vectors involve the Golden Mean φ = (1+5^(1/2))/2, so that the algebraic extension must involve 5^(1/2) at least. The dihedral angle θ between the faces of the Platonic solid [p,q] is given by sin(θ/2) = cos(π/q)/sin(π/p). For tetrahedron, cube, and octahedron the half-angle functions involve 2^(1/2) and 3^(1/2); for instance, the geometry of the tetrahedron involves both 2^(1/2) and 3^(1/2). For the dodecahedron the dihedral angle satisfies tan(θ/2) = φ (for the icosahedron one obtains tan(θ/2) = φ+1 = φ^2), and more complex algebraic numbers are involved.

2. The rotation matrices for the triangular faces of tetrahedron and icosahedron involve cos(2π/3) and sin(2π/3), and thus the quantum phase q = exp(i2π/3). The rotation matrices performing the rotation of a pentagonal face of the dodecahedron involve cos(2π/5) and sin(2π/5), and thus q = exp(i2π/5) characterizing the extension. Both q = exp(i2π/3) and q = exp(i2π/5) are thus involved with the icosahedral and dodecahedral rotation matrices. The rotation matrices for the cube and for the octahedron have rational matrix elements.

3. The Dynkin diagrams characterize both the finite discrete subgroups of SU(2) and the ADE groups. The Dynkin diagrams of Lie groups, reflecting the structure of the corresponding Weyl groups, involve only the angles π/2, 2π/3, 3π/4, and 5π/6 between the roots. They would naturally relate to quadratic extensions. For ADE Lie groups the diagram tells that the roots associated with the adjoint representation are either orthogonal or have the mutual angle 2π/3 and have the same length, so that the length ratios are equal to 1. One has sin(2π/3) = 3^(1/2)/2; this suggests that 3^(1/2) always belongs to the algebraic extension associated with an ADE group. For the non-simply laced Lie groups of type B, C, F, G the ratios of some root lengths can be 2^(1/2) or 3^(1/2). For the ADE groups assignable to n-polygons (n > 5) the Galois group must involve the cyclic extension defined by exp(i2π/n). The simplest option is that the extension corresponds to the roots of the polynomial x^n = 1.

2. A possible connection with number theoretic Langlands correspondence

I have discussed the number theoretic version of Langlands correspondence in earlier chapters, trying to understand it using the physical intuition provided by TGD (the only possible approach in my case). Concerning my unashamed intrusion into the territory of real mathematicians I have only one excuse: the number theoretic vision forces me to do this.
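Before moving on, a quick numerical cross-check of the dihedral-angle formula and the Golden Mean relation quoted in the Platonic solid discussion above; a sketch for illustration only.

```python
# Check sin(theta/2) = cos(pi/q)/sin(pi/p) for the Platonic solids [p,q],
# and tan(theta/2) = phi for the dodecahedron.
from math import pi, sin, cos, asin, tan, sqrt

platonic = {"tetrahedron": (3, 3), "cube": (4, 3), "octahedron": (3, 4),
            "dodecahedron": (5, 3), "icosahedron": (3, 5)}
phi = (1 + sqrt(5)) / 2  # Golden Mean

for name, (p, q) in platonic.items():
    theta = 2 * asin(cos(pi / q) / sin(pi / p))  # dihedral angle
    print(f"{name:12s} theta = {theta * 180 / pi:7.3f} deg")

p, q = platonic["dodecahedron"]
theta = 2 * asin(cos(pi / q) / sin(pi / p))
print(abs(tan(theta / 2) - phi) < 1e-12)  # True: tan(theta/2) = phi
```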
Number theoretic Langlands correspondence relates finite-dimensional representations of Galois groups and so called automorphic representations of reductive algebraic groups, defined also for adeles, which are analogous to representations of the Poincare group by fields. That this kind of relationship can exist follows from the fact that the Galois group has a natural action in the algebraic reductive group defined by the extension in question. The "Reciprocity conjecture" of Langlands states that the so called Artin L-functions assignable to finite-dimensional representations of the Galois group Gal are equal to L-functions arising from so called automorphic cuspidal representations of the algebraic reductive group G. One would have a correspondence between a finite number of representations of the Galois group and a finite number of cuspidal representations of G.

This is not far from what I am naively conjecturing on physical grounds: finite-D representations of the Galois group are reductions of certain representations of G or of its subgroup defining the analog of spin for the automorphic forms in G (analogous to classical fields in Minkowski space). These representations could be seen as induced representations familiar to particle physicists dealing with Poincare invariance. McKay correspondence encourages the conjecture that the allowed spin representations are irreducible also with respect to Gal. For a childishly naive physicist knowing nothing about the complexities of the real mathematics this looks like an attractive starting point hypothesis.

In TGD framework the Galois group could provide a geometric representation of "spin" (maybe even the spin 1/2 property) as transformations permuting the sheets of the space-time surface identifiable as a Galois covering. This geometrization of number theory in terms of cognitive representations, analogous to the use of algebraic groups in Galois correspondence, might provide totally new geometric insight into Langlands correspondence. One could also think that the Galois group represented in this manner could combine with the dynamical Kac-Moody group emerging from SSA to form its Langlands dual.

A skeptical physicist taking mathematics as high school arithmetic might argue that the algebraic counterparts of reductive Lie groups are rather academic entities. In adelic physics the situation however changes completely. Evolution corresponds to a hierarchy of extensions of rationals reflected directly in the physics of dark matter in TGD sense: that is, as phases of ordinary matter with h_eff/h = n identifiable as the order of the Galois group for an extension of rationals. Algebraic groups and their representations get physical meaning, and also the huge generalization of their representations to adelic representations makes sense if the TGD view about consciousness and cognition is accepted.

In attempts to understand what the Langlands conjecture says one should first understand the rough meaning of many concepts. Consider first the Artin L-functions appearing at the number theoretic side.

1. L-functions (see this) are meromorphic functions on the complex plane that can be assigned to number fields and are analogs of the Riemann zeta function, factorizing into products of contributions labelled by the primes of the number field. The definition of an L-function involves Dirichlet characters: a character is a very general invariant of a group representation, defined as the trace of the representation matrix, and invariant under conjugation of the argument.
2. In particular, there are Artin L-functions (see this) assignable to the representations of non-Abelian Galois groups. One considers a finite extension L/K of fields with Galois group G. The factors of the Artin L-function are labelled by the primes p of K. There are two cases: p is un-ramified or ramified depending on whether the number of primes of L to which p decomposes is maximal or not. The number of ramified primes is finite, and in TGD framework they are excellent candidates for the physically preferred p-adic primes for a given extension of rationals. These factors labelled by p, analogous to the factors of Riemann zeta, are identified in terms of characteristic polynomials for a representation matrix associated with any element in a preferred conjugacy class of G. This preferred conjugacy class is known as the Frobenius element Frob(p) for a given prime ideal p, whose action on a given algebraic integer in O_L is represented as its p-th power. For un-ramified p the factor is explicitly given as det[I - tρ(Frob(p))]^(-1), where one has t = N(p)^(-s) and N(p) is the field norm of p in the extension L (see this). In the ramified case one must restrict the representation space to a sub-space invariant under the inertia subgroup, which by definition leaves invariant the integers of O_L/p, that is the lowest part of the integers in the expansion in powers of p.

At the other side of the conjecture appear representations of the algebraic counterparts of reductive Lie groups and their L-functions; the number theoretic and automorphic L-functions would be identical.

1. An automorphic form F generalizes the notion of a plane wave invariant under a discrete subgroup of the group of translations and satisfying the Laplace equation defining the Casimir operator for the translation group. Automorphic representations can be seen as analogs of the modes of classical fields with given mass, having spin characterized by a representation of a subgroup of the Lie group G (SO(3) in the case of the Poincare group). Automorphic functions as field modes are eigenmodes of some Casimir operators assignable to G. Algebraic groups would in TGD framework relate to the adeles defined by the hierarchy of extensions of rationals (also roots of e can be considered in extensions). Galois groups have a natural action in algebraic groups.

2. An automorphic form (see this) is a complex vector valued function F from a topological group to some vector space V. F is an eigenfunction of certain Casimir operators of G. In the simplest situation these functions are invariant under a discrete subgroup Γ ⊂ G identifiable as the analog of the subgroup defining spin in the case of induced representations. In the general situation the automorphic form F transforms by a factor of automorphy j under Γ. The factor can also act in a finite-dimensional representation of the group Γ, which would suggest that it reduces to a subgroup of Γ obtained by dividing with a normal subgroup. j satisfies the 1-cocycle condition j(g_1, g_2 g_3) = j(g_1 g_2, g_3) of group cohomology guaranteeing associativity (see this). Cuspidality relates to conditions on the growth of F at infinity.

3. Elliptic functions in the complex plane, characterized by two complex periods, are meromorphic functions of this kind. A less trivial situation corresponds to the non-compact group G = SL(2,R) and Γ ⊂ SL(2,Q). There are more groups involved: the Langlands group L_F and the Langlands dual group ^LG.
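To make the unramified Euler factor det[I - tρ(Frob(p))]^(-1) above concrete, here is a toy SymPy computation; the 3×3 permutation matrix standing in for ρ(Frob(p)) is a hypothetical choice for illustration, not derived from any particular extension.

```python
# Toy computation of the unramified local factor det(I - t*rho(Frob_p))^(-1)
# with rho(Frob_p) modeled as an order-3 permutation matrix.
import sympy as sp

t = sp.symbols('t')   # stands for N(p)**(-s)
rho_frob = sp.Matrix([[0, 1, 0],
                      [0, 0, 1],
                      [1, 0, 0]])  # hypothetical image of a Frobenius element

print((sp.eye(3) - t * rho_frob).det())  # 1 - t**3, so the factor is 1/(1 - t**3)
```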
A more technical formulation says that the automorphic representations of a reductive Lie group G correspond to homomorphisms from the so called Langlands group L_F (see this) at the number theoretic side to the L-group ^LG, the Langlands dual of algebraic G, at the group theory side (see this). It is important to notice that ^LG is a complex Lie group. Note also that such a homomorphism is a representation of the Langlands group L_F in the L-group ^LG. In TGD this would be analogous to a homomorphism of the Galois group defining it as a subgroup of the group G defining the Kac-Moody algebra.

1. The Langlands group L_F of a number field is a speculative notion conjectured to be an extension of the Weil group of the extension, which in turn is a modification of the absolute Galois group. Unfortunately, I was not able to really understand the Wikipedia definition of the Weil group (this). If E/F is a finite extension, as it is now, the Weil group would be W_{E/F} = W_F/W^c_E, where W^c_E refers to the commutator subgroup of W_E defining a normal subgroup, and the factor group is expected to be finite. This is not the Galois group but should be closely related to it. Only finite-D representations of the Langlands group are allowed, which suggests that the representations are always trivial for some normal subgroup of L_F. For Archimedean local fields L_F is the Weil group; for non-Archimedean local fields L_F is the product of the Weil group and SU(2). The first guess is that SU(2) relates to quaternions. For global fields the existence of L_F is still conjectural.

2. I also failed to understand the formal Wikipedia definition of the L-group ^LG appearing at the group theory side. For a reductive Lie group one can construct its root datum (X^*, Δ, X_*, Δ_c), where X^* is the lattice of characters of a maximal torus, X_* its dual, Δ the roots, and Δ_c the co-roots. The dual root datum is obtained by switching X^* and X_* and Δ and Δ_c. The root data of G and ^LG are related by this switch. For a reductive G the Dynkin diagram of ^LG is obtained from that of G by exchanging the components of type B_n with components of type C_n. For simple groups one has B_n ↔ C_n. Note that for ADE groups the root data are the same for G and its dual, and it is the Kac-Moody counterparts of the ADE groups which appear in McKay correspondence. Could this mean that only these are allowed physically?

3. Consider now a reductive group over some field k with a separable closure K (say k the rationals and K the algebraic numbers). Over K, G has a root datum with an action of the Galois group of K/k. The full group ^LG is the semi-direct product (^LG)^0 ⋊ Gal(K/k) of the connected component (^LG)^0 and the Galois group. Gal(K/k) is infinite (the absolute Galois group for rationals). This looks hopelessly complicated, but it turns out that one can use the Galois group of a finite extension over which G is split. This gives the action of the Galois group of the extension l/k in ^LG, which now has finitely many components. The Galois group permutes the components; the action is easy to understand as automorphisms.

Could TGD picture provide additional insights to Langlands duality or vice versa?

1. In TGD framework the action of Gal on the algebraic group G is analogous to the action of Gal on the cognitive representation at the space-time level, permuting the sheets of the Galois covering, whose number in the general case is the order of Gal, identifiable as h_eff/h = n. The connected component (^LG)^0 would correspond to one sheet of the covering.
2. What I do not understand is whether the condition ^LG = G is actually forced by physical constraints for the dynamical Kac-Moody algebra and whether it relates to the notion of measurement resolution and inclusions of HFFs.

3. The electric-magnetic duality in gauge theories suggests that the gauge group action of G on electric charges corresponds in the dual phase to the action of ^LG on magnetic charges. In a self-dual situation one would have G = ^LG. Intriguingly, CP2 geometry is self-dual (the Kähler form is self-dual so that electric and magnetic fluxes are identical), but the induced Kähler form is self-dual only at the orbits of partonic 2-surfaces if the weak form of electric-magnetic duality holds true. Does this condition lead to ^LG = G for dynamical gauge groups? Or is it possible to distinguish between the two dynamical descriptions, so that Langlands duality would correspond to electric-magnetic duality? Could this duality correspond to the proposed duality of two variants of SH: namely, the electric description provided by string world sheets and the magnetic description provided by partonic 2-surfaces carrying monopole fluxes?

See the new chapter Are higher structures needed in the categorification of TGD? of "Towards M-matrix" or the article with the same title. For a summary of earlier postings see Latest progress in TGD.

## Saturday, June 24, 2017

### Are Preferred Extremals Quaternion-Analytic in Some Sense?

A generalization of 2-D conformal invariance to its 4-D variant is strongly suggestive in TGD framework and leads to the idea that for preferred extremals of the action, space-time regions have (co-)associative/(co-)quaternionic tangent space or normal space. The notion of M8-H correspondence allows one to formulate this idea more precisely. The beauty of this notion is that it does not depend on the signature of Minkowski space M4, representable as a sub-space of complexified quaternions M4_c, which in turn can be seen as a sub-space of complexified octonions M8_c.

The 4-D generalization of conformal invariance suggests strongly that the notion of analytic function generalizes somehow. This notion is however not so straightforward even in Euclidian signature, and the generalization to Minkowskian signature brings in further problems. The Cauchy-Riemann-Fueter conditions make however sense also in the Minkowskian quaternionic situation, and the problem is whether they allow the physically expected solutions. One should also show that the possible generalization is consistent with (co-)associativity. In this article these problems are considered.

Also a comparison with Igor Frenkel's ideas about a hierarchy of Lie algebras, loop algebras, and double loop algebras and their quantum variants is made: it seems that TGD as a generalization of string models, replacing string world sheets with space-time surfaces, gives rise to the analogs of double loop algebras and their quantum variants and Yangians. The straightforward generalization of double loop algebras seems to make sense only at the light-like boundaries of causal diamonds and at the light-like orbits of partonic 2-surfaces: in the interior of the space-time surface the simple form of the conformal generators is not preserved. The twistor lift of TGD in turn corresponds nicely to the heuristic proposal of Frenkel for the realization of double loop algebras.

See the article Are Preferred Extremals Quaternion-Analytic in Some Sense? or the chapter Unified Number Theoretical Vision of "TGD as Generalized Number Theory".
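As a small side computation related to the Cauchy-Riemann-Fueter conditions mentioned above, the following SymPy sketch applies the left Fueter operator D = d/dt + i d/dx + j d/dy + k d/dz componentwise; the test functions are standard textbook examples, not TGD preferred extremals.

```python
# Left Cauchy-Riemann-Fueter operator applied to quaternion-valued
# functions of (t, x, y, z), componentwise.
import sympy as sp
from sympy.algebras.quaternion import Quaternion

t, x, y, z = sp.symbols('t x y z', real=True)
i, j, k = Quaternion(0, 1, 0, 0), Quaternion(0, 0, 1, 0), Quaternion(0, 0, 0, 1)

def D(f):
    """Apply d/dt + i d/dx + j d/dy + k d/dz to a Quaternion-valued f."""
    d = lambda v: Quaternion(*[sp.diff(c, v) for c in (f.a, f.b, f.c, f.d)])
    return d(t) + i * d(x) + j * d(y) + k * d(z)

print(D(Quaternion(x, -t, 0, 0)))  # vanishes: f = x - i*t is Fueter-regular
print(D(Quaternion(t, x, y, z)))   # -2: the identity map q -> q is not
```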
For a summary of earlier postings see Latest progress in TGD.

### Philosophy of Adelic Physics

The p-adic aspects of Topological Geometrodynamics (TGD) will be discussed. The introduction gives a short summary of classical and quantum TGD. This is needed since the p-adic ideas are inspired by the TGD based view about physics. p-Adic mass calculations relying on a p-adic generalization of thermodynamics and on super-symplectic and super-conformal symmetries are summarized. Number theoretical existence constraints lead to highly non-trivial and successful physical predictions. The notion of canonical identification mapping p-adic mass squared to real mass squared emerges, and is expected to be a key player of adelic physics, allowing one to map various invariants from p-adics to reals and vice versa.

A view about p-adicization and adelization of real number based physics is proposed. The proposal is a fusion of real physics and various p-adic physics to a single coherent whole, achieved by a generalization of the number concept by fusing reals and the extensions of p-adic numbers induced by a given extension of rationals to a larger structure having the extension of rationals as their intersection. The existence of p-adic variants of definite integral, Fourier analysis, Hilbert space, and Riemann geometry is far from obvious, and various constraints lead to the idea of number theoretic universality (NTU) and finite measurement resolution realized in terms of number theory. An attractive manner to overcome the problems in the case of symmetric spaces relies on the replacement of angle variables and their hyperbolic analogs with their exponentials identified as roots of unity and roots of e, existing in finite-dimensional algebraic extensions of p-adic numbers. Only group invariants - typically squares of distances and norms - are mapped by canonical identification from the p-adic to the real realm; various phases are mapped to themselves as number theoretically universal entities.

Also the understanding of the correspondence between real and p-adic physics at various levels - space-time level, imbedding space level, and the level of the "world of classical worlds" (WCW) - is a challenge. The gigantic isometry group of WCW and the maximal isometry group of imbedding space give hopes about a resolution of the problems. Strong form of holography (SH) allows a non-local correspondence between real and p-adic space-time surfaces induced by algebraic continuation from common string world sheets and partonic 2-surfaces. Also a local correspondence seems intuitively plausible and is based on number theoretic discretization as an intersection of real and p-adic surfaces, providing automatically a finite "cognitive" resolution. The existence of p-adic variants of the Kähler geometry of WCW is a challenge, and NTU might allow one to realize it.

I will also sum up the role of p-adic physics in TGD inspired theory of consciousness. Negentropic entanglement (NE) characterized by number theoretical entanglement negentropy (NEN) plays a key role. Negentropy Maximization Principle (NMP) forces the generation of NE. The interpretation is in terms of evolution as an increase of negentropy resources.

For details see the new chapter Philosophy of Adelic Physics of "Physics as Generalized Number Theory".

## Wednesday, June 14, 2017

### Why should stars be borne in pairs?

Stars seem to be born in pairs! For a popular article see this. The research article "Embedded Binaries and Their Dense Cores" is here. For instance, our nearest neighbor, Alpha Centauri, is a triplet system.
An explanation for this has been sought for a long time. Does star capture occur, leading to binaries or triplets? Or does the reverse process occur, in which a binary splits up to become single stars? There has even been a search for a companion of Sun, christened Nemesis.

The new assertion is based on a radio survey of a giant molecular cloud filled with recently formed sunlike stars (with age less than 4 million years) in the constellation Perseus, a star nursery located 600 ly from us in the Milky Way. All singles and twins with separations above 15 AU were counted. The proposed mathematical model was able to explain the observations only if all sunlike stars are born as wide binaries. "Wide" means that the mutual distance is more than 500 AU, where AU is the distance of Earth from Sun. After the birth the systems would shrink or split within about a million years. It was found that the wide binaries were not only very young but also tended to be aligned along the long axes of an egg-shaped dense core. Older systems did not have this tendency. For instance, triplets could form as a binary captures a single star. The theory says nothing about why stars should be born as binaries and what the birth mechanism could be.

Could TGD say anything interesting about how the binaries are formed?

1. TGD based model for galaxies leads to the proposal that the region in which dark matter has constant density corresponds to a very knotted and possibly thickened cosmic string portion, or to a closed very knotted string associated with a long cosmic string. There would be an intersection of separate cosmic strings, or a self-intersection of a single cosmic string, giving rise to a galactic blackhole from which dark matter emerges and transforms to ordinary matter. Star formation would take place in this region, 2-3 times larger than the optical region.

2. Could an analogous mechanism be at work in star formation? Suppose that there is a cosmic string in the galactic plane and it has two nearby non-intersecting portions roughly parallel to each other. Deform one of them slightly and locally so that it forms intersections with the other one. The minimal number of stable intersections is 2, and an even number in the general case; a single intersection corresponding to mere touching is a topologically unstable situation. If the intersections give rise to dark blackholes, later generating the stars, one would have an explanation for why stars are formed as twin pairs. This would also explain why the blackholes possibly detected by LIGO are so massive (there is still debate about this going on): they would not yet have produced ordinary stars, in a process in which part of the dark matter and dark energy of cosmic strings transforms to ordinary matter.

1. Suppose that these blackhole like objects are indeed intersections of two portions of cosmic string(s). The intersections have gravitational interaction and could move along the second cosmic string towards each other and eventually collide.

2. More concretely, one can imagine a straight horizontal stationary string A (the x-axis with y = 0 in (x,y)-coordinates) and a folded string B with the shape of an inverted vertical parabola, y = -ax^2 + y_0(t), a > 0, moving downwards; in other words, y_0(t) decreases with time. The strings A and B have two nearby intersections x_± = ±(y_0(t)/a)^(1/2). Their distance decreases with time, and eventually the intersection points fuse together at y_0(t) = 0, giving rise to the fusion of the two black-hole like entities into a single one.
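The two-intersection geometry in the last item can be checked symbolically; a minimal sketch, with coordinates and parameter names as in the text:

```python
# Intersections of string A (the x-axis) with string B, y = -a*x**2 + y0.
import sympy as sp

x = sp.symbols('x', real=True)
a, y0 = sp.symbols('a y0', positive=True)

roots = sp.solve(sp.Eq(-a * x**2 + y0, 0), x)
print(roots)  # the two roots x_± = ±sqrt(y0/a)

# Their separation 2*sqrt(y0/a) shrinks to zero as y0(t) -> 0:
print(sp.limit(2 * sp.sqrt(y0 / a), y0, 0))  # 0
```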
See the chapter TGD and astrophysics or the article TGD view about universal galactic rotation curves for spiral galaxies. For a summary of earlier postings see Latest progress in TGD.

## Tuesday, June 13, 2017

### Are higher structures needed in the categorification of TGD?

The notion of higher structures promoted by John Baez looks like a very promising notion in the attempts to understand various structures like quantum algebras and Yangians in TGD framework. The stimulus for this article came from the nice explanations of the notion of higher structure by Urs Schreiber. The basic idea is simple: replace "=" as a blackbox with an operational definition involving a proof for A = B. This proof is called a homotopy, generalizing homotopy in the topological sense. n-structure emerges when one realizes that also the homotopy is defined only up to homotopy, in turn defined only up...

In TGD framework the notion of measurement resolution defines in a natural manner various kinds of "="s, and this gives rise to resolution hierarchies. Hierarchical structures are characteristic for TGD: hierarchy of space-time sheets, hierarchy of p-adic length scales, hierarchy of Planck constants and dark matters, hierarchy of inclusions of hyperfinite factors, hierarchy of extensions of rationals defining adeles in adelic TGD and the corresponding hierarchy of Galois groups represented geometrically, hierarchy of infinite primes, self hierarchy, etc...

In this article the idea of n-structure is studied in more detail. A rather radical idea is a formulation of quantum TGD using only cognitive representations consisting of points of the space-time surface with imbedding space coordinates in the extension of rationals defining the level of the adelic hierarchy. One would use only these discrete point sets and Galois groups. Everything would reduce to number theoretic discretization at the space-time level, perhaps reducing to that at partonic 2-surfaces with the points of the cognitive representation carrying fermion quantum numbers. Even the "world of classical worlds" (WCW) would discretize: the cognitive representation would define the coordinates of the WCW point. One would obtain cognitive representations of scattering amplitudes using a fusion category assignable to the representations of Galois groups: something diametrically opposite to the immense complexity of the WCW, but perhaps consistent with it. Also a generalization of McKay's correspondence suggests itself: only those irreps of the Lie group associated with the Kac-Moody algebra that remain irreps when reduced to a subgroup defined by a Galois group of Lie type are allowed as ground states.

See the new chapter Are higher structures needed in the categorification of TGD? of "Towards M-matrix" or the article with the same title. For a summary of earlier postings see Latest progress in TGD.

## Friday, June 09, 2017

### New view about galaxies and galactic blackholes

We had very interesting discussions with Gareth Lee Meredith in the group Beyond the Standard Models, founded by Gareth. We talked about galaxy formation and various anomalies related to galactic dynamics popping up almost continually and challenging the halo model for dark matter. Unfortunately Gareth lost access to his FB and also Messenger account. It is extremely frustrating that this FB attack makes it impossible to continue even discussions using Messenger. There are good reasons to expect that some malevolent person has made an appeal to FB - maybe claiming that there is hate speech at his page.
This is certainly not true: the page has a very friendly, polite spirit. I have also myself been a victim of this kind of FB attack: my posts to another FB page were not shown at all for months. I never learned what the reason was. FB should be better prepared for the possibility that some malevolent person, perhaps an envious colleague, tries to make communications impossible.

One of the topics of discussion was results related to supermassive blackholes at the centers of galaxies. Gareth gave a link to an article telling about correlations between the supermassive blackhole in the galactic center and the evolution of the galaxy itself.

1. The size of the blackhole like object - that is its mass, if a blackhole in GRT sense is in question - correlates with the constant rotation velocity of distant stars for spiral galaxies.

2. The masses of the black hole and the galactic bulge are in a constant relation: the mass ratio is about 700.

3. A further finding is that the galactic blackholes of very old stars are much more massive than the idea about the galactic blackhole getting gradually bigger by "eating" surrounding stars would suggest. Unfortunately, I did not find the link to this article due to the strange FB episode.

This looks strange if one believes in the standard dogma that the galactic blackhole started to form relatively late. What comes to mind is a rather unorthodox idea: what if the large blackhole like entity was there from the beginning and gradually lost its mass? In TGD framework this could make sense!

1. In TGD Universe galaxies are like pearls in a necklace defined by a long cosmic string. This explains the flat rotational spectrum and predicts essentially free motion along the string, related perhaps to coherent motions in very long length scales. This explains also the old observation that galaxies form filament like structures and the correlations between the spin directions of galaxies along the same filament, since one expects that the spin is locally parallel to the filament. A filament can of course change its direction locally, so that the change of the direction of rotation gives information about the filament shape.

2. The channelling of gravitational flux in the radial direction orthogonal to the string makes the gravitational force very long ranged (1/ρ, ρ the transversal distance, instead of 1/r^2) and also stronger, and predicts the rotational spectrum. This model of dark matter differs dramatically from the fashionable halo model and involves only the string tension as a parameter, unlike the halo model. The observed rigid body rotation within a radius 2-3 times the optical radius (the region inside which most stars are) can be understood if the long cosmic string is either strongly knotted or has a closed galactic string around it. The knotted portion would form a highly knotted spaghetti like structure giving an approximately constant mass density. Stars would be associated with the knotted structure as sub-knots. Light beams from supernovas could travel along the string going through the star. Maybe even planets might be associated with thickened strings! One can also imagine intersections of long cosmic strings, and Milky Way could contain such.

3. The galactic blackhole like object could correspond to a self-intersection of the long cosmic string or of a closed galactic cosmic string bound to it. There could be several intersections. They would contain both dark matter and energy in TGD sense, located inside the string.
Matter antimatter asymmetry would mean that there is slightly more antimatter inside the string and slightly more matter outside it. Twistor lift of TGD predicts the needed new kind of CP breaking. What is new is that the galactic blackhole like objects would be present from the beginning and lose their dark mass gradually. Time evolution would be opposite to what it has usually been thought to be! Most of the energy of the cosmic string would be magnetic energy identifiable as dark energy. During the cosmic evolution various perturbations would force the cosmic string to gradually thicken, so that its transversal M4 projection ceases to be pointlike. The magnetic monopole flux is conserved (B·S = constant, S the transversal area), which forces the magnetic energy density per unit length - string tension - to be reduced like 1/S. The lost energy becomes ordinary matter: the energy of the inflaton field would be replaced with dark magnetic energy, and the TGD counterpart of the inflationary period would be the transition from the cosmic string dominated period to radiation dominated cosmology, and also the emergence of space-time in GRT sense. The primordial cosmic string dominated phase would consist of cosmic strings in M2×CP2. The explanation for the constancy of the CMB temperature would suggest quantum coherence in even cosmic scales, made possible by the hierarchy of dark matters labelled by the values of Planck constant h_eff/h = n. Maybe characterization as a super-fluid rather than a gas, discussed with Gareth, is a more precise manner to say it. What would be fantastic is that these primordial structures would be directly visible nowadays.

4. The dark matter particles emanating from the dark supermassive blackhole would transform gradually to ordinary matter so that the galaxy would be formed. This would explain the correlation of the bulge size with the mass (and size) of the blackhole, correlating with the string tension. Also the rotational velocity of distant stars correlates with the string tension, so that the strange correlation between the velocity of distant stars and the size of the galactic blackhole is implied by a common cause. This also explains the appearance of Fermi bubbles. Fermi bubbles are formed when dark particles from the blackhole scatter from dark matter, partially transform to ordinary cosmic rays, and produce dark photons partially transformed to visible photons. This occurs only within the region where the spaghetti like structure containing dark matter inside the cosmic string exists. Fermi bubbles indeed have the same size as this region.

5. While writing this I realized that also the galactic bar (2/3 of spiral galaxies have it) should be understood. This is difficult if there is nothing breaking the rotational symmetry around the long cosmic string. The situation changes if one has a portion of cosmic string along the plane of the galaxy. There is indeed evidence for a second straight string portion: in Milky Way there are mini-galaxies rotating in a plane forming roughly a 60 degree angle with respect to the galactic plane, and the presence of two cosmic string portions roughly orthogonal to each other could explain this (see this). The galactic blackhole could be associated with the intersection of the string portions. The horizontal string portion could be part of the long cosmic string, a separate closed cosmic string, or even another long cosmic string. One can imagine two basic options for the formation of the bar.

1. The first option is that the galactic bar is formed around the straight portion of the string.
The gravitational force orthogonal to the string portion would create the bar. The ordinary matter in rigid body rotation would be accelerated while approaching the bar and would then slow down and dissipate part of its energy in the process. The slowed down stars would, after a further rotation of π, tend to get stuck around the string portion, forming bound states with it and starting to rotate around it: a kind of galactic traffic jam. Bars would be asymptotic outcomes of the galactic dynamics. Recent studies have confirmed the idea that bars are signs of full maturity, marking the end of the galaxy's "formative years" (see this).

2. The second option is that the bar is formed as dark matter inside the bar is transformed to ordinary matter as the string portion thickens and loses dark energy, identified as Kähler magnetic energy, by a process analogous to the decay of inflaton vacuum energy. Bars would be transients in the evolution of galaxies rather than final outcomes. This option is not consistent with the idea that only the galactic blackhole serves as the source of dark matter transforming to ordinary matter.

6. The pearls-in-a-necklace model explains also why elliptic galaxies have a declining rotational velocity. They correspond to "free" closed strings, which have not formed bound states with long cosmic strings transforming them to spiral galaxies. The recently found 10 billion year old galaxies with declining rotational velocity could correspond to elliptical galaxies of this kind. One can also imagine the analog of ionization. The bound state of a closed cosmic string and a long cosmic string decays, and the spiral galaxy starts to decay under the centrifugal force no longer balanced by the gravitational force of the long cosmic string, and would transform to an elliptic galaxy. Also the central bulge would start to increase in size. It would also lose its central blackhole if it is associated with the long cosmic string. I am grateful to Gareth for giving a link to a popular article telling about this kind of elliptic galaxy with a very large size of one million light years, without a central blackhole, and with an unusually large bulge region.

This view about galactic blackholes also suggests a profound revision of the GRT based view of the formation of blackholes. Note that in TGD one must of course speak about blackhole like objects differing from their GRT counterparts inside the Schwarzschild radius and also outside it in microscopic scales (gravitational flux is mediated by magnetic flux tubes carrying dark particles). Perhaps also ordinary blackholes were once intersections of dark cosmic strings containing dark matter, which gradually produced the stellar matter! If so, old blackholes would be more massive than the young ones.

1. This new thinking conforms with the findings of LIGO. All three stellar blackholes have been more than an order of magnitude more massive than expected. There are also indications that the members of the second blackhole pair merging together did not have parallel spin directions. This does not fit with the idea that a twin pair of stars was in question. It is very difficult to understand how two blackholes, which do not form a bound system, could find each other. A similar problem is encountered in bio-catalysis: how do two biomolecules manage to find each other in the molecular crowd? The solution to both problems is very similar.

2. TGD suggests that the collision could have occurred when two blackholes travelling along strings, or portions of the same knotted string, arrived from different directions.
The gravitational attraction between the strings would have helped to generate the intersection, and the strings would have guided the blackholes together. In the biological context even a phase transition reducing the Planck constant of the flux tube connecting the molecules could occur and bring the molecules together.

See the article TGD view about universal galactic rotation curves for spiral galaxies. For a summary of earlier postings see Latest progress in TGD.

## Friday, June 02, 2017

### Neutron production from an arc current in gaseous hydrogen: 66 year old nuclear physics anomaly

I learned about a nuclear physics anomaly new to me (actually the anomaly is 66 years old) from an article of Norman and Dunning-Davies in Research Gate (see this). Neutrons are produced from an arc current in hydrogen gas with a rate exceeding dramatically the rate predicted by the standard model of electroweak interactions, in which the production should occur through e⁻ + p → n + ν by weak boson exchange. The low electron energies make the process also kinematically impossible. An additional strange finding, due to Borghi and Santilli, is that the neutron production can in some cases be delayed by several hours. Furthermore, according to Santilli neutron production occurs only for hydrogen but not for heavier nuclei.

In the following I sum up the history of the anomaly following closely the presentation of Norman and Dunning-Davies (see this): their article gives references and details and is strongly recommended. This includes the pioneering work of Sternglass in 1951, the experiments of Don Carlo Borghi in the late 1960s, and the rather recent experiments of Ruggero Santilli (see this).

The pioneering experiment of Sternglass

The first observation of anomalously large production of neutrons using a current arc in hydrogen gas was made by Ernest Sternglass in 1951 while completing his Ph.D. thesis at Cornell. He wrote to Einstein about his inexplicable results, which seemed to occur in conditions lacking sufficient energy to synthesize the neutrons that his experiments had indeed somehow apparently created. Although Einstein firmly advised that the results must be published even though they apparently contradicted standard theory, Sternglass refused due to the stultifying preponderance of contrary opinion, and so his results were preemptively excluded under orthodox pressure within the discipline, leaving them unpublished. Edward Trounson, a physicist working at the Naval Ordnance Laboratory, repeated the experiment and again obtained successful results, but they too were not published.

One cannot avoid the question what physics would look like today if Sternglass had published or managed to publish his results. One must however remember that the first indications for cold fusion emerged also surprisingly early but did not receive any attention, and that cold fusion researchers were for decades labelled as next to criminals. Maybe the extreme conservatism following the revolution in theoretical physics during the first decades of the previous century would have prevented his work from receiving the attention that it would have deserved.

The experiments of Don Carlo Borghi

The Italian priest-physicist Don Carlo Borghi, in collaboration with experimentalists from the University of Recife, Brazil, claimed in the late 1960s to have achieved the laboratory synthesis of neutrons from protons and electrons. C. Borghi, C. Giori, and A.
Dall'Olio published in 1993 an article entitled "Experimental evidence of emission of neutrons from cold hydrogen plasma" in Yad. Fiz. 56 and Phys. At. Nucl. 56 (7).

Don Borghi's experiment was conducted in a cylindrical metallic chamber (called a "klystron") filled with partially ionized hydrogen gas at a fraction of 1 bar pressure, traversed by an electric arc with about 500 V and 10 mA as well as by microwaves with 10^10 Hz frequency. Note that the energies of the electrons would be below 0.5 keV and non-relativistic. In the cylindrical exterior of the chamber the experimentalists placed various materials suitable to become radioactive when subjected to a neutron flux (such as gold, silver and others). Following exposures of the order of weeks, the experimentalists reported nuclear transmutations due to a claimed neutron flux of the order of 10^4 cps, apparently confirmed by beta emissions not present in the original material.

Don Borghi's claim remained un-noticed for decades due to its incompatibility with the prevailing view about weak interactions. The process e⁻ + p → n + ν is also forbidden by conservation of energy unless the total cm kinetic energy of the proton and the electron exceeds ΔE = m_n - m_p - m_e = 0.78 MeV. This requires highly relativistic electrons. Also the cross section for the reaction proceeding by exchange of W boson is extremely small at low energies (about 10^-20 barn; barn = 10^-28 m^2 represents the natural scale for cross sections in nuclear physics). Some new physics must be involved if the effect is real. The situation is strongly reminiscent of cold fusion (or low energy nuclear reactions (LENR)), which many mainstream nuclear physicists still regard as pseudoscience.

Santilli's experiments

Ruggero Santilli (see this) replicated the experiments of Don Borghi. Both in the experiments of Don Carlo Borghi and in those of Santilli, delayed neutron synthesis was sometimes observed. Santilli analyzes several alternative proposals explaining the anomaly, and suggests that a new spin zero bound state of electron and proton, with rest mass below the sum of the proton and electron masses, absorbed by nuclei which then decay radioactively, could explain the anomaly. The energy needed to overcome the kinematic barrier could come from the energy liberated by the electric arc. The problem of the model is that it has no connection with the standard model. According to Santilli:

"A first series of measurements was initiated with Klystron I on July 28, 2006, at 2 p.m. Following flushing of air, the klystron was filled up with commercial grade hydrogen at 25 psi pressure. We first used detector PM1703GN to verify that the background radiations were solely consisting of photon counts of 5-7 μR/h without any neutron count; we delivered a DC electric arc at 27 V and 30 A (namely with power much bigger than that of the arc used in Don Borghi's tests...), at about 0.125" gap for about 3 s; we waited for one hour until the electrodes had cooled down, and then placed detector PM1703GN against the PVC cylinder. This resulted in the detection of photons at the rate of 10 - 15 μR/hr expected from the residual excitation of the tips of the electrodes, but no neutron count at all.
However, about three hours following the test, detector PM1703GN entered into sonic and vibration alarms, specifically, for neutron detections off the instrument maximum of 99 cps at about 5' distance from the klystron, while no anomalous photon emission was measured. The detector was moved outside the laboratory and the neutron counts returned to zero. The detector was then returned to the laboratory and we were surprised to see it entering again into sonic and vibrational alarms at about 5' away from the arc chamber with the neutron count off scale without appreciable detection of photons, at which point the laboratory was evacuated for safety. After waiting for 30 minutes (double neutron's lifetime), we were surprised to see detector PM1703GN go off scale again in neutron counts at a distance of 10' from the experimental set up, and the laboratory was closed for the day."

TGD based model

The basic problems to be solved are the following.

1. What is the role of the current arc and of other triggering impulses (such as the microwave radiation or the pressure surge mentioned by Santilli): do they provide energy or do they have some other role?

2. Neutron production is kinematically impossible if weak interactions mediate it. Even if it were kinematically possible, weak interaction rates are quite too slow. The creation of intermediate states via other than weak interactions would solve both problems. If weak interactions are involved with the creation of the intermediate states, how can their rates be so high?

3. What causes the strange delays in the production in some cases but not always? Why is hydrogen gas preferred?

The effect brings strongly in mind cold fusion, for which TGD proposes a model (see this) in terms of the generation of dark nuclei with a non-standard value h_eff = n×h of Planck constant, formed from dark proton sequences at flux tubes. The binding energy for these states is supposed to be much lower than for the ordinary nuclei, and eventually these nuclei would decay to ordinary nuclei in collisions with metallic targets attracting positively charged magnetic flux tubes. The energy liberated would be essentially the ordinary nuclear binding energy. Note that the creation of dark proton sequences does not require weak interactions, so that the basic objections are circumvented. The TGD explanation (see this) could be the same for Tesla's findings, for cold fusion (see this), for Pollack effect (see this), and for the anomalous production of neutrons. Even electrolysis would involve in an essential manner the Pollack effect and new physics.

Could this model explain the anomalous neutron production and its strange features?

1. Why would an electric arc, pressure surge, or microwave radiation be needed? Dark phases are formed at quantum criticality (see this) and give rise to long range correlations via quantum entanglement made possible by large h_eff = n×h. The presence of an electric arc, occurring as dielectric breakdown, is indeed a critical phenomenon. Already Tesla discovered strange phenomena in his studies of arc discharges, but his discoveries were forgotten by the mainstream. Also energy feed might be involved. Quite generally, in TGD inspired quantum biology the generation of dark states requires energy feed, and the role of metabolic energy is to excite dark states.
For instance, dark atoms have smaller binding energies, and the energies of cyclotron states increase with h_eff/h. For instance, part of the microwave photons could be dark and have much higher energy than otherwise. Could the production of dark proton sequences at magnetic flux tubes be all that is needed, so that the possible dark variant of the reaction e⁻ + p → n + ν would not be needed at all?

2. If also weak bosons appear as dark variants, their Compton length is scaled up accordingly, and in scales shorter than the Compton length they behave effectively as massless particles, so that weak interactions would become as strong as electromagnetic interactions. This would make possible the decay of dark proton sequences at magnetic flux tubes to beta stable dark isotopes via p → n + e⁺ + ν. Neutrons would be produced in the decays of the dark nuclei to ordinary nuclei, liberating nuclear binding energy. Note however that TGD allows one to consider also p-adically scaled variants of weak bosons with a much smaller mass scale, possibly important in biology, and one cannot exclude them from consideration.

3. The reaction e⁻ + p → n + ν is not necessary in the model. One can however ask whether there could exist a mechanism making the dark reaction e⁻ + p → n + ν kinematically possible. If the scale of the dark nuclear binding energy is strongly reduced, also p → n + e⁺ + ν in dark nuclei would become kinematically impossible (in ordinary nuclei the nuclear binding energy makes n effectively lighter than p). The TGD based model for nuclei as strings of nucleons (see this and this), connected by neutral or charged (possibly colored) mesonlike bonds with quark and antiquark at their ends, could resolve this problem. One could have exotic nuclei in which a proton plus a negatively charged bond could effectively behave like a neutron. Dark weak interactions would take place for neutral bonds between protons and reduce the charge of the bond from q = 0 to q = -1, transforming p to an effective n. This was assumed also in the model of dark nuclei and in the model of ordinary nuclei, and predicts a large number of exotic states. One can of course ask whether the nuclear neutrons are actually pairs of a proton and a negatively charged bond.

4. What about the delays in neutron production occurring in some cases? Why not always? In the situations when there is a delay in neutron production, the dark nuclei could have rotated around the magnetic flux tubes of the magnetic body (MB) of the system before entering the metal target: one would have a delayed production.

5. Why would hydrogen be preferred? Why would, for instance, deuterium and heavier isotopes containing neutrons not form dark proton sequences at magnetic flux tubes? Why would the probability for the transformation of, say, D = pn to its dark variant be very small? If the binding energy of dark nuclei per nucleon is several orders of magnitude smaller than for ordinary nuclei, the explanation is obvious. The ordinary nuclear binding energy is much higher than the dark binding energy, so that only sequences of dark protons can form dark nuclei. The first guess (see this) was that the binding energy is analogous to Coulomb energy and thus inversely proportional to the size scale of the dark nucleus, scaling like h/h_eff. One can however ask why D with ordinary size could not serve as a sub-unit.

See the article Anomalous neutron production from an arc current in gaseous hydrogen or the chapter Cold Fusion Again of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".
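As a sanity check of the kinematic threshold ΔE = m_n - m_p - m_e quoted earlier in this post, here is a short computation with standard PDG mass values:

```python
# Threshold for e- + p -> n + nu (masses in MeV, PDG values).
m_n, m_p, m_e = 939.565, 938.272, 0.511

delta_E = m_n - m_p - m_e
print(f"Delta E = {delta_E:.3f} MeV")  # ~0.782 MeV
# Arc electrons below ~0.5 keV fall short of this threshold by more than
# three orders of magnitude, which is why the standard weak process is excluded.
```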
For a summary of earlier postings see Latest progress in TGD.

### Third gravitational wave detection by LIGO collaboration

The news about the third gravitational wave detection managed to direct the attention of at least some of us away from the doings of Donald J. Trump. Also the New York Times told about the gravitational wave detection by LIGO, the Laser Interferometer Gravitational-Wave Observatory. The gravitational waves are estimated to have been created by a black-hole merger at a distance of 3 billion light years. The results are published in the article "Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2" in Phys Rev Lett.

Two blackholes with masses 19× M(Sun) and 31× M(Sun) merged to a single blackhole with mass of 49× M(Sun), meaning that roughly one solar mass was transformed to gravitational radiation. During the climax of the merger, they were emitting more energy in the form of gravitational waves than all the stars in the observable universe.

The colliding blackholes were very massive in all three events. There should be some explanation for this. An explanation considered in the article is that the stars giving rise to the blackholes were rather primitive, containing light elements, and this would have allowed large masses. The transformation to blackholes could have occurred directly without an intervening supernova phase. There is indeed a quite recent finding showing the disappearance of a very heavy star with 25 solar masses, suggesting that direct blackhole formation without a supernova explosion is possible for heavy stars.

It is interesting to take a fresh look at these blackhole like entities in the TGD framework. This however requires a brief summary of the formation of galaxies and stars in the TGD Universe (see this and this).

1. The simplest possibility allowed by TGD is that galaxies as pearls in a necklace are knots (or spaghetti-like substructures) in long cosmic strings. This does not exclude the original identification as closed strings around a long cosmic string; these loops must however be knotted. The galactic super-blackhole could correspond to a self-intersection of the long cosmic string. This view is forced by the experimental finding that for mini spirals there is a volume containing an essentially constant density of dark matter, with radius 2-3 times larger than that of the volume containing most stars of the galaxy. This region would contain a galactic knot. The important conclusion is that stars would be subknots of these galactic knots, as indeed proposed earlier. Part of the magnetic energy would decay to ordinary matter, giving rise to the visible part of the star as the cosmic string thickens. This conforms with the finding that the region in which the dark matter density seems to be constant is a few times larger than the region containing the stars (the size scale is a few kpc).

2. The light beams from supernovas would most naturally arrive along the flux tubes, being bound to helical orbits rotating around them. Primordial cosmic strings as stars, galaxies, linear structures of galaxies, even elementary particles, hadrons, nuclei, and biomolecules: all these structures would be magnetic flux tubes, possibly knotted and linked. The space-time of GRT as a small deformation of M4 would have emerged from the cosmic string dominated phase via the TGD counterpart of the inflationary period. The signatures of the primordial cosmic string dominated period would be directly visible in all scales!
We would be seeing the incredibly simple truth, but our theories would prevent us from becoming aware of what we are seeing! The crucial question concerns the dark matter fraction of the star.

1. The fraction depends on the thickness of the deformed cosmic string, having originally a 1-D projection in E3⊂ M4. If Kähler magnetic energy dominates, the energy per length for a thickened flux tube is proportional to 1/S, S the area of the M4 projection, and thus decreases rapidly with thickening. The thickness of the flux tube would be at minimum about the CP2 size scale of 10^4 Planck lengths. If S is large enough, the contribution of the cosmic string to the mass of the star is smaller than that of the visible matter created in the thickening.

2. What about very primitive stars, say those associated with the LIGO mergers? The proportion of visible matter in the star should gradually increase as the flux tube thickens. Could the detected blackhole fusion correspond to a fusion of dark matter stars rather than of Einsteinian blackholes? If the radius of the objects satisfies rS=2GM, blackhole like entities are in question also in TGD. The space-time sheet assignable to a blackhole according to TGD has however two horizons. The first horizon would be a counterpart of the usual Schwarzschild horizon. At the second horizon the signature of the induced metric would become Euclidian - this is possible only in TGD. The cosmic string would topologically condense at this space-time sheet.

3. Could most of the matter be dark even in the case of the Sun? What can we really say about the portion of ordinary matter inside the Sun? The total rate of nuclear fusion in the solar core depends on the density of ordinary matter, and one can argue that the existing model does not allow a considerable reduction of the portion of ordinary matter. There is however also another option - dark fusion - which would be at work in the TGD based model of cold fusion (see this; low energy nuclear reactions (LENR) is a less misleading term) and also in TGD inspired biology (there is evidence for bio-fusion) as the Pollack effect (see this), in which part of the protons go to a dark phase at magnetic flux tubes to form dark nuclear strings, creating a negatively charged exclusion zone. Dark fusion would give rise to dark proton sequences at magnetic flux tubes decaying by dark beta emission to beta stable nuclei and later to ordinary nuclei, releasing nuclear binding energy. Dark fusion could explain the generation of elements heavier than iron, not possible in stellar cores (see this). The standard model assumes that they are formed in supernova explosions by the so called r-process, but empirical data do not support this hypothesis. In the TGD Universe dark fusion could occur outside stellar interiors.

4. But if heavier elements are formed via dark fusion, why could the same not be true for the lighter elements? The TGD based model of atomic nuclei represents the nucleus as a string like object, or several of them, possibly linked and knotted. Thickened cosmic strings again! Nucleons would be connected by meson like bonds with a quark and an antiquark at their ends. This raises a heretic question: could also ordinary nuclear fusion rely on a similar mechanism? Standard nuclear physics relies on potential models approximating nucleons as point like particles: this is of course the only thing that the nuclear physicists of the past could imagine as children of their time. Should the entire nuclear physics be formulated in terms of the many-sheeted space-time concept and flux tubes?
I have proposed this kind of formulation a long time ago (see this). What would distinguish between ordinary and dark fusion would be the value of heff=n× h.

After this prelude it is possible to speculate about blackholes in the spirit of TGD.

1. Also the interiors of blackholes would contain dark knots and have magnetic structure. This predicts unexpected features such as magnetic moments, not possible for GRT blackholes. Also the matter inside the blackhole would be dark (the TGD based explanation for Fermi bubbles assumes this; see this). Already the model for the first LIGO event explained the unexpected gamma ray bursts in terms of the twisting of rotating flux tubes as an effect analogous to what causes sunspots: twisting and finally reconnection.

2. One must also ask whether the LIGO blackholes are actually dark stars with a very small amount of ordinary matter. If the radius is indeed equal to the Schwarzschild radius rS= 2GM (restoring units, rS= 2GM/c^2 ≈ 145 km for M= 49× M(Sun)) and the mass is really what it is estimated to be, rather than being systematically smaller, then the interpretation as TGD counterparts of blackholes makes sense. If the mass is considerably smaller, the radius would be correspondingly large, and one would not have a genuine blackhole. I do not however take this option too seriously.

3. What about collisions of blackholes? Could they correspond to two knots moving along the same string in opposite directions and colliding? Or two cosmic strings intersecting and forming a cosmic crossroad with the second blackhole in the crossing? Or a self-intersection of a single cosmic string? In any case, a cosmic traffic accident would be in question. The second LIGO event gave hints that the spin directions of the colliding blackholes were not the same. This does not conform with the assumption that a binary blackhole system was in question. Since the spin direction would naturally be that of the long cosmic string, this suggests that a traffic accident in a cosmic crossroad defined by an intersection or self-intersection created the merger. Note that intersections tend to occur (think of moving strings in 3-D space) and could be stabilized by gravitational attraction: two string world sheets in 4-D space-time have stable intersections, just like strings in a plane, unless they reconnect.

See the article LIGO and TGD or the chapter Quantum astrophysics of "Physics in many-sheeted space-time".

For a summary of earlier postings see Latest progress in TGD.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8534432053565979, "perplexity": 1267.4059619594123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00585.warc.gz"}
https://math.libretexts.org/Courses/Montana_State_University/M273%3A_Multivariable_Calculus/16%3A_Vector_Fields%2C_Line_Integrals%2C_and_Vector_Theorems
# 16: Vector Fields, Line Integrals, and Vector Theorems

• Conservative Vector Fields: In this section, we continue the study of conservative vector fields. We examine the Fundamental Theorem for Line Integrals, which is a useful generalization of the Fundamental Theorem of Calculus to line integrals of conservative vector fields. We also show how to test whether a given vector field is conservative, and determine how to build a potential function for a vector field known to be conservative.

• Divergence and Curl: Divergence and curl are two important operations on a vector field. They are important to the field of calculus for several reasons, including the use of curl and divergence to develop some higher-dimensional versions of the Fundamental Theorem of Calculus. In addition, curl and divergence appear in mathematical descriptions of fluid mechanics, electromagnetism, and elasticity theory, which are important concepts in physics and engineering.

• Green's Theorem: Green's theorem is an extension of the Fundamental Theorem of Calculus to two dimensions. It has two forms: a circulation form and a flux form, both of which require the region D in the double integral to be simply connected. However, we will extend Green's theorem to regions that are not simply connected. Green's theorem relates a line integral around a simple closed plane curve C and a double integral over the region enclosed by C. (Standard statements of these theorems are collected after this list.)

• Introduction to Vector Field Chapter: Vector fields have many applications because they can be used to model real fields such as electromagnetic or gravitational fields. A deep understanding of physics or engineering is impossible without an understanding of vector fields. Furthermore, vector fields have mathematical properties that are worthy of study in their own right. In particular, vector fields can be used to develop several higher-dimensional versions of the Fundamental Theorem of Calculus.

• Line Integrals: Line integrals have many applications to engineering and physics. They also allow us to make several useful generalizations of the Fundamental Theorem of Calculus.
And, they are closely connected to the properties of vector fields, as we shall see.

• Stokes' Theorem: In this section, we study Stokes' theorem, a higher-dimensional generalization of Green's theorem. This theorem, like the Fundamental Theorem for Line Integrals and Green's theorem, is a generalization of the Fundamental Theorem of Calculus to higher dimensions. Stokes' theorem relates a vector surface integral over surface S in space to a line integral around the boundary of S.

• Surface Integrals: If we wish to integrate over a surface (a two-dimensional object) rather than a path (a one-dimensional object) in space, then we need a new kind of integral. We can extend the concept of a line integral to a surface integral to allow us to perform this integration. Surface integrals are important for the same reasons that line integrals are important. They have many applications to physics and engineering, and they allow us to expand the Fundamental Theorem of Calculus to higher dimensions.

• The Divergence Theorem: We have examined several versions of the Fundamental Theorem of Calculus in higher dimensions that relate the integral around an oriented boundary of a domain to a "derivative" of that entity on the oriented domain. In this section, we state the divergence theorem, which is the final theorem of this type that we will study.

• Vector Calculus (Exercises): These are homework exercises to accompany Chapter 16 of OpenStax's "Calculus" Textmap.

• Vector Fields: Vector fields are an important tool for describing many physical concepts, such as gravitation and electromagnetism, which affect the behavior of objects over a large region of a plane or of space. They are also useful for dealing with large-scale behavior such as atmospheric storms or deep-sea ocean currents. In this section, we examine the basic definitions and graphs of vector fields so we can study them in more detail in the rest of this chapter.
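For quick reference, the standard statements of the theorems surveyed above are as follows (these formulas are the usual textbook forms, collected here for convenience rather than quoted from the chapter):

$$\int_C \nabla f \cdot d\vec{r} = f(\vec{r}(b)) - f(\vec{r}(a)) \qquad \text{(Fundamental Theorem for Line Integrals)}$$

$$\oint_C P\,dx + Q\,dy = \iint_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dA \qquad \text{(Green's theorem, circulation form)}$$

$$\iint_S (\nabla \times \vec{F}) \cdot d\vec{S} = \oint_{\partial S} \vec{F} \cdot d\vec{r} \qquad \text{(Stokes' theorem)}$$

$$\iiint_E (\nabla \cdot \vec{F})\, dV = \iint_{\partial E} \vec{F} \cdot d\vec{S} \qquad \text{(Divergence theorem)}$$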
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9686618447303772, "perplexity": 162.69211882177606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347423915.42/warc/CC-MAIN-20200602064854-20200602094854-00379.warc.gz"}
http://alice-publications.web.cern.ch/node/4505
# Medium modification of the shape of small-radius jets in central Pb-Pb collisions at $\sqrt{s_{\mathrm {NN}}} = 2.76\,\rm{TeV}$

We present the measurement of a new set of jet shape observables for track-based jets in central Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV. The set of jet shapes includes the first radial moment or angularity, $g$; the momentum dispersion, $p_{\rm T}D$; and the difference between the leading and sub-leading constituent track transverse momentum, $LeSub$. These observables provide complementary information on the jet fragmentation and can constrain different aspects of the theoretical description of jet-medium interactions. The jet shapes were measured for a small resolution parameter $R = 0.2$ and were fully corrected to particle level. The observed jet shape modifications indicate that in-medium fragmentation is harder and more collimated than vacuum fragmentation as obtained by PYTHIA calculations, which were validated with the measurements of the jet shapes in proton-proton collisions at $\sqrt{s} = 7$ TeV. The comparison of the measured distributions to templates for quark- and gluon-initiated jets indicates that in-medium fragmentation resembles that of quark jets in vacuum. We further argue that the observed modifications are not consistent with a totally coherent energy loss picture where the jet loses energy as a single colour charge, suggesting that the medium resolves the jet structure at the angular scales probed by our measurements ($R=0.2$). Furthermore, we observe that small-$R$ jets can help to isolate purely energy loss effects from other effects that contribute to the modifications of the jet shower in medium, such as the correlated background or medium response.

Accepted by: JHEP

e-Print: arXiv:1807.06854 | PDF | inSPIRE

Figures

## Figure 1

$g$, $p_{\rm T}D$, and $LeSub$ for quark and gluon jets as obtained from PYTHIA Perugia 2011 simulations of pp collisions at $\sqrt{s}=2.76$\,TeV in the transverse momentum interval $40 \leq p_{\mathrm{T,jet}}^{\rm{part,ch}} \leq 60$\,GeV/$c$.

## Figure 2

$g$, $p_{\rm T}D$, and $LeSub$ as obtained from PYTHIA Perugia 2011 simulations of pp collisions at $\sqrt{s}=2.76$\,TeV for three different transverse momentum intervals.

## Figure 3

Background subtraction performance for jet shapes studied with jets from PYTHIA events embedded into real Pb--Pb events, in the background subtracted transverse momentum interval $40 \leq p_{\mathrm{T,jet}}^{\rm{rec,ch}} \leq 60$\,GeV/$c$ for the area derivatives and constituent subtraction methods.

## Figure 4

Left plots show the distributions of residuals for the set of three jet shapes in a given interval of $p_{\rm{T,jet}}^{\rm{part,ch}}$ of $40$--$60$ GeV/$c$ using the PYTHIA and PYTHIA embedded simulations. Right plots show the width (quantified as the standard deviation) of the distributions on the left as a function of the values of the shapes at particle level. The black and red curves correspond to pp and Pb--Pb simulations, respectively. The line connecting the points on the right is drawn to guide the eye.

## Figure 5

Fully corrected jet shape distributions measured in pp collisions at $\sqrt{s}=7$\,TeV for $R = 0.2$ in the range of jet $p_{\mathrm{T,jet}}^{\rm ch}$ of $40$--$60$\,GeV$/c$. The results are compared to PYTHIA.
The coloured boxes represent the uncertainty on the jet shape (upper panels) and its propagation to the ratio (lower panels).

## Figure 6

Fully corrected jet shape distributions in $0$--$10\%$ central Pb--Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$\,TeV for $R = 0.2$ in the range of jet $p_{\mathrm{T,jet}}^{\rm ch}$ of $40$--$60$\,GeV$/c$. The results are compared to PYTHIA. The coloured boxes represent the uncertainty on the jet shape (upper panels) and its propagation to the ratio (lower panels).

## Figure 7

Jet shape distributions in $0$--$10\%$ central Pb--Pb collisions at $\sqrt{s_{\rm NN}}=2.76$\,TeV for $R = 0.2$ in the range of jet $p_{\mathrm{T,jet}}^{\rm ch}$ of $40$--$60$\,GeV$/c$ compared to quark and gluon vacuum generated jet shape distributions. The coloured boxes represent the experimental uncertainty on the jet shapes.

## Figure 8

Jet shape distributions in $0$--$10\%$ central Pb--Pb collisions at $\sqrt{s_{\rm NN}}=2.76$\,TeV for $R = 0.2$ in the range of jet $p_{\mathrm{T,jet}}^{\rm ch}$ of $40$--$60$\,GeV$/c$ compared to JEWEL with and without recoils with different subtraction methods. The coloured boxes represent the experimental uncertainty on the jet shapes.
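For readers who want to connect these observables to something concrete, here is a small sketch of how the three shapes can be computed for a single jet from its constituent tracks (the function and variable names are my own; the formulas follow the standard definitions used in the paper: $g$ is the $p_{\rm T}$-weighted radial moment, $p_{\rm T}D$ the momentum dispersion, and $LeSub$ the gap between the two hardest constituents).

```python
import numpy as np

def jet_shapes(pt, dr):
    """Compute (g, pTD, LeSub) for one jet with >= 2 constituents.

    pt : constituent track transverse momenta (GeV/c)
    dr : constituent angular distances from the jet axis
    """
    pt, dr = np.asarray(pt, float), np.asarray(dr, float)
    g = np.sum(pt * dr) / np.sum(pt)            # first radial moment (angularity)
    ptd = np.sqrt(np.sum(pt**2)) / np.sum(pt)   # momentum dispersion p_T D
    lead, sub = np.sort(pt)[::-1][:2]           # two hardest constituents
    return g, ptd, lead - sub                   # LeSub = pt_lead - pt_sublead

# a collimated two-track toy jet: small g, pTD close to 1
print(jet_shapes(pt=[30.0, 5.0], dr=[0.02, 0.1]))
```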
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9792450070381165, "perplexity": 2262.7563599574255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743105.25/warc/CC-MAIN-20181116152954-20181116174954-00550.warc.gz"}
https://placailleblog.wordpress.com/2017/04/02/applying-temporal-difference-methods-to-machine-learning-part-1/
Applying Temporal Difference Methods to Machine Learning — Part 1

In this post I detail my project for the course Reinforcement Learning (COMP767) taken at McGill, applying Temporal Difference (TD) methods in a Machine Learning setting. This concept was first discussed by Sutton when he introduced this family of learning algorithms. I aim to go over what was discussed in the paper and see how it performs on a traditional machine learning problem. In this part, I will be covering the concepts underlying this application.

Introduction

When introducing these methods, Sutton makes the claim that temporal-difference methods have an advantage over typical supervised learning because they spread out the computational load and generate more accurate predictions. He emphasizes that this is true in a particular setting, namely multi-step prediction problems, where what we are trying to predict is only revealed after a sequence of predictions has been made. In a sense, these are problems where we can only know the true outcome after multiple steps have been observed and predicted. The author further argues that these types of problems occur more often in the real world than single-step prediction problems, where each time a prediction is made the real outcome can be verified. Here are some examples of multi-step prediction problems:

• Monthly predictions for the end-of-year financial results of a company
• Daily predictions for rain on the upcoming Saturday
• Class prediction at each second for a video clip

What I propose in this case study is to play with a classic machine learning problem, the MNIST dataset (more info about MNIST here). I will detail further along how this problem can be modified to be considered a multi-step prediction problem.

Temporal-Difference learning

The main concept behind temporal-difference learning methods is to allow feedback to be learned from the differences between the predictions made at each step, as opposed to waiting for the real outcome at the end of a sequence of observations.

Let's consider a sequence of observations $x_1, x_2, \dots, x_m$ that lead to the outcome $z$. Let's further denote the prediction of $z$ at each time step $t \in \{1, \dots, m\}$ as $P_t$, where $P_{m+1} = z$. In addition, $x_t$ could denote an observation vector of different attributes. We are therefore trying to predict what the final outcome will be after each observation $x_t$ at time $t$. To do so, we will be using a set of weights denoted $w$, where $P_t$ can now be written as $P(x_t, w)$. Sutton analyses the case where $P(x_t, w)$ is a linear function of $x_t$ and $w$. I will be exploring the non-linear case further down the line when this is applied to MNIST. For now, let's focus on a variant of this problem where the weights are only updated at the end of the sequence under the following update rule,

$w \leftarrow w + \sum \limits_{t=1}^m \Delta w_t$

Under the traditional supervised learning approach, all observations $\{ x_1, x_2, \dots, x_m \}$ are considered paired observations with the outcome $z$. Under this approach, and given our prediction function $P(x_t, w)$, a very popular gradient update rule for $w$ based on backpropagation of the error is given by the following,

$\Delta w_t = \alpha (z - P_t) \nabla_w P_t$

where $\nabla_w P_t$ is the gradient of the prediction at time $t$ with respect to the weights of our function and $\alpha$ our learning rate.
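To make the memory implications concrete, here is a minimal sketch of this supervised update for the linear predictor $P(x_t, w) = w^T x_t$ analysed by Sutton (the function name and the numpy framing are my own illustration, not code from the project): every per-step prediction and gradient must be held in memory until $z$ is finally revealed.

```python
import numpy as np

def supervised_updates(xs, z, w, alpha=0.01):
    """Supervised update for a multi-step problem, linear predictor.

    xs : list of observation vectors x_1..x_m
    z  : outcome, revealed only after the last observation
    Every prediction and gradient is stored until z is known.
    """
    preds, grads = [], []
    for x in xs:                      # walk through the sequence
        preds.append(w @ x)           # P_t = w^T x_t
        grads.append(x)               # for a linear P, grad_w P_t = x_t
    # only now, with z in hand, can the errors (z - P_t) be computed
    delta = sum(alpha * (z - P) * g for P, g in zip(preds, grads))
    return w + delta
```

The lists `preds` and `grads` grow linearly with the sequence length; this is exactly the storage that the TD rewriting below avoids.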
An important observation the author emphasizes is that all $\Delta w_t$ depend on the error at each time step, $(z - P_t)$, which itself depends on $z$, and $z$ is only known at the end of the sequence in the types of problems we are exploring. In practical terms, we would therefore make a prediction and determine the gradient at each time step, store them in memory until the end of the full sequence, and once the outcome is known, compute the errors at each time step and do the update to our weights. This is what TD learning aims to circumvent: allowing iterative calculations rather than stacking up the information, in order to reduce the memory requirements.

TD approach

The main issue with the traditional machine learning approach described above is the update rule referring to the real outcome at each time step. Sutton suggests that rather than seeing the error as the outcome versus our current prediction, we consider the sum of all differences between successive future predictions, $(P_{t+1} - P_t)$. These differences in predictions at each time step are called temporal differences, hence the name of the method!

Now let's do a bit of math to figure out what the update rule based on this approach would be. Arithmetically, the sum telescopes: we can rewrite $z - P_t$ as $\sum \limits_{k=t}^m(P_{k+1} - P_k)$, where $P_{m+1} = z$. We can then re-write the update rule from the first approach,

$w \leftarrow w + \sum \limits_{t=1}^m \alpha (z - P_t) \nabla_w P_t = w + \sum \limits_{t=1}^m \alpha \sum \limits_{k=t}^m (P_{k+1} - P_k) \nabla_w P_t$

By switching the order of the double summation (a Fubini-style argument for finite sums), we obtain

$w + \sum \limits_{t=1}^m \alpha \sum \limits_{k=t}^m (P_{k+1} - P_k) \nabla_w P_t = w + \sum \limits_{k=1}^m \alpha \sum \limits_{t=1}^k (P_{k+1} - P_k) \nabla_w P_t$

By simply swapping the indices $k$ and $t$ for clearer understanding and moving around constants, we finally obtain the update rule

$w \leftarrow w + \sum \limits_{t=1}^m \alpha (P_{t+1} - P_t) \sum \limits_{k=1}^t \nabla_w P_k$

We can then see this update rule as a sum of $\Delta w_t$ for any $t$, with

$\Delta w_t = \alpha (P_{t+1} - P_t) \sum \limits_{k=1}^t \nabla_w P_k$

We can therefore notice that the update rule for the TD approach doesn't require the actual outcome $z$ (except at the very last step, where $P_{m+1} = z$). It therefore doesn't require us to track all the predictions that were made during the sequence. To compensate, we need the sum of gradients over previous time steps, which is cheap in terms of memory: we only store the running sum and add the current gradient as it is obtained. In other words, when we compute the prediction at time step $t+1$, we can obtain the sum of previous updates easily. We determine the TD error $(P_{t+1} - P_t)$, simply add the gradient to the total gradient kept in memory w.r.t. the weights, and increment our sum of updates. When we reach the end of the sequence, no more computation than at any previous step is required, other than doing the actual update to $w$. This dramatically reduces the memory requirements compared to the traditional machine learning approach detailed above, especially for long sequences.

Using MNIST as a multi-step prediction problem

Traditionally MNIST has been seen as a single-step prediction problem, i.e. we see the 28×28 pixel input as a whole, compute a prediction and compare it to the real number.
In order to use it as a multi-step prediction problem, we can simply consider the image input as a sequence of 784 pixels! This way, after each pixel is observed, we can make a prediction w.r.t. the image, and we have the real outcome once the image has been fully covered, making it a multi-step prediction problem.

Indeed, we can denote pixel $i$ as $p_i$ and, going from left to right and from top row to bottom row, obtain a sequence $p_1, p_2, \dots, p_{784}$. We can further denote the outcome of the sequence as being the class of the image, $z \in \{ 0, 1, 2, \dots, 9 \}$. Additionally, to make the computation of the prediction $P_t$ at time step $t$ a function of all previously observed pixels in the sequence rather than just the current pixel $p_t$, we can express our observation at time $t$ as $x_t = [p_1, p_2, \dots, p_{t-1}, p_t, 0, \dots, 0, 0]^T$, a vector of size 784 holding all previously seen pixels up to time step $t$ and 0 afterwards.

Next steps

With the concepts underlying the case study laid out, in the following part I will cover the performance of the TD approach in comparison to the traditional machine learning approach for this multi-step prediction problem. To be fair, there are some very powerful methods that perform extremely well on MNIST these days. The goal here is not to compare the best of machine learning to the TD learning approach mentioned above. It is meant to be an exercise in applying the fundamental concept introduced by Sutton. The machine learning approach used will have the same sequenced inputs as the TD method. Understandably, the fact that MNIST is mostly a single-step prediction problem could be exploited by taking the full sequence of pixels as a single input. I trust the reader to understand this nuance :).
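To close this part, here is a minimal sketch of the incremental TD update applied to such a pixel sequence (my own illustration under simplifying assumptions: a linear predictor, a single scalar prediction rather than one per class, and a random vector standing in for an MNIST image). Only a running gradient sum, the accumulated update, and the previous prediction are kept in memory.

```python
import numpy as np

def td_updates(pixels, z, w, alpha=0.01):
    """Incremental TD update over a pixel sequence (linear predictor).

    Memory is O(dim(w)): no per-step storage of predictions or gradients.
    """
    x = np.zeros_like(w)            # x_t = (p_1, ..., p_t, 0, ..., 0)
    grad_sum = np.zeros_like(w)     # running sum_{k<=t} grad_w P_k
    delta = np.zeros_like(w)        # accumulated weight change
    P_prev = None
    for t, p in enumerate(pixels):
        x[t] = p                    # reveal pixel t
        P = w @ x                   # P_t = w^T x_t
        if P_prev is not None:      # Delta w_t uses TD error (P_t - P_{t-1})
            delta += alpha * (P - P_prev) * grad_sum
        grad_sum += x               # grad_w P_t = x_t for a linear predictor
        P_prev = P
    delta += alpha * (z - P_prev) * grad_sum   # last step uses P_{m+1} = z
    return w + delta

# toy usage: a random 784-"pixel" image whose (scalar) outcome is 1.0
rng = np.random.default_rng(0)
w_new = td_updates(rng.random(784), z=1.0, w=np.zeros(784))
```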
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 55, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8742204904556274, "perplexity": 370.2107494531}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866511.32/warc/CC-MAIN-20180524151157-20180524171157-00149.warc.gz"}
http://math.stackexchange.com/questions/19642/can-a-rule-be-formulated-to-explain-this-to-7-year-old
# Can a rule be formulated to explain this to 7 year old?

I'm trying to teach math to my 7 year old daughter. I'm teaching the following type of equations.

$$\cdots - x = y$$

I'm able to explain to her the rule that: when $\cdots - x = y$, we can always take $x$ (the value on the left of the equation) to the other side of the $=$ sign, flip its sign ($-$ to $+$ and vice versa), and get the answer. Meaning when $\cdots - x = y$, we can always do $\cdots = y + x$ and get the answer. This rule works for

\begin{align*} x + \cdots &= y \\ \cdots + x &= y\\ \cdots - x &= y \\ \end{align*}

But it doesn't work for $x - \cdots = y$. Because if you apply the rule, you get $-$(answer) and not just (answer).

My question is, given that I'm trying to teach this to a 7 year old, is there any better method where one rule would cover all 4 cases? Any ideas, thoughts...

\begin{align*} - x + \cdots &= y\\ -\cdots + x &= y\\ - \cdots - x &= y\\ - x - \cdots &= y \end{align*}

-

I always try to avoid that rule (moving over and replacing the sign) as it doesn't give any insight into the problem. If you have, for example, 3+x=5, I always say we add -3 to both sides (because that doesn't change equality), and we get -3+3+x=-3+5, hence x=8. –  Fredrik Meyer Jan 31 '11 at 1:55

I agree with Fredrik that thinking in terms of doing the same thing to both sides is a better approach. However, I disagree with his arithmetic... :-) –  Jesse Madnick Jan 31 '11 at 2:04

I have to come up with examples where I avoid negative numbers; they haven't reached there yet. –  zobars Jan 31 '11 at 2:18

You shouldn't be using "rules" at all; that is not what mathematics is about, at any level. (This is an enormous pet peeve of mine. There is a commercial floating around Hulu about some kind of online tutoring program where a woman describes to a girl the rule for computing the area of a triangle given its base and height, and then completely fails to draw the diagram that explains why this rule works. It annoys me to no end. (This particular example is also brought up in the infamous Lockhart's lament.))

I have some amount of money in my bank account. When I withdraw $x$ dollars, I have $y$ dollars left. How much money did I have originally? $x + y$. How much money do I have now? $(x + y) - x = y$.

I have some amount of money in my bank account. When I deposit $x$ dollars, I now have $y$ dollars. How much money did I have originally? $y - x$. How much money do I have now? $(y - x) + x = y$.

At some point it is probably a good idea to mention that $x + y$ is the same as $y + x$ (that is, depositing $x$ dollars and then depositing $y$ dollars is the same as depositing $y$ dollars and then depositing $x$ dollars). Then you've covered all of the "cases."

Alternately, a physical analogy ought to work well.

I am some distance away from a wall. When I move $x$ feet towards the wall, I am $y$ feet away from the wall. How far away was I originally from the wall? $x + y$. How far am I away from the wall now? $(x + y) - x = y$.

I am some distance away from a wall. When I move $x$ feet away from the wall, I am $y$ feet away from the wall. How far away was I originally from the wall? $y - x$. How far am I away from the wall now? $(y - x) + x = y$.

-

Hmm.. Why do you say math is not about rules? I agree that I would really like them to learn what's actually going on when I'm teaching them x + ..
= y type of equations, and for single digit integers she's able to do mental math (similar to your bank deposit examples) and come up with answers; it's when I reached double digit numbers that I had to use something more than mental math. What would that be? Am I just trying to teach something that has to wait?? –  zobars Jan 31 '11 at 2:07

@zobars: mathematics is as much about following rules as literature is about writing words. If you'd like a thorough discussion of this, you can read Lockhart's lament, which I've linked to above. –  Qiaochu Yuan Jan 31 '11 at 2:12

Alright Qiaochu, I get the point. I would personally not like to go by rules. But I'm trying to find out the best way to teach an elementary level kid, and I get the idea that I could still do that without forming rules; they would get a better idea with a physical analogy. I'll give it a try and see how it goes.. Thanks for your answer and time. –  zobars Jan 31 '11 at 2:15

@zobars: There's a place for rules and memorization, and a place for understanding. I would say everyone should memorize the multiplication tables, simply because having them at your fingertips is so much more useful than trying to figure them out from scratch every time; but understanding what multiplication is will be more useful than not understanding it. But once you get past the very basics, memorization just tends to get in the way of both understanding and ability to use the material. Some memorization is still useful (e.g., $(\sin x)'=\cos x$), but much less than people think. –  Arturo Magidin Jan 31 '11 at 3:09

@Arturo, can't agree more with you. Just want to tailor this to elementary kids. –  zobars Jan 31 '11 at 19:10

I'm not clear on what the "rule" you say "doesn't work" is... Still... As Qiaochu says, don't do "rules". The key to all of these manipulations is: If two things are equal, and you do the same thing to each of them, the results will also be equal.

So, if $A$ is equal to $B$, then adding $2$ to $A$ will result in the same thing as adding $2$ to $B$: if $A=B$, then $A+2 = B+2$.

If you have $\cdots - x = y$, then you have two things that are equal. Adding $x$ to both will still give you equal things, so

$$(\cdots - x) + x = y + x.$$

Then using the fact that $-x+x = 0$, you get $\cdots = y+x$.

All of the manipulations you propose are instances of this: if you have two equal things, and you do the same thing to both, the results are still equal.

-

Yes Arturo, I'm realizing exactly how I should go about this now. I was just presuming that it would be hard for them to really understand equality, but I guess I don't know until I've tried. Thanks. –  zobars Jan 31 '11 at 2:16

@zobars: You may want to go over the points of equality; they are intuitive enough, so perhaps they won't have any trouble with them. Everything is equal to itself; if $A$ is equal to $B$, then $B$ is equal to $A$; and if $A$ is equal to $B$, and $B$ is equal to $C$, then $A$ is equal to $C$. Also, you may want to delete one of the two comments above. –  Arturo Magidin Jan 31 '11 at 2:55

Yes, that makes sense. Thanks. I finally figured out how to delete a comment... –  zobars Jan 31 '11 at 3:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9026868343353271, "perplexity": 356.1305584731665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928520.68/warc/CC-MAIN-20150521113208-00203-ip-10-180-206-219.ec2.internal.warc.gz"}
http://linear.ups.edu/jsmath/0202/fcla-jsmath-2.02li33.html
### Section MINM  Matrix Inverses and Nonsingular Matrices

From A First Course in Linear Algebra, Version 2.02, © 2004. Licensed under the GNU Free Documentation License. http://linear.ups.edu/

We saw in Theorem CINM that if a square matrix $A$ is nonsingular, then there is a matrix $B$ so that $AB = I_n$. In other words, $B$ is halfway to being an inverse of $A$. We will see in this section that $B$ automatically fulfills the second condition ($BA = I_n$). Example MWIAA showed us that the coefficient matrix from Archetype A had no inverse. Not coincidentally, this coefficient matrix is singular. We'll make all these connections precise now. Not many examples or definitions in this section, just theorems.

#### Subsection NMI: Nonsingular Matrices are Invertible

We need a couple of technical results for starters. Some books would call these minor, but essential, results "lemmas." We'll just call 'em theorems. See Technique LC for more on the distinction.

The first of these technical results is interesting in that the hypothesis says something about a product of two square matrices and the conclusion then says the same thing about each individual matrix in the product. This result has an analogy in the algebra of complex numbers: suppose $\alpha, \beta \in \mathbb{C}$; then $\alpha\beta \neq 0$ if and only if $\alpha \neq 0$ and $\beta \neq 0$. We can view this result as suggesting that the term "nonsingular" for matrices is like the term "nonzero" for scalars.

Theorem NPNT Nonsingular Product has Nonsingular Terms
Suppose that $A$ and $B$ are square matrices of size $n$. The product $AB$ is nonsingular if and only if $A$ and $B$ are both nonsingular.

Proof   (⇒) We'll do this portion of the proof in two parts, each as a proof by contradiction (Technique CD). Assume that $AB$ is nonsingular. Establishing that $B$ is nonsingular is the easier part, so we will do it first, but in reality we will need to know that $B$ is nonsingular when we prove that $A$ is nonsingular.

You can also think of this proof as being a study of four possible conclusions in the table below. One of the four rows must happen (the list is exhaustive). In the proof we learn that the first three rows lead to contradictions, and so are impossible. That leaves the fourth row as a certainty, which is our desired conclusion.

    A            B            Case
    Singular     Singular     1
    Nonsingular  Singular     1
    Singular     Nonsingular  2
    Nonsingular  Nonsingular

Part 1. Suppose $B$ is singular. Then there is a nonzero vector $z$ that is a solution to $\mathcal{LS}(B,\,0)$. So

$$(AB)z = A(Bz) = A0 = 0$$

(Theorem MMA, Theorem SLEMM, Theorem MMZM). Because $z$ is a nonzero solution to $\mathcal{LS}(AB,\,0)$, we conclude that $AB$ is singular (Definition NM). This is a contradiction, so $B$ is nonsingular, as desired.

Part 2. Suppose $A$ is singular. Then there is a nonzero vector $y$ that is a solution to $\mathcal{LS}(A,\,0)$. Now consider the linear system $\mathcal{LS}(B,\,y)$. Since we know $B$ is nonsingular from Case 1, the system has a unique solution (Theorem NMUS), which we will denote as $w$. We first claim $w$ is not the zero vector either. Assuming the opposite, suppose that $w = 0$ (Technique CD). Then

$$y = Bw = B0 = 0$$

(Theorem SLEMM, the hypothesis $w=0$, Theorem MMZM), contrary to $y$ being nonzero. So $w \neq 0$. The pieces are in place, so here we go,

$$(AB)w = A(Bw) = Ay = 0$$

(Theorem MMA, Theorem SLEMM, Theorem SLEMM). So $w$ is a nonzero solution to $\mathcal{LS}(AB,\,0)$, and thus we can say that $AB$ is singular (Definition NM). This is a contradiction, so $A$ is nonsingular, as desired.

(⇐) Now assume that both $A$ and $B$ are nonsingular. Suppose that $x \in \mathbb{C}^n$ is a solution to $\mathcal{LS}(AB,\,0)$. Then

$$0 = (AB)x = A(Bx)$$

(Theorem SLEMM, Theorem MMA). By Theorem SLEMM, $Bx$ is a solution to $\mathcal{LS}(A,\,0)$, and by the definition of a nonsingular matrix (Definition NM), we conclude that $Bx = 0$. Now, by an entirely similar argument, the nonsingularity of $B$ forces us to conclude that $x = 0$. So the only solution to $\mathcal{LS}(AB,\,0)$ is the zero vector and we conclude that $AB$ is nonsingular by Definition NM.

This is a powerful result in the "forward" direction, because it allows us to begin with a hypothesis that something complicated (the matrix product $AB$) has the property of being nonsingular, and we can then conclude that the simpler constituents ($A$ and $B$ individually) then also have the property of being nonsingular. If we had thought that the matrix product was an artificial construction, results like this would make us begin to think twice.

The contrapositive of this result is equally interesting. It says that $A$ or $B$ (or both) is a singular matrix if and only if the product $AB$ is singular. Notice how the negation of the theorem's conclusion ($A$ and $B$ both nonsingular) becomes the statement "at least one of $A$ and $B$ is singular." (See Technique CP.)

Theorem OSIS One-Sided Inverse is Sufficient
Suppose $A$ and $B$ are square matrices of size $n$ such that $AB = I_n$. Then $BA = I_n$.

Proof   The matrix $I_n$ is nonsingular (since it row-reduces easily to $I_n$, Theorem NMRRI). So $A$ and $B$ are nonsingular by Theorem NPNT, so in particular $B$ is nonsingular. We can therefore apply Theorem CINM to assert the existence of a matrix $C$ so that $BC = I_n$. This application of Theorem CINM could be a bit confusing, mostly because of the names of the matrices involved. $B$ is nonsingular, so there must be a "right-inverse" for $B$, and we're calling it $C$. Now

$$BA = (BA)I_n = (BA)(BC) = B(AB)C = BI_nC = BC = I_n$$

(Theorem MMIM, Theorem CINM, Theorem MMA, the hypothesis $AB = I_n$, Theorem MMIM, Theorem CINM), which is the desired conclusion.

So Theorem OSIS tells us that if $A$ is nonsingular, then the matrix $B$ guaranteed by Theorem CINM will be both a "right-inverse" and a "left-inverse" for $A$, so $A$ is invertible and $A^{-1} = B$.

So if you have a nonsingular matrix, $A$, you can use the procedure described in Theorem CINM to find an inverse for $A$. If $A$ is singular, then the procedure in Theorem CINM will fail as the first $n$ columns of $M$ will not row-reduce to the identity matrix. However, we can say a bit more. When $A$ is singular, then $A$ does not have an inverse (which is very different from saying that the procedure in Theorem CINM fails to find an inverse). This may feel like we are splitting hairs, but it's important that we do not make unfounded assumptions. These observations motivate the next theorem.

Theorem NI Nonsingularity is Invertibility
Suppose that $A$ is a square matrix. Then $A$ is nonsingular if and only if $A$ is invertible.

Proof   (⇐) Suppose $A$ is invertible, and suppose that $x$ is any solution to the homogeneous system $\mathcal{LS}(A,\,0)$. Then

$$x = I_n x = (A^{-1}A)x = A^{-1}(Ax) = A^{-1}0 = 0$$

(Theorem MMIM, Definition MI, Theorem MMA, Theorem SLEMM, Theorem MMZM). So the only solution to $\mathcal{LS}(A,\,0)$ is the zero vector, so by Definition NM, $A$ is nonsingular.

(⇒) Suppose now that $A$ is nonsingular. By Theorem CINM we find $B$ so that $AB = I_n$. Then Theorem OSIS tells us that $BA = I_n$. So $B$ is $A$'s inverse, and by construction, $A$ is invertible.

So for a square matrix, the properties of having an inverse and of having a trivial null space are one and the same. Can't have one without the other.

Theorem NME3 Nonsingular Matrix Equivalences, Round 3
Suppose that $A$ is a square matrix of size $n$. The following are equivalent.

1. $A$ is nonsingular.
2. $A$ row-reduces to the identity matrix.
3. The null space of $A$ contains only the zero vector, $N(A) = \{0\}$.
4. The linear system $\mathcal{LS}(A,\,b)$ has a unique solution for every possible choice of $b$.
5. The columns of $A$ are a linearly independent set.
6. $A$ is invertible.

Proof   We can update our list of equivalences for nonsingular matrices (Theorem NME2) with the equivalent condition from Theorem NI.

In the case that $A$ is a nonsingular coefficient matrix of a system of equations, the inverse allows us to very quickly compute the unique solution, for any vector of constants.
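Before the formal statement, a quick numerical illustration may help (a sketch in Python/numpy; the matrix $A$ below is an arbitrary nonsingular example of my own, not one of the text's archetypes).

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # an arbitrary nonsingular matrix
b = np.array([5.0, 10.0])

x = np.linalg.inv(A) @ b          # the unique solution x = A^{-1} b
assert np.allclose(A @ x, b)      # check: x really solves LS(A, b)
```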
Theorem SNCM Solution with Nonsingular Coefficient Matrix Suppose that A is nonsingular. Then the unique solution to ℒS\kern -1.95872pt \left (A,\kern 1.95872pt b\right ) is {A}^{−1}b. Proof   By Theorem NMUS we know already that ℒS\kern -1.95872pt \left (A,\kern 1.95872pt b\right ) has a unique solution for every choice of b. We need to show that the expression stated is indeed a solution (the solution). That’s easy, just “plug it in” to the corresponding vector equation representation (Theorem SLEMM), \eqalignno{ A\left ({A}^{−1}b\right ) & = \left (A{A}^{−1}\right )b & &\text{@(a href="fcla-jsmath-2.02li31.html#theorem.MMA")Theorem MMA@(/a)} & & & & \cr & = {I}_{n}b & &\text{@(a href="fcla-jsmath-2.02li32.html#definition.MI")Definition MI@(/a)} & & & & \cr & = b & &\text{@(a href="fcla-jsmath-2.02li31.html#theorem.MMIM")Theorem MMIM@(/a)} & & & & } Since Ax = b is true when we substitute {A}^{−1}b for x, {A}^{−1}b is a (the!) solution to ℒS\kern -1.95872pt \left (A,\kern 1.95872pt b\right ). #### Subsection UM: Unitary Matrices Recall that the adjoint of a matrix is {A}^{∗} ={ \left (\overline{A}\right )}^{t} (Definition A). Definition UM Unitary Matrices Suppose that U is a square matrix of size n such that {U}^{∗}U = {I}_{ n}. Then we say U is unitary. This condition may seem rather far-fetched at first glance. Would there be any matrix that behaved this way? Well, yes, here’s one. Example UM3 Unitary matrix of size 3 U = \left [\array{ {1+i\over \sqrt{5}} &{3+2\kern 1.95872pt i\over \sqrt{55}} & {2+2i\over \sqrt{22}} \cr {1−i\over \sqrt{5}} &{2+2\kern 1.95872pt i\over \sqrt{55}} & {−3+i\over \sqrt{22}} \cr {i\over \sqrt{5}} &{3−5\kern 1.95872pt i\over \sqrt{55}} &− {2\over \sqrt{22}} } \right ] The computations get a bit tiresome, but if you work your way through the computation of {U}^{∗}U, you will arrive at the 3 × 3 identity matrix {I}_{3}. Unitary matrices do not have to look quite so gruesome. Here’s a larger one that is a bit more pleasing. Example UPM Unitary permutation matrix The matrix P = \left [\array{ 0&1&0&0&0 \cr 0&0&0&1&0 \cr 1&0&0&0&0 \cr 0&0&0&0&1 \cr 0&0&1&0&0 } \right ] is unitary as can be easily checked. Notice that it is just a rearrangement of the columns of the 5 × 5 identity matrix, {I}_{5} (Definition IM). An interesting exercise is to build another 5 × 5 unitary matrix, R, using a different rearrangement of the columns of {I}_{5}. Then form the product PR. This will be another unitary matrix (Exercise MINM.T10). If you were to build all 5! = 5 × 4 × 3 × 2 × 1 = 120 matrices of this type you would have a set that remains closed under matrix multiplication. It is an example of another algebraic structure known as a group since together the set and the one operation (matrix multiplication here) is closed, associative, has an identity ({I}_{5}), and inverses (Theorem UMI). Notice though that the operation in this group is not commutative! If a matrix A has only real number entries (we say it is a real matrix) then the defining property of being unitary simplifies to {A}^{t}A = {I}_{ n}. In this case we, and everybody else, calls the matrix orthogonal, so you may often encounter this term in your other reading when the complex numbers are not under consideration. Unitary matrices have easily computed inverses. They also have columns that form orthonormal sets. Here are the theorems that show us that unitary matrices are not as strange as they might initially appear. Theorem UMI Unitary Matrices are Invertible Suppose that U is a unitary matrix of size n. 
Then U is nonsingular, and {U}^{−1} = {U}^{∗}. Proof   By Definition UM, we know that {U}^{∗}U = {I}_{ n}. The matrix {I}_{n} is nonsingular (since it row-reduces easily to {I}_{n}, Theorem NMRRI). So by Theorem NPNT, U and {U}^{∗} are both nonsingular matrices. The equation {U}^{∗}U = {I}_{ n} gets us halfway to an inverse of U, and Theorem OSIS tells us that then U{U}^{∗} = {I}_{ n} also. So U and {U}^{∗} are inverses of each other (Definition MI). Theorem CUMOS Columns of Unitary Matrices are Orthonormal Sets Suppose that A is a square matrix of size n with columns S = \left \{{A}_{1},\kern 1.95872pt {A}_{2},\kern 1.95872pt {A}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {A}_{n}\right \}. Then A is a unitary matrix if and only if S is an orthonormal set. Proof   The proof revolves around recognizing that a typical entry of the product {A}^{∗}A is an inner product of columns of A. Here are the details to support this claim. \eqalignno{ {\left [{A}^{∗}A\right ]}_{ ij} & ={ \mathop{∑ }}_{k=1}^{n}{\left [{A}^{∗}\right ]}_{ ik}{\left [A\right ]}_{kj} & &\text{@(a href="fcla-jsmath-2.02li31.html#theorem.EMP")Theorem EMP@(/a)} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{n}{\left [{\left (\overline{A}\right )}^{t}\right ]}_{ ik}{\left [A\right ]}_{kj} & &\text{@(a href="fcla-jsmath-2.02li31.html#theorem.EMP")Theorem EMP@(/a)} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{n}{\left [\kern 1.95872pt \overline{A}\kern 1.95872pt \right ]}_{ ki}{\left [A\right ]}_{kj} & &\text{@(a href="fcla-jsmath-2.02li30.html#definition.TM")Definition TM@(/a)} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{n}\overline{{\left [A\right ]}_{ ki}}{\left [A\right ]}_{kj} & &\text{@(a href="fcla-jsmath-2.02li30.html#definition.CCM")Definition CCM@(/a)} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{n}{\left [A\right ]}_{ kj}\overline{{\left [A\right ]}_{ki}} & &\text{@(a href="fcla-jsmath-2.02li69.html#property.CMCN")Property CMCN@(/a)} & & & & \cr & ={ \mathop{∑ }}_{k=1}^{n}{\left [{A}_{ j}\right ]}_{k}\overline{{\left [{A}_{i}\right ]}_{k}} & & & & \cr & = \left \langle {A}_{j},\kern 1.95872pt {A}_{i}\right \rangle & &\text{@(a href="fcla-jsmath-2.02li28.html#definition.IP")Definition IP@(/a)} & & & & } We now employ this equality in a chain of equivalences, \eqalignno{ &\text{$S = \left \{{A}_{1},\kern 1.95872pt {A}_{2},\kern 1.95872pt {A}_{3},\kern 1.95872pt \mathop{\mathop{…}},\kern 1.95872pt {A}_{n}\right \}$ is an orthonormal set} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt \left \langle {A}_{j},\kern 1.95872pt {A}_{i}\right \rangle = \left \{\array{ 0\quad &\text{if $i\mathrel{≠}j$} \cr 1\quad &\text{if $i = j$} } \right . & &\text{@(a href="fcla-jsmath-2.02li28.html#definition.ONS")Definition ONS@(/a)} & & & & \cr &\kern 3.26288pt \mathrel{⇔}\kern 3.26288pt {\left [{A}^{∗}A\right ]}_{ ij} = \left \{\array{ 0\quad &\text{if $i\mathrel{≠}j$} \cr 1\quad &\text{if $i = j$} } \right . 
$$\Leftrightarrow\quad \left[A^{\ast}A\right]_{ij} = \left[I_n\right]_{ij},\ 1 \le i \le n,\ 1 \le j \le n \qquad \text{(Definition IM)}$$

$$\Leftrightarrow\quad A^{\ast}A = I_n \qquad \text{(Definition ME)}$$

$$\Leftrightarrow\quad A \text{ is a unitary matrix} \qquad \text{(Definition UM)}$$

Example OSMC (Orthonormal set from matrix columns). The matrix

$$U = \begin{bmatrix} \frac{1+i}{\sqrt{5}} & \frac{3+2i}{\sqrt{55}} & \frac{2+2i}{\sqrt{22}} \\[4pt] \frac{1-i}{\sqrt{5}} & \frac{2+2i}{\sqrt{55}} & \frac{-3+i}{\sqrt{22}} \\[4pt] \frac{i}{\sqrt{5}} & \frac{3-5i}{\sqrt{55}} & -\frac{2}{\sqrt{22}} \end{bmatrix}$$

from Example UM3 is a unitary matrix. By Theorem CUMOS, its columns

$$\left\{ \begin{bmatrix} \frac{1+i}{\sqrt{5}} \\[2pt] \frac{1-i}{\sqrt{5}} \\[2pt] \frac{i}{\sqrt{5}} \end{bmatrix},\ \begin{bmatrix} \frac{3+2i}{\sqrt{55}} \\[2pt] \frac{2+2i}{\sqrt{55}} \\[2pt] \frac{3-5i}{\sqrt{55}} \end{bmatrix},\ \begin{bmatrix} \frac{2+2i}{\sqrt{22}} \\[2pt] \frac{-3+i}{\sqrt{22}} \\[2pt] -\frac{2}{\sqrt{22}} \end{bmatrix} \right\}$$

form an orthonormal set. You might find checking the six inner products of pairs of these vectors easier than doing the matrix product $U^{\ast}U$. Or, because the inner product is anti-commutative (Theorem IPAC), you only need to check three inner products (see Exercise MINM.T12).

When using vectors and matrices that only have real number entries, orthogonal matrices are those matrices with inverses that equal their transpose. Similarly, the inner product is the familiar dot product. Keep this special case in mind as you read the next theorem.
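As with Example UM3, this orthonormality claim is easy to verify numerically. The following is a NumPy sketch of mine, not part of the original text; since orthogonality is unaffected by which argument the inner product conjugates, NumPy's `vdot` serves for the check, and by Theorem IPAC only the three distinct pairs of columns need testing.

```python
import numpy as np
from itertools import combinations

s5, s55, s22 = np.sqrt(5), np.sqrt(55), np.sqrt(22)
cols = [
    np.array([(1 + 1j) / s5, (1 - 1j) / s5, 1j / s5]),
    np.array([(3 + 2j) / s55, (2 + 2j) / s55, (3 - 5j) / s55]),
    np.array([(2 + 2j) / s22, (-3 + 1j) / s22, -2 / s22]),
]

for v in cols:                      # each column has norm 1
    print(np.isclose(np.vdot(v, v).real, 1.0))   # True
for u, v in combinations(cols, 2):  # distinct columns are orthogonal
    print(np.isclose(np.vdot(u, v), 0.0))        # True
```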
Theorem UMPIP (Unitary Matrices Preserve Inner Products). Suppose that $U$ is a unitary matrix of size $n$ and $u$ and $v$ are two vectors from $\mathbb{C}^n$. Then

$$\left\langle Uu,\, Uv \right\rangle = \left\langle u,\, v \right\rangle \qquad\text{and}\qquad \left\Vert Uv \right\Vert = \left\Vert v \right\Vert$$

Proof.

$$\begin{aligned} \left\langle Uu,\, Uv \right\rangle &= (Uu)^t\, \overline{Uv} && \text{(Theorem MMIP)} \\ &= u^t U^t\, \overline{Uv} && \text{(Theorem MMT)} \\ &= u^t U^t\, \overline{U}\, \overline{v} && \text{(Theorem MMCC)} \\ &= u^t \left(\overline{\overline{U}}\right)^t \overline{U}\, \overline{v} && \text{(Theorem CCT)} \\ &= u^t\, \overline{\left(\overline{U}\right)^t}\, \overline{U}\, \overline{v} && \text{(Theorem MCT)} \\ &= u^t\, \overline{\left(\overline{U}\right)^t U}\, \overline{v} && \text{(Theorem MMCC)} \\ &= u^t\, \overline{U^{\ast}U}\, \overline{v} && \text{(Definition A)} \\ &= u^t\, \overline{I_n}\, \overline{v} && \text{(Definition UM)} \\ &= u^t I_n \overline{v} && \text{(Definition IM)} \\ &= u^t \overline{v} && \text{(Theorem MMIM)} \\ &= \left\langle u,\, v \right\rangle && \text{(Theorem MMIP)} \end{aligned}$$

The second conclusion is just a specialization of the first conclusion.

$$\begin{aligned} \left\Vert Uv \right\Vert &= \sqrt{\left\Vert Uv \right\Vert^2} \\ &= \sqrt{\left\langle Uv,\, Uv \right\rangle} && \text{(Theorem IPN)} \\ &= \sqrt{\left\langle v,\, v \right\rangle} \\ &= \sqrt{\left\Vert v \right\Vert^2} && \text{(Theorem IPN)} \\ &= \left\Vert v \right\Vert \end{aligned}$$

Aside from the inherent interest in this theorem, it makes a bigger statement about unitary matrices. When we view vectors geometrically as directions or forces, the norm equates to a notion of length. If we transform a vector by multiplication with a unitary matrix, then the length (norm) of that vector stays the same. If we consider column vectors with two or three slots containing only real numbers, then the inner product of two such vectors is just the dot product, and this quantity can be used to compute the angle between two vectors. When two vectors are multiplied (transformed) by the same unitary matrix, their dot product is unchanged and their individual lengths are unchanged. This results in the angle between the two vectors remaining unchanged. A "unitary transformation" (matrix-vector products with unitary matrices) thus preserves geometrical relationships among vectors representing directions, forces, or other physical quantities. In the case of a two-slot vector with real entries, this is simply a rotation.
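Here is a quick numerical illustration of Theorem UMPIP (a NumPy sketch of mine, not from the original text), using the unitary matrix of Example UM3 and two arbitrary complex vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

s5, s55, s22 = np.sqrt(5), np.sqrt(55), np.sqrt(22)
U = np.array([
    [(1 + 1j) / s5, (3 + 2j) / s55, (2 + 2j) / s22],
    [(1 - 1j) / s5, (2 + 2j) / s55, (-3 + 1j) / s22],
    [1j / s5,       (3 - 5j) / s55, -2 / s22],
])

print(np.isclose(np.vdot(U @ u, U @ v), np.vdot(u, v)))      # inner product preserved
print(np.isclose(np.linalg.norm(U @ v), np.linalg.norm(v)))  # norm preserved
```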
These sorts of computations are exceedingly important in computer graphics such as games and real-time simulations, especially when increased realism is achieved by performing many such computations quickly. We will see unitary matrices again in subsequent sections (especially Theorem OD) and in each instance, consider the interpretation of the unitary matrix as a sort of geometry-preserving transformation. Some authors use the term isometry to highlight this behavior. We will speak loosely of a unitary matrix as being a sort of generalized rotation.

A final reminder: the terms "dot product," "symmetric matrix" and "orthogonal matrix" used in reference to vectors or matrices with real number entries correspond to the terms "inner product," "Hermitian matrix" and "unitary matrix" when we generalize to include complex number entries, so keep that in mind as you read elsewhere.

#### Subsection READ: Reading Questions

1. Compute the inverse of the coefficient matrix of the system of equations below and use the inverse to solve the system.

$$\begin{aligned} 4x_1 + 10x_2 &= 12 \\ 2x_1 + 6x_2 &= 4 \end{aligned}$$

2. In the reading questions for Section MISLE you were asked to find the inverse of the $3\times 3$ matrix below.

$$\begin{bmatrix} 2 & 3 & 1 \\ 1 & -2 & -3 \\ -2 & 4 & 6 \end{bmatrix}$$

Because the matrix was not nonsingular, you had no theorems at that point that would allow you to compute the inverse. Explain why you now know that the inverse does not exist (which is different than not being able to compute it) by quoting the relevant theorem's acronym.

3. Is the matrix $A$ unitary? Why?

$$A = \begin{bmatrix} \frac{1}{\sqrt{22}}\left(4+2i\right) & \frac{1}{\sqrt{374}}\left(5+3i\right) \\[4pt] \frac{1}{\sqrt{22}}\left(-1-i\right) & \frac{1}{\sqrt{374}}\left(12+14i\right) \end{bmatrix}$$

#### Subsection EXC: Exercises

C40 Solve the system of equations below using the inverse of a matrix.

$$\begin{aligned} x_1 + x_2 + 3x_3 + x_4 &= 5 \\ -2x_1 - x_2 - 4x_3 - x_4 &= -7 \\ x_1 + 4x_2 + 10x_3 + 2x_4 &= 9 \\ -2x_1 - 4x_3 + 5x_4 &= 9 \end{aligned}$$

Contributed by Robert Beezer. Solution below.

M20 Construct an example of a $4\times 4$ unitary matrix.
Contributed by Robert Beezer. Solution below.

M80 Matrix multiplication interacts nicely with many operations. But not always with transforming a matrix to reduced row-echelon form. Suppose that $A$ is an $m\times n$ matrix and $B$ is an $n\times p$ matrix. Let $P$ be a matrix that is row-equivalent to $A$ and in reduced row-echelon form, $Q$ be a matrix that is row-equivalent to $B$ and in reduced row-echelon form, and let $R$ be a matrix that is row-equivalent to $AB$ and in reduced row-echelon form. Is $PQ = R$? (In other words, with nonstandard notation, is $\text{rref}(A)\,\text{rref}(B) = \text{rref}(AB)$?) Construct a counterexample to show that, in general, this statement is false. Then find a large class of matrices where if $A$ and $B$ are in the class, then the statement is true.
Contributed by Mark Hamrick. Solution below.

T10 Suppose that $Q$ and $P$ are unitary matrices of size $n$. Prove that $QP$ is a unitary matrix.
Contributed by Robert Beezer.

T11 Prove that Hermitian matrices (Definition HM) have real entries on the diagonal. More precisely, suppose that $A$ is a Hermitian matrix of size $n$. Then $[A]_{ii} \in \mathbb{R}$, $1 \le i \le n$.
Contributed by Robert Beezer.

T12 Suppose that we are checking if a square matrix of size $n$ is unitary.
Show that a straightforward application of Theorem CUMOS requires the computation of $n^2$ inner products when the matrix is unitary, and fewer when the matrix is not unitary. Then show that this maximum number of inner products can be reduced to $\frac{1}{2}n(n+1)$ in light of Theorem IPAC.
Contributed by Robert Beezer.

#### Subsection SOL: Solutions

C40 Contributed by Robert Beezer. The coefficient matrix and vector of constants for the system are

$$A = \begin{bmatrix} 1 & 1 & 3 & 1 \\ -2 & -1 & -4 & -1 \\ 1 & 4 & 10 & 2 \\ -2 & 0 & -4 & 5 \end{bmatrix} \qquad b = \begin{bmatrix} 5 \\ -7 \\ 9 \\ 9 \end{bmatrix}$$

$A^{-1}$ can be computed by using a calculator, or by the method of Theorem CINM. Then Theorem SNCM says the unique solution is

$$A^{-1}b = \begin{bmatrix} 38 & 18 & -5 & -2 \\ 96 & 47 & -12 & -5 \\ -39 & -19 & 5 & 2 \\ -16 & -8 & 2 & 1 \end{bmatrix} \begin{bmatrix} 5 \\ -7 \\ 9 \\ 9 \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \\ 1 \\ 3 \end{bmatrix}$$

M20 Contributed by Robert Beezer. The $4\times 4$ identity matrix, $I_4$, would be one example (Definition IM). Any of the 23 other rearrangements of the columns of $I_4$ would be a simple, but less trivial, example. See Example UPM.

M80 Contributed by Robert Beezer. Take

$$A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \qquad B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$$

Then $A$ is already in reduced row-echelon form, and by swapping rows, $B$ row-reduces to $A$. So the product of the reduced row-echelon forms of $A$ and $B$ is $AA = A \neq O$. However, the product $AB$ is the $2\times 2$ zero matrix, which is in reduced row-echelon form, and not equal to $AA$. When you get there, Theorem PEEF or Theorem EMDRO might shed some light on why we would not expect this statement to be true in general.

If $A$ and $B$ are nonsingular, then $AB$ is nonsingular (Theorem NPNT), and all three matrices $A$, $B$ and $AB$ row-reduce to the identity matrix (Theorem NMRRI). By Theorem MMIM, the desired relationship is true.
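As a cross-check on the C40 solution, here is a NumPy sketch of mine (not part of the original exercise set) that computes $A^{-1}b$ and also solves the system without forming the inverse at all.

```python
import numpy as np

A = np.array([
    [ 1,  1,  3,  1],
    [-2, -1, -4, -1],
    [ 1,  4, 10,  2],
    [-2,  0, -4,  5],
], dtype=float)
b = np.array([5, -7, 9, 9], dtype=float)

print(np.linalg.inv(A) @ b)   # [ 1. -2.  1.  3.]
print(np.linalg.solve(A, b))  # same answer, without forming the inverse
```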
http://mathhelpforum.com/statistics/90461-number-outcomes-two-dice-three-throws.html
# Thread: Number of outcomes of two dice, three throws

1. ## Number of outcomes of two dice, three throws

What is the maximum number of outcomes of a pair of dice thrown three times? For example:

1st Set: 2, 10, 4
2nd Set: 2, 2, 2
3rd Set: 10, 4, 2

Is there a universal formula I can use to calculate this problem with different variables? (For example, a pair of dice thrown five times, three dice thrown six times, etc.) Any help would be appreciated!

2. Originally Posted by seavari
What is the maximum number of outcomes of a pair of dice thrown three times? Is there a universal formula I can use to calculate this problem with different variables? (For example, a pair of dice thrown five times, three dice thrown six times, etc.)

It is not at all clear as to what your question means. If we toss a pair of dice, then add the showing numbers, we have eleven outcomes: 2 to 12. If we repeat this three times, we get a set of ordered triples. There are then $11^3$ possible triples. Is that the meaning of this question? If not, please try to explain further.

3. Originally Posted by Plato
It is not at all clear as to what your question means. If we toss a pair of dice, then add the showing numbers, we have eleven outcomes: 2 to 12. If we repeat this three times, we get a set of ordered triples. There are then $11^3$ possible triples. Is that the meaning of this question? If not, please try to explain further.

(I'm sorry, it's been a really long time since I last tried to grapple with a math problem. ^^; ) I think you answered my question, though... so there could be 1,331 possible outcomes? (For example, "2, 3, 4" being one possible outcome.) If I repeated four times instead of three, it would be $11^4$? If I used just one die and cast three times, would it be $6^3$?

4. Originally Posted by seavari
I think you answered my question, though... so there could be 1,331 possible outcomes? If I repeated four times instead of three, it would be $11^4$? If I used just one die and cast three times, would it be $6^3$?

Yes, that is correct. If repeated $N$ times instead of three, it would be $11^N$. If just one die is cast $N$ times, it would be $6^N$.

5. You have made my day! Thank you so much! I didn't know it would be so simple... you are a hero!
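Since the count is small, the formula is easy to confirm by brute force. The following Python check is mine, not from the thread; it enumerates all ordered triples of pair-of-dice sums.

```python
from itertools import product

sums = range(2, 13)                     # a pair of dice sums to 2..12: 11 outcomes
triples = list(product(sums, repeat=3)) # ordered triples of sums
print(len(triples), 11 ** 3)            # 1331 1331

# one die cast three times, for comparison
print(len(list(product(range(1, 7), repeat=3))), 6 ** 3)  # 216 216
```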
http://mathhelpforum.com/math/150006-integer-sequences.html
## Integer sequences

Which websites are really useful can be debatable, but I think this one really is: The On-Line Encyclopedia of Integer Sequences™ (OEIS™)
https://www.dsprelated.com/thread/628/prewarping-fc-q-or-both
## Prewarping Fc, Q or both

RBJ's Audio EQ cookbook takes into account only frequency prewarping when the filter is built using case Q for bandwidth. Why not Q prewarping as well with some of those filter types defined there? At KVR I got info that the BLT turns into a kind of II (Impulse Invariant) when both prewarpings are in use, and the BLT loses some nice properties in that case.

Well, I tried the Q prewarping and made a few plots:

Large image - https://postimg.org/image/c74zc1b9z/

Right column plots use EE's Q definition, just for the (original) cookbook. Did not check the situation of phases yet. If my plots are correct then it looks like prewarping Q might bring some improvement, at least for the peak filter magnitude. For the LP filter, the situation is different. BTW, some doctoral thesis showed plots where both prewarpings were used and set to have equal factor. How's that factoring done? Any thoughts?

alright, there are a couple of different issues that may (or may not) need to be de-conflated. i'm gonna try to keep the number of symbols minimized.

$$dB_\text{gain}$$ is the number of dB gain of the peak (for $$dB_\text{gain} > 0$$) or cut (for $$dB_\text{gain} < 0$$). appears to be 6 dB in the plots.

$$A^2 \triangleq 10^{dB_\text{gain}/20}$$ is the linear gain for the boost or cut.

$$f_\text{s}$$ is the sample rate. $$f_0$$ is the frequency of the boost or cut in the same units as the sample rate.

the analog transfer function of a resonant second-order filter (a.k.a. "biquad" or "SOS") is

\begin{align} H(s) &= \frac{b_0 + b_1 s + b_2 s^2 }{1 + a_1 s + a_2 s^2} \\ \\ &= \frac{b_0 + b_1 s + b_2 s^2}{1 + \frac{1}{Q}\frac{s}{2 \pi f_0} + \left(\frac{s}{2 \pi f_0} \right)^2} \\ \end{align}

don't conflate the $$a_k, \ b_k$$ coefficients of the analog prototype with those in the resulting digital filter from the recipe in the cookbook. this is how the "EE definition" of $$Q$$ is defined.

an analog BPF with passband gain of 0 dB has transfer function:

$$H(s) = \frac{ \frac{1}{Q}\frac{s}{2 \pi f_0} }{1 + \frac{1}{Q}\frac{s}{2 \pi f_0} + \left(\frac{s}{2 \pi f_0} \right)^2}$$

note when $$f = f_0$$ then $$H(j 2 \pi f_0) = 1$$. note also that there are two bandedges $$f_+$$ and $$f_-$$ such that $$f_- < f_0 < f_+$$ and

$$f_+ = f_0 \cdot 2^{bw/2}$$

$$f_- = f_0 \cdot 2^{-bw/2}$$

and

$$|H(j 2 \pi f_-)|^2 = |H(j 2 \pi f_+)|^2 = \frac{1}{2}$$

we define those bandedges for the BPF to be the "half-power frequencies", and the $$bw$$ parameter is the bandwidth expressed in octaves. the higher bandedge $$f_+ = 2^{bw} f_-$$ is $$bw$$ octaves higher than the lower bandedge $$f_-$$. turns out that this bandwidth is related to Q as follows:

\begin{align} \frac{1}{Q} &= \frac{2^{bw} - 1}{2^{bw/2}} \\ \\ &= 2 \ \sinh \left( \frac{\ln(2)}{2} \ bw \right) \\ \end{align}

keeping the same definition of $$Q$$, the "bell-shaped" boost/cut parametric EQ takes that BPF, gives it some gain (with sign) and adds it to a wire. it has transfer function:

\begin{align} H(s) &= (A^2 - 1)\frac{ \frac{1}{Q}\frac{s}{2 \pi f_0} }{1 + \frac{1}{Q}\frac{s}{2 \pi f_0} + \left(\frac{s}{2 \pi f_0} \right)^2} \ + 1 \\ \\ &= \frac{1 + \frac{A^2}{Q}\frac{s}{2 \pi f_0} + \left(\frac{s}{2 \pi f_0} \right)^2}{1 + \frac{1}{Q}\frac{s}{2 \pi f_0} + \left(\frac{s}{2 \pi f_0} \right)^2} \\ \end{align}

that is the "traditional" analog parametric EQ.
note that $$|H(j 2 \pi f_0)| = A^2 = 10^{dB_\text{gain}/20}$$

problem is that, leaving $$f_0$$ and $$Q$$ constant, the curve for $$dB_\text{gain} > 0$$ is not a mirror image for $$dB_\text{gain} < 0$$, given the same magnitude $$|dB_\text{gain}|$$. the cut will be much skinnier than the boost. some people might want the cut to exactly undo the boost given all other parameters being the same. so in the cookbook (and in some other papers), a redefinition of $$Q$$ is made. this redefinition is the substitution:

$$Q \ \leftarrow \ A \cdot Q$$

so that makes the transfer function for the parametric EQ (with adjusted $$Q$$):

\begin{align} H(s) &= \frac{1 + \frac{A^2}{A \cdot Q}\frac{s}{2 \pi f_0} + \left(\frac{s}{2 \pi f_0} \right)^2}{1 + \frac{1}{A \cdot Q}\frac{s}{2 \pi f_0} + \left(\frac{s}{2 \pi f_0} \right)^2} \\ \\ &= \frac{1 + \frac{A}{Q}\frac{s}{2 \pi f_0} + \left(\frac{s}{2 \pi f_0} \right)^2}{1 + \frac{1}{A \ Q}\frac{s}{2 \pi f_0} + \left(\frac{s}{2 \pi f_0} \right)^2} \\ \end{align}

note that if $$dB_\text{gain}$$ is replaced with $$-dB_\text{gain}$$, then $$A$$ is replaced with $$\frac{1}{A}$$ and the numerator and denominator are essentially swapped, which causes the frequency response of the cut to mirror that of the boost.

now, using the same relationship between bandwidth $$bw$$ and $$Q$$:

$$\frac{1}{Q} = 2 \ \sinh \left( \frac{\ln(2)}{2} \ bw \right)$$

then the bandedges

$$f_+ = f_0 \cdot 2^{bw/2}$$

$$f_- = f_0 \cdot 2^{-bw/2}$$

satisfy this gain definition:

$$|H(j 2 \pi f_-)| = |H(j 2 \pi f_+)| = A = 10^{dB_\text{gain}/40}$$

which is the "mid-gain frequency" having gain of $$\frac{dB_\text{gain}}{2}$$. so the definition of bandedge gain is a bit different, but at least for the analog prototype, we're keeping the relationship between $$bw$$ and $$Q$$ the same. higher $$Q$$ means tighter $$bw$$.

so, first, before we discuss "warping Q", let's be completely consistent about which Q to compare. for this, i might recommend leaving the "EE definition" of Q behind to not confuse.

so the bilinear transform that compensates for the frequency warping at the "significant frequency" or the resonant frequency $$f_0$$ makes this substitution:

$$\text{normalized }s \triangleq \frac{s}{2 \pi f_0} \ \leftarrow \ \frac{1}{\tan(\pi f_0/f_\text{s})} \ \frac{1 - z^{-1}}{1 + z^{-1}}$$

$$H(z)$$ is the resulting digital filter transfer function after making that substitution for normalized $$s$$. and that does not compensate for a cramped bandwidth because, besides the resonant frequency $$f_0$$, the bandedges $$f_+$$ and $$f_-$$ are also warped by the bilinear transform. measured in octaves, the bandwidth $$BW$$ in the digital filter is:

\begin{align} BW &= \log_2\left( \arctan\left(\frac{\pi f_+}{f_\text{s}}\right) \right) - \log_2\left( \arctan\left(\frac{\pi f_-}{f_\text{s}}\right) \right) \\ \\ &= \log_2\left( \arctan\left(\frac{\pi f_0 2^{bw/2}}{f_\text{s}}\right) \right) - \log_2\left( \arctan\left(\frac{\pi f_0 2^{-bw/2}}{f_\text{s}}\right) \right) \\ \end{align}

you can see that the mapping of $$bw$$ to $$BW$$ is an odd-symmetry function, so it goes through $$0$$ and has no even-order terms in a Maclaurin (a.k.a. Taylor series) expansion. now if you fix $$f_0$$ and plot digital $$BW$$ vs. analog $$bw$$, you will see that $$BW < bw$$, and that is the bandwidth cramping done by frequency warping of the bilinear transform. to uncramp the bandwidth, you would have to solve for $$bw$$ in terms of $$BW$$ and $$f_0$$ and $$f_\text{s}$$. and that is (how shall we say?) a female canine.
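To see the adjusted-$$Q$$ convention in action, here is a small numeric check in Python (mine, not from the thread; the specific $$f_0$$, gain and bandwidth values are arbitrary choices for illustration). It evaluates the analog prototype above at the center frequency and the two bandedges: the center shows the full gain in dB and the bandedges show half of it, as claimed.

```python
import math

f0 = 1000.0                # center frequency, Hz (arbitrary)
db_gain = 6.0              # boost in dB (arbitrary)
bw = 1.0                   # bandwidth in octaves (arbitrary)

A = 10 ** (db_gain / 40)   # note the /40, per the adjusted-Q convention
Q = 1 / (2 * math.sinh(math.log(2) / 2 * bw))

def H(f):
    # adjusted-Q analog parametric EQ, evaluated at s = j*2*pi*f
    sn = 1j * f / f0       # s / (2*pi*f0)
    return (1 + (A / Q) * sn + sn**2) / (1 + sn / (A * Q) + sn**2)

for f in (f0 * 2 ** (-bw / 2), f0, f0 * 2 ** (bw / 2)):
    print(f"{f:7.1f} Hz  {20 * math.log10(abs(H(f))):6.3f} dB")
# bandedges read ~3 dB (= db_gain/2); the center reads the full 6 dB
```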
if you read the cookbook a little, you will notice a slightly adjusted mapping between $$BW$$ and $$Q$$:

$$\frac{1}{Q} = 2 \ \sinh \left( \frac{\ln(2)}{2} \ BW \frac{2 \pi f_0/f_\text{s}}{\sin(2 \pi f_0/f_\text{s})} \right)$$

so the specified digital filter bandwidth $$BW$$ was increased by a factor of $$\frac{2 \pi f_0/f_\text{s}}{\sin(2 \pi f_0/f_\text{s})}$$ as a first-order attempt to compensate for the bandwidth cramping. this result comes from assuming a narrow bandwidth in the first place and evaluating the derivative:

$$\frac{d \ BW}{d \ bw} \Bigg|_{bw = 0} = \frac{\sin(2 \pi f_0/f_\text{s})}{2 \pi f_0/f_\text{s}}$$

the first term of the Maclaurin series is

$$BW \approx \frac{\sin(2 \pi f_0/f_\text{s})}{2 \pi f_0/f_\text{s}} bw$$

compensating (or "prewarping") that first-order cramping of the bandwidth (which is what i think you mean by "prewarping Q") is done in the cookbook and always has been. but it is only a first-order compensation. if you want to do it better, be my guest and try to compute the next, third-order term of the Maclaurin expansion (and flip it around so you have an expression for the approximate analog $$bw$$ in terms of the digital $$BW$$). doing this with a Maclaurin series expansion is the only way i know, because i don't think you will invert the $$bw \to BW$$ mapping above directly.

I did plot the peak filter built through the cookbook's "case BW" instead of "case Q". It gave a better magnitude response. 'Prewarping' Q improved the magnitude response when the filter was built through "case Q". Plots showing the end results for this thread.

It looks like it might be interesting, but:

• Graphs are too small to read
• TLA density is way too high -- what does "turning into II" mean? What do bacon, lettuce and tomato sandwiches have to do with digital signal processing? Who's "EE"?

It may all be totally clear if the graphs were large enough that you could read their text -- but you should still unpack the three-letter abbreviations.

Is there a way to prevent this forum software from shrinking the image (the original is W=1752px × H=1513px)? I'll add a link to some picture-sharing site. The abbreviation 'EE' comes from RBJ's Audio EQ Cookbook (linked in post #1). II means Impulse Invariant.
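To make the cramping concrete, here is a small numeric sketch in Python (mine, not from the thread). It implements the bandedge mapping quoted above, where a bilinear transform sends an analog frequency $$f$$ to a digital frequency proportional to $$\arctan(\pi f/f_\text{s})$$, and compares the cramped digital bandwidth with the cookbook's first-order compensation.

```python
import math

fs = 48000.0   # sample rate (assumed for illustration)
f0 = 10000.0   # center frequency
bw = 2.0       # requested analog prototype bandwidth, octaves

# analog bandedges, bw octaves apart, geometrically centered on f0
f_hi = f0 * 2 ** (bw / 2)
f_lo = f0 * 2 ** (-bw / 2)

# digital bandwidth in octaves after the bilinear transform:
# each bandedge f maps to a digital frequency proportional to atan(pi*f/fs)
BW = math.log2(math.atan(math.pi * f_hi / fs) / math.atan(math.pi * f_lo / fs))

# cookbook's first-order compensation factor (2*pi*f0/fs) / sin(2*pi*f0/fs)
w0 = 2 * math.pi * f0 / fs
comp = w0 / math.sin(w0)

print(f"requested bw              = {bw} octaves")
print(f"cramped digital BW        = {BW:.4f} octaves")  # noticeably < bw
print(f"first-order compensated   = {bw * comp:.4f}")   # feed this in instead
```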
https://cran.irsn.fr/web/packages/gauseR/vignettes/gauseR_examples.html
# Example Analyses in gauseR

## The gauseR package

The gauseR package includes tools and data for analyzing the Gause microcosm experiments, and for fitting Lotka-Volterra models to time series data. Below, we will demonstrate some of the basic features of this package, including several optimization methods, a function for calculating the goodness of fit for models, and an automated wrapper function. Note that the general methods applied here, as well as the form of the differential equations that we use, are described in detail in the Quantitative Ecology textbook by Lehman et al., available at http://hdl.handle.net/11299/204551. The full R package, and accompanying documentation, is available at https://github.com/adamtclark/gauseR.

## Linearized estimates

As a first example, we will use data from Gause's experiments with Paramecium. The plotted data in the figure below shows the logistic growth of Paramecium aurelia in monoculture.

# load package
require(gauseR)
## Loading required package: gauseR

# load data
data(gause_1934_book_f22)

test_goodness_of_fit(observed = logistic_data$Volume_Species2,
                     predicted = prediction_short)
## [1] 0.9701034

Here, the goodness of fit is around 97%, which indicates that the model fits the data very closely.

## The optimizer

As an example for using the optimizer, we use another data set from Gause's Paramecium experiments. This predator-prey experiment shows the interaction between Didinium nasutum and P. caudatum. First, we can try to fit the model using the same three-step process described above.

# load data from competition experiment
data(gause_1934_book_f32)

# keep all data - no separate treatments exist for this experiment
predatorpreydata<-gause_1934_book_f32

# get time-lagged observations for each species
prey_lagged<-get_lag(x = predatorpreydata$Individuals_Prey, time = predatorpreydata$Day)
predator_lagged<-get_lag(x = predatorpreydata$Individuals_Predator, time = predatorpreydata$Day)

# calculate per-capita growth rates
prey_dNNdt<-percap_growth(x = prey_lagged$x, laggedx = prey_lagged$laggedx, dt = prey_lagged$dt)
predator_dNNdt<-percap_growth(x = predator_lagged$x, laggedx = predator_lagged$laggedx, dt = predator_lagged$dt)

# fit linear models to dNNdt, based on average
# abundances between current and lagged time steps
prey_mod_dat<-data.frame(prey_dNNdt=prey_dNNdt,
                         prey=prey_lagged$laggedx,
                         predator=predator_lagged$laggedx)
mod_prey<-lm(prey_dNNdt~prey+predator, data=prey_mod_dat)

predator_mod_dat<-data.frame(predator_dNNdt=predator_dNNdt,
                             predator=predator_lagged$laggedx,
                             prey=prey_lagged$laggedx)
mod_predator<-lm(predator_dNNdt~predator+prey, data=predator_mod_dat)

# model summaries
summary(mod_prey)
##
## Call:
## lm(formula = prey_dNNdt ~ prey + predator, data = prey_mod_dat)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -1.44938 -0.18313 -0.09181  0.61768  0.75852
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  0.99795    0.29658   3.365  0.00718 **
## prey        -0.02061    0.01255  -1.642  0.13154
## predator    -0.06758    0.01831  -3.690  0.00417 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.6755 on 10 degrees of freedom
##   (4 observations deleted due to missingness)
## Multiple R-squared: 0.6355, Adjusted R-squared: 0.5626
## F-statistic: 8.717 on 2 and 10 DF, p-value: 0.006436

summary(mod_predator)
##
## Call:
## lm(formula = predator_dNNdt ~ predator + prey, data = predator_mod_dat)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -0.6943 -0.3762 -0.1436  0.2799  1.3008
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.06931    0.53816  -0.129   0.9011
## predator    -0.02602    0.01965  -1.324   0.2271
## prey         0.03895    0.01324   2.943   0.0216 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.6432 on 7 degrees of freedom
##   (7 observations deleted due to missingness)
## Multiple R-squared: 0.7039, Adjusted R-squared: 0.6194
## F-statistic: 8.322 on 2 and 7 DF, p-value: 0.01412

# extract parameters
# growth rates
r1 <- unname(coef(mod_prey)["(Intercept)"])
r2 <- unname(coef(mod_predator)["(Intercept)"])
# self-limitation
a11 <- unname(coef(mod_prey)["prey"])
a22 <- unname(coef(mod_predator)["predator"])
# effect of Pa on Pc
a12 <- unname(coef(mod_prey)["predator"])
# effect of Pc on Pa
a21 <- unname(coef(mod_predator)["prey"])

# run ODE:
# make parameter vector:
parms <- c(r1, r2, a11, a12, a21, a22)
initialN <- c(4, 0.1)
out <- deSolve::ode(y=initialN, times=seq(1, 17, length=100),
                    func=lv_interaction, parms=parms)
matplot(out[,1], out[,-1], type="l", xlab="time", ylab="N",
        col=c("black","red"), lty=c(1,3), lwd=2, ylim=c(0, 60))
legend("topright", c("Pc", "Dn"), col=c(1,2), lwd=2, lty=c(1,3))

# now, plot in points from data
points(predatorpreydata$Day, predatorpreydata$Individuals_Predator, col=2)
points(predatorpreydata$Day, predatorpreydata$Individuals_Prey, col=1)

Sadly, it seems that the model doesn't fit very well in this case. The reason turns out not to be because the model itself is bad, but rather because the method that we are using for estimating parameters is subject to high error. One way to get around this problem is to use an optimizer to directly fit the predicted dynamics to the observed data. We can do this using the lv_optim function. Note that things get a bit complicated, because we need to set the sign of the parameters (i.e. positive or negative) before we conduct the analysis. This is because a model with too many positive coefficients will lead to unbounded growth, which will ultimately crash the optimizer. For this analysis, we simply take the signs for the parameters from the estimate that we got above from the linear regressions.

# Data for the optimizer:
# Must have a column with time steps labeled 'time', and
# columns for each species in the community.
opt_data<-data.frame(time=predatorpreydata$Day,
                     Prey=predatorpreydata$Individuals_Prey,
                     Predator=predatorpreydata$Individuals_Predator)

# Save the signs of the parameters -
# optimizer works in log space, so these
# must be specified separately
parm_signs<-sign(parms)

# parameter vector for optimizer -
# must be a vector with, first, the
# starting abundances in log space,
# and second, the parameter values,
# again in log space
pars<-c(log(initialN), log(abs(parms)))

# run optimizer
optout<-optim(par = pars, fn = lv_optim, hessian = TRUE,
              opt_data=opt_data, parm_signs=parm_signs)

# extract parameter vector:
parms <- exp(optout$par[-c(1:2)])*parm_signs
initialN <- exp(optout$par[1:2])

out <- deSolve::ode(y=initialN, times=seq(1, 17, length=100),
                    func=lv_interaction, parms=parms)
matplot(out[,1], out[,-1], type="l", xlab="time", ylab="N",
        col=c("black","red"), lty=c(1,3), lwd=2, ylim=c(0, 60))
legend("topright", c("Pc", "Dn"), col=c(1,2), lwd=2, lty=c(1,3))

# now, plot in points from data
points(predatorpreydata$Day, predatorpreydata$Individuals_Predator, col=2)
points(predatorpreydata$Day, predatorpreydata$Individuals_Prey, col=1)

This process is a little complicated, but it seems to fit the data much better.

## The wrapper function

Finally, let's try a simpler example, tracking competitive interactions between P. aurelia and P. caudatum. Rather than going through all the coding involved in fitting the linear models and running the optimizer, we can simply run the gause_wrapper function, which automates all of these steps.

#load competition data
data("gause_1934_science_f02_03")

#subset out data from species grown in mixture
mixturedat<-gause_1934_science_f02_03[gause_1934_science_f02_03$Treatment=="Mixture",]

#extract time and species data
time<-mixturedat$Day
species<-data.frame(mixturedat$Volume_Species1, mixturedat$Volume_Species2)
colnames(species)<-c("P_caudatum", "P_aurelia")

#run wrapper
gause_out<-gause_wrapper(time=time, species=species)

Again, this yields a close fit between observations and model predictions.

## Further examples

Although the optimization method that we employ is very stable, there is one disadvantage. Because parameter signs are fixed, confidence intervals estimated from this procedure are not especially informative (since they by definition cannot cross zero). To address this problem, we also include the ode_prediction function. This function takes in parameter values and returns predictions of species abundances as a single vector. This can be useful for interfacing with other optimization functions, which can be used to produce informative confidence intervals. As an example below, we use the nls function.
#load competition data
data("gause_1934_science_f02_03")

#subset out data from species grown in mixture
mixturedat<-gause_1934_science_f02_03[gause_1934_science_f02_03$Treatment=="Mixture",]

#extract time and species data
time<-mixturedat$Day
species<-data.frame(mixturedat$Volume_Species1, mixturedat$Volume_Species2)
colnames(species)<-c("P_caudatum", "P_aurelia")

#run wrapper
gause_out<-gause_wrapper(time=time, species=species)

# number of species
N<-ncol(gause_out$rawdata)-1
# parameters
pars_full<-c(gause_out$parameter_intervals$mu)

# data.frame for optimization
fittigdata<-data.frame(y=unlist(gause_out$rawdata[,-1]),
                       time=gause_out$rawdata$time,
                       N=N)

yest<-ode_prediction(pars_full, time=fittigdata$time, N=fittigdata$N)
plot(fittigdata$y, yest, xlab="observation", ylab="prediction")
abline(a=0, b=1, lty=2)

#example of how to apply function, using nls()
mod<-nls(y~ode_prediction(pars_full, time, N),
         start = list(pars_full=pars_full),
         data=fittigdata)
summary(mod)
##
## Formula: y ~ ode_prediction(pars_full, time, N)
##
## Parameters:
##             Estimate Std. Error t value Pr(>|t|)
## pars_full1  0.574824   0.682966   0.842 0.405246
## pars_full2  4.533909   3.276244   1.384 0.174472
## pars_full3  1.734873   0.400458   4.332 0.000104 ***
## pars_full4  0.810376   0.218226   3.713 0.000654 ***
## pars_full5 -0.007675   0.002237  -3.431 0.001464 **
## pars_full6 -0.010787   0.002324  -4.640 4.05e-05 ***
## pars_full7 -0.001688   0.001048  -1.611 0.115413
## pars_full8 -0.005377   0.001317  -4.084 0.000220 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 10.1 on 38 degrees of freedom
##
## Number of iterations to convergence: 8
## Achieved convergence tolerance: 3.162e-06

Note, however, that in some cases parameters will still lead to unbounded growth, which will crash most optimizers. Under these circumstances, users will have to be careful and creative - e.g. by setting informative priors in a Bayesian analysis.
https://drexel28.wordpress.com/2011/03/25/representation-theory-the-connection-between-the-square-root-function-and-the-number-of-self-conjugate-irreps-cont/
# Abstract Nonsense

## The Connection Between the Square Root Function and the Number of Self-Conjugate Irreps (Cont.)

Point of post: This post is a continuation of this one.

Ambivalent Conjugacy Classes

Let $G$ be a finite group and let $\mathcal{C}$ be a conjugacy class in $G$. We say that $\mathcal{C}$ is ambivalent if $\iota\left(\mathcal{C}\right)\subseteq\mathcal{C}$, where $\iota:G\to G$ is the inversion map $g\mapsto g^{-1}$. Since $G$, and consequently $\mathcal{C}$, is finite and $\iota:G\to G$ is a bijection, it follows from first principles that being ambivalent is equivalent to $\mathcal{C}=\iota\left(\mathcal{C}\right)$. Now, for notational convenience, for $A\subseteq G$ we denote $\iota\left(A\right)$ by $A^{-1}$, and so ambivalence of $\mathcal{C}$ takes either of the two equivalent forms $\mathcal{C}^{-1}\subseteq\mathcal{C}$ or $\mathcal{C}^{-1}=\mathcal{C}$.

We now show the relationship between the ambivalent conjugacy classes of a finite group $G$ and the number of self-conjugate $\alpha\in\widehat{G}$. Indeed:

Theorem: Let $G$ be a finite group with conjugacy classes $\mathcal{C}_1,\cdots,\mathcal{C}_k$ and let $\mathfrak{a}$ and $\mathfrak{s}$ be the number of ambivalent conjugacy classes in $G$ and the number of self-conjugate $\alpha\in\widehat{G}$ respectively. Then, $\mathfrak{a}=\mathfrak{s}$.

Proof: Note first that

$$\frac{1}{|G|}\sum_{g\in G}\chi^{(\alpha)}(g)^2=|c_\alpha|\quad\quad\mathbf{(1)}$$

Indeed, note that if $\text{Conj}^J_{\rho^{(\alpha)}}$ is any complex conjugate of any irrep $\rho^{(\alpha)}\in\alpha$ then it's clear that $\chi_{\text{Conj}^J_{\rho^{(\alpha)}}}(g)=\overline{\chi^{(\alpha)}(g)}$ for every $g\in G$ (this follows by considering our earlier characterization of complex conjugate maps), and since $\chi^{(\alpha)}(g)=\overline{\overline{\chi^{(\alpha)}(g)}}$ it follows that the left-hand side of $\mathbf{(1)}$ can be considered as $\left\langle \chi^{(\overline{\alpha})},\chi^{(\alpha)}\right\rangle$, where $\overline{\alpha}$ denotes the element of $\widehat{G}$ containing $\text{Conj}^J_{\rho^{(\alpha)}}$, from where the claim follows.

Clearly then, from this and the fact that each $\chi^{(\alpha)}$ is a class function, we see that

$$\begin{aligned}\mathfrak{s} &= \frac{1}{|G|}\sum_{\alpha\in\widehat{G}}\sum_{g\in G}\chi^{(\alpha)}(g)^2\\ &=\sum_{j=1}^{k}\frac{\#\left(\mathcal{C}_j\right)}{|G|}\sum_{\alpha\in\widehat{G}}\chi^{(\alpha)}\left(g_j\right)^2\end{aligned}$$

where $g_j$ is any element of $\mathcal{C}_j$. Recall though that $\chi^{(\alpha)}\left(g_j\right)=\overline{\chi^{(\alpha)}\left(g_j^{-1}\right)}$. Thus, it follows from the second orthogonality relation that

$$\sum_{\alpha\in\widehat{G}}\chi^{(\alpha)}(g_j)^2=\frac{|G|}{\#\left(\mathcal{C}_j\right)}c\left(g_j,g_j^{-1}\right)$$

But, by definition, $c\left(g_j,g_j^{-1}\right)$ is equal to $1$ if and only if $g_j$ is conjugate to $g_j^{-1}$, which is true if and only if $\mathcal{C}_j=\mathcal{C}_{g_j^{-1}}$, which is true if and only if $\mathcal{C}_j=\mathcal{C}_j^{-1}$. Thus, $c\left(g_j,g_j^{-1}\right)$ is one if $\mathcal{C}_j$ is ambivalent and zero otherwise.
So,

$$\begin{aligned}\mathfrak{s} &= \sum_{j=1}^{k}\frac{\#\left(\mathcal{C}_j\right)}{|G|}\sum_{\alpha\in\widehat{G}}\chi^{(\alpha)}\left(g_j\right)^2\\ &= \sum_{j=1}^{k}\frac{\#\left(\mathcal{C}_j\right)}{|G|}\frac{|G|}{\#\left(\mathcal{C}_j\right)}c\left(g_j,g_j^{-1}\right)\\ &= \sum_{j=1}^{k}c\left(g_j,g_j^{-1}\right)\\ &= \sum_{\mathcal{C}_j\text{ is ambivalent}}1\\ &=\mathfrak{a}\end{aligned}$$

from where the conclusion follows. $\blacksquare$
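As a concrete check of the theorem, consider $G = S_3$: all three of its conjugacy classes are ambivalent (every element is conjugate to its inverse), and the computation below confirms $\mathfrak{s} = 3$ as well. This small Python script is an illustrative sketch of mine, not from the original post; it hardcodes the well-known character table of $S_3$ and evaluates the formula for $\mathfrak{s}$ used in the proof.

```python
from fractions import Fraction

# Character table of S_3; conjugacy class sizes: e:1, transpositions:3, 3-cycles:2
class_sizes = [1, 3, 2]
chars = [
    [1,  1,  1],   # trivial representation
    [1, -1,  1],   # sign representation
    [2,  0, -1],   # standard 2-dimensional irrep
]
G = sum(class_sizes)  # |G| = 6

# s = (1/|G|) * sum over irreps alpha and g in G of chi^(alpha)(g)^2
s = sum(
    Fraction(sum(n * chi[j] ** 2 for j, n in enumerate(class_sizes)), G)
    for chi in chars
)
print(s)  # 3 == number of ambivalent classes of S_3 (all of them)
```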
http://www.physicsforums.com/showthread.php?t=673395
## Deriving the electromagnetic field for a point charge

Hi all, I was going through the derivation of the electromagnetic field of point charges by Griffiths (Introduction to Electrodynamics, page 437). I'm missing a minus sign somewhere. The book says that:

$$\nabla(\vec{n}\cdot\vec{v})=\vec{a}(\vec{n}\cdot \nabla t_r)+\vec{v}-\vec{v}(\vec{v}\cdot\nabla t_r)-\vec{n}\times(\vec{a}\times \nabla t_r)+\vec{v}\times (\vec{v}\times \nabla t_r)$$

Using the rule for triple cross products gives:

$$\vec{v}+(v^{2}-\vec{n}\cdot\vec{a})\nabla t_r$$

However, it should be:

$$\vec{v}+(-v^{2}+\vec{n}\cdot\vec{a})\nabla t_r$$

I'm sure I'm missing something. Thanks

Oh, I've got it. I forgot the minus sign in the triple cross product rule.
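For readers following along, here is the missing-sign step written out. With the BAC-CAB rule $\vec{a}\times(\vec{b}\times\vec{c}) = \vec{b}\,(\vec{a}\cdot\vec{c}) - \vec{c}\,(\vec{a}\cdot\vec{b})$ (note the minus sign on the second term), the two triple products above expand as

$$-\vec{n}\times(\vec{a}\times\nabla t_r) = -\vec{a}\,(\vec{n}\cdot\nabla t_r) + (\vec{n}\cdot\vec{a})\,\nabla t_r, \qquad \vec{v}\times(\vec{v}\times\nabla t_r) = \vec{v}\,(\vec{v}\cdot\nabla t_r) - v^2\,\nabla t_r$$

Substituting these into the book's expression cancels the $\vec{a}(\vec{n}\cdot\nabla t_r)$ and $\vec{v}(\vec{v}\cdot\nabla t_r)$ terms, leaving

$$\nabla(\vec{n}\cdot\vec{v}) = \vec{v} + \left(\vec{n}\cdot\vec{a} - v^{2}\right)\nabla t_r$$

as expected.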
https://www.lessonplanet.com/teachers/pythagoras-and-distance-formula
Pythagoras and Distance Formula

In this Pythagoras and Distance Formula activity, students read real-life story problems, determine needed information, write equations, and then solve the problem. Students use the Pythagorean Theorem and Distance Formula to compute answers. This three-page activity contains 5 multi-step problems.
https://puzzling.stackexchange.com/questions/68353/15-squares-into-3-stars
# 15 squares into 3 stars

You are given $15$ unit squares, as shown below:

You would like to create $3$ six-sided stars, something like those shown below, using only the given $15$ squares:

How is this possible, if it is possible at all?
https://www.aimsciences.org/article/doi/10.3934/dcds.2003.9.1465
# American Institute of Mathematical Sciences

November 2003, 9(6): 1465-1492. doi: 10.3934/dcds.2003.9.1465

## Heteroclinic foliation, global oscillations for the Nicholson-Bailey model and delay of stability loss

1 Department of Mathematics, National Tsing Hua University, Hsinchu 300, Taiwan
2 Department of Mathematics, National Changhua University of Education, Changhua 500, Taiwan
3 Department of Mathematics, University of Kansas, Lawrence, KS 66045, United States
4 Department of Mathematics, Nizhny Novgorod State University, Nizhny Novgorod, Russian Federation

Received September 2002; Revised June 2003; Published September 2003

This paper is concerned with the classical Nicholson-Bailey model [15], defined by $f_\lambda(x,y)=(y(1-e^{-x}),\ \lambda y e^{-x})$. We show that for $\lambda=1$ a heteroclinic foliation exists and for $\lambda>1$ global strict oscillations take place. The important phenomenon of delay of stability loss is established for a general class of discrete dynamical systems, and it is applied to the study of nonexistence of periodic orbits for the Nicholson-Bailey model.

Citation: Sze-Bi Hsu, Ming-Chia Li, Weishi Liu, Mikhail Malkin. Heteroclinic foliation, global oscillations for the Nicholson-Bailey model and delay of stability loss. Discrete & Continuous Dynamical Systems, 2003, 9 (6): 1465-1492. doi: 10.3934/dcds.2003.9.1465
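For readers who want to see the dynamics the abstract describes, here is a minimal iteration sketch of mine (Python; the initial condition and the value of $\lambda$ are arbitrary choices for illustration, not from the paper).

```python
import math

def nicholson_bailey(x, y, lam):
    """One step of f_lambda(x, y) = (y(1 - e^{-x}), lambda * y * e^{-x})."""
    return y * (1.0 - math.exp(-x)), lam * y * math.exp(-x)

x, y = 1.0, 2.0   # arbitrary positive initial condition (my choice)
lam = 2.0         # lambda > 1: the oscillatory regime from the abstract
for n in range(15):
    x, y = nicholson_bailey(x, y, lam)
    print(f"{n:2d}  x = {x:10.4f}  y = {y:10.4f}")
```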
https://oeis.org/wiki/Absolute_primes
# Absolute primes

Absolute primes in a given base $b$ are prime numbers which are still prime numbers after any permutation whatsoever of their base $b$ digits. The base $b$ repunit primes are a subset of the base $b$ absolute primes. The base $b$ absolute primes are in turn a subset of the permutable primes.
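To make the definition concrete, here is a small brute-force test of the base-10 case. This script is an illustrative sketch of mine, not from the wiki page; `is_absolute_prime` and its helper are hypothetical names for this sketch.

```python
from itertools import permutations

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_absolute_prime(n, base=10):
    # still prime after every permutation of its base-`base` digits?
    digits = []
    m = n
    while m:
        digits.append(m % base)
        m //= base
    return all(
        is_prime(sum(d * base**i for i, d in enumerate(perm)))
        for perm in set(permutations(digits))
    )

print([n for n in range(2, 400) if is_absolute_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, 97, 113, 131, 199, 311, 337, 373]
```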
https://squanderingti.me/blog/2019/05/14/i-have-a-binary-but-where-is-main.html
Lately I’ve been working on a project to explore the x86-64 instruction set. Part of this exploration requires using a disassembler to get the actual instructions that comprise a piece of software. That leads to the question: “Where exactly are the instructions that make up my program?”

Let’s say you have a program written in C like the following:

    #include <stdio.h>

    int main() {
        printf("Hello World!\n");
        return 0;
    }

Write the program, compile it, run it. The first leading question is: what exactly is a.out anyway? In most circles you’ll hear it called a binary, but that’s not the whole picture. Specifically, it’s an Executable and Linkable Format file, also called an ELF file. Amazing write-ups exist on the structure of this file, so if you want to learn all the nitty-gritty details I’d highly suggest the wiki page. For our purposes the important thing to know is that the file contains very informative headers with all the offsets needed to physically locate the bits we care about.

The main part of our program lives within a section called .text. There are a few different ways to find the physical offset and size of the .text section. One way is to use the readelf util.

readelf output of the sections in a.out

Here we can see that the offset for .text is 0x530 with a size of 0x1a2. This, however, includes all the instructions that the compiler designated as our program. It includes a lot of additional boilerplate to set up the environment and stack that executes before main. If we wanted the opcodes for just main we would need to look inside the symbol table to find the specific symbol’s offset and size. One way to get that information is to use objdump.

objdump output of the symbol main

Here we can see that main is defined in the .text section with offset 0x63a and with size 0x17. We can use a different trick with gdb to confirm these offsets and lengths are correct. Here’s an example using gdb to disassemble a particular symbol so we can see the individual instructions.

gdb confirming the offsets

As an interesting side note, we can look at the 3rd instruction, lea (which is ‘load effective address’ if you aren’t used to reading assembly). This is going to load the effective address 0x9f+%rip (%rip is a register), which gdb says is 0x6e4 via the comment on the right. If we return to the same objdump utility we used above and dump the data section of the file, we can confirm that’s the address of “Hello World!”

objdump output of the .rodata section
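For completeness, here is a scripted version of the same lookups -- a sketch assuming the third-party pyelftools package is installed (the post itself only uses readelf, objdump, and gdb):

```python
from elftools.elf.elffile import ELFFile  # pip install pyelftools (assumed)

with open('a.out', 'rb') as f:
    elf = ELFFile(f)

    # .text section header: file offset and size, as readelf showed.
    text = elf.get_section_by_name('.text')
    print('.text offset:', hex(text['sh_offset']), 'size:', hex(text['sh_size']))

    # Symbol table entry for main, as objdump showed. Note st_value is a
    # virtual address; for this simple binary it coincides with the value
    # objdump prints for main.
    symtab = elf.get_section_by_name('.symtab')
    main = symtab.get_symbol_by_name('main')[0]
    print('main value:', hex(main['st_value']), 'size:', main['st_size'])
```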
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5994607210159302, "perplexity": 1103.571577063869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00207.warc.gz"}
http://clay6.com/qa/52083/a-solution-is-to-be-kept-between-68-f-and-77-f-what-is-the-range-in-to-temp
A solution is to be kept between $68^{\circ}F$ and $77^{\circ}F$. What is the range in temperature in degrees Celsius (C) if the Celsius/Fahrenheit (F) conversion formula is given by $F = \frac{9}{5}C + 32$?

1 Answer

Toolbox:
• The same quantity can be added to (or subtracted from) both sides of an inequality without changing the sign of the inequality.
• Both sides of an inequality can be multiplied or divided by the same positive quantity without changing the sign of the inequality.
• If both sides of an inequality are multiplied or divided by the same negative quantity, the sign of the inequality is reversed, i.e. '>' changes to '<' and '<' changes to '>'.

Step 1: Since the solution is to be kept between $68^{\circ}F$ and $77^{\circ}F$, we have $68 < F < 77$. Substituting $F = \frac{9}{5}C + 32$, we get
$$68 < \frac{9}{5}C + 32 < 77$$

Step 2: Subtracting 32 from all parts of the inequality:
$$68 - 32 < \frac{9}{5}C < 77 - 32 \;\Rightarrow\; 36 < \frac{9}{5}C < 45$$
Multiplying all parts by the positive number $\frac{5}{9}$:
$$36 \times \frac{5}{9} < \frac{9}{5} \times \frac{5}{9}\,C < 45 \times \frac{5}{9} \;\Rightarrow\; 20 < C < 25$$

Step 3: The required range of temperature is between $20^{\circ}C$ and $25^{\circ}C$.
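A quick numeric check of the result (a sketch; the conversion function simply inverts the given formula):

```python
def fahrenheit_to_celsius(f):
    # Invert F = (9/5)C + 32
    return (f - 32) * 5 / 9

print(fahrenheit_to_celsius(68), fahrenheit_to_celsius(77))  # -> 20.0 25.0
```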
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9809672832489014, "perplexity": 1324.0283207337407}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812938.85/warc/CC-MAIN-20180220110011-20180220130011-00521.warc.gz"}
https://keio.pure.elsevier.com/ja/publications/thermodynamic-bounds-on-precision-in-ballistic-multiterminal-tran
# Thermodynamic Bounds on Precision in Ballistic Multiterminal Transport

Kay Brandner, Taro Hanazato, Keiji Saito

50 citations (Scopus)

## Abstract

For classical ballistic transport in a multiterminal geometry, we derive a universal trade-off relation between total dissipation and the precision at which particles are extracted from individual reservoirs. Remarkably, this bound becomes significantly weaker in the presence of a magnetic field breaking time-reversal symmetry. By working out an explicit model for chiral transport enforced by a strong magnetic field, we show that our bounds are tight. Beyond the classical regime, we find that, in quantum systems far from equilibrium, the correlated exchange of particles makes it possible to exponentially reduce the thermodynamic cost of precision.

Original language: English. Article number 090601, Physical Review Letters, 120(9). https://doi.org/10.1103/PhysRevLett.120.090601. Published 2 March 2018.

## ASJC Scopus subject areas

- Physics and Astronomy (general)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8523917198181152, "perplexity": 2657.9836484889415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057857.27/warc/CC-MAIN-20210926083818-20210926113818-00402.warc.gz"}
http://mathhelpforum.com/advanced-statistics/132802-finding-sample-standard-deviation.html
# Math Help - Finding sample standard deviation

1. ## Finding sample standard deviation

If a 90% confidence interval for $\sigma^2$ is reported to be (51.47, 261.90), what is the value of the sample standard deviation?

Attempt: $51.47<\sigma^2<261.90$

So $\frac{(n-1)s^2}{\chi_{.95,n-1}^2}=51.47$ and $\frac{(n-1)s^2}{\chi_{.05,n-1}^2}=261.90$

Then $s=\sqrt{\frac{51.47\,\chi_{.95,n-1}^2}{n-1}}=\sqrt{\frac{261.90\,\chi_{.05,n-1}^2}{n-1}}$

$5.0884=\frac{\chi_{.95,n-1}^2}{\chi_{.05,n-1}^2}$

So this looks like it should have an F-distribution with the same numerator and denominator degrees of freedom, which should help me figure out what those degrees of freedom are. I can't figure out how to do this. Or am I way off?

2. You need n, and then just look up either of those chi-square percentiles and solve for s. If you really don't have n, just go down the tables and see which of those ratios of percentiles matches.

3. Which F-distribution table should I use? In other words, how do I infer the alpha level from my work so far? Is it just the same as for $\sigma^2$, which is .10?
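One way to carry out the table search suggested in reply 2 -- a sketch assuming scipy is available; the 0.95/0.05 percentiles follow from the 90% confidence level:

```python
from scipy.stats import chi2

target = 261.90 / 51.47  # ratio of the interval endpoints, about 5.0884

# Scan candidate sample sizes for a matching ratio of chi-square percentiles,
# then recover s from either interval endpoint.
for n in range(2, 200):
    df = n - 1
    ratio = chi2.ppf(0.95, df) / chi2.ppf(0.05, df)
    if abs(ratio - target) < 0.01:
        s = (51.47 * chi2.ppf(0.95, df) / df) ** 0.5
        print("n =", n, " s =", round(s, 2))
```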
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9275954961776733, "perplexity": 399.5705335378557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678694108/warc/CC-MAIN-20140313024454-00098-ip-10-183-142-35.ec2.internal.warc.gz"}
http://www.chemeurope.com/en/encyclopedia/Titration.html
# Titration

Titration is a common laboratory method of quantitative chemical analysis that can be used to determine the unknown concentration of a known reactant. Because volume measurements play a key role in titration, it is also known as volumetric analysis. A reagent, called the titrant, of known concentration (a standard solution) and volume is used to react with a solution of the analyte, whose concentration is not known in advance. Using a calibrated burette to add the titrant, it is possible to determine the exact amount that has been consumed when the endpoint is reached. The endpoint is the point at which the titration is complete, as determined by an indicator (see below). This is ideally the same volume as the equivalence point - the volume of added titrant at which the number of moles of titrant is equal to the number of moles of analyte, or some multiple thereof (as in polyprotic acids). In the classic strong acid-strong base titration, the endpoint is the point at which the pH of the reactant is just about equal to 7, and often when the solution permanently changes color due to an indicator. There are, however, many different types of titrations (see below).

Many methods can be used to indicate the endpoint of a reaction; titrations often use visual indicators (the reactant mixture changes colour). In simple acid-base titrations a pH indicator may be used, such as phenolphthalein, which becomes pink when a certain pH (about 8.2) is reached or exceeded. Another example is methyl orange, which is red in acids and yellow in alkali solutions.

Not every titration requires an indicator. In some cases, either the reactants or the products are strongly coloured and can serve as the "indicator". For example, an oxidation-reduction titration using potassium permanganate (pink/purple) as the titrant does not require an indicator. When the titrant is reduced, it turns colourless. After the equivalence point, there is excess titrant present. The equivalence point is identified from the first faint pink colour that persists in the solution being titrated.

Due to the logarithmic nature of the pH curve, the transitions are, in general, extremely sharp; and, thus, a single drop of titrant just before the endpoint can change the pH significantly, leading to an immediate colour change in the indicator. There is a slight difference between the change in indicator color and the actual equivalence point of the titration. This error is referred to as an indicator error, and it is indeterminate.

## History and etymology

The word "titration" comes from the Latin word titulus, meaning inscription or title. The French word titre, also from this origin, means rank. Titration, by definition, is the determination of the rank or concentration of a solution. The origins of volumetric analysis are in late-18th-century French chemistry. Francois Antoine Henri Descroizilles developed the first burette (which looked more like a graduated cylinder) in 1791. Joseph Louis Gay-Lussac developed an improved version of the burette that included a side arm, and coined the terms "pipette" and "burette" in an 1824 paper on the standardization of indigo solutions.
A major breakthrough in the methodology and popularization of volumetric analysis was due to Karl Friedrich Mohr, who redesigned the burette by placing a clamp and a tip at the bottom, and wrote the first textbook on the topic, Lehrbuch der chemisch-analytischen Titrirmethode (Textbook of analytical-chemical titration methods), published in 1855.[1]

## Preparing a sample for titration

In a titration, both titrant and analyte are required to be in aqueous, or solution, form. If the sample is not a liquid or solution, it must be dissolved. If the analyte is very concentrated in the sample, it might be useful to dilute the sample. Although the vast majority of titrations are carried out in aqueous solution, other solvents such as glacial acetic acid or ethanol (in petrochemistry) are used for special purposes.

A measured amount of the sample can be placed in the flask and then dissolved or diluted. The mathematical result of the titration can be calculated directly from the measured amount. Sometimes the sample is dissolved or diluted beforehand, and a measured amount of the solution is used for titration. In this case the dissolving or diluting must be done accurately with a known coefficient, because the mathematical result of the titration must be multiplied by this factor.

Many titrations require buffering to maintain a certain pH for the reaction. Therefore, buffer solutions are added to the reactant solution in the flask.

Some titrations require "masking" of a certain ion. This can be necessary when two reactants in the sample would react with the titrant and only one of them must be analysed, or when the reaction would be disturbed or inhibited by this ion. In this case another solution is added to the sample, which "masks" the unwanted ion (for instance by a weak binding with it or even forming a solid insoluble substance with it).

Some redox reactions may require heating the solution with the sample and titrating while the solution is still hot (to increase the reaction rate).

## Procedure

A typical titration begins with a beaker or Erlenmeyer flask containing a precise volume of the reactant and a small amount of indicator, placed underneath a burette containing the reagent. By controlling the amount of reagent added to the reactant, it is possible to detect the point at which the indicator changes colour. As long as the indicator has been chosen correctly, this should also be the point where the reactant and reagent neutralise each other, and, by reading the scale on the burette, the volume of reagent can be measured. As the concentration of the reagent is known, the number of moles of reagent can be calculated (since concentration = moles / volume). Then, from the chemical equation involving the two substances, the number of moles present in the reactant can be found. Finally, by dividing the number of moles of reactant by its volume, the concentration is calculated.

## Titration curves

Titrations are often recorded on titration curves, whose compositions are generally identical: the independent variable is the volume of the titrant, while the dependent variable is the pH of the solution (which changes depending on the composition of the two solutions). The equivalence point is a significant point on the graph (the point at which all of the starting solution, usually an acid, has been neutralized by the titrant, usually a base).
It can be calculated precisely by finding the second derivative of the titration curve and computing the points of inflection (where the graph changes concavity); however, in most cases, simple visual inspection of the curve will suffice (in the curve given to the right, both equivalence points are visible, after roughly 15 and 30 mL of NaOH solution has been titrated into the oxalic acid solution). To calculate the pKa values, one must find the volume at the half-equivalence point, that is, where half the amount of titrant has been added to form the next compound (here, sodium hydrogen oxalate, then disodium oxalate). Halfway between each equivalence point, at 7.5 mL and 22.5 mL, the pH observed was about 1.5 and 4, giving the pKa values.

In monoprotic acids, the point halfway between the beginning of the curve (before any titrant has been added) and the equivalence point is significant: at that point, the concentrations of the acid and of its conjugate base are equal. Therefore, the Henderson-Hasselbalch equation can be solved in this manner:

$pH = pK_a + \log \left( \frac{[\mbox{base}]}{[\mbox{acid}]} \right)$

$pH = pK_a + \log(1)$

$pH = pK_a$

Therefore, one can easily find the acid dissociation constant of the monoprotic acid by finding the pH of the point halfway between the beginning of the curve and the equivalence point, and solving the simplified equation. In the case of the sample curve, the $K_a$ would be approximately $1.78\times10^{-5}$ from visual inspection (the actual $K_{a2}$ is $1.7\times10^{-5}$).

For polyprotic acids, calculating the acid dissociation constants is only marginally more difficult: the first acid dissociation constant can be calculated the same way as it would be calculated in a monoprotic acid. The second acid dissociation constant, however, is found from the point halfway between the first equivalence point and the second equivalence point (and so on for acids that release more than two protons, such as phosphoric acid).

## Types of titrations

Titrations can be classified by the type of reaction. Different types of titration reaction include:

• Acid-base titrations are based on the neutralization reaction between the analyte and an acidic or basic titrant. These most commonly use a pH indicator, a pH meter, or a conductance meter to determine the endpoint.
• Redox titrations are based on an oxidation-reduction reaction between the analyte and titrant. These most commonly use a potentiometer or a redox indicator to determine the endpoint. Frequently either the reactants or the titrant have a colour intense enough that an additional indicator is not needed.
• Complexometric titrations are based on the formation of a complex between the analyte and the titrant. The chelating agent EDTA is very commonly used to titrate metal ions in solution. These titrations generally require specialized indicators that form weaker complexes with the analyte. A common example is Eriochrome Black T for the titration of calcium and magnesium ions.
• A form of titration can also be used to determine the concentration of a virus or bacterium. The original sample is diluted (in some fixed ratio, such as 1:1, 1:2, 1:4, 1:8, etc.) until the last dilution does not give a positive test for the presence of the virus. This value, the titre, may be based on TCID50, EID50, ELD50, LD50 or pfu. This procedure is more commonly known as an assay.
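Before moving on to endpoint detection, the arithmetic described in the Procedure section above can be made concrete with a tiny sketch (the numbers and the 1:1 stoichiometry are hypothetical):

```python
titrant_conc = 0.100   # mol/L NaOH (hypothetical standard solution)
titrant_vol  = 0.0150  # L delivered from the burette at the endpoint
analyte_vol  = 0.0250  # L of acid originally in the flask

moles_titrant = titrant_conc * titrant_vol   # moles = concentration x volume
moles_analyte = moles_titrant                # 1:1 reaction stoichiometry assumed

print(moles_analyte / analyte_vol)           # analyte concentration -> 0.06 mol/L
```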
## Measuring the endpoint of a titration

Main article: Endpoint (chemistry)

Different methods to determine the endpoint include:

• pH indicator: This is a substance that changes colour in response to a chemical change. An acid-base indicator (e.g., phenolphthalein) changes colour depending on the pH. Redox indicators are also frequently used. A drop of indicator solution is added to the titration at the start; when the colour changes the endpoint has been reached.
• A potentiometer can also be used. This is an instrument that measures the electrode potential of the solution. These are used for titrations based on a redox reaction; the potential of the working electrode will suddenly change as the endpoint is reached.
• pH meter: This is a potentiometer that uses an electrode whose potential depends on the amount of H+ ion present in the solution. (This is an example of an ion-selective electrode.) This allows the pH of the solution to be measured throughout the titration. At the endpoint, there will be a sudden change in the measured pH. It can be more accurate than the indicator method, and is very easily automated.
• Conductance: The conductivity of a solution depends on the ions that are present in it. During many titrations, the conductivity changes significantly. (For instance, during an acid-base titration, the H+ and OH- ions react to form neutral H2O. This changes the conductivity of the solution.) The total conductance of the solution depends also on the other ions present in the solution (such as counter ions). Not all ions contribute equally to the conductivity; this also depends on the mobility of each ion and on the total concentration of ions (ionic strength). Thus, predicting the change in conductivity is harder than measuring it.
• Colour change: In some reactions, the solution changes colour without any added indicator. This is often seen in redox titrations, for instance, when the different oxidation states of the product and reactant produce different colours.
• Precipitation: If the reaction forms a solid, then a precipitate will form during the titration. A classic example is the reaction between Ag+ and Cl- to form the very insoluble salt AgCl. This usually makes it difficult to determine the endpoint precisely. As a result, precipitation titrations often have to be done as "back" titrations (see below).
• An isothermal titration calorimeter uses the heat produced or consumed by the reaction to determine the endpoint. This is important in biochemical titrations, such as the determination of how substrates bind to enzymes.
• Thermometric titrimetry is an extraordinarily versatile technique. This is differentiated from calorimetric titrimetry by the fact that the heat of the reaction (as indicated by temperature rise or fall) is not used to determine the amount of analyte in the sample solution. Instead, the endpoint is determined by the rate of temperature change.
• Spectroscopy can be used to measure the absorption of light by the solution during the titration, if the spectrum of the reactant, titrant or product is known. The relative amounts of the product and reactant can be used to determine the endpoint.
• Amperometry can be used as a detection technique (amperometric titration). The current due to the oxidation or reduction of either the reactants or products at a working electrode will depend on the concentration of that species in solution. The endpoint can then be detected as a change in the current.
This method is most useful when the excess titrant can be reduced, as in the titration of halides with Ag+. (This is handy also in that it ignores precipitates.)

### Other terms

The term back titration is used when a titration is done "backwards": instead of titrating the original analyte, one adds a known excess of a standard reagent to the solution, then titrates the excess. A back titration is useful if the endpoint of the reverse titration is easier to identify than the endpoint of the normal titration. Back titrations are also useful if the reaction between the analyte and the titrant is very slow.

## Particular uses

• As applied to biodiesel, titration is the act of determining the acidity of a sample of waste vegetable oil (WVO) by the dropwise addition of a known base to the sample while testing with pH paper for the desired neutral pH = 7 reading. By knowing how much base neutralizes an amount of WVO, we discern how much base to add to the entire batch.
• Titrations are used in the petrochemical and food industries to characterize oils, fats, biodiesel and similar substances.

## References

1. ^ Louis Rosenfeld. Four Centuries of Clinical Chemistry. CRC Press, 1999, p. 72-75.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8634171485900879, "perplexity": 1492.5783458182855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812873.22/warc/CC-MAIN-20180220030745-20180220050745-00741.warc.gz"}
http://edboost.org/summer-enrichment/mathboost-science-k2
# MathBoost! + Hands-on Biology

One of the secrets of math success -- and even math love -- is the ability to work with numbers quickly and easily. No one likes work that takes too long or demands too much effort. So, asking a student to do a complex operation, like long division or a complicated equation that requires many steps, feels impossible for students who are still using their fingers to add and subtract or consulting a chart for times tables. MathBoost uses fun ways -- games, races, contests -- to push kids to become fast and fluent in their grade-level math facts (and above-grade-level math facts if we can get there). For many kids, learning that math can be pain-free is a revelation. Providing that revelation in the beginning of the school years can make a huge difference in how those kids look at math. So, we take mornings to try to grow our students' excitement for and competence in math.

Then, after lunch, we will dig into hands-on biology. We're going to use microscopes to explore cultures and micro-organisms, as well as watching zebrafish eggs and babies develop from fertilization through cell division. We will also be studying anatomy and body structures and exploring animals through dissection (always a crowd favorite!). This is the ideal intro hands-on biology class for any budding doctor, veterinarian, or life scientist!

Sign up using the links below. You can also sign up for aftercare (3 pm - 6 pm). You can sign up for the week ($100) or for a few days (you can let us know which days on your form, or closer to the date of the camp).

Grades: 1st, 2nd, 3rd
Cost: $350 (Scholarships available)
Time: 9 am - 3 pm (Monday - Friday)
Sessions: Monday, July 20, 2020 to Friday, July 24, 2020
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16982100903987885, "perplexity": 3807.471925239564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538226.66/warc/CC-MAIN-20210123160717-20210123190717-00054.warc.gz"}
https://competitive-exam.in/questions/discuss/the-force-of-attraction-or-repulsion-between-two-2
# The force of attraction or repulsion between two charged bodies is directly proportional to the product of the charges and inversely proportional to the square of the distance between them. This was propounded by

• Coulomb
• Gilbert
• Volta
• Rutherford
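For reference, the law being described is Coulomb's law, which in symbols reads

$$F = k\,\frac{q_1 q_2}{r^2},$$

where $q_1$ and $q_2$ are the two charges, $r$ is the distance between them, and $k$ is the electrostatic constant.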
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8751028180122375, "perplexity": 240.6836962855478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540502120.37/warc/CC-MAIN-20191207210620-20191207234620-00198.warc.gz"}
http://tex.stackexchange.com/questions/54795/question-mark-shown-instead-of-citation-using-harvard
# Question mark shown instead of citation using harvard

I am getting [?] when citing using harvard. If I use \citeasnoun I get the citation. How do I solve this problem? I am using TeXnicCenter with MikTeX 2.9 and have installed the harvard package.

    \documentclass[a4paper,12pt]{article}
    \usepackage{float,epsfig}
    \usepackage{pifont,epsfig}
    \usepackage[dvips]{color}
    \usepackage{graphicx,color}
    %\usepackage{harvard}
    \usepackage{natbib}
    \usepackage[sort]{cite}
    %\usepackage{ifpdf}
    \usepackage{cleveref}
    \newtheorem{theorem}{Theorem}
    \newtheorem{lemma}[theorem]{Lemma}
    \newtheorem{example}[theorem]{Example}
    \newtheorem{remark}[theorem]{Remark}
    \newtheorem{definition}[theorem]{Definition}
    \newtheorem{proposition}[theorem]{Proposition}
    \newtheorem{corollary}[theorem]{Corollary}
    \usepackage{amsfonts}
    \renewcommand{\baselinestretch}{1.0}
    \parindent=0pt % Do not indent paragraphs
    \setlength{\hoffset}{-0.5cm}
    \setlength{\voffset}{-2.0cm}
    \setlength{\textheight}{9.3in}
    \setlength{\textwidth}{5.8in}
    \begin{document}
    ------
    \bibliographystyle{agsm}
    \bibliography{ref}
    %\input{bio_2}
    \end{document}

When I use natbib I get it correct. I also wish to replace & with and when I use \citet in natbib.

-

It seems you want sorted citations and therefore load both the harvard and cite packages, but apparently the latter is not compatible with the former. I suggest that you load just natbib with its sort option, which emulates the sort option of cite.

    \documentclass[a4paper,12pt]{article}
    % Variant A: Doesn't work
    % \usepackage{harvard}
    % \usepackage[sort]{cite}
    % Variant B: Works
    \usepackage[sort]{natbib}
    \usepackage{filecontents}
    \begin{filecontents}{\jobname.bib}
    @misc{A01,
      author = {Author, A.},
      year   = {2001},
      title  = {Alpha},
    }
    @misc{B02,
      author = {Buthor, B.},
      year   = {2002},
      title  = {Bravo},
    }
    \end{filecontents}
    \begin{document}
    \citep{B02,A01}
    \bibliographystyle{agsm}
    \bibliography{\jobname}
    \end{document}

-

You shouldn't load both the harvard and the cite packages (or, for that matter, both the natbib and the cite packages). The cite package is designed for numeric-style citations, whereas the harvard and natbib packages are meant mainly for authoryear-style citations. (OK, the natbib package can be used for numeric-style citations if it is invoked with the numbers option.)

If you're using the agsm bibliography style, which comes with the harvard package but can be used along with natbib as well, you can get the conjunction between authors' names to be displayed as and instead of as & by issuing the command

    \def\harvardand{and} % default: "&"

after loading either the harvard or the natbib package.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7998260259628296, "perplexity": 5854.1124115922685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379916.51/warc/CC-MAIN-20141119123259-00115-ip-10-235-23-156.ec2.internal.warc.gz"}
http://slideplayer.com/slide/4061059/
# Am I Me or Am I the Situation?

Presentation transcript:

Does Personality Change?
- Foundation of personality psychology is personality stability and predictive utility
- If personality changes, this threatens the field's usefulness
- Especially if change is random
- Prediction difficult
- Central debate has been whether personality or situation is the better predictor

Person or Situation?
- Account for 2 observations:
- Behavior varies across situations
- We perceive ourselves as the same person
- Are we different people in different situations?
- Historical emphasis on one or the other
- Internal or external
- Implications for methodology, questions, etc.

Person-Situation Debate
- Either/Or question
- 1940s, 1950s: internal emphasis
- Freudian personality types (anal character)
- Projective techniques and trait inventories

Mischel (1968) challenge
- Dissatisfied with internal emphasis on traits
- Argued for situational focus
- Situational changes predict behavior better than traits
- Ostensible personality consistency due to situational consistencies
- Learning view

Mischel (1968) challenge
- Primary criticisms:
- Little evidence of cross-situation consistency in behavior
- Traits dependent on situational evocation
- Traits poor predictors of behavior across situations (r <= .30)
- Traits merely labels with no independent reality

Controversy
- Led to near collapse of personality psychology
- If traits aren't real/stable/predictive, what use are they?
- No need for personality if behavior is primarily due to situational features
- Predict/understand behavioral variation via situation

Field's Response
- Funder & Ozer (1983)
- Reanalyzed studies showing situational influence on behavior
- Situation had ~same predictive power as personality (r <= .30)
- Power of situation = power of personality
- Both rs <= .30

Activity 13: Mischel
- In groups of 3-4
- Describe the response to Mischel's challenge made by Epstein. How does Mischel (re)challenge Epstein here?
- Next describe at least 1 more response made by the field in defense of traits.
- PLEASE TURN THESE IN AFTER CLASS!
Kenrick & Funder: Fallout l PS debate led to numerous hypotheses regarding the relative importance of personality/situation l Many assumed that personality was an artifact, unreal and a weak predictor (empirically and conceptually) of behavior Kenrick & Funder: Hypotheses l H1: Personality is in eye of beholder l Interrater agreement fails to support l H4: Shared (incorrect) stereotypes account for rater agreement l Ratings predict independent behavioral manifestations (aggression, delay of gratification, social behavior) Kenrick & Funder: Hypotheses l H7: Effect of personality on behavior too small to be meaningful (.30) l Situational features share effect size (.30) l Small can be important & meaningful l Effect increases with aggregation Mischel’s Fallout l 1980s/90s reality/stability of traits revealed l Genetics, longitudinal, cross-cultural studies l Interactionism (nature & nurture) : l Effect of personality depends on situation l Effects of situation depends on personality l Behavior = traits + situation + traits x situation l Greater external validity (Cattell: multivariate world) Lessons from PS Debate l Kenrick & Funder l Gray > black or white (closer to reality) l Limitations on behavioral prediction from personality and situation l Boundary conditions on personality ratings l Personality & Social Psych MUST work together Knowns l Person & situation important l Consistency varies across people l Situations vary in their power l Consistency varies as a function of both Unknowns l What person & situation Vs best? l Goals l Situational taxonomy Future: Integration? l Social-cognitive approach of Mischel l Trait approach of Costa & McCrae l Function & structure? Download ppt "Am I Me or Am I the Situation?. Does Personality Change? l Foundation of personality psychology is personality stability and predictive utility l If personality." Similar presentations
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8395178914070129, "perplexity": 25023.582578628488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650764.71/warc/CC-MAIN-20180324171404-20180324191404-00107.warc.gz"}
https://www.physicsforums.com/threads/fourier-sine-series-for-a-triangular-wave-on-a-finite-string.863179/
# Homework Help: Fourier sine series for a triangular wave on a finite string

1. Mar 22, 2016

### nazmus sakib

1. The problem statement, all variables and given/known data

A string of length L = 8 is fixed at both ends. It is given a small triangular displacement and released from rest at t = 0. Find the Fourier coefficient $B_n$.

2. Relevant equations

What should I use for $U_0(x)$?

3. The attempt at a solution

2. Mar 22, 2016

### BvU

Hello nazmus,

$ax$ from 0 to L/2 and $x(1-x)$ from L/2 to L. Oops, I'm not supposed/allowed to give direct answers!

3. Mar 22, 2016

### LCKurtz

First, you need to know what the "small" displacement is. Let's say you lift the center by an amount $h$, so the center of the string is at $(\frac L 2, h)$. Now just find the equation of the two straight line segments forming the triangular displacement. Also, I would ignore BvU's answer, which is a) discontinuous and b) partially parabolic.

4. Mar 22, 2016

### BvU

No, it was just a typo. I (of course) meant $ax$ from 0 to L/2 and $a(L-x)$ from L/2 to L. And for the Fourier coefficient calculation it really doesn't matter how big $a$ (or $h$) is.

All in good spirit. Cheap, fast, and reliable. Pick any two.

Last edited: Mar 22, 2016
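For reference, a sketch of where the thread is heading (assuming the string is lifted to height $h$ at its midpoint, so $U_0(x)=\frac{2hx}{L}$ on $[0,\frac{L}{2}]$ and $U_0(x)=\frac{2h(L-x)}{L}$ on $[\frac{L}{2},L]$, the shape LCKurtz and BvU describe): the sine-series coefficients of the initial shape work out to the standard plucked-string result

$$B_n=\frac{2}{L}\int_0^L U_0(x)\sin\frac{n\pi x}{L}\,dx=\frac{8h}{n^2\pi^2}\sin\frac{n\pi}{2},$$

which vanishes for even $n$ and alternates in sign over the odd $n$. This is the standard result, not worked out in the thread itself.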
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9309490919113159, "perplexity": 1940.091629540545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589237.16/warc/CC-MAIN-20180716080356-20180716100356-00009.warc.gz"}
http://piping-designer.com/index.php/mathematics/geometry/plane-geometry/2347-hollow-circle
# Hollow Circle

Written by Jerry Ratzlaff. Posted in Plane Geometry

## Hollow Circle - Geometric Properties

### Area of a Hollow Circle formula

$$A = \pi \left( R^2 - r^2 \right)$$

### Center of a Hollow Circle

All points on the circumference are at an equal distance from the center point.

### Perimeter of a Hollow Circle formula

$$P = 2 \pi R$$ (outside)

$$P = 2 \pi r$$ (inside)

### Radius of a Hollow Circle formula

$$r = \sqrt{ R^2 - \frac{A}{\pi} }$$

(the inside radius in terms of the area and the outside radius, obtained by inverting the area formula above)

### Distance from Centroid of a Hollow Circle formula

$$C_x = R$$

$$C_y = R$$

### Elastic Section Modulus of a Hollow Circle formula

$$S = \frac{ \pi \left( R^4 - r^4 \right) }{ 4R }$$

### Plastic Section Modulus of a Hollow Circle formula

$$Z = \frac{ 4 \left( R^3 - r^3 \right) }{ 3 }$$

### Polar Moment of Inertia of a Hollow Circle formula

$$J_{z} = \frac{ \pi }{2} \left( R^4 - r^4 \right)$$

$$J_{z1} = \frac{ \pi }{2} \left( R^4 - r^4 \right) + 2 \pi R^2 \left( R^2 - r^2 \right)$$

### Radius of Gyration of a Hollow Circle formula

$$k_{x} = \frac{1}{2} \sqrt{ R^2 + r^2 }$$

$$k_{y} = \frac{1}{2} \sqrt{ R^2 + r^2 }$$

$$k_{z} = \frac{ \sqrt{2} }{2} \sqrt{ R^2 + r^2 }$$

$$k_{x1} = \frac{1}{2} \sqrt{ 5 R^2 + r^2 }$$

$$k_{y1} = \frac{1}{2} \sqrt{ 5 R^2 + r^2 }$$

$$k_{z1} = \frac{ \sqrt{2} }{2} \sqrt{ 5 R^2 + r^2 }$$

### Second Moment of Area of a Hollow Circle formula

$$I_{x} = \frac{ \pi }{4} \left( R^4 - r^4 \right)$$

$$I_{y} = \frac{ \pi }{4} \left( R^4 - r^4 \right)$$

$$I_{x1} = \frac{ \pi }{4} \left( R^4 - r^4 \right) + \pi R^2 \left( R^2 - r^2 \right)$$

$$I_{y1} = \frac{ \pi }{4} \left( R^4 - r^4 \right) + \pi R^2 \left( R^2 - r^2 \right)$$

### Torsional Constant of a Hollow Circle formula

$$J = \frac{ \pi \left( R^4 - r^4 \right) }{ 2 }$$

$$J = \frac{ \pi \left( D^4 - d^4 \right) }{ 32 }$$

Where:

$A$ = area
$C$ = circumference
$C_x, C_y$ = distance from centroid
$d$ = inside diameter
$D$ = outside diameter
$I$ = second moment of area (moment of inertia)
$k$ = radius of gyration
$J$ = torsional constant
$P$ = perimeter
$r$ = inside radius
$R$ = outside radius
$S$ = elastic section modulus
$Z$ = plastic section modulus
$\pi$ = pi
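A small numeric sketch (example values, not from the page) evaluating several of the formulas above:

```python
from math import pi, sqrt

R, r = 0.12, 0.10   # outside / inside radius in metres (example values)

A  = pi * (R**2 - r**2)            # area
Ix = pi / 4 * (R**4 - r**4)        # second moment of area
Jz = pi / 2 * (R**4 - r**4)        # polar moment of inertia
S  = pi * (R**4 - r**4) / (4 * R)  # elastic section modulus
Z  = 4 * (R**3 - r**3) / 3         # plastic section modulus
kx = 0.5 * sqrt(R**2 + r**2)       # radius of gyration about the centroid

print(A, Ix, Jz, S, Z, kx)
```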
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9793897867202759, "perplexity": 11347.539578875796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741176.4/warc/CC-MAIN-20181113000225-20181113022225-00400.warc.gz"}
https://mathoverflow.net/questions/366884/list-chromatic-index-of-a-particular-graph
# List chromatic index of a particular graph

Consider the graph $G$ of order $n$ consisting of two disjoint cliques of even order $\frac{n}{2}=p+1$ (where $p$ is an odd prime) joined by a bipartite graph (that is, deleting the edges of the two disjoint cliques from $G$ leaves a bipartite graph) of maximum degree $p$. Then, does the graph have list chromatic index $\le 2p+1$?

The bipartite graph is also quite specific, in that it has one vertex in each partite set of degree exactly equal to $0,1,2,\dotsc,p$.

My view is that, by the paper Schauz - Proof of the list edge coloring conjecture for complete graphs of prime degree, the two disjoint cliques are edge-choosable. In addition, the edges joining the two cliques form a bipartite graph, which is again edge-choosable by Galvin's theorem. Thus, it makes me think the above question has a positive answer.

By the way, the graph has chromatic index equal to $2p$, that is, the graph is of class $1$. Any hints?

• @GregoryJ.Puleo thanks! edited the post. – vidyarthi Jul 29 '20 at 22:04

Greedy coloring works here to show $2p$-choosability, I believe, and the hypothesis that $p$ is prime doesn't appear to be necessary. Write the cliques as $A = \{a_1, \ldots, a_{p+1}\}$ and $B = \{b_1, \ldots, b_{p+1}\}$, taking the notation so that $a_i$ has exactly $i-1$ neighbors in $B$ and vice versa. First color the edges in the bigraph between $A$ and $B$; observe that each such edge is adjacent (in $L(G)$) to at most $2p-1$ previously colored edges when it is processed, thus has a color available. (Alternatively, just use Galvin's theorem for this part; then these edges only need to have lists of size $p$.) Then color the edges $a_ia_j$ within $A$, ordering the edges so that $i + j$ is non-increasing. Observe that an edge $a_ia_j$ with $i \leq j$ has, within the clique $A$, exactly $p+1-j$ previously-colored adjacent edges at its $a_i$-endpoint and $(p+1)-i-1 = p-i$ previously-colored adjacent edges at its $a_j$-endpoint, for a total of $2p+1-(i+j)$ previously-colored adjacent edges within $A$. Furthermore, $a_ia_j$ has exactly $(i-1) + (j-1) = i+j-2$ previously-colored adjacent edges going to $B$. Thus, each edge $a_ia_j$ within $A$ is adjacent to exactly $2p-1$ previously-colored edges when it is processed, and therefore has a color available. Coloring $B$ the same way finishes the proof.

• great! but I think first coloring the edges of $A$ (or $B$), then the edges of the bipartite graph, and lastly $B$ (or $A$) would also work. But, for this, I would use the paper referred to and Galvin's theorem. – vidyarthi Jul 30 '20 at 18:00
• by the way, would replacing the bipartite graph with an arbitrary bipartite graph have any effect (I don't think so)? If so, then I think we could extend this method to prove edge-chromatic choosability for all graphs with maximum degree $\ge\frac{n}{2}$, where $n$ is the order of the graph – vidyarthi Jul 30 '20 at 18:03
• The degree constraint is essential here for arguing that each edge has few enough previously-colored adjacent edges going to $B$. In the extreme case where the bipartite graph was $K_{p+1, p+1}$, the whole graph would just be $K_{2p+2}$, and then no matter how you slice it the last edge you consider will have $2(2p+1) - 1 = 4p+1$ previously-colored adjacent edges. – Gregory J. Puleo Jul 30 '20 at 18:46
• ok, let us limit the degree of the bipartite graph to a maximum of $p$; then I think it should be possible, right? – vidyarthi Jul 30 '20 at 18:52
• I suspect there would still be far too many previously-colored adjacent edges for the late edges within $A$. Note that the last edge considered within $A$ will have $2p-1$ previously-colored adjacent edges just within $A$, and therefore couldn't afford to be incident to any edges going to $B$. I think the only way to relax the hypothesis you stated in the question and have this proof still go through is to allow vertex $a_i$ to have degree at most $i-1$ in $B$, rather than degree exactly $i-1$ (and likewise for $B$-vertices). – Gregory J. Puleo Jul 30 '20 at 18:55
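The counting argument in the accepted answer is easy to sanity-check numerically. Here is a short sketch (my own, not the answerer's) verifying that, under the stated ordering, every edge inside clique $A$ is adjacent to exactly $2p-1$ previously coloured edges:

```python
# Vertices of clique A are 1..p+1; a_i has i-1 neighbours in B, and all
# bigraph edges are coloured before any clique edge, as in the answer.
p = 7
edges = [(i, j) for i in range(1, p + 2) for j in range(i + 1, p + 2)]
edges.sort(key=lambda e: -(e[0] + e[1]))       # i + j non-increasing

for k, (i, j) in enumerate(edges):
    within_A = sum(1 for (a, b) in edges[:k] if {a, b} & {i, j})
    to_B = (i - 1) + (j - 1)                   # already-coloured bigraph edges
    assert within_A + to_B == 2 * p - 1

print("claim holds for p =", p)
```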
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 41, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8349952101707458, "perplexity": 184.08573626789013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704800238.80/warc/CC-MAIN-20210126135838-20210126165838-00575.warc.gz"}
http://www.lofoya.com/Logical-Reasoning/Venn-Diagrams/s1p2
# Practice Questions on Venn Diagrams - Logical Reasoning

## Section-1: Venn Diagrams Question - 6

Q6. A survey was conducted of 100 people to find out whether they had read recent issues of Golmal, a monthly magazine. The summarized information regarding readership in 3 months is given below:

Only September: 18; September but not August: 23; September and July: 8; September: 28; July: 48; July and August: 10; None of the three months: 24.

What is the number of surveyed people who have read exactly two consecutive issues (out of the three)?

A. 7
B. 9
C. 12
D. 14
E. 17

Common Information

The Venn diagram given below shows the estimated readership of 3 daily newspapers ($X$, $Y$ & $Z$) in a city. The total readership and advertising cost for each of these papers is as below:

Newspaper | Total readership (lakhs) | Advertising cost
--- | --- | ---
$X$ | 8.7 | 6000
$Y$ | 9.1 | 6500
$Z$ | 5.6 | 5000

The total population of the city is estimated to be 14 million. The common readership (in lakhs) is indicated in the given Venn diagram.

## Section-1: Venn Diagrams Question - 7

Q7. Common Information Question: 1/2

The number of people (in lakhs) who read at least one newspaper is:

A. 4.7 lakhs
B. 11.9 lakhs
C. 17.4 lakhs
D. 23.4 lakhs

## Section-1: Venn Diagrams Question - 8

Q8. Common Information Question: 2/2

The number of people (in lakhs) who read only one newspaper is:

A. 4.7 lakhs
B. 11.9 lakhs
C. 17.4 lakhs
D. 23.4 lakhs

## Section-1: Venn Diagrams Question - 9

Q9. It is known that at the university, 60% of the students play tennis, 50% of them play bridge, 70% jog, 20% play tennis and bridge, 30% play tennis and jog, and 40% play bridge and jog. If someone claimed that 20% of the students play bridge, jog and play tennis, then which of the statements below is true:

A. The person is telling the truth.
B. Students who do all three activities are more than 20%.
C. Students who do all three activities are less than 20%.
D. There are no students who do all three activities.

## Section-1: Venn Diagrams Question - 10

Q10. At a T-shirt auction, 42 Reds United T-shirts were sold and 30 Blues T-shirts were sold. No one bought more than one T-shirt of the same type and everyone bought at least one. If 60 people participated in the auction, how many bought both T-shirts?

A. 10
B. 12
C. 14
D. 16
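As a quick illustration of the inclusion-exclusion idea behind questions like Q10 (a sketch; the page itself gives no worked solutions):

```python
reds, blues, total_people = 42, 30, 60

# |A ∩ B| = |A| + |B| - |A ∪ B|; everyone bought at least one shirt,
# so |A ∪ B| equals the number of participants.
both = reds + blues - total_people
print(both)  # -> 12
```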
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21579857170581818, "perplexity": 3343.346180869879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543567.64/warc/CC-MAIN-20161202170903-00134-ip-10-31-129-80.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/50865/irreducibility-of-analytic-sets
# Irreducibility of Analytic Sets

How does one prove that an analytic set $V$ in $\mathbb C^n$ is irreducible if the set of regular points $V^*$ is connected?

Proceeding by contradiction, if we assume that $V$ is in fact reducible and if $V = V_1 \cup V_2$ is the decomposition, then it suffices to show that $V_1\cap V_2 \subset V_s$, where $V_s$ is the set of the singular points in $V$. I am unable to prove this. Any suggestions would be welcome!

- Product rule?... – Thierry Zell Jan 1 '11 at 15:51
- Can you elaborate? – Poincare-Lelong Jan 1 '11 at 19:07
- Dear unknown, your statement (and Griffiths-Harris's) should be made more precise. Indeed, if $V$ is reducible, it can be decomposed into irreducibles, but there might be more than two irreducible components. Actually there might be infinitely many such components. For example, think of a comb, i.e. in $\mathbb C^2$ the union of the horizontal $x$-axis and the vertical lines with integral first coordinate (to be continued) – Georges Elencwajg Jan 2 '11 at 1:36
- (continuation) And if you just write $V=V_1\cup V_2$ without bothering whether the $V_i$'s are irreducible, the statement is false: just add a smooth point $s$ of $V$ to each of $V_1$ and $V_2$ and look at $V= W_1 \cup W_2$ with $W_i=V_i \cup \{s\}$. The point $s$ is in $W_1 \cap W_2$ and yet is a smooth point of $V$. I have modified your question in my answer below in order to take these remarks into account. – Georges Elencwajg Jan 2 '11 at 2:02

Dear unknown, here is a sketch of proof of your question (which I have modified to make it more accurate, as explained in my comments to your original post).

Statement: If $V=V_1 \cup V_2$ with $V_1, V_2$ irreducible and distinct from $V$, then the intersection $V_1 \cap V_2$ consists of singular points of $V$.

Sketch of proof: Suppose there is a point $v\in V_1\cap V_2$ which is holomorphically non-singular on $V$, i.e. holomorphically smooth. Then the germ of analytic space $V_v$ would have a decomposition $V_v=(V_1)_v \cup (V_2)_v$. But this is absurd because the germ of an analytic space at a smooth point is irreducible. This boils down to the fact that the local ring of a smooth point of an analytic space is an integral domain, which is clear since it is a ring of convergent power series $\mathbb C \{z_1,\ldots, z_n\}$.

By the way, judging from your notation, I suppose you extracted this question from Griffiths-Harris. I find their treatment a little cavalier, since indeed they give no explanation at all for their assertion, which is actually not quite correct, as explained in my comments to your question. If you want full details, I recommend the brothers Kaup's book Holomorphic Functions of Several Variables (de Gruyter Studies in Mathematics 3), where they prove that a reduced complex space is irreducible iff its smooth points form a connected open subset (49.7 Corollary, page 194).

And, last but not least, happy New Year to you and all our friends of MathOverflow!

- A corollary of this nice answer is the same result in the algebraic case, since by the argument in Shafarevich II.2.2, the local ring of a variety at a simple point embeds as a subring of a power series ring, hence is also a domain. – roy smith Jan 1 '11 at 21:05

I am also stuck with this question and I tried to give a proof. Seeing @Georges' answer, I am starting to doubt my attempt. Referring to the book "Holomorphic Functions and Integral Representations in Several Complex Variables" by Michael Range, the two following exercise problems (unfortunately!)
inspired the attempts: [Page 31: E.2.13] Let $M_{1}$ and $M_{2}$ be closed connected complex manifolds of the region $D\subseteq \mathbb{C}^{n}$. If there exists $U$ a neighbourhood of $P\in M_{1}\cap M_{2}$ with $U\cap M_{1}=U\cap M_{2}$, then $M_{1}=M_{2}$. [Page 40: E.3.8] Let $A_{1}$ and $A_{2}$ be analytic sets, $P\in A_{1}\cap A_{2}$. If for each $U$ neighbourhood of $P$, $U\cap A_{1}\neq U\cap A_{2}$, then $P$ is a singular point of $A$. Remark: I think in E.3.8, it is implicitly implied that $A_{1}\neq A_{2}$. Therefore, if $V_{1}\cap V_{2}$ is not contained in $V_{s}$, then we may find $z\in V_{1}\cap V_{2}$ which is regular. Then by E.3.8, there exists a neighbourhood $U$ of $z$ such that $U\cap V_{1}=U\cap V_{2}$. By $E.2.13$, therefore $V_{1}=V_{2}=V$ which is not possible. So we are left with finishing two questions. Hope someone can help! N.B. So... if @Georges' argument and reasons are right, what's wrong with my argument? -
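Condensing the accepted argument into a single displayed statement (a restatement of the sketch above, not an addition to it):

$$V = V_1 \cup V_2,\quad V_1, V_2 \text{ irreducible},\ V_i \neq V \quad\Longrightarrow\quad V_1 \cap V_2 \subseteq V_s,$$

because at a smooth point $v \in V_1 \cap V_2$ the local ring $\mathcal{O}_{V,v} \cong \mathbb{C}\{z_1,\ldots,z_k\}$ is an integral domain, so the germ $V_v$ is irreducible, contradicting the decomposition $V_v = (V_1)_v \cup (V_2)_v$.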
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392781853675842, "perplexity": 207.38250337195353}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990611.52/warc/CC-MAIN-20150728002310-00056-ip-10-236-191-2.ec2.internal.warc.gz"}
https://xplaind.com/407248/arithmetic-average-return
# Arithmetic Average Return

Arithmetic average return is the return on investment calculated by adding the returns for all sub-periods and then dividing by the total number of periods. It overstates the true return and is only appropriate for shorter time periods. The arithmetic average return is never lower than the other average return measure, the geometric average return, and is strictly higher when returns vary. The arithmetic return ignores the compounding effect and the order of returns, and it is misleading when investment returns are volatile.

## Formula

Arithmetic average return can be calculated using the following formula:

$$\text{Arithmetic Average Return} = \frac{\text{Sum of Individual Returns}}{\text{Total Number of Returns}}$$

It can be calculated using the Excel AVERAGE function.

## Example

Your university has created a $100 million endowment to fund financial assistance offered on a merit and need basis. The endowment return for the first 5 years was 5%, 8%, -2%, 12% and 9% respectively. Assume all returns take the form of capital gains.

The arithmetic average return equals 6.4%, i.e. (5% + 8% + (-2%) + 12% + 9%)/5.

The investment value after 5 years will be $135.67 million, as calculated below:

$$\text{Endowment Value after 5 Years} = 100\ \text{million} \times (1+5\%)\times(1+8\%)\times(1-2\%)\times(1+12\%)\times(1+9\%) = 135.67\ \text{million}$$

However, compounding at the 6.4% arithmetic average return suggests the investment value would be $136.37 million:

$$\text{Endowment Value (based on Arithmetic Average Return)} = 100\ \text{million}\times{(1+6.4\%)}^5 = 136.37\ \text{million}$$

The arithmetic average return overstates the return because it ignores the order of returns and the compounding effect. For example, the 2% decline occurred after the endowment had grown by 5% and 8% in the previous years, but the arithmetic average return doesn't accommodate such compounding.
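To sanity-check these figures, here is a minimal Python sketch of the same computation (the 100 stands for the $100 million principal):

```python
# Compare compounding the actual yearly returns against compounding
# at the arithmetic average return.
returns = [0.05, 0.08, -0.02, 0.12, 0.09]

arithmetic = sum(returns) / len(returns)        # 0.064, i.e. 6.4%

value = 100.0
for r in returns:                               # true compounded value
    value *= 1 + r
print(f"actual value: {value:.2f} million")     # 135.67

naive = 100.0 * (1 + arithmetic) ** len(returns)
print(f"via arithmetic average: {naive:.2f} million")  # 136.37

# The geometric average is the true annualized return:
geometric = (value / 100.0) ** (1 / len(returns)) - 1
print(f"geometric average: {geometric:.2%}")    # about 6.29%, below 6.4%
```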
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7066830992698669, "perplexity": 1498.3498025024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512434.71/warc/CC-MAIN-20181019191802-20181019213302-00073.warc.gz"}
http://www.ck12.org/algebra/Linear-Inequalities-in-Two-Variables/enrichment/Graphing-Linear-Inequalities-in-Two-Variables-Example-2/r1/
Linear Inequalities in Two Variables

Graphing Linear Inequalities in Two Variables - Example 2: Graphing Linear Inequalities in Two Variables Given Standard Form
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8534729480743408, "perplexity": 9544.569010485726}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430460544953.41/warc/CC-MAIN-20150501060904-00044-ip-10-235-10-82.ec2.internal.warc.gz"}
http://koreascience.or.kr/search.page?keywords=acute+toxicity&pageSize=10&pageNo=6
### Acute Toxicity Study on the Extract of Mori Fructus (상심자의 급성독성에 관한 연구)

Chang, Bo-Yoon; Kim, Seon-Beom; Lee, Mi-Kyeong; Kim, Sung-Yeon. Korean Journal of Pharmacognosy, v.43 no.2, pp.179-183, 2012.

Acute toxicity of the water extract of Mori Fructus was examined in male and female mice. The water extract of Mori Fructus was orally administered at doses of 5, 50, 300 and 2,000 mg/kg, and the animals were observed for two weeks. No mortality or abnormal clinical signs were seen during the observation period. At the terminal sacrifice, there were no differences in net body weight gain, organ weight or gross pathological findings among the groups treated with the different doses of the water extract of Mori Fructus. The results suggest that, under the conditions employed in this study, the $LD_{50}$ is greater than 2,000 mg/kg. All the data obtained from the experiments indicate that the water extract of Mori Fructus has very low acute toxicity.

### AGE AND GENDER DIFFERENCES IN ACUTE TOXICITY AND BLOOD-BRAIN BARRIER OPENING INDUCED BY SOMAN

Kim, Yun-Bae. Proceedings of the Korea Environmental Mutagen Society Conference, pp.112-112, 2002.

### Cardiovascular Manifestations and Clinical Course after Acute Carbon Monoxide Poisoning (급성 일산화탄소 중독에 의한 심혈관계 독성의 임상 양상 및 경과)

Lee, In Soo; Jung, Yoon Seok; Min, Young Gi; Kim, Gi Woon; Choi, Sang Cheon. Journal of The Korean Society of Clinical Toxicology, v.10 no.2, pp.103-110, 2012.

Purpose: The aim of this study was to evaluate the cardiovascular manifestations and clinical course in patients with acute carbon monoxide poisoning. Methods: A retrospective study was conducted over a 36-month period on consecutive patients who visited an emergency medical center and were diagnosed with acute carbon monoxide poisoning. A standardized data extraction protocol was applied to the selected patients. Results: A total of 293 patients were selected during the study period. Cardiac manifestations were observed in 35.2% (n=103) of the patients: hypotension in 11 patients (3.8%), ECG abnormalities in 44 patients (15.0%) and cardiac enzyme abnormalities in 103 patients (35.2%). Echocardiography was performed on 56 patients with cardiac toxicity: 12 patients had abnormal results (5 patients with global hypokinesia and 7 patients with regional wall akinesia). Five patients died within 3 hours after ED admission, and the remaining patients were discharged alive. At 3 months after discharge, none of these patients had died. The SOFA scores in the severe cardiac toxicity group and the non-severe cardiac toxicity group at the time of arrival were $2.53{\pm}2.29$ and $2.19{\pm}2.12$, respectively (p=0.860). Conclusion: Cardiovascular manifestations occur after acute CO poisoning at a rate of 35.2%. Even patients with severe cardiovascular toxicity recovered well within 10 days after admission. Therefore, cardiac toxicity in itself carries little weight in the clinical course of acute CO poisoning, and the short-term prognosis of cardiac toxicity is unlikely to be unfavorable.
### Application of simple and massive purification system of dsRNA in vivo for acute toxicity to Daphnia magna

CHOI, Wonkyun; LIM, Hye Song; KIM, Jin; RYU, Sung-Min; LEE, Jung Ro. Entomological research, v.48 no.6, pp.533-539, 2018.

RNA interference (RNAi) has become an important genetic tool and has been applied to develop new living modified (LM) crop traits, such as improved nutrient quality or pest management. DvSnf7 RNAi has been used in LM maize for resistance to the Western Corn Rootworm, a major agricultural pest in the US Corn Belt. Most environmental risk assessments (ERA) of double-stranded RNA (dsRNA) have been performed using in vitro transcript products, not in vivo expressed products. A large amount of dsRNA is required for the acute toxicity assay of water fleas, so the development of a large-scale dsRNA purification technique is critical. Daphnia, a freshwater microcrustacean, is a model organism for studying cellular and molecular mechanisms involved in life-history traits and ecotoxicology. In this study, we established a large-scale dsRNA purification method using Escherichia coli and performed acute toxicity assays on Daphnia magna. With RNase A and DNase I treatment, dsRNA was efficiently purified without any special techniques or equipment. Even though purified dsRNA was present throughout the acute toxicity test, no lethality or abnormal behavior was observed in D. magna. These results indicate that GFP and DvSnf7 dsRNA did not significantly affect D. magna, consistent with the lack of matching sequences in its genome. The dsRNA purification method and the acute toxicity assay of water fleas using purified dsRNA should be suitable for toxicological studies of LMOs on aquatic non-target organisms.

### Acute and subacute toxicity of folpet to fingerlings of common carp, Cyprinus carpio and goldfish, Carassius auratus (잉어치어(稚魚)와 금붕어에 대한 folpet의 급성(急性) 및 아급성독성(亞急性毒性)에 관한 연구)

Heo, Gang-joon; Lee, Yong-soon; Lim, Yoon-kyu. Korean Journal of Veterinary Research, v.34 no.2, pp.369-374, 1994.

The acute and subacute toxicity of the fungicide folpet was evaluated in fingerlings of common carp, Cyprinus carpio, and goldfish, Carassius auratus. For acute toxicity, fishes were dipped for a period of 24 h; the TLm value (median tolerance limit) was 1.52 ppm in common carp and 1.45 ppm in goldfish. Severe damage was observed in various organs; among the findings, clubbing of gill lamellae, lytic degeneration and vacuolation of liver cells, and epithelial edema of renal tubules were relatively prominent. The most significant changes were hyperbasophilic foci of liver cells in the subacute toxicity test, and these may imply the possibility of hepatocarcinogenicity of folpet.

### Acute Toxicity Study on Sipjeondaebo-tang in Rats (SD 랫드를 이용한 십전대보탕의 급성 독성 연구)

Ma, Jin-Yeul; Huang, Dae-Sun; Lee, Nam-Hun; Ha, Hye-Kyung; Yu, Young-Beob; Shin, Hyeun-Kyoo. Journal of Physiology & Pathology in Korean Medicine, v.22 no.5, pp.1192-1195, 2008.

Sipjeondaebo-tang has traditionally been prescribed as a restorative medicine. In this study, we investigated the acute toxicity of water-extracted Sipjeondaebo-tang. Thirty rats completed 14 days of oral Sipjeondaebo-tang at doses of 0 (control group), 2000 and 5000 mg/kg. We observed survival rates, general toxicity, changes in body weight and autopsy findings.
These data help establish the toxicity and safety profile of this oriental medicine prescription. Compared with the control group, we could not find any toxic alteration in any treated group (2000 and 5000 mg/kg). The LD50 of Sipjeondaebo-tang was over 5000 mg/kg, and it is very safe in SD rats.

### Acute Toxicity Study on Palmul-tang(Bawu-tang) in Mice (ICR마우스를 이용하여 팔물탕(八物湯)의 급성독성에 관한 연구)

Ma, Jin-Yeul; Huang, Dae-Sun; Yu, Young-Beob; Ha, Hye-Kyung; Shin, Hyun-Kyoo. The Korea Journal of Herbology, v.22 no.2, pp.13-16, 2007.

Objectives: Palmul-tang (Bawu-tang) has traditionally been prescribed as a restorative medicine. Methods: In this study, we investigated the acute toxicity of water-extracted Palmul-tang (Bawu-tang). Twenty-five mice completed 14 days of oral Palmul-tang (Bawu-tang) at doses of 0 (control group), 2560, 3200, 4000 and 5000 mg/kg. Results: We observed survival rates, general toxicity, changes in body weight and autopsy findings. Conclusions: These data help establish the toxicity and safety profile of this oriental medicine prescription. Compared with the control group, we could not find any toxic alteration in any treated group (2560, 3200, 4000 and 5000 mg/kg). In conclusion, the LD50 of Palmul-tang (Bawu-tang) was over 5000 mg/kg, and it is very safe in ICR mice.

### In vivo dosimetry and acute toxicity in breast cancer patients undergoing intraoperative radiotherapy as boost

Lee, Jason Joon Bock; Choi, Jinhyun; Ahn, Sung Gwe; Jeong, Joon; Lee, Ik Jae; Park, Kwangwoo; Kim, Kangpyo; Kim, Jun Won. v.35 no.2, pp.121-128, 2017.

Purpose: To report the results of a correlation analysis of skin dose assessed by in vivo dosimetry and the incidence of acute toxicity, in a phase 2 trial evaluating the feasibility of intraoperative radiotherapy (IORT) as a boost for breast cancer patients. Materials and Methods: Eligible patients were treated with IORT of 20 Gy followed by whole breast irradiation (WBI) of 46 Gy. A total of 55 patients with a minimum follow-up of 1 month after WBI were evaluated. An optically stimulated luminescence dosimeter (OSLD) measured the radiation dose delivered to the skin during IORT. Acute toxicity was recorded according to the Common Terminology Criteria for Adverse Events v4.0. Clinical parameters were correlated with seroma formation and maximum skin dose. Results: Median follow-up after IORT was 25.9 weeks (range, 12.7 to 50.3 weeks). Prior to WBI, only one patient developed acute toxicity. Following WBI, 30 patients experienced grade 1 skin toxicity and three patients had grade 2 skin toxicity. Skin dose during IORT exceeded 5 Gy in two patients; one of them, who received 8.42 Gy, developed grade 2 complications around the surgical scar. Breast volume on preoperative images (p = 0.001), the ratio of applicator diameter to breast volume (p = 0.002), and the distance between skin and tumor (p = 0.003) showed significant correlations with maximum skin dose. Conclusions: IORT as a boost was well tolerated among Korean women without severe acute complications. In vivo dosimetry with OSLD can help ensure safe delivery of IORT as a boost.
### A Study on the Degradation and the Reduction of Acute Toxicity of Simazine Using Photolysis and Photocatalysis (광반응 및 광촉매 반응을 이용한 simazine의 분해 및 독성저감에 관한 연구)

Kim, Moon-Kyung; Oh, Ji-Yoon; Son, Hyun-Seok; Zoh, Kyung-Duk. Journal of Environmental Health Sciences, v.35 no.2, pp.124-129, 2009.

The photocatalytic degradation of simazine, an s-triazine-type herbicide, was carried out using circulating photoreactor systems. In order to find an effective method to mineralize this compound into environmentally compatible products, this study compared the removal efficiencies of simazine while changing various parameters. Under the photocatalytic condition, simazine was degraded more effectively than under photolysis or the $TiO_2$-only condition. With photocatalysis, 5 mg/L simazine was approximately 90% degraded within 30 min and completely degraded after 150 min. Ionic byproducts such as ${NO_2}^-$, ${NO_3}^-$ and $Cl^-$ were detected during the photocatalysis of simazine; however, the recoveries were poor, indicating the presence of organic intermediates rather than full mineralization of simazine during photocatalysis. Two bioassays using V. fischeri and D. magna were employed to measure the toxicity reduction in the reaction solutions treated by both photocatalysis and photolysis. Simazine and its photocatalytically treated water did not exert any significant toxicity on V. fischeri, a marine bacterium. However, the acute toxicity test using D. magna indicated that the initial acute toxicity ($EC_{50}$ = 57.30%) was completely removed ($EC_{50}$ = 100%) after 150 min under both photocatalysis and photolysis of simazine. These results indicate that photocatalysis and photolysis of simazine reduced the acute toxicity through mineralization.

### The Acute and Chronic Toxicity Studies of Herbicide, Molinate to Waterfleas (Molinate의 물벼룩에 대한 급성 및 만성독성 연구)

Shin, On-Sup; Kim, Byung-Seok; Park, Yeon-Ki; Park, Kyung-Hoon; Lee, Je-Bong; Kyung, Kee-Sung; Ahn, Young-Joon. The Korean Journal of Pesticide Science, v.12 no.3, pp.215-221, 2008.

To assess the impact of molinate on freshwater aquatic organisms, acute and chronic toxicity studies on waterfleas were conducted. In acute toxicity studies on Daphnia magna and Moina macrocopa, the 48-h $EC_{50}$ values were 11.4 and 8.3 mg/L, respectively. In reproduction toxicity studies on the same species, the NOECs were 2.5 and 2.0 mg/L, respectively. These results suggest that the two waterflea species have similar sensitivity to molinate. On the other hand, the NOEC for 3-generation toxicity in Moina macrocopa, 0.16 mg/L, was much lower than the acute values. This study concludes that molinate poses minimal risk to waterfleas in rivers.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5561515688896179, "perplexity": 23924.092934703327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228998.45/warc/CC-MAIN-20200925213517-20200926003517-00674.warc.gz"}
https://codegolf.codidact.com/posts/281542
# Merge two strings

+8 −0

## Challenge

Given two strings a and b, return the shortest string s so that s starts with a and ends with b.

## Examples

'ABCDEF', 'EFGHI' -> 'ABCDEFGHI'
'AAAAAA', 'AAAAAAAA' -> 'AAAAAAAA'
'ABC', '123' -> 'ABC123'
'', 'ABCDE' -> 'ABCDE'
'ABCD', 'ABCD' -> 'ABCD'
'', '' -> ''

Brownie points for beating my 26 in APL.

Comments:

- You need to specify if these strings are taken as program input or if they can be constants. For example in C, you can merge two string literals by just typing "ABC" "DEF". But to merge strings taken as input at run-time, it turns much more intricate. – Lundin, 15 days ago
- The strings should either be function parameters or taken from input. – rak1507, 15 days ago

+4 −0

# APL (Dyalog Unicode), 26 bytes SBCS

{⊃x/⍨⊃¨⍺∘⍷¨x←,∘⍵¨(⊂⍬),,\⍺}

Try it on APLgolf!

A dfn submission which takes the inputs as left and right argument. I took way too long to come up with this. Fun challenge.

+3 −0

# Sed -E, 27 bytes

Takes the second input, a comma, then the first input (inputs cannot contain commas).

s/^(.*)(.*),(.*)\1$/\3\1\2/

+3 −0

# BQN, 18 bytes SBCS

{⊑(⊑𝕨⊸⍷)¨⊸/∾⟜𝕩¨↑𝕨}

Run online!

A direct translation of Razetime's APL solution (my attempted improvement ⊣∾{⊢´/𝕨⊸«⊸≡¨↑𝕩}↓⊢ turns out to be not at all correct). The BQN solution is much shorter mainly because it has Prefixes (↑) built in. Being able to filter with ⊸/ (see Before) also helps a lot.

{⊑(⊑𝕨⊸⍷)¨⊸/∾⟜𝕩¨↑𝕨}   # Function with left argument 𝕨 and right argument 𝕩
            ↑𝕨        # All prefixes of 𝕨
        ∾⟜𝕩¨          # Append 𝕩 after each one
      ⊸/              # Filter by...
   (      )¨          # On each string,
    𝕨⊸⍷               # Where does 𝕨 appear as a substring?
   ⊑                  # But I only care if it's the first one
 ⊑                    # Then take the first

Put together, the pattern ⊑𝕨⊸⍷ tests if 𝕨 is a prefix of the argument.
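For reference, here is a straightforward, ungolfed Python version of the task (a readable specification of the required behaviour, not a competitive entry):

```python
def merge(a: str, b: str) -> str:
    """Shortest string that starts with a and ends with b."""
    # The smallest i such that b starts with the suffix a[i:]
    # gives the maximal overlap, hence the shortest merge.
    for i in range(len(a) + 1):
        if b.startswith(a[i:]):
            return a[:i] + b
    return a + b  # unreachable: i == len(a) always matches

assert merge('ABCDEF', 'EFGHI') == 'ABCDEFGHI'
assert merge('AAAAAA', 'AAAAAAAA') == 'AAAAAAAA'
assert merge('ABC', '123') == 'ABC123'
assert merge('', '') == ''
```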
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24781493842601776, "perplexity": 11405.392907146044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991812.46/warc/CC-MAIN-20210515004936-20210515034936-00210.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=29&t=49189&p=176559
## 2A.21

Annie Ye:

2A.21 Give the ground-state electron configuration and number of unpaired electrons expected for each of the following ions: (a) Ca2+; (b) In+; (c) Te2−; (d) Ag+.

Samuel Tzeng 1B:

a. [Ar]
b. [Kr]5s^2 4d^10
c. [Xe]
d. [Ar]3d^8
No unpaired electrons for all.

Sjeffrey_1C:

You can tell that all of these have 0 unpaired electrons because the last orbitals are full in each case.

Subashni Rajiv 1K:

If the last orbital has an odd number of electrons in the configuration, then this is an indication that there is an unpaired electron.

Anthony Hatashita 4H:

All of them have no unpaired electrons because the shells are filled, but there are some cases where 3d is filled with 8 electrons, leaving 2 unpaired electrons due to Hund's rule, I believe.
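To make the unpaired-electron counting concrete, here is a small Python sketch applying Hund's rule to a single subshell (the helper and its example inputs are illustrative, not taken from the thread):

```python
def unpaired(subshell: str, electrons: int) -> int:
    """Unpaired electrons in one subshell by Hund's rule:
    fill each orbital singly first, then start pairing."""
    orbitals = {"s": 1, "p": 3, "d": 5, "f": 7}[subshell]
    if electrons <= orbitals:          # every electron sits alone
        return electrons
    return 2 * orbitals - electrons    # each extra electron pairs one up

print(unpaired("d", 10))  # 0 -> a filled d subshell has no unpaired electrons
print(unpaired("d", 8))   # 2 -> the d^8 case mentioned above
```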
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8327903747558594, "perplexity": 7028.864937947754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738855.80/warc/CC-MAIN-20200811205740-20200811235740-00199.warc.gz"}
https://www.nature.com/articles/s41563-021-00983-8?error=cookies_not_supported&code=d31e89cf-4b73-4a4b-bc9f-0d29f6a22269
# Thermal chiral anomaly in the magnetic-field-induced ideal Weyl phase of Bi1−xSbx

## Abstract

The chiral anomaly is the predicted breakdown of chiral symmetry in a Weyl semimetal with monopoles of opposite chirality when an electric field is applied parallel to a magnetic field. It occurs because of charge pumping between monopoles of opposite chirality. Experimental observation of this fundamental effect is plagued by concerns about the current pathways. Here we demonstrate the thermal chiral anomaly, energy pumping between monopoles, in topological insulator bismuth–antimony alloys driven into an ideal Weyl semimetal state by a Zeeman field, with the chemical potential pinned at the Weyl points and in the absence of any trivial Fermi surface pockets. The experimental signature is a large enhancement of the thermal conductivity in an applied magnetic field parallel to the thermal gradient. This work demonstrates both pumping of energy and charge between the two Weyl points of opposite chirality and that they are related by the Wiedemann–Franz law.

## Data availability

The data generated and analysed in this study are available within the paper and its Supplementary Information. Further data are available from the corresponding author on reasonable request.
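For reference, the Wiedemann–Franz law invoked in the abstract relates the electronic thermal conductivity $\kappa_e$ to the electrical conductivity $\sigma$ through the Lorenz number (standard textbook form, not quoted from the paper):

$$\frac{\kappa_e}{\sigma T} = L_0 = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2 \approx 2.44 \times 10^{-8}\ \mathrm{W\,\Omega\,K^{-2}}.$$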
## Acknowledgements

This work was supported by CEM and NSF MRSEC under grant numbers DMR-2011876 (to D.V., W.Z., N.T., J.P.H.) and DMR-1420451 (all authors). The authors acknowledge useful discussions with M. A. H. Vozmediano. R. Ripley edited the text and contributed to the illustrations.

## Author information

### Contributions

The experiments were designed and carried out by D.V. and J.P.H. The theory was carried out by W.Z., C.Ş., M.E.F., N.T. and J.P.H. All contributed to the integration of theory and experiment and to writing the manuscript.

### Corresponding author

Correspondence to Joseph P. Heremans.

## Ethics declarations

### Competing interests

The authors declare no competing interests. Peer review information: Nature Materials thanks Kamran Behnia, Qiang Li, Binghai Yan and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

## Supplementary information

Supplementary figures and tables.

Vu, D., Zhang, W., Şahin, C. et al. Thermal chiral anomaly in the magnetic-field-induced ideal Weyl phase of Bi1−xSbx. Nat. Mater. (2021). https://doi.org/10.1038/s41563-021-00983-8
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8810367584228516, "perplexity": 8952.154078330264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611445.13/warc/CC-MAIN-20210614043833-20210614073833-00619.warc.gz"}
https://socratic.org/questions/how-do-you-solve-for-ka-when-you-only-have-molarity-of-the-acid-and-ph
# How do you solve for Ka when you only have molarity of the acid and pH?

Jun 1, 2015

You start by using the pH of the solution to determine the concentration of the hydronium ions, $H_3O^+$:

$$[H_3O^+] = 10^{-\text{pH}_{\text{sol}}}$$

The general dissociation equation for a weak acid looks like this:

$$HA_{(aq)} + H_2O_{(l)} \rightleftharpoons H_3O^+_{(aq)} + A^-_{(aq)}$$

By definition, the acid dissociation constant, $K_a$, will be equal to

$$K_a = \frac{[H_3O^+] \cdot [A^-]}{[HA]}$$

If you have a $1:1$ mole ratio between the acid and the hydronium ions, and between the hydronium ions and the conjugate base $A^-$, then the concentration of the latter will be equal to that of the hydronium ions:

$$[A^-] = [H_3O^+]$$

Since you know the molarity of the acid, $K_a$ will be

$$K_a = \frac{[H_3O^+]^2}{[HA]}$$
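A short numerical sketch of the same procedure (the concentration and pH below are made-up illustrative values):

```python
# Hypothetical data: a 0.100 M weak acid solution with measured pH 2.87.
pH = 2.87
c_HA = 0.100                 # molarity of the acid, in M

h3o = 10 ** (-pH)            # [H3O+] from the pH
Ka = h3o ** 2 / c_HA         # Ka = [H3O+]^2 / [HA], taking [HA] ~ initial molarity
print(f"[H3O+] = {h3o:.2e} M, Ka = {Ka:.2e}")
# -> [H3O+] = 1.35e-03 M, Ka = 1.82e-05
```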
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9355150461196899, "perplexity": 1576.6255700370707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.22/warc/CC-MAIN-20191115222436-20191116010436-00374.warc.gz"}
https://iacr.org/cryptodb/data/author.php?authorkey=334
## CryptoDB

### Alexander May

#### Publications

2022 EUROCRYPT

We address Partial Key Exposure attacks on CRT-RSA on secret exponents $d_p, d_q$ with small public exponent $e$. For constant $e$ it is known that the knowledge of half of the bits of one of $d_p, d_q$ suffices to factor the RSA modulus $N$ by Coppersmith's famous *factoring with a hint* result. We extend this setting to non-constant $e$. Somewhat surprisingly, our attack shows that RSA with $e$ of size $N^{\frac 1 {12}}$ is most vulnerable to Partial Key Exposure, since in this case only a third of the bits of both $d_p, d_q$ suffices to factor $N$ in polynomial time, knowing either most significant bits (MSB) or least significant bits (LSB).

Let $ed_p = 1 + k(p-1)$ and $ed_q = 1 + \ell(q-1)$. On the technical side, we find the factorization of $N$ in a novel two-step approach. In a first step we recover $k$ and $\ell$ in polynomial time, in the MSB case completely elementary and in the LSB case using Coppersmith's lattice-based method. We then obtain the prime factorization of $N$ by computing the root of a univariate polynomial modulo $kp$ for our known $k$. This can be seen as an extension of Howgrave-Graham's *approximate divisor* algorithm to the case of *approximate divisor multiples* for some known multiple $k$ of an unknown divisor $p$ of $N$. The point of *approximate divisor multiples* is that the unknown that is recoverable in polynomial time grows linearly with the size of the multiple $k$.

Our resulting Partial Key Exposure attack with known MSBs is completely rigorous, whereas in the LSB case we rely on a standard Coppersmith-type heuristic. We experimentally verify our heuristic, thereby showing that in practice we reach our asymptotic bounds already using small lattice dimensions. Thus, our attack is highly practical.

2022 EUROCRYPT

With the recent shift to post-quantum algorithms it becomes increasingly important to provide precise bit-security estimates for code-based cryptography such as McEliece and quasi-cyclic schemes like BIKE and HQC. While there has been significant progress on information set decoding (ISD) algorithms within the last decade, it is still unclear to which extent this affects current cryptographic security estimates. We provide the first concrete implementations for representation-based ISD, such as May-Meurer-Thomae (MMT) or Becker-Joux-May-Meurer (BJMM), that are parameter-optimized for the McEliece and quasi-cyclic setting. Although MMT and BJMM consume more memory than naive ISD algorithms like Prange, we demonstrate that these algorithms lead to significant speedups for practical cryptanalysis already for cryptographic instances of medium security level (around 60 bit). More concretely, we provide data for the record computations of McEliece-1223 and McEliece-1284 (old record: 1161), and for the quasi-cyclic setting up to dimension 2918 (before: 1938). Based on our record computations we extrapolate to the bit-security level of the proposed BIKE, HQC and McEliece parameters in NIST's standardization process. For BIKE/HQC, we also show how to transfer the Decoding-One-Out-of-Many (DOOM) technique to MMT/BJMM. Although we achieve significant DOOM speedups, our estimates confirm the bit-security levels of BIKE and HQC.
For the proposed McEliece round-3 parameter sets of 192 and 256 bit, however, our extrapolation indicates a security level overestimate by roughly 20 and 10 bits, respectively, i.e., the high-security McEliece instantiations may be a bit less secure than desired.

2022 TOSC

We study quantum period finding algorithms such as Simon and Shor (and its variant Ekerå-Håstad). For a periodic function $f$ these algorithms produce, via some quantum embedding of $f$, a quantum superposition $\sum_x |x\rangle\,|f(x)\rangle$, which requires a certain amount of output qubits that represent $|f(x)\rangle$. We show that one can lower this amount to a single output qubit by hashing $f$ down to a single bit in an oracle setting. Namely, we replace the embedding of $f$ in quantum period finding circuits by oracle access to several embeddings of hashed versions of $f$. We show that on expectation this modification only doubles the required amount of quantum measurements, while significantly reducing the total number of qubits. For example, for Simon's algorithm that finds periods in $f : \mathbb{F}_2^n \to \mathbb{F}_2^n$, our hashing technique reduces the required output qubits from $n$ down to 1, and therefore the total amount of qubits from $2n$ to $n+1$. We also show that Simon's algorithm admits real-world applications with only $n+1$ qubits by giving a concrete realization of a hashed version of the cryptographic Even-Mansour construction. Moreover, for a variant of Simon's algorithm on Even-Mansour that requires only classical queries to Even-Mansour we save a factor of (roughly) 4 in the qubits. Our oracle-based hashed version of the Ekerå-Håstad algorithm for factoring $n$-bit RSA reduces the required qubits from $(3/2 + o(1))n$ down to $(1/2 + o(1))n$.

2022 CRYPTO

In a so-called partial key exposure attack one obtains some information about the secret key, e.g. via some side-channel leakage. This information might be a certain fraction of the secret key bits (erasure model) or some erroneous version of the secret key (error model). The goal is to recover the secret key from the leaked information. There is a common belief that, as opposed to e.g. the RSA cryptosystem, most post-quantum cryptosystems are usually resistant against partial key exposure attacks. We strongly question this belief by constructing partial key exposure attacks on code-based, multivariate, and lattice-based schemes (BIKE, Rainbow and NTRU). Our attacks exploit the redundancy that modern PQ cryptosystems inherently use for efficiency reasons. The application and development of techniques from information set decoding plays a crucial role for achieving our results. On the theoretical side, we show non-trivial information leakage bounds that allow for a polynomial time key recovery attack. As an example, for all schemes the knowledge of a constant fraction of the secret key bits suffices to reconstruct the full key in polynomial time. Even if we no longer insist on polynomial time attacks, most of our attacks extend well and remain feasible up to large erasure and error rates. In the case of BIKE for example we obtain attack complexities around 60 bits when half of the secret key bits are erased, or a quarter of the secret key bits are faulty. Our results show that even highly error-prone key leakage of modern PQ cryptosystems may lead to full secret key recoveries.

2021 CRYPTO

The LWE problem with its ring variants is today the most prominent candidate for building efficient public key cryptosystems resistant to quantum computers.
NTRU-type cryptosystems use an LWE-type variant with small max-norm secrets, usually with ternary coefficients from the set $\{-1,0,1\}$. The presumably best attack on these schemes is a hybrid attack that combines lattice reduction techniques with Odlyzko's Meet-in-the-Middle approach. Odlyzko's algorithm is a classical combinatorial attack that for key space size $\mathcal{S}$ runs in time $\mathcal{S}^{0.5}$. We substantially improve on this Meet-in-the-Middle approach, using the representation technique developed for subset sum algorithms. Asymptotically, our heuristic Meet-in-the-Middle attack runs in time roughly $\mathcal{S}^{0.25}$, which also beats the $\mathcal{S}^{\frac 1 3}$ complexity of the best known quantum algorithm. For the round-3 NIST post-quantum encryptions NTRU and NTRU Prime we obtain non-asymptotic instantiations of our attack with complexity roughly $\mathcal{S}^{0.3}$. As opposed to other combinatorial attacks, our attack benefits from larger LWE field sizes $q$, as they are often used in modern lattice-based signatures. For example, for BLISS and GLP signatures we obtain non-asymptotic combinatorial attacks around $\mathcal{S}^{0.28}$. Our attacks do not invalidate the security claims of the aforementioned schemes. However, they establish improved combinatorial upper bounds for their security. We leave it as an open question whether our new Meet-in-the-Middle attack in combination with lattice reduction can be used to speed up the hybrid attack.

2021 ASIACRYPT

Let $(N,e)$ be an RSA public key, where $N=pq$ is the product of equal bitsize primes $p,q$. Let $d_p, d_q$ be the corresponding secret CRT-RSA exponents. Using a Coppersmith-type attack, Takayasu, Lu and Peng (TLP) recently showed that one obtains the factorization of $N$ in polynomial time, provided that $d_p, d_q \leq N^{0.122}$. Building on the TLP attack, we show the first *Partial Key Exposure* attack on short secret exponent CRT-RSA. Namely, let $N^{0.122} \leq d_p, d_q \leq N^{0.5}$. Then we show that a constant known fraction of the least significant bits (LSBs) of both $d_p, d_q$ suffices to factor $N$ in polynomial time. Naturally, the larger $d_p,d_q$, the more LSBs are required. E.g. if $d_p, d_q$ are of size $N^{0.13}$, then we have to know roughly a $\frac 1 5$-fraction of their LSBs, whereas for $d_p, d_q$ of size $N^{0.2}$ we require already knowledge of a $\frac 2 3$-LSB fraction. Eventually, if $d_p, d_q$ are of full size $N^{0.5}$, we have to know all of their bits. Notice that as a side-product of our result we obtain a heuristic deterministic polynomial time factorization algorithm on input $(N,e,d_p,d_q)$.

2020 EUROCRYPT

We propose two heuristic polynomial memory collision finding algorithms for the low Hamming weight discrete logarithm problem in any abelian group $G$. The first one is a direct adaptation of the Becker-Coron-Joux (BCJ) algorithm for subset sum to the discrete logarithm setting. The second one significantly improves on this adaptation for all possible weights using a more involved application of the representation technique together with some new Markov chain analysis. In contrast to other low weight discrete logarithm algorithms, our second algorithm's time complexity interpolates to Pollard's $|G|^{\frac 1 2}$ bound for general discrete logarithm instances. We also introduce a new heuristic subset sum algorithm with polynomial memory that improves on BCJ's $2^{0.72n}$ time bound for random subset sum instances $a_1, \ldots, a_n, t \in \mathbb{Z}_{2^n}$.
Technically, we introduce a novel nested collision finding for subset sum, inspired by the NestedRho algorithm from Crypto '16, that recursively produces collisions. We first show how to instantiate our algorithm with run time $2^{0.649n}$. Using further tricks, we are then able to improve its complexity down to $2^{0.645n}$.

2018 CRYPTO

The slightly subexponential algorithm of Blum, Kalai and Wasserman (BKW) provides a basis for assessing LPN/LWE security. However, its huge memory consumption strongly limits its practical applicability, thereby preventing precise security estimates for cryptographic LPN/LWE instantiations. We provide the first time-memory trade-offs for the BKW algorithm. For instance, we show how to solve LPN in dimension $k$ in time $2^{\frac{4}{3} \frac{k}{\log k}}$ and memory $2^{\frac{2}{3} \frac{k}{\log k}}$. Using the Dissection technique due to Dinur et al. (Crypto '12) and a novel, slight generalization thereof, we obtain fine-grained trade-offs for any available (subexponential) memory while the running time remains subexponential. Reducing the memory consumption of BKW below its running time also allows us to propose a first quantum version QBKW for the BKW algorithm.

2017 TOSC

We study a generalization of the k-list problem, also known as the Generalized Birthday problem. In the k-list problem, one starts with k lists of binary vectors and has to find a set of vectors, one from each list, that sum to the all-zero target vector. In our generalized Approximate k-list problem, one has to find a set of vectors that sum to a vector of small Hamming weight $\omega$. Thus, we relax the condition on the target vector and allow for some error positions. This in turn helps us to significantly reduce the size of the starting lists, which determines the memory consumption, and the running time as a function of $\omega$. For $\omega = 0$, our algorithm achieves the original k-list run-time/memory consumption, whereas for $\omega = n/2$ it has polynomial complexity. As in the k-list case, our Approximate k-list algorithm is defined for all $k = 2^m$, $m > 1$. Surprisingly, we also find an Approximate 3-list algorithm that improves in the runtime exponent compared to its 2-list counterpart for all $0 < \omega < n/2$. To the best of our knowledge this is the first such improvement of some variant of the notoriously hard 3-list problem. As an application of our algorithm we compute small-weight multiples of a given polynomial with more flexible degree than with Wagner's algorithm from Crypto 2002 and with smaller time/memory consumption than with Minder and Sinclair's algorithm from SODA 2009.

Further publications: 2017 PKC · 2017 CRYPTO · 2017 ASIACRYPT · 2015 EUROCRYPT · 2012 EUROCRYPT · 2012 ASIACRYPT · 2011 ASIACRYPT · 2010 PKC · 2010 CRYPTO · 2009 ASIACRYPT · 2009 PKC · 2008 PKC · 2008 ASIACRYPT · 2007 CRYPTO · 2007 JOFC · 2006 ASIACRYPT · 2006 PKC · 2005 EUROCRYPT · 2005 EUROCRYPT · 2004 CRYPTO · 2004 PKC · 2004 PKC · 2003 CRYPTO · 2002 CRYPTO

#### Program Committees

PKC 2022 · Eurocrypt 2020 · Asiacrypt 2017 · Asiacrypt 2016 · Crypto 2016 · PKC 2016 · Asiacrypt 2015 · Eurocrypt 2014 · Crypto 2014 · PKC 2013 · Crypto 2012 · PKC 2011 · Eurocrypt 2011 · Eurocrypt 2010 · PKC 2008 · Asiacrypt 2007 · Eurocrypt 2007 · Eurocrypt 2007 · Eurocrypt 2006 · PKC 2006
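To make the square-root flavour of Odlyzko-style Meet-in-the-Middle concrete, here is a toy Python sketch for modular subset sum (a generic textbook illustration, not the representation-technique algorithms from the abstracts above): enumerating all sums of each half and meeting in a hash table replaces a $2^n$ search by roughly $2^{n/2}$ time and memory.

```python
from itertools import combinations

def mitm_subset_sum(a, t, n_bits):
    """Find index sets whose elements sum to t mod 2**n_bits."""
    mod = 1 << n_bits
    half = len(a) // 2
    left, right = a[:half], a[half:]
    # Phase 1: hash every subset sum of the left half.
    table = {}
    for r in range(len(left) + 1):
        for combo in combinations(range(len(left)), r):
            s = sum(left[i] for i in combo) % mod
            table.setdefault(s, combo)
    # Phase 2: for each right-half subset, look up the complementary sum.
    for r in range(len(right) + 1):
        for combo in combinations(range(len(right)), r):
            s = sum(right[i] for i in combo) % mod
            need = (t - s) % mod
            if need in table:
                return table[need], tuple(half + i for i in combo)
    return None

print(mitm_subset_sum([3, 9, 14, 7], 16, 5))  # ((1,), (3,)): 9 + 7 = 16
```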
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7623471617698669, "perplexity": 1226.8114454544461}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710902.80/warc/CC-MAIN-20221202114800-20221202144800-00744.warc.gz"}
http://openstudy.com/updates/4f71d354e4b07738f5adfc79
## LydiaBreez asked (3 years ago):

How long will it take for $300 to double when invested at 6% annual interest compounded twice a year?
A) 10.3 years B) 11.7 years C) 12.5 years D) 13.1 years E) 13.7 years
This is my last question.

satellite73:

$600=300(1.06)^x$, solve for x. No, that is wrong:

$600=300(1+\frac{.06}{2})^{2x}$, solve for x.

$600=300(1.03)^{2x}$, so $2=(1.03)^{2x}$, and right away you see that the 300 and 600 were not really part of the problem; doubling time just means doubling time.

$(1.03)^{2x}=2$, so $2x=\frac{\ln(2)}{\ln(1.03)}$, i.e. $x=\frac{\ln(2)}{2\ln(1.03)}$, then a calculator.

I get 11.7.
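The same computation in a couple of lines of Python:

```python
import math

# Solve (1 + 0.06/2)**(2*x) = 2 for x, the doubling time in years.
x = math.log(2) / (2 * math.log(1 + 0.06 / 2))
print(round(x, 1))  # 11.7 -> answer B
```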
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9988600015640259, "perplexity": 12146.642993924743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068749.35/warc/CC-MAIN-20150827025428-00178-ip-10-171-96-226.ec2.internal.warc.gz"}
https://ljahum.top/hgame2021/
# HGAME 2021 CTF crypto writeups

## EncryptedChats

Description: Million, Switch's fellow patient, comes to visit him in prison…

Switch: Hello, old pal
Million: Hello
Switch: So why is this guy here?
Liki: Because I'm here to record what you two talk about
Million: Fine
Million: But may I make one request?
Liki: ?
Million: Let's... move to a different group chat
Switch: Sure! Let me see, let's move to the additive group chat!
Switch: Hey, help us pick a prime g
Liki: ??, 12602983924735419868428783329859102652072837431735895060811258460532600319539509800915989811879506790207025505003183121812480524033163157114086741486989697
Million: Madam, could we trouble you to pick one more prime p for us?
Liki: ???... fine then, 30567260905179651419358486099834315837354102714690253338851161207042846254351374572818884286661092938876675032728700590336029243619773064402923830209873155153338320502164587381848849791304214084993139233581072431814555885408821184652544361671134564827265516331283076223247829980225591857643487356406284913560960657053777612115591241983729716542192518684003840806442329098770424504275465756739925434019460351138273272559738332984560095465809481270198689251655392941966835733947437503158486731906649716026200661065054914445245468517406404904444261196826370252359102324767986314473183183059212009545789665906197844518119, is that big enough?
Million: OK, I've chosen my a, so A = 6407001517522031755461029087358686699246016691953286745456203144289666065160284103094131027888246726980488732095429549592118968601737506427099198442788626223019135982124788211819831979642738635150279126917220901861977041911299607913392143290015904211117118451848822390856596017775995010697100627886929406512483565105588306151304249791558742229557096175320767054998573953728418896571838697779621641522372719890056962681223595931519174265357487072296679757688238385898442549594049002467756836225770565740860731932911280385359763772064721179733418453824127593878917184915316616399071722555609838785743384947882543058635 # A = g ^ a % p = pow(g, a, p)
Switch: okay, b is chosen too, B = 5522084830673448802472379641008428434072040852768290130448235845195771339187395942646105104638930576247008845820145438300060808178610210847444428530002142556272450436372497461222761977462182452947513887074829637667167313239798703720635138224358712513217604569884276513251617003838008296082768599917178457307640326380587295666291524388123169244965414927588882003753247085026455845320527874258783530744522455308596065597902210653744845305271468086224187208396213207085588031362747352905905343508092625379341493584570041786625506585600322965052668481899375651376670219908567608009443985857358126335247278232020255467723 # B = g ^ b % p = pow(g, b, p)
Liki: ????
Million: {'iv': 'd3811beb5cd2a4e1e778207ab541082b', 'encrypted_flag': '059e9c216bcc14e5d6901bcf651bee361d9fe42f225bc0539935671926e6c092'}
Switch: {'iv': 'b4259ed79d050dabc7eab0c77590a6d0', 'encrypted_flag': 'af3fe410a6927cc227051f587a76132d668187e0de5ebf0608598a870a4bbc89'}
Million: Goodbye, pal
Switch: Goodbye
Liki: ?????
```python
from Crypto.Cipher import AES
import hashlib
import gmpy2 as gp
from binascii import a2b_hex

A = 6407001517522031755461029087358686699246016691953286745456203144289666065160284103094131027888246726980488732095429549592118968601737506427099198442788626223019135982124788211819831979642738635150279126917220901861977041911299607913392143290015904211117118451848822390856596017775995010697100627886929406512483565105588306151304249791558742229557096175320767054998573953728418896571838697779621641522372719890056962681223595931519174265357487072296679757688238385898442549594049002467756836225770565740860731932911280385359763772064721179733418453824127593878917184915316616399071722555609838785743384947882543058635
B = 5522084830673448802472379641008428434072040852768290130448235845195771339187395942646105104638930576247008845820145438300060808178610210847444428530002142556272450436372497461222761977462182452947513887074829637667167313239798703720635138224358712513217604569884276513251617003838008296082768599917178457307640326380587295666291524388123169244965414927588882003753247085026455845320527874258783530744522455308596065597902210653744845305271468086224187208396213207085588031362747352905905343508092625379341493584570041786625506585600322965052668481899375651376670219908567608009443985857358126335247278232020255467723
p = 30567260905179651419358486099834315837354102714690253338851161207042846254351374572818884286661092938876675032728700590336029243619773064402923830209873155153338320502164587381848849791304214084993139233581072431814555885408821184652544361671134564827265516331283076223247829980225591857643487356406284913560960657053777612115591241983729716542192518684003840806442329098770424504275465756739925434019460351138273272559738332984560095465809481270198689251655392941966835733947437503158486731906649716026200661065054914445245468517406404904444261196826370252359102324767986314473183183059212009545789665906197844518119
g = 12602983924735419868428783329859102652072837431735895060811258460532600319539509800915989811879506790207025505003183121812480524033163157114086741486989697

# Additive-group "DH": A = a*g mod p and B = b*g mod p,
# so a and b fall out by multiplying with the modular inverse of g.
x1 = gp.invert(g, p) * A % p
x2 = gp.invert(g, p) * B % p
key1 = x1 * x2 * g % p  # shared secret a*b*g mod p

shared_secret = key1
sha1 = hashlib.sha1()
sha1.update(str(shared_secret).encode('ascii'))
key = sha1.digest()[:16]

iv1 = a2b_hex('d3811beb5cd2a4e1e778207ab541082b')
iv2 = a2b_hex('b4259ed79d050dabc7eab0c77590a6d0')
data1 = a2b_hex('059e9c216bcc14e5d6901bcf651bee361d9fe42f225bc0539935671926e6c092')
data2 = a2b_hex('af3fe410a6927cc227051f587a76132d668187e0de5ebf0608598a870a4bbc89')

decrypt1 = AES.new(key, AES.MODE_CBC, iv1)
decrypt2 = AES.new(key, AES.MODE_CBC, iv2)
flag1 = decrypt1.decrypt(data1)
flag2 = decrypt2.decrypt(data2)
print(flag1, flag2)
# hgame{AdD!tiVe-Gr0up~DH_K3y+eXch@nge^4nd=A3S}
```

## Treasure Hunt Adventure 2 (LFSR basics)

The server runs a 40-bit LFSR and asks you to guess its 4-bit output each round. Deliberately losing the first ten rounds leaks ten 4-bit secrets; since each step shifts the register left and feeds the output bit in at the bottom, those 40 output bits are exactly the register state after ten rounds, so a local copy seeded with them replays all later output.

```python
from icecream import *
from pwn import *

# nc 182.92.108.71 30607
sh = remote('182.92.108.71', 30607)

# Lose the first ten rounds on purpose; the server reveals each 4-bit secret.
ans = []
for i in range(10):
    buf = sh.recvuntil('guess:')
    sh.sendline('-1')
    sh.recvuntil('Wrong, the secret is ')
    buf = int(sh.recvuntil('\n')[:-1])
    ans.append(buf)
print(ans)

# Concatenate the forty leaked bits: this is the register state after
# ten rounds, which we use as the seed of a local replica.
bstr = ''
for i in ans:
    ic(i, bin(i)[2:].rjust(4, '0'))
    bstr += bin(i)[2:].rjust(4, '0')
s = bstr
ic(bstr, len(s))
init = int(bstr[:40], 2)
ic(init)

class LXFIQNN():
    def __init__(self, init, mask, length):
        self.init = init
        self.mask = mask
        self.lengthmask = 2**(length + 1) - 1

    def next(self):
        nextdata = (self.init << 1) & self.lengthmask
        # Output bit = parity of the tapped (masked) state bits.
        i = self.init & self.mask & self.lengthmask
        output = 0
        while i != 0:
            output ^= (i & 1)
            i = i >> 1
        nextdata ^= output
        self.init = nextdata
        return output

    def random(self, nbit):
        output = 0
        for _ in range(nbit):
            output <<= 1
            output |= self.next()
        return output

prng = LXFIQNN(init, 0b1011001010001010000100001000111011110101, 40)

# Answer the remaining rounds with the predicted outputs.
for i in range(81):
    secret = prng.random(4)
    sh.sendline(str(secret))
    print(sh.recvuntil('guess'))
sh.interactive()
# hgame{lfsr_121a111y^use-in&crypto}
```

## Treasure Hunt Adventure 1

Three stages of cracking a linear congruential generator, with progressively less information given at each stage (see the note after the script for why stage 3 works):

```python
from icecream import *
from libnum import *
from pwn import *

# Stage 1: multiplier a and modulus c are printed; recover the
# increment b from two consecutive outputs.
def t1(sh):
    s = sh.recvline().decode()
    ic(s)
    s = s.split(',')
    print(((s[0][1:]), (s[1][:-1])))
    a, c = (int(s[0][1:]), int(s[1][:-2]))
    m1 = int(sh.recvline())
    m2 = int(sh.recvline())
    b = (m2 - a * m1) % c
    sh.sendline(str(b))

# Stage 2: only the modulus c is given; recover a and b from three
# consecutive outputs, using m3 - m2 = a*(m2 - m1) mod c.
def t2(sh):
    c = int(sh.recvline())
    m = [0] + [int(sh.recvline()) for i in range(3)]
    a = (m[2] - m[3]) * invmod(m[1] - m[2], c)
    a %= c
    b = (m[2] - a * m[1]) % c
    sh.sendline(str(a))
    sh.sendline(str(b))

# Stage 3: nothing is given; recover the modulus as the gcd of
# combinations of outputs that are guaranteed to be multiples of c.
def t3(sh):
    m = [123] + [int(sh.recvline()) for i in range(7)]
    ic(m, len(m))
    tmp1 = abs((m[4] - m[3]) * (m[2] - m[1]) - (m[3] - m[2]) * (m[3] - m[2]))
    tmp2 = abs((m[7] - m[6]) * (m[5] - m[4]) - (m[6] - m[5]) * (m[6] - m[5]))
    ic(tmp1, tmp2)
    tmp = gcd(tmp1, tmp2)
    sh.sendline(str(tmp))

# nc 182.92.108.71 30641
def main():
    tot = 0
    while 1:
        tot += 1
        try:
            sh = remote('182.92.108.71', 30641)
            t1(sh)
            t2(sh)
            t3(sh)
            buf = sh.recvall()
            print(buf)
            if b'win' in buf:
                print(f'try {tot} times')
                break
        except:
            pass

if __name__ == '__main__':
    main()

'''
[+] Receiving all data: Done (56B)
[*] Closed connection to 182.92.108.71 port 30641
b'win\nhgame{Cracking^prng_Linear)Congruential&Generators}\n'
try 46 times
'''
```
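Why the stage-3 gcd works (my annotation; this is the standard LCG modulus-recovery trick, not spelled out in the original writeup): with $m_{i+1} \equiv a\,m_i + b \pmod{c}$, the differences $d_i = m_{i+1} - m_i$ satisfy $d_{i+1} \equiv a\,d_i \pmod{c}$, hence

$$d_{i+2}\,d_i - d_{i+1}^2 \equiv (a^2 d_i)\,d_i - (a\,d_i)^2 \equiv 0 \pmod{c}.$$

Each such combination is therefore a multiple of $c$, and the gcd of a few of them equals $c$ with good probability. The occasional spurious extra factor is why the script simply reconnects and retries until it wins (46 attempts in the run shown).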
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2642403542995453, "perplexity": 7896.344007045392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103915196.47/warc/CC-MAIN-20220630213820-20220701003820-00019.warc.gz"}
http://mathoverflow.net/questions/33877/hochschild-cohomology-of-a-and-of-mod-a/33889
# Hochschild (co)homology of A and of Mod_A

Let A be an algebra (or dg algebra). Where can I find a proof of HH_*(A) = HH_*(Mod_A) and HH^*(A) = HH^*(Mod_A)? (And does this hold for any A?) Here Mod_A is, e.g., the category of left A-modules.

One reason why this is interesting/important/useful is that many categories which arise "in nature" are of the form Mod_A. For example, there is a theorem of Bondal and van den Bergh which states that derived categories of a large class of varieties (I forget their exact hypotheses) are equivalent to Mod_A for some A. Dyckerhoff also proved that categories of matrix factorizations are of this form. By mirror symmetry, Fukaya-type categories should be of this form as well... Anyway, to compute HH of such a category, it suffices to find this A and then compute HH(A). I think that it should generally(?) be easier to compute HH of an algebra than HH of a category. (Of course, finding this A can be a very nontrivial task.)

- Sorry for not providing any background. Perhaps this is bad etiquette. I just started a discussion about this at meta: tea.mathoverflow.net/discussion/564/… – Kevin H. Lin Jul 30 '10 at 1:30
- You absolutely don't have to write a tutorial on Hochschild (co)homology, but, at least, you should explain the notation, just like you would in a paper. – Victor Protsak Jul 30 '10 at 6:00
- I've now expanded the text of my question :-) – Kevin H. Lin Jul 30 '10 at 23:23
- Sorry to ask for more, but how do you compute HH of a category? Take some sort of cyclic nerve? – Sean Tilson Jul 31 '10 at 5:13
- $\mathcal{Nat}(\mathrm{Id},[n])$ – Aaron Bergman Jul 31 '10 at 18:54

Answer: Basically this follows from the fact that the derived category of bimodules over two algebras is equivalent to the (suitably defined) functor category between the derived categories of modules over each algebra. See, say, Toen's paper on derived Morita equivalence. Then the identity functor is given by the algebra itself interpreted as a bimodule, so the Hochschild cohomology is $\mathrm{Ext}^i_{A\text{-}A}(A,A)$. You can compute this using the bar resolution, and a quick calculation gives you the usual definition of Hochschild cohomology.

- Very nice. It seems like everything is answered very nicely by Toen's Morita theory. – Kevin H. Lin Jul 31 '10 at 0:04
- Are the bar resolution and Hochschild complex really the same? I thought that the differentials were a little different. One is a simplicial set and the other is cyclic. Or are you only using the bar resolution to compute the Ext group? – Sean Tilson Jul 31 '10 at 5:17
- The latter. It's Lemma 9.1.3 of Weibel, for example. – Aaron Bergman Jul 31 '10 at 13:17
- This is the answer for cohomology. What is the corresponding answer for homology? – Kevin H. Lin Apr 5 '11 at 22:57
- Presumably you're taking derived tensor products in the endofunctor category, but I'm not sure of a reference. It must be in Lurie or Ben-Zvi-Francis-Nadler. – Aaron Bergman Apr 6 '11 at 4:11

Answer: I guess it follows from results in [Lowen, Wendy; Van den Bergh, Michel. Hochschild cohomology of abelian categories and ringed spaces. Adv. Math. 198 (2005), no. 1, 172-221. MR2183254 (2007d:18017)]. For algebras $A$, at least, it follows more simply from the fact that the categories $\mathrm{Mod}(A)$ and $A$ are Morita equivalent. That must have been proved by Mitchell or Freyd...

- Despite having tagged this question with "morita-theory", I don't really know anything about Morita theory. In particular, I don't know what it means for an algebra to be Morita equivalent to a category. – Kevin H. Lin Jul 30 '10 at 2:09
- @Kevin: an algebra can be seen as a linear category with one object, so let's do that. Next, if $C$ is a linear category, its modules are the functors $C\to\mathrm{Vect}$, and they form a category ${}_C\mathrm{Mod}$. Now, two linear categories $C$ and $C'$ are Morita equivalent if they have equivalent module categories ${}_C\mathrm{Mod}$ and ${}_{C'}\mathrm{Mod}$. Finally, under any sensible definition, Hochschild cohomology is invariant under Morita equivalences. – Mariano Suárez-Alvarez Jul 30 '10 at 2:12
- OK. Cool. Thank you!! – Kevin H. Lin Jul 30 '10 at 2:17
- @Mariano: In your second paragraph, did you mean abelian algebras? If not, what about, say, the path algebra of the A_2 quiver? – Kevin Walker Jul 30 '10 at 2:39
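For reference, here is the "quick calculation" from the accepted answer, spelled out in standard notation (textbook material, essentially Weibel's Lemma 9.1.3 cited in the comments, rather than anything specific to this thread). The bar resolution

$$\cdots \longrightarrow A\otimes A^{\otimes 2}\otimes A \longrightarrow A\otimes A\otimes A \longrightarrow A\otimes A \longrightarrow A \longrightarrow 0$$

is a resolution of $A$ by free $A$-bimodules, so

$$\mathrm{HH}^n(A)=\mathrm{Ext}^n_{A\text{-}A}(A,A), \qquad \mathrm{HH}_n(A)=\mathrm{Tor}_n^{A\text{-}A}(A,A)=H_n\bigl(A\otimes^{\mathbf{L}}_{A\text{-}A}A\bigr),$$

and the adjunction $\mathrm{Hom}_{A\text{-}A}(A\otimes A^{\otimes n}\otimes A,\,A)\cong \mathrm{Hom}_k(A^{\otimes n},A)$ turns the Ext description into the classical Hochschild cochain complex with differential

$$(df)(a_1,\dots,a_{n+1})=a_1 f(a_2,\dots,a_{n+1})+\sum_{i=1}^{n}(-1)^i f(a_1,\dots,a_i a_{i+1},\dots,a_{n+1})+(-1)^{n+1} f(a_1,\dots,a_n)\,a_{n+1}.$$

The Tor description is also what the homology follow-up in the comments points at: $\mathrm{HH}_*(A)$ is computed by the derived tensor product of $A$ with itself over the enveloping algebra.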
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9633582234382629, "perplexity": 654.8193074657631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936465487.60/warc/CC-MAIN-20150226074105-00247-ip-10-28-5-156.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/electric-fields-in-parallel-plates.174498/
# Electric Fields in Parallel Plates

1. Jun 19, 2007

### salman213

1. The magnitude of the electric field between the two plates of a parallel plate capacitor is 4.7 x 10^4 N/C. If the charge on each plate increases by a factor of 3, what happens to the electric field? Increase by a factor of 3 or 9? Decrease by a factor of 3 or 9? Not affected?

2. Electric field for plates = V/d

3. That's the only equation I know, and we did not learn any equation that represents the charges on each plate, so I'm totally confused about how to relate the question... maybe logically? Umm, since the charge on each plate increases by 3, the electric field will get stronger by the same factor of 3?

2. Jun 19, 2007

### cepheid

Staff Emeritus

I think you're on the right track, but you have to SHOW it. Hint: what's the definition of capacitance?

3. Jun 19, 2007

### salman213

So I have to use formulas? Is there a formula for the charge on a plate..? Because I only know E = V/d, and this has nothing to do with increasing or decreasing distance or voltage; it has to do with the charges on the plate... :s

4. Jun 19, 2007

### salman213

From what I found in my notes I also have the equation Ee = qV, but that q represents the charge on an electron between the two plates. That's why I was thinking it's more of a logical question that I don't get, rather than a mathematical proof... but maybe it is, I don't know... any advice?

5. Jun 19, 2007

### salman213

OK, I found Q = CV on the net. Now I guess that means, since E = V/d, if E is increased then so is V, and in turn so is Q? Because it's not being divided... therefore it increases by a factor of 3 as well?

6. Jun 20, 2007

### cepheid

Staff Emeritus

Yeah, basically. The distance between the plates doesn't change, therefore the capacitance doesn't, and tripling the charge means tripling the voltage, which means tripling the electric field.
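A compact way to write out cepheid's hint (standard parallel-plate relations; the plate area $$A_{plate}$$ is introduced here only for the derivation and is not part of the original thread):

$$E = \frac{V}{d}, \qquad Q = CV \ \text{ with } \ C = \frac{\varepsilon_0 A_{plate}}{d} \ \text{ fixed} \quad\Longrightarrow\quad E = \frac{Q}{C\,d} = \frac{Q}{\varepsilon_0 A_{plate}} \propto Q,$$

so with the geometry unchanged, tripling $$Q$$ triples $$E$$: the field becomes $$3 \times 4.7 \times 10^4 \approx 1.4 \times 10^5 \ \mathrm{N/C}$$.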
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8673315644264221, "perplexity": 1265.373871910667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00031-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/spacetime-diagram.16587/
# Spacetime Diagram

1. Mar 19, 2004

### Severian596

I would appreciate it if someone could please check my spacetime diagram for the following scenario. I appreciate any help. I'm tutoring myself based on texts and have no instructor to ask.

Scenario: S frame of reference with S' superimposed. S' is moving at velocity $$v=3/4 c$$ in the +x direction with respect to S. Therefore $$\Delta y= \Delta y' = 0$$ and $$\Delta z= \Delta z' = 0$$.

I calculated $$\gamma=1.5$$ (with a little approximation). I calculated the angle $$\theta=33.75$$ by taking 45 degrees times 3/4 (the beta of S'). Is length l correct? If I instead superimposed S on S', would the distance of $$\Delta ct$$ along the superimposed ct axis be 0.66 and the length of $$\Delta ct'$$ be 1?

Thanks for any help!!

2. Mar 20, 2004

### pmb_phy

I believe that you did this incorrectly. Notice that each point on the ct' axis represents the origin of the S' frame at different values of ct, i.e. the ct' axis is the worldline traced out by the origin of the S' frame. So the abscissa of each event on this line is x and the ordinate is ct, where x = vt = (v/c)(ct) = beta*(ct). If you rotated the spacetime diagram and plotted x as a function of ct, then the ct' axis would have a slope of v/c. If theta is the angle the ct' axis makes with the ct axis, then tan(theta) = v/c and therefore theta = arctan(v/c) = arctan(3/4) = 36.87 degrees.

3. Mar 22, 2004

### Severian596

pmb_phy, your willingness to help me learn this topic is truly a blessing. I've created a brand new diagram and thrown superimposing spacetime diagrams to the wind. I've instead taken the more conservative route to help me learn the relationships between S and S'. Here's a link to the page... use IE if possible. If you don't get an image you'll have to look at the less attractive second link:

http://copperplug.no-ip.org/homesite/Spacetime_files/Spacetime_frames.htm

http://copperplug.no-ip.org/homesite/Spacetime_files/Spacetime_gif_1.gif

(don't neglect the Zoom button in the lower right of the first link, it's extremely nice)

The problem was a bit more difficult than a more basic "event X at the origin" problem. In this problem I placed some time prior to the event A, and therefore had to plot all of the events using $$\gamma$$ conversions for length and time. If my graphs are correct, the event order changes for S' from A/D simultaneous and B/C simultaneous to D, A, B, C in that order. Is that right!? And if so, is this because S' is moving toward planet L and observes planet L's time speeding up?

If anyone could confirm or deny its validity I'd appreciate it :)

Last edited: Mar 22, 2004

4. Mar 25, 2004

### Severian596

In continuing my research in SR I found that this is indeed NOT correct. You cannot determine a sequence of events just by assuming the lowest (ct) coordinate values happen first. You must account for "light lag" and trace a 45 degree line from the event to the worldline of the reference frame to find their proper order! It's most succinctly put this way: "we cannot make any pair of events change their order simply by changing frames!"

5. Mar 25, 2004

### pmb_phy

That is incorrect. I'm being lazy and not taking a close look at your diagrams (too much to think about and my eyes are straining to see it). But I think I know what you're getting at. And you can determine the sequence of events by assuming the lowest values happen first. Whether there are light signals flying around is irrelevant in regard to determining the sequence of events.

This sounds like a nice idea for a web page on my site. Let me try and whip one up tomorrow.

6. Mar 26, 2004

### Severian596

Please notice the button in the VERY far lower right corner that says "Fit In Window." This offers zooms of the image of up to 1200%; that should be big enough for ya!

Your statement is very interesting, pmb_phy. It has me wondering what it means for a sequence of events to occur. For a boring example, let's have A throw a ball to B at event C, and B catches the ball at event D. Due to causality the events can NEVER change order in ANY frame of reference, am I right? Otherwise we could manipulate a theoretical traveler's velocity so that event D happens before event C.

But I'm reading as I'm posting this... I've found something that clears up the statement I made in my last post. I think it can be summed up like this (straight from the text):

timelike separation (positive interval): events must occur in the same order in all frames.

spacelike separation (negative interval): events can occur in different orders in different frames because they're causally disconnected.

null separation: events must have a null separation in all frames.

7. Mar 26, 2004

### pmb_phy

I did. It was still hard to read. I probably have my screen settings on different settings than you do.

Cool. By the way, you can call me Pete.

For me it means something like this: at t = -1 event A occurred at x = 6, and at t = +1 event B occurred at x = 12. So event A happened before event B. For it to be possible for the sequence of events to be different in different frames of reference, the events must have a spacelike spacetime separation. You gave an example where the events have a timelike spacetime separation.

Me too. [:-)]

Note that for the sequence of events to be frame dependent they cannot occur at the same place.

Exactly! If you try drawing this on a spacetime diagram then it will become clear. I'll try to make one later today or this weekend.

Pete

8. Mar 26, 2004

### Severian596

Sweet, thanks Pete. I'm Brad btw. Pleeeeased to meet you!

9. Mar 26, 2004

### pmb_phy

I'll try to get to this in the weekend, but I pulled my back out and am having problems sitting in this chair. In the meantime I suggest that you work with the spacetime diagrams and try to convince yourself of all this by working with some examples that you create. Don't do any math. Just draw pictures. Note that lines of simultaneity are lines which are parallel to the x-axis. So events above such a line come later, and events below these lines come before.

Working with spacetime diagrams is nice since, when you master them, you can solve problems of a qualitative nature such as this one in a speedier fashion. You just picture the spacetime diagram and you solve it like that. Mind you, it's not easy, but it's worth doing. I don't think I'll ever master these because just when I thought I knew all aspects of them it turns out there was something I didn't know! :-)

Pete
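Two quantitative footnotes to the thread (standard special relativity, added here for reference). For $$\beta = v/c = 3/4$$:

$$\tan\theta = \beta = \tfrac{3}{4} \;\Rightarrow\; \theta = \arctan\tfrac{3}{4} \approx 36.87^\circ, \qquad \gamma = \frac{1}{\sqrt{1-\beta^2}} = \frac{4}{\sqrt{7}} \approx 1.51,$$

which confirms Pete's angle and Brad's $$\gamma \approx 1.5$$. The order-of-events rule quoted from the text is the invariance of the interval

$$s^2 = (c\,\Delta t)^2 - (\Delta x)^2: \quad s^2 > 0 \ \text{timelike (order fixed)}, \quad s^2 = 0 \ \text{null}, \quad s^2 < 0 \ \text{spacelike (order frame-dependent)}.$$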
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5215011239051819, "perplexity": 959.7992419235868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720471.17/warc/CC-MAIN-20161020183840-00284-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.scholars.northwestern.edu/en/publications/enhanced-average-thermoelectric-figure-of-merit-of-n-type-pbte-su
# Enhanced average thermoelectric figure of merit of n-type PbTe1-xIx-MgTe

Priyanka Jood, Michihiro Ohta*, Masaru Kunii, Xiaokai Hu, Hirotaka Nishiate, Atsushi Yamamoto, Mercouri G. Kanatzidis

*Corresponding author for this work

Research output: Contribution to journal, Article, peer-review. 54 Scopus citations.

## Abstract

The thermoelectric properties of sintered samples of n-type PbTe1-xIx-yMgTe (x = 0.0012-0.006; y = 0 and 1%) were investigated over the temperature range of 300 K to 900 K. Scanning electron microscopy revealed two different length scales of grains in samples with higher I and MgTe contents, and a homogeneous microstructure in samples with lower dopant content. Transmission electron microscopy revealed ubiquitous spherical nanoprecipitates in PbTe1-xIx with MgTe, and nanoscale disk-like precipitates in PbTe1-xIx both with and without MgTe. The nanostructured PbTe showed higher Seebeck coefficients than expected. We also observed a slower rate of increase in the electrical resistivity with rising temperature in PbTe1-xIx-yMgTe below ∼550 K, leading to a higher thermoelectric power factor. The nanostructures and mixed microstructures scatter phonons, reducing the lattice thermal conductivity to as low as 0.4 W K-1 m-1 at 600 K. A high ZT of 1.2 at 700 K was achieved, as well as a high average ZT of 0.8, in PbTe0.996I0.004-1 mol% MgTe for a cold-side temperature of 303 K and a hot-side temperature of 873 K.

Original language: English (US)
Pages: 10401-10408
Number of pages: 8
Journal: Journal of Materials Chemistry C
Volume: 3
Issue: 40
DOI: https://doi.org/10.1039/c5tc01652e
Published - 2015

## ASJC Scopus subject areas

- Chemistry (all)
- Materials Chemistry
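For context, the figure of merit quoted in the abstract is the standard dimensionless ZT (this definition is added here and is not part of the record):

$$ZT = \frac{S^2\,T}{\rho\,\kappa}, \qquad \kappa = \kappa_{\mathrm{lattice}} + \kappa_{\mathrm{electronic}},$$

where $S$ is the Seebeck coefficient, $\rho$ the electrical resistivity, and $\kappa$ the total thermal conductivity. A larger power factor $S^2/\rho$ and a lower lattice thermal conductivity (0.4 W K-1 m-1 at 600 K above) both raise ZT, and the "average ZT" is ZT averaged over the 303-873 K operating range between the cold and hot sides.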
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8782594203948975, "perplexity": 11383.924018006066}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00241.warc.gz"}
https://worldwidescience.org/topicpages/s/surrogate+endpoint+biomarker.html
#### Sample records for surrogate endpoint biomarker

1. CFTR biomarkers: Time for promotion to surrogate end-point?
NARCIS (Netherlands) · De Boeck, K.; Kent, L.; Davies, J.; Derichs, N.; Amaral, M.; Rowe, S. M.; Middleton, P.; de Jonge, Hendrik; Bronsveld, I.; Wilschanski, M.; Melotti, P.; Danner-Boucher, I.; Boerner, S.; Fajac, I.; Southern, K.; de Nooijer, R. A.; Bot, A.; de Rijke, Y.; de Wachter, E.; Leal, T.; Vermeulen, F.; Hug, M. J.; Rault, G.; Nguyen-Khoa, T.; Barreto, C.; Proesmans, M.; Sermet-Gaudelus, I. · 2013-01-01
In patients with cystic fibrosis, cystic fibrosis transmembrane conductance regulator (CFTR) biomarkers, such as sweat chloride concentration and/or nasal potential difference, are used as end-points of efficacy in phase-III clinical trials with the disease modifying drugs ivacaftor (VX-770), VX809...

2. CFTR biomarkers: Time for promotion to surrogate end-point?
NARCIS (Netherlands) · K. de Boeck; L. Kent; J. Davies (J.); N. Derichs; M.D. Amaral (Margarida); S.M. Rowe (S.); P. Middleton (P.); H.R. de Jonge (Hugo); I. Bronsveld (Inez); M. Wilschanski (Michael); P. Melotti; I. Danner-Boucher (I.); S. Boerner (S.); I. Fajac; K. Southern; R.A. de Nooijer; A.G. Bot (Alice); Y.B. de Rijke (Yolanda); E. de Wachter (E.); T. Leal (Teresinha); F. Vermeulen; M. Hug; G. Rault (G.); T. Nguyen-Khoa (T.); C. Barreto (C.); W. Proesmans (Willem); I. Sermet-Gaudelus (I.) · 2013-01-01
In patients with cystic fibrosis, cystic fibrosis transmembrane conductance regulator (CFTR) biomarkers, such as sweat chloride concentration and/or nasal potential difference, are used as end-points of efficacy in phase-III clinical trials with the disease modifying drugs ivacaftor (VX-...
3. Definitions and validation criteria for biomarkers and surrogate endpoints: development and testing of a quantitative hierarchical levels of evidence schema
DEFF Research Database (Denmark) · Lassere, Marissa N; Johnson, Kent R; Boers, Maarten · 2007-01-01
... endpoints, and leading indicators, a quantitative surrogate validation schema was developed and subsequently evaluated at a stakeholder workshop. RESULTS: The search identified several classification schema and definitions. Components of these were incorporated into a new quantitative surrogate validation... of the National Institutes of Health definitions of biomarker, surrogate endpoint, and clinical endpoint was useful. CONCLUSION: Further development and application of this schema provides incentives and guidance for effective biomarker and surrogate endpoint research, and more efficient drug discovery...

4. Definitions and validation criteria for biomarkers and surrogate endpoints: development and testing of a quantitative hierarchical levels of evidence schema
DEFF Research Database (Denmark) · Lassere, Marissa N; Johnson, Kent R; Boers, Maarten · 2007-01-01
... to develop a hierarchical schema that systematically evaluates and ranks the surrogacy status of biomarkers and surrogates, and to obtain feedback from stakeholders. METHODS: After a systematic search of Medline and Embase on biomarkers, surrogate (outcomes, endpoints, markers, indicators), intermediate endpoints, and leading indicators, a quantitative surrogate validation schema was developed and subsequently evaluated at a stakeholder workshop. RESULTS: The search identified several classification schema and definitions. Components of these were incorporated into a new quantitative surrogate validation... of the National Institutes of Health definitions of biomarker, surrogate endpoint, and clinical endpoint was useful. CONCLUSION: Further development and application of this schema provides incentives and guidance for effective biomarker and surrogate endpoint research, and more efficient drug discovery...

5. Definitions and validation criteria for biomarkers and surrogate endpoints: development and testing of a quantitative hierarchical levels of evidence schema
DEFF Research Database (Denmark) · Lassere, Marissa N; Johnson, Kent R; Boers, Maarten · 2007-01-01
OBJECTIVE: There are clear advantages to using biomarkers and surrogate endpoints, but concerns about clinical and statistical validity and systematic methods to evaluate these aspects hinder their efficient application. Our objective was to review the literature on biomarkers and surrogates to d...

6. Biomarkers and surrogate endpoints for normal-tissue effects of radiation therapy: the importance of dose-volume effects
Science.gov (United States) · Bentzen, Søren M.; Parliament, Matthew; Deasy, Joseph O.; Dicker, Adam; Curran, Walter J.; Williams, Jacqueline P.; Rosenstein, Barry S. · 2012-01-01
Biomarkers are of interest for predicting or monitoring normal tissue toxicity of radiation therapy. Advances in molecular radiobiology provide novel leads in the search for normal tissue biomarkers with sufficient sensitivity and specificity to become clinically useful. This paper reviews examples of studies of biomarkers as predictive markers, as response markers or as surrogate endpoints for radiation side-effects. Single nucleotide polymorphisms (SNPs) are briefly discussed in the context of candidate gene and genome wide association studies.
The importance of adjusting for radiation dose distribution in normal tissue biomarker studies is underlined. Finally, research priorities in this field are identified and discussed. PMID:20171510

7. Surrogate endpoints and emerging surrogate endpoints for risk reduction of cardiovascular disease.
Science.gov (United States) · Rasnake, Crystal M; Trumbo, Paula R; Heinonen, Therese M · 2008-02-01
This article reviews surrogate endpoints and emerging biomarkers that were discussed at the annual "Cardiovascular Biomarkers and Surrogate Endpoints" symposium cosponsored by the US Food and Drug Administration (FDA) and the Montreal Heart Institute. The FDA's Center for Food Safety and Applied Nutrition (CFSAN) uses surrogate endpoints in its scientific review of a substance/disease relationship for a health claim. CFSAN currently recognizes three validated surrogate endpoints: blood pressure, blood total cholesterol, and blood low-density lipoprotein (LDL) concentration in its review of a health claim for cardiovascular disease (CVD). Numerous potential surrogate endpoints of CVD are being evaluated as the pathophysiology of heart disease is becoming better understood. However, these emerging biomarkers need to be validated as surrogate endpoints before they are used by CFSAN in the evaluation of a CVD health claim.

8. Surrogate Endpoints in Suicide Research
Science.gov (United States) · Wortzel, Hal S.; Gutierrez, Peter M.; Homaifar, Beeta Y.; Breshears, Ryan E.; Harwood, Jeri E. · 2010-01-01
Surrogate endpoints frequently substitute for rare outcomes in research. The ability to learn about completed suicides by investigating more readily available and proximate outcomes, such as suicide attempts, has obvious appeal. However, concerns with surrogates from the statistical science perspective exist, and mounting evidence from...

9. Biomarkers and Surrogate Endpoints in Uveitis: The Impact of Quantitative Imaging.
Science.gov (United States) · Denniston, Alastair K; Keane, Pearse A; Srivastava, Sunil K · 2017-05-01
Uveitis is a major cause of sight loss across the world. The reliable assessment of intraocular inflammation in uveitis ('disease activity') is essential in order to score disease severity and response to treatment. In this review, we describe how 'quantitative imaging', the approach of using automated analysis and measurement algorithms across both standard and emerging imaging modalities, can develop objective instrument-based measures of disease activity. This is a narrative review based on searches of the current world literature using terms related to quantitative imaging techniques in uveitis, supplemented by clinical trial registry data and expert knowledge of surrogate endpoints and outcome measures in ophthalmology. Current measures of disease activity are largely based on subjective clinical estimation, and are relatively insensitive, with poor discrimination and reliability. The development of quantitative imaging in uveitis is most established in the use of optical coherence tomographic (OCT) measurement of central macular thickness (CMT) to measure severity of macular edema (ME). The transformative effect of CMT in the clinical assessment of patients with ME provides a paradigm for the development and impact of other forms of quantitative imaging. Quantitative imaging approaches are now being developed and validated for other key inflammatory parameters such as anterior chamber cells, vitreous haze, retinovascular leakage, and chorioretinal infiltrates.
As new forms of quantitative imaging in uveitis are proposed, the uveitis community will need to evaluate these tools against the current subjective clinical estimates and reach a new consensus for how disease activity in uveitis should be measured. The development, validation, and adoption of sensitive and discriminatory measures of disease activity is an unmet need that has the potential to transform both drug development and routine clinical care for the patient with uveitis.

10. Biomarkers and Surrogate Endpoints: How and When might They Impact Drug Development?
Directory of Open Access Journals (Sweden) · Chetan D. Lathia · 2002-01-01
As the pharmaceutical industry starts developing novel molecules based on molecular biology principles and a better understanding of the human genome, it becomes increasingly important to develop early indicators of activity and/or toxicity. Biomarkers are measurements based on the molecular pharmacology and/or pathophysiology of the disease being evaluated that may assist with decision-making in various phases of drug development. The utility of biomarkers in the development of drugs is described in this review. Additionally, the utility of pharmacokinetic data in drug development is described. Development of biomarkers may help reduce the cost of drug development by allowing key decisions earlier in the drug development process. Additionally, biomarkers may be used to select patients who have a high likelihood of benefit, or they could be used by clinicians to evaluate the potential for efficacy after the start of treatment.

11. Biomarkers and Surrogate Endpoints: How and When might They Impact Drug Development?
OpenAIRE · Chetan D. Lathia · 2002-01-01
As the pharmaceutical industry starts developing novel molecules based on molecular biology principles and a better understanding of the human genome, it becomes increasingly important to develop early indicators of activity and/or toxicity. Biomarkers are measurements based on the molecular pharmacology and/or pathophysiology of the disease being evaluated that may assist with decision-making in various phases of drug development. The utility of biomarkers in the development of drugs i...

12. The use of surrogate endpoints in regulating medicines for cardio-renal disease: opinions of stakeholders.
Directory of Open Access Journals (Sweden) · Bauke Schievink
AIM: There is discussion whether medicines can be authorized for the market based on evidence from surrogate endpoints. We assessed the opinions of different stakeholders on this topic. METHODS: We conducted an online questionnaire that targeted various stakeholder groups (regulatory agencies, pharmaceutical industry, academia, relevant public sector organisations) and medical specialties (cardiology or nephrology vs. other). Participants were enrolled through purposeful sampling. We inquired about the conditions under which surrogate endpoints can be used, the validity of various cardio-renal biomarkers, and new approaches for biomarker use. RESULTS: Participants agreed that surrogate endpoints can be used when the surrogate is scientifically valid (5-point Likert response format, mean score: 4.3, SD: 0.9) or when there is an unmet clinical need (mean score: 3.8, SD: 1.2). Industry participants agreed to a greater extent than regulators and academics.
However, out of four proposed surrogates (blood pressure (BP), HbA1c, albuminuria, CRP) for cardiovascular outcomes or end-stage renal disease, only the use of BP for cardiovascular outcomes was deemed moderately accurate (mean: 3.6, SD: 1.1). Specialists in cardiology or nephrology tended to be more positive about the use of surrogate endpoints. CONCLUSION: Stakeholders in drug development do not oppose the use of surrogate endpoints in drug marketing authorization, but most surrogates are not considered valid. To solve this impasse, increased efforts are required to validate surrogate endpoints and to explore alternative ways to use them.

13. Evaluation of COPD Longitudinally to Identify Predictive Surrogate End-points (ECLIPSE)
DEFF Research Database (Denmark) · Vestbo, J; Anderson, W; Coxson, H O · 2008-01-01
... computed tomography, biomarker measurement (in blood, sputum, urine and exhaled breath condensate), health outcomes, body impedance, resting oxygen saturation and 6-min walking distance. Evaluation of COPD Longitudinally to Identify Predictive Surrogate End-points is the largest study attempting to better...

14. Biomarkers of intermediate endpoints in environmental and occupational health
DEFF Research Database (Denmark) · Knudsen, Lisbeth E; Hansen, Ase M · 2007-01-01
The use of biomarkers in environmental and occupational health is increasing due to increasing demands on information about health risks from unfavourable exposures. Biomarkers provide information about individual loads. Biomarkers of intermediate endpoints benefit, in comparison with biomarkers of exposure, from the fact that they are closer to the adverse outcome in the pathway from exposure to health effects and may provide powerful information for intervention. Some biomarkers are specific, e.g., DNA and protein adducts, while others are unspecific, like the cytogenetic biomarkers of chromosomal... health effect from the result of the measurement has been performed for the cytogenetic biomarkers, showing a predictive value of high levels of CA and increased risk of cancer. The use of CA in future studies is, however, limited by the laborious and sensitive procedure of the test and lack of trained...

15. A causal framework for surrogate endpoints with semi-competing risks data.
Science.gov (United States) · Ghosh, Debashis · 2012-10-01
In this note, we address the problem of surrogacy using a causal modelling framework that differs substantially from the potential outcomes model that pervades the biostatistical literature. The framework comes from econometrics and conceptualizes direct effects of the surrogate endpoint on the true endpoint. While this framework can incorporate the so-called semi-competing risks data structure, we also derive a fundamental non-identifiability result. Relationships to existing causal modelling frameworks are also discussed.

16. Surrogate endpoints for EDSS worsening in multiple sclerosis. A meta-analytic approach.
Science.gov (United States) · Sormani, M P; Bonzano, L; Roccatagliata, L; Mancardi, G L; Uccelli, A; Bruzzi, P · 2010-07-27
To evaluate whether the effects on potential surrogate endpoints, such as MRI markers and relapses, observed in trials of experimental treatments are able to predict the effects of these treatments on disability progression as defined in relapsing-remitting multiple sclerosis (RRMS) trials.
We used a pooled analysis of all the published randomized controlled clinical trials in RRMS reporting data on Expanded Disability Status Scale (EDSS) worsening and relapses or MRI lesions or both. We extracted data on relapses, MRI lesions, and the proportion of progressing patients. A regression analysis weighted on trial size and duration was performed to study the relationship between the treatment effect observed in each trial on relapses and MRI lesions and the observed treatment effect on EDSS worsening. A set of 19 randomized double-blind controlled trials in RRMS was identified, for a total of 44 arms, 25 contrasts, and 10,009 patients. A significant correlation was found between the effect of treatments on relapses and the effect of treatments on EDSS worsening: the adjusted R^2 value of the weighted regression was 0.71. The correlation between the treatment effect on MRI lesions and EDSS worsening was slightly weaker (R^2 = 0.57) but significant. These findings support the use of commonly used surrogate markers of EDSS worsening as endpoints in multiple sclerosis clinical trials. Further research is warranted to validate surrogate endpoints at the individual level rather than at the trial level, to draw important conclusions in the management of the individual patient.

17. An evaluation of culture results during treatment for tuberculosis as surrogate endpoints for treatment failure and relapse.
Directory of Open Access Journals (Sweden) · Patrick P J Phillips
It is widely acknowledged that new regimens are urgently needed for the treatment of tuberculosis. The primary endpoint in Phase III trials is a composite outcome of failure at the end of treatment or relapse after stopping treatment. Such trials are usually both long and expensive. Valid surrogate endpoints measured during or at the end of treatment could dramatically reduce both the time and cost of assessing the effectiveness of new regimens. The objective of this study was to evaluate sputum culture results on solid media during treatment as surrogate endpoints for poor outcome. Data were obtained from twelve randomised controlled trials conducted by the British Medical Research Council in the 1970s and 80s in East Africa and East Asia, consisting of 6974 participants and 49 different treatment regimens. The month-two culture result was shown to be a poor surrogate in East Africa but a good surrogate in Hong Kong. In contrast, the month-three culture was a good surrogate in trials conducted in East Africa but not in Hong Kong. As well as differences in location, ethnicity and probable strain of Mycobacterium tuberculosis, Hong Kong trials more often evaluated regimens with rifampicin throughout and intermittent regimens, and patients in East African trials more often presented with extensive cavitation and were slower to convert to culture negative during treatment. An endpoint that is a summary measure of the longitudinal profile of culture results over time, or that is able to detect the presence of M. tuberculosis later in treatment, is more likely to be a better endpoint for a phase II trial than a culture result at a single time point, and may prove to be an acceptable surrogate. More data are needed before any endpoint can be used as a surrogate in a confirmatory phase III trial.
18. Imaging readouts as biomarkers or surrogate parameters for the assessment of therapeutic interventions
Energy Technology Data Exchange (ETDEWEB) · Rudin, Markus [University of Zuerich/ETH Zuerich, Institute for Biomedical Engineering, Zuerich (Switzerland); University of Zuerich, Institute for Pharmacology and Toxicology, Zuerich (Switzerland)] · 2007-10-15
Surrogate markers and biomarkers based on imaging readouts providing predictive information on clinical outcome are of increasing importance in the preclinical and clinical evaluation of novel therapies. They are primarily used in studies designed to establish evidence that the therapeutic principle is valid in a representative patient population or in an individual. A critical step in the development of (imaging) surrogates is validation: correlation with established clinical endpoints must be demonstrated. Biomarkers need not fulfill such stringent validation criteria; however, they should provide insight into mechanistic aspects of the therapeutic intervention (proof-of-mechanism) or document therapy efficacy with prognostic quality with regard to the long-term clinical outcome (proof-of-concept). Currently used imaging biomarkers provide structural, physiological and metabolic information. Novel imaging approaches annotate structure with molecular signatures that are tightly linked to the pathophysiology or to the therapeutic principle. These cellular and molecular imaging methods yield information on drug biodistribution, receptor expression and occupancy, and/or intra- and intercellular signaling. The design of novel target-specific imaging probes is closely related to the development of the therapeutic agents and should be considered early in the discovery phase. Significant technical and regulatory hurdles have to be overcome to foster the use of imaging biomarkers for clinical drug evaluation. (orig.)

19. On the relationship between the causal-inference and meta-analytic paradigms for the validation of surrogate endpoints.
Science.gov (United States) · Alonso, Ariel; Van der Elst, Wim; Molenberghs, Geert; Buyse, Marc; Burzykowski, Tomasz · 2015-03-01
The increasing cost of drug development has raised the demand for surrogate endpoints when evaluating new drugs in clinical trials. However, over the years, it has become clear that surrogate endpoints need to be statistically evaluated and deemed valid before they can be used as substitutes of "true" endpoints in clinical studies. Nowadays, two paradigms, based on causal inference and meta-analysis, dominate the scene. Nonetheless, although the literature emanating from these paradigms is wide, till now the relationship between them has largely been left unexplored. In the present work, we discuss the conceptual framework underlying both approaches and study the relationship between them using theoretical elements and the analysis of a real case study. Furthermore, we show that the meta-analytic approach can be embedded within a causal-inference framework on the one hand, and that it can be heuristically justified why surrogate endpoints successfully evaluated using this approach will often be appealing from a causal-inference perspective as well, on the other. A newly developed and user-friendly R package Surrogate is provided to carry out the evaluation exercise.

20. Recommendations for the development of rare disease drugs using the accelerated approval pathway and for qualifying biomarkers as primary endpoints.
Science.gov (United States) · Kakkis, Emil D; O'Donovan, Mary; Cox, Gerald; Hayes, Mark; Goodsaid, Federico; Tandon, P K; Furlong, Pat; Boynton, Susan; Bozic, Mladen; Orfali, May; Thornton, Mark · 2015-02-10
For rare serious and life-threatening disorders, there is a tremendous challenge in transforming scientific discoveries into new drug treatments. This challenge has been recognized by all stakeholders, who endorse the need for flexibility in the regulatory review process for novel therapeutics to treat rare diseases. In the United States, the best expression of this flexibility was the creation of the Accelerated Approval (AA) pathway. The AA pathway is critically important for the development of treatments for diseases with high unmet medical need, and has been used extensively for drugs used to treat cancer and infectious diseases like HIV. In 2012, the AA provisions were amended to enhance the application of the AA pathway to expedite the development of drugs for rare disorders under the Food and Drug Administration Safety and Innovation Act (FDASIA). FDASIA, among many provisions, requires the development of a more relevant FDA guidance on the types of evidence that may be acceptable in support of using a novel surrogate endpoint. The application of AA to rare diseases requires more predictability to drive greater access to appropriate use of AA for more rare disease treatments that might not be developed otherwise. This white paper proposes a scientific framework for assessing biomarker endpoints to enhance the development of novel therapeutics for rare and devastating diseases currently without adequate treatment, and is based on the opinions of experts in drug development and rare disease patient groups. Specific recommendations include: 1) establishing a regulatory rationale for increased AA access in rare disease programs; 2) implementing a Biomarker Qualification Request Process to provide the opportunity for an early determination of biomarker acceptance; and 3) a proposed scientific framework for qualifying biomarkers as primary endpoints. The paper's final section highlights case studies of successful examples that have incorporated biomarker endpoints into...

21. Early Proctoscopy is a Surrogate Endpoint of Late Rectal Toxicity in Prostate Cancer Treated With Radiotherapy
Energy Technology Data Exchange (ETDEWEB) · Ippolito, Edy; Massaccesi, Mariangela; Digesu, Cinzia; Deodato, Francesco [Radiotherapy Unit, Fondazione di Ricerca e Cura Giovanni Paolo II, Universita Cattolica del S. Cuore, Campobasso (Italy)]; Macchia, Gabriella, E-mail: gmacchia@rm.unicatt.it [Radiotherapy Unit, Fondazione di Ricerca e Cura Giovanni Paolo II, Universita Cattolica del S. Cuore, Campobasso (Italy)]; Pirozzi, Giuseppe Antonio [Endoscopy Unit, Fondazione di Ricerca e Cura Giovanni Paolo II, Universita Cattolica del S. Cuore, Campobasso (Italy)]; Cilla, Savino [Medical Physics Unit, Fondazione di Ricerca e Cura Giovanni Paolo II, Universita Cattolica del S. Cuore, Campobasso (Italy)]; Cuscuna, Daniele; Di Lallo, Alessandra [Urology Unit, General Hospital A. Cardarelli, Campobasso (Italy)]; Mattiucci, Gian Carlo; Mantini, Giovanna [Department of Radiotherapy, Policlinico Universitario Agostino Gemelli, Universita Cattolica del S. Cuore, Rome (Italy)]; Pacelli, Fabio [Surgery Unit, Fondazione di Ricerca e Cura Giovanni Paolo II, Universita Cattolica del S. Cuore, Campobasso (Italy)]; Valentini, Vincenzo; Cellini, Numa [Department of Radiotherapy, Policlinico Universitario Agostino Gemelli, Universita Cattolica del S.
Cuore, Rome (Italy)]; Ingrosso, Marcello [Endoscopy Unit, Fondazione di Ricerca e Cura Giovanni Paolo II, Universita Cattolica del S. Cuore, Campobasso (Italy)]; Morganti, Alessio Giuseppe [Radiotherapy Unit, Fondazione di Ricerca e Cura Giovanni Paolo II, Universita Cattolica del S. Cuore, Campobasso (Italy); Department of Radiotherapy, Policlinico Universitario Agostino Gemelli, Universita Cattolica del S. Cuore, Rome (Italy)] · 2012-06-01
Purpose: To predict the grade and incidence of late clinical rectal toxicity through short-term (1 year) mucosal alterations. Methods and Materials: Patients with prostate adenocarcinoma treated with curative or adjuvant radiotherapy underwent proctoscopy a year after the course of radiotherapy. Mucosal changes were classified by the Vienna Rectoscopy Score (VRS). Late toxicity data were analyzed according to the Kaplan-Meier method. Comparison between prognosis groups was performed by log-rank analysis. Results: After a median follow-up time of 45 months (range, 18-99), the 3-year incidence of grade ≥2 rectal late toxicity according to the criteria of the European Organization for Research and Treatment of Cancer and the Radiation Therapy Oncology Group was 24%, with all patients (24/24; 100%) experiencing rectal bleeding. The occurrence of grade ≥2 clinical rectal late toxicity was higher in patients with grade ≥2 (32% vs. 15%, p = 0.02) or grade ≥3 VRS telangiectasia (47% vs. 17%, p ≤ 0.01) and an overall VRS score of ≥2 (31% vs. 16%, p = 0.04) or ≥3 (48% vs. 17%, p = 0.01) at the 1-year proctoscopy. Conclusions: Early proctoscopy (1 year) predicts late rectal bleeding and therefore can be used as a surrogate endpoint for late rectal toxicity in studies aimed at reducing this frequent complication.

22. Relative Biological Effectiveness of HZE Particles for Chromosomal Exchanges and Other Surrogate Cancer Risk Endpoints
Science.gov (United States) · Cacao, Eliedonna; Hada, Megumi; Saganti, Premkumar B.; George, Kerry A.; Cucinotta, Francis A. · 2016-01-01
The biological effects of high charge and energy (HZE) particle exposures are of interest in space radiation protection of astronauts and cosmonauts, and in estimating secondary cancer risks for patients undergoing Hadron therapy for primary cancers. The large number of particle types and energies that make up primary or secondary radiation in HZE particle exposures precludes tumor induction studies in animal models for all but a few particle types and energies, thus leading to the use of surrogate endpoints to investigate the details of the radiation quality dependence of relative biological effectiveness (RBE) factors. In this report we make detailed RBE predictions of the charge number and energy dependence of RBEs using a parametric track structure model to represent experimental results for the low dose response for chromosomal exchanges in normal human lymphocyte and fibroblast cells, with comparison to published data for neoplastic transformation and gene mutation. RBEs are evaluated against acute doses of γ-rays for doses near 1 Gy. Models that assume linear or non-targeted effects at low dose are considered. Modest values of RBE (10) are predicted at low doses <0.1 Gy. The radiation quality dependence of RBEs against the effects of acute doses of γ-rays found for neoplastic transformation and gene mutation studies is similar to that found for simple exchanges if a linear response is assumed at low HZE particle doses.
Comparisons of the resulting model parameters to those used in the NASA radiation quality factor function are discussed. PMID:27111667 5. Relative Biological Effectiveness of HZE Particles for Chromosomal Exchanges and Other Surrogate Cancer Risk Endpoints. Directory of Open Access Journals (Sweden) Eliedonna Cacao Full Text Available The biological effects of high charge and energy (HZE particle exposures are of interest in space radiation protection of astronauts and cosmonauts, and estimating secondary cancer risks for patients undergoing Hadron therapy for primary cancers. The large number of particles types and energies that makeup primary or secondary radiation in HZE particle exposures precludes tumor induction studies in animal models for all but a few particle types and energies, thus leading to the use of surrogate endpoints to investigate the details of the radiation quality dependence of relative biological effectiveness (RBE factors. In this report we make detailed RBE predictions of the charge number and energy dependence of RBE's using a parametric track structure model to represent experimental results for the low dose response for chromosomal exchanges in normal human lymphocyte and fibroblast cells with comparison to published data for neoplastic transformation and gene mutation. RBE's are evaluated against acute doses of γ-rays for doses near 1 Gy. Models that assume linear or non-targeted effects at low dose are considered. Modest values of RBE (10 are predicted at low doses <0.1 Gy. The radiation quality dependence of RBE's against the effects of acute doses γ-rays found for neoplastic transformation and gene mutation studies are similar to those found for simple exchanges if a linear response is assumed at low HZE particle doses. Comparisons of the resulting model parameters to those used in the NASA radiation quality factor function are discussed. 6. Exploring the relationship between the causal-inference and meta-analytic paradigms for the evaluation of surrogate endpoints. Science.gov (United States) Van der Elst, Wim; Molenberghs, Geert; Alonso, Ariel 2016-04-15 Nowadays, two main frameworks for the evaluation of surrogate endpoints, based on causal-inference and meta-analysis, dominate the scene. Earlier work showed that the metrics of surrogacy introduced in both paradigms are related, although in a complex way that is difficult to study analytically. In the present work, this relationship is further examined using simulations and the analysis of a case study. The results indicate that the extent to which both paradigms lead to similar conclusions regarding the validity of the surrogate, depends on a complex interplay between multiple factors like the ratio of the between and within trial variability and the unidentifiable correlations between the potential outcomes. All the analyses were carried out using the newly developed R package Surrogate, which is freely available via CRAN. 7. Early Change in Proteinuria as a Surrogate Endpoint for Kidney Disease Progression: An Individual Patient Meta-analysis Science.gov (United States) Inker, Lesley A.; Levey, Andrew S.; Pandya, Kruti; Stoycheff, Nicholas; Okparavero, Aghogho; Greene, Tom 2014-01-01 Background It is controversial whether proteinuria is a valid surrogate endpoint for randomized trials in chronic kidney disease. Study Design Meta-analysis of individual patient level data. Setting & Population Individual patient data on 9008 patients from 32 randomized trials evaluating five intervention types. 
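The trial-level half of the meta-analytic paradigm described above comes down to regressing, across trials, the treatment effect on the true endpoint against the treatment effect on the surrogate, and asking how much between-trial variation is explained. The sketch below is a minimal Python rendering of that idea, not the R package Surrogate that the authors actually used; the function name, the weighting by trial size, and the R² summary are illustrative assumptions.

    import numpy as np

    def trial_level_surrogacy(alpha, beta, weights):
        """Weighted regression of per-trial effects on the true endpoint
        (beta) against per-trial effects on the surrogate (alpha).
        weights: e.g. trial sample sizes. Returns (intercept, slope), R^2."""
        a = np.asarray(alpha, dtype=float)
        b = np.asarray(beta, dtype=float)
        w = np.asarray(weights, dtype=float)
        X = np.column_stack([np.ones_like(a), a])
        sw = np.sqrt(w)
        # Weighted least squares via row-scaled ordinary least squares
        coef, *_ = np.linalg.lstsq(X * sw[:, None], b * sw, rcond=None)
        resid = b - X @ coef
        b_bar = np.average(b, weights=w)
        r2 = 1.0 - np.sum(w * resid**2) / np.sum(w * (b - b_bar)**2)
        return coef, r2

    # e.g. three hypothetical trials
    coef, r2 = trial_level_surrogacy(alpha=[0.10, 0.25, 0.40],
                                     beta=[0.05, 0.12, 0.22],
                                     weights=[200, 350, 500])
    print(coef, r2)

An R² near 1 at the trial level is what "strong surrogate" means in this paradigm; the abstracts above turn on exactly how far short of that the observed values fall.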
7. Early Change in Proteinuria as a Surrogate Endpoint for Kidney Disease Progression: An Individual Patient Meta-analysis
Science.gov (United States)
Inker, Lesley A.; Levey, Andrew S.; Pandya, Kruti; Stoycheff, Nicholas; Okparavero, Aghogho; Greene, Tom
2014-01-01

Background: It is controversial whether proteinuria is a valid surrogate endpoint for randomized trials in chronic kidney disease. Study Design: Meta-analysis of individual patient-level data. Setting & Population: Individual patient data on 9008 patients from 32 randomized trials evaluating five intervention types. Selection Criteria for Studies: Randomized controlled trials of kidney disease progression until 2007 with measurements of proteinuria both at baseline and during the first year of follow-up, and with at least one further year of follow-up for the clinical outcome. Predictor: Early change in proteinuria. Outcomes: Doubling of serum creatinine, end-stage renal disease, or death. Results: Early decline in proteinuria was associated with a lower risk of the clinical outcome (pooled HR, 0.74 per 50% reduction in proteinuria); this association was stronger at higher levels of baseline proteinuria. Pooled estimates for the proportion of treatment effect on the clinical outcome explained by early decline in proteinuria ranged from −7.0% (95% CI, −40.6% to 26.7%) to 43.9% (95% CI, 25.3% to 62.6%) across the five intervention types. The direction of the pooled treatment effects on early change in proteinuria agreed with the direction of the treatment effect on the clinical outcome for all 5 intervention types, with the magnitudes of the pooled treatment effects on the two endpoints agreeing for 4 of the 5 intervention types. The pooled treatment effects on both endpoints were simultaneously stronger at higher levels of proteinuria. However, statistical power was insufficient to determine whether differences in treatment effects on the clinical outcome corresponded to differences in treatment effects on proteinuria between individual studies. Limitations: Limited variety of interventions tested and low statistical power for many chronic kidney disease clinical trials. Conclusions: These results provide new evidence supporting the use of an early reduction in proteinuria as a surrogate endpoint …
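The "proportion of treatment effect explained" quoted in that abstract has a common (Freedman-style) definition: compare the treatment coefficient from models fitted without and with the surrogate as a covariate. A minimal sketch, assuming log-hazard-scale coefficients from two Cox fits; the numbers are invented to land near the 43.9% upper estimate and do not come from the paper.

    def proportion_explained(beta_unadj, beta_adj):
        """Freedman-style proportion of treatment effect explained:
        1 - (treatment coefficient adjusted for the surrogate) /
            (unadjusted treatment coefficient), on the log-hazard scale."""
        return 1.0 - beta_adj / beta_unadj

    # invented log-hazard coefficients: -0.30 without, -0.17 with the
    # early proteinuria change as a covariate
    print(proportion_explained(-0.30, -0.17))  # ~0.43, cf. the 43.9% above

Negative values, like the −7.0% lower bound reported above, arise when adjustment makes the treatment coefficient larger rather than smaller, which is one reason this metric is treated cautiously.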
8. Biomarkers of Host Response Predict Primary End-Point Radiological Pneumonia in Tanzanian Children with Clinical Pneumonia: A Prospective Cohort Study
Directory of Open Access Journals (Sweden)
Laura K Erdman

Diagnosing pediatric pneumonia is challenging in low-resource settings. The World Health Organization (WHO) has defined primary end-point radiological pneumonia for use in epidemiological and vaccine studies. However, radiography requires expertise and is often inaccessible. We hypothesized that plasma biomarkers of inflammation and endothelial activation may be useful surrogates for end-point pneumonia, and may provide insight into its biological significance. We studied children with WHO-defined clinical pneumonia (n = 155) within a prospective cohort of 1,005 consecutive febrile children presenting to Tanzanian outpatient clinics. Based on x-ray findings, participants were categorized as primary end-point pneumonia (n = 30), other infiltrates (n = 31), or normal chest x-ray (n = 94). Plasma levels of 7 host response biomarkers at presentation were measured by ELISA. Associations between biomarker levels and radiological findings were assessed by Kruskal-Wallis test and multivariable logistic regression. Biomarker ability to predict radiological findings was evaluated using receiver operating characteristic curve analysis and Classification and Regression Tree analysis. Compared to children with a normal x-ray, children with end-point pneumonia had significantly higher C-reactive protein, procalcitonin and Chitinase 3-like-1, while those with other infiltrates had elevated procalcitonin and von Willebrand Factor and decreased soluble Tie-2 and endoglin. Clinical variables were not predictive of radiological findings. Classification and Regression Tree analysis generated multi-marker models with improved performance over single markers for discriminating between groups. A model based on C-reactive protein and Chitinase 3-like-1 discriminated between end-point pneumonia and non-end-point pneumonia with 93.3% sensitivity (95% confidence interval 76.5-98.8), 80.8% specificity (72.6-87.1), positive likelihood ratio 4.9 (3.4-7.1), and negative likelihood ratio 0…

9. Biomarker report from the phase II lamotrigine trial in secondary progressive MS - neurofilament as a surrogate of disease progression
Directory of Open Access Journals (Sweden)
Sharmilee Gnanapavan

OBJECTIVE: The lamotrigine trial in SPMS was a randomised controlled trial to assess whether partial blockade of sodium channels has a neuroprotective effect. The current study was an additional study to investigate the value of neurofilament (NfH) and other biomarkers in predicting prognosis and/or response to treatment. METHODS: SPMS patients who attended the NHNN or the Royal Free Hospital, UK, and were eligible for inclusion were invited to participate in the biomarker study. The primary outcome was whether lamotrigine would significantly reduce detectable serum NfH at 0-12, 12-24 and 0-24 months compared to placebo. Other serum/plasma and CSF biomarkers were also explored. RESULTS: Comparing absolute changes in NfH between the lamotrigine and placebo groups showed no treatment effect; however, based on serum lamotrigine adherence there was a significant decline in NfH (NfH 12-24 months p = 0.043, NfH 0-24 months p = 0.023). Serum NfH correlated with disability: walking times, 9-HPT (non-dominant hand), PASAT, z-score, MSIS-29 (psychological) and EDSS, and with MRI cerebral atrophy and MTR. Other biomarkers explored in this study were not found to be significantly associated, aside from plasma osteopontin. CONCLUSIONS: The relations between NfH and clinical scores of disability and MRI measures of atrophy and disease burden support NfH being a potential surrogate endpoint complementing MRI in neuroprotective trials, and sample sizes for such trials are presented here. We did not observe a reduction in NfH levels between the lamotrigine and placebo arms; however, the reduction in serum NfH levels based on lamotrigine adherence points to a possible neuroprotective effect of lamotrigine on axonal degeneration.
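The likelihood ratios reported for the pneumonia model follow directly from sensitivity and specificity by the usual definitions, LR+ = sens/(1 − spec) and LR− = (1 − sens)/spec. A quick arithmetic check against the published point estimates (the negative likelihood ratio is truncated in the text above; the 0.08 below is only what the arithmetic gives, not a quoted value):

    sens, spec = 0.933, 0.808   # published point estimates
    lr_pos = sens / (1 - spec)  # positive likelihood ratio
    lr_neg = (1 - sens) / spec  # negative likelihood ratio
    print(round(lr_pos, 1))     # 4.9, matching the reported value
    print(round(lr_neg, 2))     # 0.08, consistent with the truncated "0..."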
10. Emerging treatments in management of prostate cancer: biomarker validation and endpoints for immunotherapy clinical trial design
Directory of Open Access Journals (Sweden)
Slovin SF
2013-12-01

Susan F Slovin, Genitourinary Oncology Service, Sidney Kimmel Center for Prostate and Urologic Cancers, Memorial Sloan-Kettering Cancer Center, New York, NY, USA. Abstract: The rapidly emerging field of immunotherapy and the development of novel immunologic agents that have been approved in melanoma and successfully studied in lung cancer, kidney cancer, and prostate cancer have mandated that there be uniformity in clinical trial analysis beyond conventional survival endpoints and imaging. This includes some measure of determining whether the immunologic target is hit and how the treatment has impacted the immune system in toto. While melanoma is leading the field towards these ends, there is some doubt whether all of the recent successes with immune therapies, for example checkpoint inhibitors, will carry over to every cancer, and the toxicities may also differ depending on the malignancy. This review serves to elucidate the current issues facing clinical investigators who perform immunologic trials targeted at patients with prostate cancer and discusses the challenges in assessing the right immunologic endpoints to demonstrate biologic/immunologic targeting leading to clinical benefit. Keywords: sipuleucel-T, prostate-specific antigen, prostate cancer, biomarkers, monoclonal antibodies, vaccines, cellular therapy

11. Automated Device for Asynchronous Extraction of RNA, DNA, or Protein Biomarkers from Surrogate Patient Samples
Science.gov (United States)
Bitting, Anna L; Bordelon, Hali; Baglia, Mark L; Davis, Keersten M; Creecy, Amy E; Short, Philip A; Albert, Laura E; Karhade, Aditya V; Wright, David W; Haselton, Frederick R; Adams, Nicholas M
2016-12-01

Many biomarker-based diagnostic methods are inhibited by nontarget molecules in patient samples, necessitating biomarker extraction before detection. We have developed a simple device that purifies RNA, DNA, or protein biomarkers from complex biological samples without robotics or fluid pumping. The device design is based on functionalized magnetic beads, which capture biomarkers and remove background biomolecules by magnetically transferring the beads through processing solutions arrayed within small-diameter tubing. The process was automated by wrapping the tubing around a disc-like cassette and rotating it past a magnet using a programmable motor. This device recovered biomarkers at ~80% of the yield of the operator-dependent extraction method published previously. The device was validated by extracting biomarkers from a panel of surrogate patient samples containing clinically relevant concentrations of (1) influenza A RNA in nasal swabs, (2) Escherichia coli DNA in urine, (3) Mycobacterium tuberculosis DNA in sputum, and (4) Plasmodium falciparum protein and DNA in blood. The device successfully extracted each biomarker type from samples representing low levels of clinically relevant infectivity (i.e., 7.3 copies/µL of influenza A RNA, 405 copies/µL of E. coli DNA, 0.22 copies/µL of TB DNA, 167 copies/µL of malaria parasite DNA, and 2.7 pM of malaria parasite protein). © 2015 Society for Laboratory Automation and Screening.

12. Cadmium phytotoxicity: Quantitative sensitivity relationships between classical endpoints and antioxidative enzyme biomarkers
Energy Technology Data Exchange (ETDEWEB)
Rosa Correa, Albertina Xavier da [Centro de Ciencias Tecnologicas da Terra e do Mar, Universidade do Vale do Itajai, Rua Uruguai, 458, 88302-202 Itajai SC (Brazil)]; Roerig, Leonardo Rubi [Centro de Ciencias Tecnologicas da Terra e do Mar, Universidade do Vale do Itajai, Rua Uruguai, 458, 88302-202 Itajai SC (Brazil)]; Verdinelli, Miguel A. [Centro de Ciencias Tecnologicas da Terra e do Mar, Universidade do Vale do Itajai, Rua Uruguai, 458, 88302-202 Itajai SC (Brazil)]; Cotelle, Sylvie [Centre des Sciences de l'Environnement, Universite de Metz, 57000 Metz (France)]; Ferard, Jean-Francois [Centre des Sciences de l'Environnement, Universite de Metz, 57000 Metz (France)]; Radetski, Claudemir Marcos [Centro de Ciencias Tecnologicas da Terra e do Mar, Universidade do Vale do Itajai, Rua Uruguai, 458, 88302-202 Itajai SC (Brazil)]. E-mail: radetski@univali.br
2006-03-15

In this work, cadmium phytotoxicity and quantitative sensitivity relationships between different hierarchical endpoints were studied in plants cultivated in a contaminated soil. Germination rate, biomass growth and antioxidative enzyme activity (i.e. superoxide dismutase, peroxidase, catalase and glutathione reductase) were analyzed in three terrestrial plants (Avena sativa L., Brassica campestris L. cv. Chinensis, Lactuca sativa L. cv. Hanson). Plant growth tests were carried out according to an International Standard Organization method and the results were analyzed by ANOVA followed by Williams' test. The concentration of Cd²⁺ that had the smallest observed significant negative effect (LOEC) on plant biomass was 6.25, 12.5 and 50 mg Cd/kg dry soil for lettuce, oat and Chinese cabbage, respectively. The activity of all enzymes studied increased significantly compared to enzyme activity in plant controls. For lettuce, LOEC values (mg Cd/kg dry soil) for enzymatic activity ranged from 0.05 (glutathione reductase) to 0.39 (catalase). For oat, LOEC values ranged from 0.19 (superoxide dismutase and glutathione reductase) to 0.39 (catalase and peroxidase). For Chinese cabbage, LOEC values ranged from 0.19 (peroxidase, catalase and glutathione reductase) to 0.39 (superoxide dismutase). Classical (i.e. germination and biomass) and biochemical (i.e. enzyme activity) endpoints were compared to establish a sensitivity ranking, which was: enzyme activity > biomass > germination rate. For cadmium soil contamination, the determination of quantitative sensitivity relationships (QSR) between classical and antioxidative enzyme biomarkers showed that the most sensitive plant species generally have the lowest QSR values.

13. Pathologic complete response and disease-free survival are not surrogate endpoints for 5-year survival in rectal cancer: an analysis of 22 randomized trials
Science.gov (United States)
Borgonovo, Karen; Cabiddu, Mary; Ghilardi, Mara; Lonati, Veronica; Barni, Sandro
2017-01-01

Background: We performed a literature-based analysis of randomized clinical trials to assess pathologic complete response (pCR) (ypT0N0 after neoadjuvant therapy) and 3-year disease-free survival (DFS) as potential surrogate endpoints for 5-year overall survival (OS) in rectal cancer treated with neoadjuvant (chemo)radiotherapy ((CT)RT). Methods: A systematic literature search of PubMed, EMBASE, the Web of Science, SCOPUS, CINAHL, and the Cochrane Library was performed. Treatment effects on 3-year DFS and 5-year OS were expressed as rates of patients alive (%), and those on pCR as differences in pCR rates (∆pCR%). A weighted regression analysis was performed at the individual and trial level to test the association between treatment effects on the surrogates (∆pCR% and ∆3yDFS) and on the main clinical outcome (∆5yOS). Results: Twenty-two trials involving 10,050 patients were included in the analysis. At the individual level, pCR% and 3-year DFS were poorly correlated with 5-year OS (R=0.52; 95% CI, 0.31–0.91; P=0.002; and R=0.60; 95% CI, 0.36–1; P=0.002). The trial-level surrogacy analysis confirmed that the two treatment effects on the surrogates (∆pCR% and ∆3yDFS) are not strong surrogates for treatment effects on 5-year OS% (R=0.2; 95% CI, −0.29–0.78; P=0.5 and R=0.64; 95% CI, 0.29–1; P=0.06). These findings were confirmed in neoadjuvant CTRT studies, but not in phase III trials, where 3-year DFS could still represent a valid surrogate. Conclusions: This analysis does not support the use of pCR and 3-year DFS% as appropriate surrogate endpoints for 5-year OS% in patients with rectal cancer treated with neoadjuvant therapy.
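The LOEC logic in the cadmium study above is a dose-series, many-to-one comparison: an overall ANOVA across concentrations, then each Cd level tested against the control, with the LOEC read off as the lowest concentration showing a significant decrease. The paper uses Williams' test, which SciPy does not provide; the sketch below substitutes Dunnett's test (scipy.stats.dunnett, available in SciPy 1.11 and later) and made-up biomass numbers purely for illustration.

    from scipy import stats

    control = [5.1, 4.9, 5.3, 5.0]            # biomass at 0 mg Cd/kg dry soil
    treated = {6.25: [4.8, 4.6, 4.9, 4.7],    # mg Cd/kg dry soil -> biomass
               12.5: [4.1, 4.3, 4.0, 4.2],
               50.0: [2.9, 3.1, 3.0, 2.8]}

    print(stats.f_oneway(control, *treated.values()))   # overall dose effect
    # one-sided many-to-one comparisons against the control group
    res = stats.dunnett(*treated.values(), control=control, alternative="less")
    for conc, p in zip(treated, res.pvalue):
        print(f"{conc} mg/kg: p = {p:.4f}")
    # LOEC = lowest concentration with a significant decrease vs. control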
14. Importance of glomerular filtration rate change as surrogate endpoint for the future incidence of end-stage renal disease in the general Japanese population: community-based cohort study
Science.gov (United States)
Kanda, Eiichiro; Usui, Tomoko; Kashihara, Naoki; Iseki, Chiho; Iseki, Kunitoshi; Nangaku, Masaomi
2017-09-07

Because of the extended period and large costs required until the event occurs, surrogate endpoints are indispensable for the implementation of clinical studies to improve the prognosis of chronic kidney disease (CKD) patients. Subjects with serum creatinine measurements over a baseline period of 1-3 years were enrolled (n = 69,238) in this community-based prospective cohort study in Okinawa, Japan, and followed up for 15 years. The endpoint was end-stage renal disease (ESRD). The percent change in estimated glomerular filtration rate (%eGFR change) was calculated on the basis of the baseline period. Subjects had a mean ± SD age of 55.59 ± 14.69 years and eGFR of 80.15 ± 21.15 ml/min/1.73 m². Among the subjects recruited, 15.81% had a low eGFR (… changes over 2 or 3 years in the high- and low-eGFR groups. The specificities and positive predictive values for ESRD based on a cutoff value of %eGFR change of less than −30% over 2 or 3 years were high in both the high- and low-eGFR groups. %eGFR change tends to be associated with the risk of ESRD. A %eGFR change of less than −30% over 2 or 3 years can be a candidate surrogate endpoint for ESRD in the general Japanese population.

15. Plasma matrix metalloproteinase 9 as an early surrogate biomarker of advanced colorectal neoplasia
Science.gov (United States)
Gimeno-García, Antonio Z; Triñanes, Javier; Quintero, Enrique; Salido, Eduardo; Nicolás-Pérez, David; Adrián-de-Ganzo, Zaida; Alarcón-Fernández, Onofre; Abrante, Beatriz; Romero, Rafael; Carrillo, Marta; Ramos, Laura; Alonso, Inmaculada; Ortega, Juan; Jiménez, Alejandro
2016-01-01

Matrix metalloproteinases (MMPs) are overexpressed at different stages of colorectal carcinogenesis and could serve as early surrogate biomarkers of colorectal neoplasia. Our aim was to assess the utility of plasma MMP2 and MMP9 levels in the detection of advanced colorectal neoplasia and their correlation with tissue levels. We analysed blood and tissue samples from patients with non-advanced adenomas (n=25), advanced adenomas (n=25), colorectal cancer (n=25) and healthy controls (n=75). Plasma and tissue gelatinase levels were determined by Luminex xMAP technology and gelatin zymography. Receiver operating characteristic (ROC) curve analysis was used to calculate the optimum cut-off for the detection of advanced colorectal neoplasia. Plasma MMP2 levels were similar between groups whatever the type of lesion. Plasma MMP9 levels were significantly higher in patients with neoplastic lesions than in healthy controls (median 292.3 ng/ml vs. 139.08 ng/ml, P<0.001). MMP9 levels were also higher in colorectal cancer than in non-advanced adenomas (median 314.6 ng/ml vs. 274.3 ng/ml, P=0.03). There was a significant correlation between plasma and tissue levels of MMP9 (r=0.5, P<0.001). The plasma MMP9 cut-off range with the highest diagnostic accuracy was between 173 ng/ml and 204 ng/ml (AUC=0.80 [95% CI: 0.72-0.86], P<0.001; sensitivity, 80-86%; specificity, 57-67%). Plasma MMP9 could be a surrogate biomarker for the early detection of advanced colorectal neoplasia, although its diagnostic performance could be improved by combination with other biomarkers. Copyright © 2015 Elsevier España, S.L.U. y AEEH y AEG. All rights reserved.
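One standard way to derive an "optimum cut-off" of the kind reported for plasma MMP9 is to walk the ROC curve and maximise Youden's J = sensitivity + specificity − 1. The abstract does not spell out the study's actual cut-off procedure, so the sketch below is only the common recipe, with placeholder labels and concentrations.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # placeholder data: 1 = advanced neoplasia, values = plasma MMP9 (ng/ml)
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    mmp9 = np.array([120, 150, 170, 190, 210, 260, 180, 300])

    fpr, tpr, thresholds = roc_curve(y, mmp9)
    youden = tpr - fpr                       # J = sens + spec - 1 at each cut
    cutoff = thresholds[np.argmax(youden)]
    print("AUC:", roc_auc_score(y, mmp9), "cut-off:", cutoff)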
16. Aligning strategies for using EEG as a surrogate biomarker: a review of preclinical and clinical research
Science.gov (United States)
Leiser, Steven C; Dunlop, John; Bowlby, Mark R; Devilbiss, David M
2011-06-15

Electroencephalography (EEG) and related methodologies offer the promise of predicting, early in preclinical development, the likelihood that novel therapies and compounds will exhibit clinical efficacy. These analyses, including quantitative EEG (e.g. brain mapping) and evoked/event-related potentials (EP/ERP), can provide a physiological endpoint that may be used to facilitate drug discovery, optimize lead or candidate compound selection, and afford patient stratification and Go/No-Go decisions in clinical trials. Currently, the degree to which these different methodologies hold promise for translatability between preclinical models and the clinic has not been well summarized. To address this need, we review well-established and emerging EEG analytic approaches that are currently being integrated into drug discovery programs throughout preclinical development and clinical research. Furthermore, we present the use of EEG in the drug development process in the context of a number of major central nervous system disorders, including Alzheimer's disease, schizophrenia, depression, attention deficit hyperactivity disorder, and pain. Lastly, we discuss the requirements necessary to consider EEG technologies as a biomarker. Many of these analyses show considerable translatability between species and are used to predict clinical efficacy from preclinical data. Nonetheless, the next challenge is the selection and validation of EEG endpoints that provide a set of robust and translatable biomarkers bridging preclinical and clinical programs.

17. Acetylcholinesterase from Human Erythrocytes as a Surrogate Biomarker of Lead Induced Neurotoxicity
Directory of Open Access Journals (Sweden)
Vivek Kumar Gupta
2015-01-01

Lead induced neurotoxicity in people engaged in different occupations has received wide attention, but very few studies have been carried out to monitor occupational neurotoxicity directly due to lead exposure using biochemical methods. In the present paper an endeavour has been made to assess lead mediated neurotoxicity by in vitro assay of the activity of acetylcholinesterase (AChE) from human erythrocytes in the presence of different concentrations of lead. The results suggested that the activity of this enzyme was localized in the membrane bound fraction and was highly stable for up to 30 days when stored at −20°C in phosphate buffer (50 mM, pH 7.4) containing 0.2% Triton X-100. The erythrocyte AChE exhibited a Km of 0.1 mM. Lead caused sharp inhibition of the enzyme, with an IC50 value computed to be 1.34 mM. The inhibition of the enzyme by lead was found to be of uncompetitive type (Ki value, 3.6 mM), negatively influencing both the Vmax and the enzyme-substrate binding affinity. Taken together, these results indicate that AChE from human erythrocytes could be exploited as a surrogate biomarker of lead induced neurotoxicity, particularly in people occupationally exposed to lead.
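The uncompetitive inhibition reported for lead on erythrocyte AChE has the textbook rate law v = Vmax·[S] / (Km + [S](1 + [I]/Ki)): the inhibitor binds only the enzyme-substrate complex, so both Vmax and the apparent Km shrink by the same factor. A small sketch using the Km and Ki quoted above, with Vmax normalised to 1 in arbitrary units (an assumption; the abstract gives no Vmax).

    def ache_rate(S, I, Vmax=1.0, Km=0.1, Ki=3.6):
        """Uncompetitive inhibition: inhibitor binds only the ES complex,
        scaling both Vmax and apparent Km by 1/(1 + [I]/Ki).
        S and I in mM; Km and Ki as quoted in the abstract above."""
        return Vmax * S / (Km + S * (1.0 + I / Ki))

    print(ache_rate(S=0.1, I=0.0))   # 0.5: half-maximal at S = Km
    print(ache_rate(S=100, I=3.6))   # ~0.5: at [I] = Ki the limiting rate halves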
18. Biomarkers in chronic obstructive pulmonary disease
DEFF Research Database (Denmark)
Sin, Don D; Vestbo, Jørgen
2009-01-01

Currently, with the exception of lung function tests, there are no well validated biomarkers or surrogate endpoints that can be used to establish the efficacy of novel drugs for chronic obstructive pulmonary disease (COPD). However, the lung function test is not an ideal surrogate for short-term drug …

19. Proposal for levels of evidence schema for validation of a soluble biomarker reflecting damage endpoints in rheumatoid arthritis, psoriatic arthritis, and ankylosing spondylitis, and recommendations for study design
DEFF Research Database (Denmark)
Maksymowych, Walter P; Fitzgerald, Oliver; Wells, George A
2009-01-01

… arthritis (RA), psoriatic arthritis (PsA), and ankylosing spondylitis (AS). We also aimed to generate consensus on minimum standards for the design of longitudinal studies aimed at validating biomarkers. METHODS: Before the meeting, the Soluble Biomarker Working Group prepared a preliminary framework … and discussed various models for association and prediction related to the statistical strength domain. In addition, 3 Delphi exercises addressing longitudinal study design for RA, PsA, and AS were conducted within the working group and members of the Assessments in SpondyloArthritis International Society (ASAS) … Biomarker Group has successfully formulated a levels of evidence scheme and a study design template that will provide guidance to conduct validation studies in the setting of soluble biomarkers proposed to replace the measurement of damage endpoints in RA, PsA, and AS.

2. Early pregnancy prediction of preeclampsia in nulliparous women, combining clinical risk and biomarkers: the Screening for Pregnancy Endpoints (SCOPE) international cohort study
Science.gov (United States)
Kenny, Louise C; Black, Michael A; Poston, Lucilla; Taylor, Rennae; Myers, Jenny E; Baker, Philip N; McCowan, Lesley M; Simpson, Nigel A B; Dekker, Gus A; Roberts, Claire T; Rodems, Kelline; Noland, Brian; Raymundo, Michael; Walker, James J; North, Robyn A
2014-09-01

More than half of all cases of preeclampsia occur in healthy first-time pregnant women. Our aim was to develop a method to predict those at risk by combining clinical factors and measurements of biomarkers in women recruited to the Screening for Pregnancy Endpoints (SCOPE) study of low-risk nulliparous women. Forty-seven biomarkers identified on the basis of (1) association with preeclampsia, (2) a biological role in placentation, or (3) a role in cellular mechanisms involved in the pathogenesis of preeclampsia were measured in plasma sampled at 14 to 16 weeks' gestation from 5623 women. The cohort was randomly divided into training (n=3747) and validation (n=1876) cohorts. Preeclampsia developed in 278 (4.9%) women, of whom 28 (0.5%) developed early-onset preeclampsia. The final model for the prediction of preeclampsia included placental growth factor, mean arterial pressure, and body mass index at 14 to 16 weeks' gestation, the consumption of ≥3 pieces of fruit per day, and mean uterine artery resistance index. The area under the receiver operator curve (95% confidence interval) for this model in the training and validation cohorts was 0.73 (0.70-0.77) and 0.68 (0.63-0.74), respectively. A predictive model of early-onset preeclampsia included the angiogenin/placental growth factor ratio, mean arterial pressure, any pregnancy loss … preeclampsia in populations of mixed parity and risk. In nulliparous women, combining multiple biomarkers and clinical data provided modest prediction of preeclampsia. © 2014 American Heart Association, Inc.
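In outline, the SCOPE model is a regression combination of clinical factors and biomarkers, fitted on a training split and assessed by AUC on a validation split. The sketch below is a schematic reconstruction under stated assumptions, with synthetic placeholder data; the column roles (PlGF, MAP, BMI, fruit intake, uterine artery RI) merely echo the abstract and nothing here reproduces the study's actual model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5623                                   # cohort size, as above
    # columns stand in for PlGF, MAP, BMI, fruit intake, uterine artery RI
    X = rng.normal(size=(n, 5))
    y = rng.binomial(1, 0.049, size=n)         # ~4.9% preeclampsia rate

    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=1/3,
                                              random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print("validation AUC:", auc)              # ~0.5 here: the data are noise

The drop the authors report from training AUC 0.73 to validation AUC 0.68 is the usual optimism correction one expects when a model is evaluated on data it was not fitted on.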
4. Use of Surrogate end points in HTA
Directory of Open Access Journals (Sweden)
Mangiapane, Sandra
2009-08-01

The different actors involved in health system decision-making and regulation have to deal with the question of which parameters are valid for assessing the health value of health technologies. So-called surrogate endpoints represent, in the best case, preliminary steps in the causal chain leading to the relevant outcome (e.g. mortality, morbidity) and are not usually directly perceptible by patients. Surrogate endpoints are not only used in trials of pharmaceuticals but also in studies of other technologies. Their use in the assessment of the benefit of a health technology is, however, problematic. In this report we intend to answer the following research questions: Which criteria need to be fulfilled for a surrogate parameter to be considered a valid endpoint? Which methods have been described in the literature for the assessment of the validity of surrogate endpoints? Which methodological recommendations concerning the use of surrogate endpoints have been made by international HTA agencies? Which place has been given to surrogate endpoints in international and German HTA reports? For this purpose, we chose three different approaches. Firstly, we conducted a review of the methodological literature dealing with the issue of surrogate endpoints and their validation. Secondly, we analysed current methodological guidelines of HTA agencies that are members of the International Network of Agencies for Health Technology Assessment (INAHTA), as well as of agencies concerned with assessments for reimbursement purposes. Finally, we analysed the outcome parameters used in a sample of publicly available HTA reports. The analysis of methodological guidelines shows a very cautious position of HTA institutions regarding the use of surrogate endpoints in technology assessment. Surrogate endpoints have not been prominently used in HTA reports. None of the analysed reports based its conclusions solely on the results of surrogate endpoints. The analysis of German HTA reports shows a …

5. Circulating Biomarkers for Duchenne Muscular Dystrophy
Science.gov (United States)
Aartsma-Rus, Annemieke; Spitali, Pietro
2015-01-01

Duchenne muscular dystrophy is the most common form of muscular dystrophy. Genetic and biochemical research over the years has characterized the cause, pathophysiology and development of the disease, providing several potential therapeutic targets and/or biomarkers. High-throughput -omic technologies have provided a comprehensive understanding of the changes occurring in dystrophic muscles. Murine and canine animal models have been a valuable source to profile muscles and body fluids, thus providing candidate biomarkers that can be evaluated in patients. This review will illustrate known circulating biomarkers that could track disease progression and response to therapy in patients affected by Duchenne muscular dystrophy. We present an overview of the transcriptomic, proteomic, metabolomic and lipidomic biomarkers described in the literature. We show how studies in muscle tissue have led to the identification of serum and urine biomarkers, and we highlight the importance of evaluating biomarkers as possible surrogate endpoints to facilitate regulatory processes for new medicinal products. PMID:27858763
8. Evaluation of Circulating Tumor Cells and Related Events as Prognostic Factors and Surrogate Biomarkers in Advanced NSCLC Patients Receiving First-Line Systemic Treatment
Energy Technology Data Exchange (ETDEWEB)
Muinelo-Romay, Laura; Vieito, Maria; Abalo, Alicia; Alonso Nocelo, Marta; Barón, Francisco; Anido, Urbano; Brozos, Elena; Vázquez, Francisca; Aguín, Santiago; Abal, Miguel; López López, Rafael, E-mail: rafael.lopez.lopez@sergas.es [Translational Medical Oncology, Health Research Institute of Santiago (IDIS), Complexo Hospitalario Universitario de Santiago de Compostela (SERGAS), Trav. Choupana s/n 15706 Santiago de Compostela (Spain)]
2014-01-21

In the present study we investigated the prognostic value of Circulating Tumour Cells (CTC) and their utility for therapy monitoring in non-small cell lung cancer (NSCLC). A total of 43 patients newly diagnosed with NSCLC were prospectively enrolled. Blood samples were obtained before the 1st, 2nd and 5th cycles of chemotherapy and analyzed using CellSearch technology. Both CTC and CTC-related objects (not morphologically standard, or broken epithelial cells) were counted. At baseline, 18 (41.9%) patients were positive for intact CTC count and 10 (23.2%) of them had ≥5 CTC, while CK-positive events were found in 79.1% of patients. The group of patients with CTC ≥5 at baseline presented worse PFS and OS than those with <5 CTC (p = 0.034 and p = 0.008, respectively). Additionally, high levels of total CK-positive events were associated with poor prognosis in the group of patients with <5 CTC. Regarding therapy monitoring, patients presenting increased levels of CTC during treatment demonstrated lower OS and PFS rates. All these data support the value of CTC as a prognostic biomarker and as a surrogate indicator of chemotherapy effectiveness in advanced NSCLC patients, with the additional value of analyzing other “objects” such as apoptotic CTC or CK fragments to guide the clinical management of these patients.

9. Screening for chronic kidney disease of uncertain aetiology in Sri Lanka: usability of surrogate biomarkers over dipstick proteinuria
Science.gov (United States)
Ratnayake, Samantha; Badurdeen, Zeid; Nanayakkara, Nishantha; Abeysekara, Tilak; Ratnatunga, Neelakanthi; Kumarasiri, Ranjith
2017-06-19

The use of dipstick proteinuria to screen for Chronic Kidney Disease of uncertain aetiology (CKDu) in Sri Lanka is a recently debated matter of dispute. The aim of this study was to assess the suitability of the biomarkers serum creatinine, cystatin C and urine albumin-to-creatinine ratio (ACR) for screening for CKDu in Sri Lanka. Forty-four male CKDu patients and 49 healthy males from a CKDu-endemic region were selected. Meanwhile, 25 healthy males from a non-endemic region were selected as an absolute control. The diagnostic accuracy of each marker was compared using the above three study groups. In receiver operating characteristic (ROC) plots for creatinine, cystatin C and ACR, the values of the area under the curve (AUC) were 0.926, 0.920 and 0.737, respectively, when CKDu was compared to the non-endemic control. When CKDu was compared to the endemic control, the AUCs of the above three analytes were distinctly lower, at 0.718, 0.808 and 0.678, respectively. Cystatin C exhibited the highest sensitivity for CKDu when analyzed against both control groups, with respective sensitivities of 0.75 against the endemic control and 0.89 against the non-endemic control. ROC-optimal cutoff limits of creatinine, cystatin C and ACR in CKDu vs. non-endemic control were 89.0 μmol/L, 1.01 mg/L and 6.06 mg/g-Cr respectively, whereas in CKDu vs. endemic control the respective values were 111.5 μmol/L, 1.22 mg/L and 12.66 mg/g-Cr. Amongst the three biomarkers evaluated in this study, our data suggest that cystatin C is the most accurate functional marker for detecting CKDu in endemic regions, yet its high cost hinders its usability in the general population. Creatinine is favorable over dipstick proteinuria owing to its apparent accuracy and cost efficiency, while having the ability to complement the kidney damage marker (ACR) in screening. ACR may not be favorable as a standalone screening marker in place of dipstick proteinuria due to its significant decline in sensitivity against the CKDu-endemic population. However …
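The CTC analysis above follows the usual survival-analysis pattern: dichotomise at the prognostic threshold (≥5 CTC at baseline), draw Kaplan-Meier curves for the two strata, and compare them with a log-rank test. A sketch assuming the lifelines package; the follow-up times and event flags are synthetic placeholders, not study data.

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # synthetic follow-up (months) and death indicators for the two strata
    t_hi = np.array([3, 5, 6, 8, 10, 14])      # baseline CTC >= 5
    e_hi = np.array([1, 1, 1, 1, 1, 0])        # 1 = death observed
    t_lo = np.array([9, 12, 15, 20, 24, 30])   # baseline CTC < 5
    e_lo = np.array([1, 0, 1, 0, 0, 0])

    km = KaplanMeierFitter().fit(t_hi, e_hi, label="CTC >= 5")
    print(km.median_survival_time_)
    res = logrank_test(t_hi, t_lo, event_observed_A=e_hi, event_observed_B=e_lo)
    print("log-rank p =", res.p_value)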
10. Testing of the preliminary OMERACT validation criteria for a biomarker to be regarded as reflecting structural damage endpoints in rheumatoid arthritis clinical trials: the example of C-reactive protein
DEFF Research Database (Denmark)
Keeling, Stephanie O; Landewe, Robert; van der Heijde, Desiree
2007-01-01

OBJECTIVE: A list of 14 criteria for guiding the validation of a soluble biomarker as reflecting structural damage endpoints in rheumatoid arthritis (RA) clinical trials was drafted by an international working group after a Delphi consensus exercise. C-reactive protein (CRP), a soluble biomarker … of individual criteria in the draft set. METHODS: A systematic literature review was conducted to elicit evidence in support of each specific criterion composing the 14-criteria draft set. A summary of the key literature findings per criterion was presented to both the working group and to participants … -based survey. RESULTS: Minimal data were extracted from the literature pertaining to those criteria listed under the category of truth. Ratings for strength of evidence were moderate to low (…

11. Biomarkers and sustainable innovation in cardiovascular drug development: lessons from near and far afield
Science.gov (United States)
Medford, Russell M; Dagi, T Forcht; Rosenson, Robert S; Offermann, Margaret K
2013-05-01

Future innovative therapies targeting cardiovascular disease (CVD) have the potential to improve health outcomes and to contain rising healthcare costs. Unsustainable increases in the size, cost and duration of the clinical trial programs necessary for regulatory approval, however, threaten the entire innovation enterprise. Rising costs for clinical trials are due in large part to increasing demands for hard cardiovascular clinical endpoints as measures of therapeutic efficacy. The development and validation of predictive and surrogate biomarkers, as laboratory or other objective measures predictive or reflective of clinical endpoints, are an important part of the solution to this challenge. This review will discuss insights applicable to CVD derived from the use of predictive biomarkers in oncologic drug development, the evolving role of high density lipoprotein (HDL) in CVD drug development, and the impact biomarkers and surrogates have on the continued investment from multiple societal sources critical for innovative CVD drug discovery and development.

12. Biomarker method validation in anticancer drug development
Science.gov (United States)
Cummings, J; Ward, T H; Greystoke, A; Ranson, M; Dive, C
2008-02-01

Over recent years the role of biomarkers in anticancer drug development has expanded across a spectrum of applications, ranging from research tool during early discovery to surrogate endpoint in the clinic. However, in Europe, when biomarker measurements are performed on samples collected from subjects entered into clinical trials of new investigational agents, laboratories conducting these analyses become subject to the Clinical Trials Regulations. While these regulations are not specific in their requirements of research laboratories, quality assurance and in particular assay validation are essential. This review therefore focuses on a discussion of current thinking in biomarker assay validation. Five categories define the majority of biomarker assays, from 'absolute quantitation' to 'categorical'. Validation must therefore take account of both the position of the biomarker in the spectrum towards clinical endpoint and the level of quantitation inherent in the methodology. Biomarker assay validation should ideally be performed in stages on 'a fit for purpose' basis, avoiding unnecessarily dogmatic adherence to rigid guidelines but with careful monitoring of progress at the end of each stage. These principles are illustrated with two specific examples: (a) absolute quantitation of protein biomarkers by mass spectrometry and (b) the M30 and M65 ELISA assays as surrogate endpoints of cell death.

14. BACE-1, PS-1 and sAPPβ Levels Are Increased in Plasma from Sporadic Inclusion Body Myositis Patients: Surrogate Biomarkers among Inflammatory Myopathies
Science.gov (United States)
Catalán-García, Marc; Garrabou, Glòria; Morén, Constanza; Guitart-Mampel, Mariona; Gonzalez-Casacuberta, Ingrid; Hernando, Adriana; Gallego-Escuredo, Jose Miquel; Yubero, Dèlia; Villarroya, Francesc; Montero, Raquel; O-Callaghan, Albert Selva; Cardellach, Francesc; Grau, Josep Maria
2015-01-01

Sporadic inclusion body myositis (sIBM) is a rare disease that is difficult to diagnose. Muscle biopsy provides three prominent pathological findings: inflammation, mitochondrial abnormalities and fiber degeneration, represented by the accumulation of protein deposits constituted by β-amyloid peptide, among others. We aim to perform a screening in plasma of circulating molecules related to the putative etiopathogenesis of sIBM to determine potential surrogate biomarkers for diagnosis. Plasma from 21 sIBM patients and 20 age- and gender-paired healthy controls was collected and stored at −80°C. An additional population of patients with non-sIBM inflammatory myopathies was also included (nine patients with dermatomyositis and five with polymyositis). Circulating levels of inflammatory cytokines (interleukin [IL]-6 and tumor necrosis factor [TNF]-α), mitochondrial-related molecules (free plasmatic mitochondrial DNA [mtDNA], fibroblast growth factor-21 [FGF-21] and coenzyme Q10 [CoQ]) and amyloidogenic-related molecules (beta-secretase-1 [BACE-1], presenilin-1 [PS-1], and soluble Aβ precursor protein [sAPPβ]) were assessed with magnetic bead-based assays, real-time polymerase chain reaction, enzyme-linked immunosorbent assay (ELISA) and high-pressure liquid chromatography (HPLC). Despite remarkable trends toward altered plasmatic expression of inflammatory and mitochondrial molecules (increased IL-6, TNF-α, circulating mtDNA and FGF-21 levels and decreased content of CoQ), only the amyloidogenic degenerative markers BACE-1, PS-1 and sAPPβ were significantly increased in plasma from sIBM patients compared with controls and with other patients with non-sIBM inflammatory myopathies (p < 0.05). Inflammatory, mitochondrial and amyloidogenic degeneration markers are altered in plasma of sIBM patients, confirming their etiopathological implication in the disease. Sensitivity and specificity analysis shows that BACE-1, PS-1 and sAPPβ represent a good …

15. The role of vascular biomarkers for primary and secondary prevention. A position paper from the European Society of Cardiology Working Group on peripheral circulation: Endorsed by the Association for Research into Arterial Structure and Physiology (ARTERY) Society
Science.gov (United States)
Vlachopoulos, Charalambos; Xaplanteris, Panagiotis; Aboyans, Victor; Brodmann, Marianne; Cífková, Renata; Cosentino, Francesco; De Carlo, Marco; Gallino, Augusto; Landmesser, Ulf; Laurent, Stéphane; Lekakis, John; Mikhailidis, Dimitri P; Naka, Katerina K; Protogerou, Athanasios D; Rizzoni, Damiano; Schmidt-Trucksäss, Arno; Van Bortel, Luc; Weber, Thomas; Yamashina, Akira; Zimlichman, Reuven; Boutouyrie, Pierre; Cockcroft, John; O'Rourke, Michael; Park, Jeong Bae; Schillaci, Giuseppe; Sillesen, Henrik; Townsend, Raymond R
2015-08-01

While risk scores are invaluable tools for adapted preventive strategies, a significant gap exists between predicted and actual event rates. Biomarkers are additional tools to further stratify the risk of patients at an individual level. A surrogate endpoint is a biomarker that is intended as a substitute for a clinical endpoint. In order to be considered as a surrogate endpoint of cardiovascular events, a biomarker should satisfy several criteria, such as proof of concept, prospective validation, incremental value, clinical utility, clinical outcomes, cost-effectiveness, ease of use, methodological consensus, and reference values. We scrutinized the role of peripheral (i.e. not related to coronary circulation) noninvasive vascular biomarkers for primary and secondary cardiovascular disease prevention. Most of the biomarkers examined fit within the concept of early vascular aging. Biomarkers that fulfill most of the criteria and are therefore close to being considered clinical surrogate endpoints are carotid ultrasonography, ankle-brachial index and carotid-femoral pulse wave velocity. Biomarkers that fulfill some, but not all, of the criteria are brachial-ankle pulse wave velocity, central haemodynamics/wave reflections and C-reactive protein. Biomarkers that do not at present fulfill essential criteria are flow-mediated dilation, endothelial peripheral arterial tonometry, oxidized LDL and dysfunctional HDL. Nevertheless, it is still unclear whether any specific vascular biomarker is overly superior. A prospective study in which all vascular biomarkers are measured is still lacking. In selected cases, the combined assessment of more than one biomarker may be required. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
16. Breast Cancer Survival Defined by the ER/PR/HER2 Subtypes and a Surrogate Classification according to Tumor Grade and Immunohistochemical Biomarkers
Directory of Open Access Journals (Sweden)
Carol A. Parise
2014-01-01

Introduction. ER, PR, and HER2 are routinely available in breast cancer specimens. The purpose of this study is to contrast breast cancer-specific survival for the eight ER/PR/HER2 subtypes with survival under an immunohistochemical surrogate for the molecular subtype based on the ER/PR/HER2 subtypes and tumor grade. Methods. We identified 123,780 cases of stages 1-3 primary female invasive breast cancer from the California Cancer Registry. The surrogate classification was derived using ER/PR/HER2 and tumor grade. Kaplan-Meier survival analysis and Cox proportional hazards modeling were used to assess differences in survival and risk of mortality for the ER/PR/HER2 subtypes and the surrogate classification within each stage. Results. The luminal B/HER2− surrogate classification had a higher risk of mortality than the luminal B/HER2+ for all stages of disease. There was no difference in risk of mortality between the ER+/PR+/HER2− and ER+/PR+/HER2+ subtypes in stage 3. With one exception in stage 3, the ER-negative subtypes all had an increased risk of mortality when compared with the ER-positive subtypes. Conclusions. Assessment of survival using ER/PR/HER2 illustrates the heterogeneity of HER2+ subtypes. The surrogate classification provides clear separation in survival and adjusted mortality but underestimates the wide variability within the subtypes that make up the classification.

17. In vivo evaluation of battery-operated light-emitting diode-based photodynamic therapy efficacy using tumor volume and biomarker expression as endpoints
Science.gov (United States)
Mallidi, Srivalleesha; Mai, Zhiming; Rizvi, Imran; Hempstead, Joshua; Arnason, Stephen; Celli, Jonathan; Hasan, Tayyaba
2015-01-01

In view of the increase in cancer-related mortality rates in low- to middle-income countries (LMIC), there is an urgent need to develop economical therapies that can be utilized at minimal-infrastructure institutions. Photodynamic therapy (PDT), a photochemistry-based treatment modality, offers such a possibility provided that low-cost light sources and photosensitizers are available. In this proof-of-principle study, we focus on adapting the PDT light source to a low-resource setting and compare an inexpensive, portable, battery-powered light-emitting diode (LED) light source with a standard, high-cost laser source. The comparison studies were performed in vivo in a xenograft murine model of human squamous cell carcinoma subjected to 5-aminolevulinic acid-induced protoporphyrin IX PDT. We observed virtually identical control of the tumor burden by both the LED source and the standard laser source. Further insights into the biological response were evaluated by biomarker analysis of necrosis, microvessel density, and hypoxia [carbonic anhydrase IX (CAIX) expression] among groups of control, LED-PDT, and laser-PDT treated mice. There was no significant difference in the percent necrotic volume and CAIX expression in tumors treated with the two different light sources. These encouraging preliminary results merit further investigation in orthotopic animal models of cancers prevalent in LMICs. PMID:25909707
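The survival contrasts in the breast-cancer entry above rest on Kaplan-Meier curves plus Cox proportional-hazards models for adjusted mortality. A schematic lifelines sketch with a synthetic stand-in for registry data; the subtype and stage indicator columns are illustrative assumptions, not the study's actual covariate coding.

    import pandas as pd
    from lifelines import CoxPHFitter

    # synthetic stand-in: survival months, death flag, an ER-negative
    # indicator and a stage-3 indicator
    df = pd.DataFrame({
        "months": [12, 30, 45, 60, 22, 80, 15, 55, 40, 70],
        "died":   [1,  0,  1,  1,  1,  0,  0,  0,  1,  0],
        "er_neg": [1,  0,  1,  0,  1,  0,  1,  0,  1,  0],
        "stage3": [1,  1,  0,  0,  0,  0,  1,  0,  1,  0],
    })
    cph = CoxPHFitter().fit(df, duration_col="months", event_col="died")
    cph.print_summary()   # exp(coef) column = adjusted hazard ratios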
19. Permutation criteria to evaluate multiple clinical endpoints in a proof-of-concept study: lessons from Pre-RELAX-AHF
NARCIS (Netherlands)
Davison, Beth A.; Cotter, Gad; Sun, Hengrui; Chen, Li; Teerlink, John R.; Metra, Marco; Felker, G. Michael; Voors, Adriaan A.; Ponikowski, Piotr; Filippatos, Gerasimos; Greenberg, Barry; Teichman, Sam L.; Unemori, Elaine; Koch, Gary G.
2011-01-01

Clinically relevant endpoints cannot be routinely targeted with reasonable power in a small study. Hence, proof-of-concept studies are often powered to a primary surrogate endpoint. However, in acute heart failure (AHF), effects on surrogates have not translated into clinical benefit in confirmatory …

20. Molecular pathology endpoints useful for aging studies
Science.gov (United States)
Niedernhofer, L J; Kirkland, J L; Ladiges, W
2017-05-01

The first clinical trial aimed at targeting fundamental processes of aging will soon be launched (TAME: Targeting Aging with Metformin). In its wake is a robust pipeline of therapeutic interventions that have been demonstrated to extend lifespan or healthspan of preclinical models, including rapalogs, antioxidants, anti-inflammatory agents, and senolytics. This ensures that if the TAME trial is successful, numerous additional clinical trials are apt to follow. But a significant impediment to these trials remains the question of what endpoints should be measured. The design of the TAME trial very cleverly skirts around this, based on the fact that there are decades of data on metformin in humans, providing unequaled clarity of which endpoints are most likely to yield a positive outcome. But for a new chemical entity, knowing what endpoints to measure remains a formidable challenge. For economy's sake, and to achieve results in a reasonable time frame, surrogate markers of lifespan and healthy aging are desperately needed. This review provides a comprehensive analysis of molecular endpoints that are currently being used as indices of age-related phenomena (e.g., morbidity, frailty, mortality) and proposes an approach for validating and prioritizing these endpoints. Copyright © 2016 Elsevier B.V. All rights reserved.
Finally, it is likely that clinicians will want to follow the results of clinical treatment-response studies and epidemiologic studies that evaluate the relationship between clinical interventions or environmental risk and protective factors and surrogate endpoints, especially if the endpoints are progressing well along the phases of biomarker validation. These studies are likely to be of clinical interest because they may become the basis for randomized clinical trials to prevent cancer in BE. 2. Lessons from ECLIPSE: a review of COPD biomarkers. Science.gov (United States) Faner, Rosa; Tal-Singer, Ruth; Riley, John H; Celli, Bartolomé; Vestbo, Jørgen; MacNee, William; Bakke, Per; Calverley, Peter M A; Coxson, Harvey; Crim, Courtney; Edwards, Lisa D; Locantore, Nick; Lomas, David A; Miller, Bruce E; Rennard, Stephen I; Wouters, Emiel F M; Yates, Julie C; Silverman, Edwin K; Agusti, Alvar 2014-07-01 The Evaluation of COPD Longitudinally to Identify Predictive Surrogate End-points (ECLIPSE) study was a large 3-year observational controlled multicentre international study aimed at defining clinically relevant subtypes of chronic obstructive pulmonary disease (COPD) and identifying novel biomarkers and genetic factors. So far, the ECLIPSE study has produced more than 50 original publications and 75 communications to international meetings, many of which have significantly influenced our understanding of COPD. However, because there is not one paper reporting the biomarker results of the ECLIPSE study that may serve as a reference for practising clinicians, researchers and healthcare providers from academia, industry and government agencies interested in COPD, we decided to write a review summarising the main biomarker findings in ECLIPSE. 3. Choosing the best endpoint DEFF Research Database (Denmark) Christensen, Erik 2008-01-01 Design and endpoints of clinical trials in hepatocellular carcinoma. Llovet JM, Di Bisceglie AM, Bruix J, Kramer BS, Lencioni R, Zhu AX, Sherman M, Schwartz M, Lotze M, Talwalkar J, Gores GJ; for the Panel of Experts in HCC-Design Clinical Trials. The design of clinical trials in hepatocellular c... 4. Biomarker-based adaptive trials for patients with glioblastoma--lessons from I-SPY 2. Science.gov (United States) Alexander, Brian M; Wen, Patrick Y; Trippa, Lorenzo; Reardon, David A; Yung, Wai-Kwan Alfred; Parmigiani, Giovanni; Berry, Donald A 2013-08-01 The traditional clinical trials infrastructure may not be ideally suited to evaluate the numerous therapeutic hypotheses that result from the increasing number of available targeted agents combined with the various methodologies to molecularly subclassify patients with glioblastoma. Additionally, results from smaller screening studies are rarely translated to successful larger confirmatory studies, potentially related to a lack of efficient control arms or the use of unvalidated surrogate endpoints. Streamlining clinical trials and providing a flexible infrastructure for biomarker development is clearly needed for patients with glioblastoma. The experience developing and implementing the I-SPY studies in breast cancer may serve as a guide to developing such trials in neuro-oncology. 5.
High levels of biomarkers of collagen remodeling are associated with increased mortality in COPD DEFF Research Database (Denmark) Sand, Jannie M B; Leeming, Diana J; Byrjalsen, Inger 2016-01-01 ...with mortality in COPD and measured neo-epitopes originating from ECM proteins associated with lung tissue remodeling. METHODS: Biomarkers of ECM remodeling were assessed in a subpopulation (n = 1000) of the Evaluation of COPD Longitudinally to Identify Predictive Surrogate End-points (ECLIPSE) cohort. Validated immunoassays measuring serological neo-epitopes produced by proteolytic cleavage associated with degradation of collagen type I, III, IV, and VI, elastin, and biglycan, and formation of collagen type VI as well as fibrinogen and C-reactive protein were used. Multivariate models were used to assess the prognostic value of these biomarkers. RESULTS: Thirty subjects (3.0 %) died during follow-up. Non-survivors were older, had reduced exercise capacity, increased dyspnea score, and included fewer current smokers. All collagen biomarkers were significantly elevated in non-survivors compared to survivors... 6. Alzheimer's Disease Cerebrospinal Fluid and Neuroimaging Biomarkers: Diagnostic Accuracy and Relationship to Drug Efficacy. Science.gov (United States) Khan, Tapan K; Alkon, Daniel L 2015-01-01 Widely researched Alzheimer's disease (AD) biomarkers include in vivo brain imaging with PET and MRI, imaging of amyloid plaques, and biochemical assays of Aβ1-42, total tau, and phosphorylated tau (p-tau-181) in cerebrospinal fluid (CSF). In this review, we critically evaluate these biomarkers and discuss their clinical utility for the differential diagnosis of AD. Current AD biomarker tests are either highly invasive (requiring CSF collection) or expensive and labor-intensive (neuroimaging), making them unsuitable for use in the primary care, clinical office-based setting, or to assess drug efficacy in clinical trials. In addition, CSF and neuroimaging biomarkers continue to face challenges in achieving required sensitivity and specificity and minimizing center-to-center variability (for CSF-Aβ1-42 biomarkers CV = 26.5%; http://www.alzforum.org/news/conference-coverage/paris-standardization-hurdle-spinal-fluid-imaging-markers). Although potentially useful for selecting patient populations for inclusion in AD clinical trials, the utility of CSF biomarkers and neuroimaging techniques as surrogate endpoints of drug efficacy needs to be validated. Recent trials of β- and γ-secretase inhibitors and Aβ immunization-based therapies in AD showed no significant cognitive improvements, despite changes in CSF and neuroimaging biomarkers. As we learn more about the dysfunctional cellular and molecular signaling processes that occur in AD, and how these processes are manifested in tissues outside of the brain, new peripheral biomarkers may also be validated as non-invasive tests to diagnose preclinical and clinical AD. 7. Biomarkers-A General Review. Science.gov (United States) Aronson, Jeffrey K; Ferner, Robin E 2017-03-17 A biomarker is a biological observation that substitutes for and ideally predicts a clinically relevant endpoint or intermediate outcome that is more difficult to observe. The use of clinical biomarkers is easier and less expensive than direct measurement of the final clinical endpoint, and biomarkers are usually measured over a shorter time span.
They can be used in disease screening, diagnosis, characterization, and monitoring; as prognostic indicators; for developing individualized therapeutic interventions; for predicting and treating adverse drug reactions; for identifying cell types; and for pharmacodynamic and dose-response studies. To understand the value of a biomarker, it is necessary to know the pathophysiological relationship between the biomarker and the relevant clinical endpoint. Good biomarkers should be measurable with little or no variability, should have a sizeable signal to noise ratio, and should change promptly and reliably in response to changes in the condition or its therapy. Copyright © 2017 John Wiley & Sons, Inc. 8. Modulation of biologic endpoints by topical difluoromethylornithine (DFMO) in subjects at high-risk for nonmelanoma skin cancer. Science.gov (United States) Einspahr, Janine G; Nelson, Mark A; Saboda, Kathylynn; Warneke, James; Bowden, G Timothy; Alberts, David S 2002-01-01 More than one million new skin cancers are diagnosed yearly in the United States creating the need for effective primary and chemopreventive strategies to reduce the incidence, morbidity, and mortality associated with skin cancer. Skin chemoprevention trials often focus on subjects at high risk of nonmelanoma skin cancers and include biological endpoints like number of actinic keratoses (AK) and measures of cell proliferation, apoptosis, and p53 expression and/or mutation. Difluoromethylornithine (DFMO), an irreversible inhibitor of ornithine decarboxylase, suppresses increased polyamine synthesis and inhibits tumors in models of skin carcinogenesis. Thus, DFMO is a good candidate chemopreventive agent in humans at increased risk of NMSC. We reported previously results of a randomized, placebo-controlled trial of topical DFMO in 48 participants with AK. In this study there was a significant reduction in the number of AK (23.5%; P = 0.001) and the polyamine, spermidine (26%, P = 0.04; Alberts, D. S. et al. Cancer Epidemiol. Biomark. Prev., 9: 1281-2186, 2000). In skin biopsies from the same study, we demonstrate that topical DFMO significantly reduces the percentage of p53-positive cells (22%; P = 0.04); however, there were no significant changes in proliferating cell nuclear antigen or apoptotic indices, or in the frequency of p53 mutations (25% at baseline, 21% after placebo, and 26% after DFMO). We conclude that inhibition of the premalignant AK lesions as well as a reduction in the expression of p53 and in spermidine concentrations may serve as surrogate endpoint biomarkers of DFMO and possibly other topically administered skin cancer chemopreventive agents. 9. Birds as biodiversity surrogates DEFF Research Database (Denmark) Larsen, Frank Wugt; Bladt, Jesper Stentoft; Balmford, Andrew 2012-01-01 1. Most biodiversity is still unknown, and therefore, priority areas for conservation typically are identified based on the presence of surrogates, or indicator groups. Birds are commonly used as surrogates of biodiversity owing to the wide availability of relevant data and their broad popular appeal. However, some studies have found birds to perform relatively poorly as indicators. We therefore ask how the effectiveness of this approach can be improved by supplementing data on birds with information on other taxa. 2. Here, we explore two strategies using (i) species data for other taxa...
areas identified on the basis of birds alone performed well in representing overall species diversity where birds were relatively speciose compared to the other taxa in the data sets. Adding species data for one taxon increased surrogate effectiveness better than adding genus- and family-level data... 10. Science.gov (United States) Frank, Michael I [Dublin, CA] 2010-02-02 A self-contained source of gamma-ray and neutron radiation suitable for use as a radiation surrogate for weapons-grade plutonium is described. The source generates a radiation spectrum similar to that of weapons-grade plutonium at 5% energy resolution between 59 and 2614 keV, but contains no special nuclear material and emits little α-particle radiation. The weapons-grade plutonium radiation surrogate also emits neutrons having fluxes commensurate with the gamma-radiation intensities employed. 11. Systemic, local and imaging biomarkers of brain injury: more needed, and better use of those already established? Directory of Open Access Journals (Sweden) Keri Linda Carpenter 2015-02-01 Full Text Available Much progress has been made over the past two decades in the treatment of severe acute brain injury, including traumatic brain injury and subarachnoid haemorrhage, resulting in a higher proportion of patients surviving with better outcomes. This has arisen from a combination of factors. These include improvements in procedures at the scene (pre-hospital) and in the hospital emergency department, advances in neuromonitoring in the intensive care unit, both continuously at the bedside and intermittently in scans, evolution and refinement of protocol-driven therapy for better management of patients, and advances in surgical procedures and rehabilitation. Nevertheless, many patients still experience varying degrees of long-term disabilities post-injury with consequent demands on carers and resources, and there is room for improvement. Biomarkers are a key aspect of neuromonitoring. A broad definition of a biomarker is any observable feature that can be used to inform on the state of the patient, e.g. a molecular species, a feature on a scan, or a monitoring characteristic, e.g. the cerebrovascular pressure reactivity index. Biomarkers are usually quantitative measures, which can be utilised in diagnosis and monitoring of response to treatment. They are thus crucial to the development of therapies and may be utilised as surrogate endpoints in Phase II clinical trials. To date, there is no specific drug treatment for acute brain injury, and many seemingly promising agents emerging from pre-clinical animal models have failed in clinical trials. Large Phase III studies of clinical outcomes are costly, consuming time and resources. It is therefore important that adequate Phase II clinical studies with informative surrogate endpoints are performed employing appropriate biomarkers. In this article we review some of the available systemic, local and imaging biomarkers and technologies relevant in acute brain injury patients, and highlight gaps in the current 12. Correlation between the genotoxicity endpoints measured by two different genotoxicity assays: comet assay and CBMN assay OpenAIRE Carina Ladeira; Susana Viegas; Manuel C. Gomes 2015-01-01 The cytokinesis-block micronucleus cytome (CBMN) assay is a comprehensive system for measuring DNA damage; cytostasis and cytotoxicity-DNA damage events are scored specifically in once-divided binucleated cells.
The endpoints that can be measured are micronuclei (MN), a biomarker of chromosome breakage and/or whole chromosome loss, nucleoplasmic bridges (NPB), a biomarker of DNA misrepair and/or telomere end-fusions, and nuclear buds (NBUD), a biomarker of elimination of amplified DNA and/... 13. Surrogate Markers of Abdominal Aortic Aneurysm Progression. Science.gov (United States) Wanhainen, Anders; Mani, Kevin; Golledge, Jonathan 2016-02-01 The natural course of many abdominal aortic aneurysms (AAA) is to gradually expand and eventually rupture, and monitoring the disease progression is essential to their management. In this publication, we review surrogate markers of AAA progression. AAA diameter remains the most widely used and important marker of AAA growth. Standardized reporting of reproducible methods of measuring AAA diameter is essential. Newer imaging assessments, such as volume measurements, biomechanical analyses, and functional and molecular imaging, as well as circulating biomarkers, have potential to add important information about AAA progression. Currently, however, there is insufficient evidence to recommend their routine use in clinical practice. 14. Endpoints in pediatric pain studies NARCIS (Netherlands) M. van Dijk (Monique); I. Ceelie (Ilse); D. Tibboel (Dick) 2010-01-01 Assessing pain intensity in (preverbal) children is more difficult than in adults. Tools to measure pain are being used as primary endpoints [e.g., pain intensity, time to first (rescue) analgesia, total analgesic consumption, adverse effects, and long-term effects] in studies on the eff 15. Receiver Operating Characteristic (ROC) to Determine Cut-Off Points of Biomarkers in Lung Cancer Patients Directory of Open Access Journals (Sweden) Heidi L. Weiss 2004-01-01 Full Text Available The role of biomarkers in disease prognosis continues to be an important investigation in many cancer studies. In order for these biomarkers to have practical application in clinical decision making regarding patient treatment and follow-up, it is common to dichotomize patients into those with low vs. high expression levels. In this study, receiver operating characteristic (ROC) curves, area under the curve (AUC) of the ROC, sensitivity, specificity, as well as likelihood ratios were calculated to determine levels of growth factor biomarkers that best differentiate lung cancer cases versus control subjects. Selected cut-off points for p185erbB-2 and EGFR membrane appear to have good discriminating power to differentiate control tissues versus uninvolved tissues from patients with lung cancer (AUC = 89% and 90%, respectively); while AUC increased to at least 90% for selected cut-off points for p185erbB-2 membrane, EGFR membrane, and FASE when comparing between control versus carcinoma tissues from lung cancer cases. Using data from control subjects compared to patients with lung cancer, we presented a simple and intuitive approach to determine dichotomized levels of biomarkers and validated the value of these biomarkers as surrogate endpoints for cancer outcome.
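The ROC methodology in the record above is easy to make concrete. The sketch below is illustrative only: the marker levels, group sizes, and the use of Youden's J to pick the cut-off are assumptions made for demonstration (the study above also reports likelihood ratios), not the paper's actual data or procedure.

```python
# Hypothetical illustration of dichotomizing a biomarker via ROC analysis.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
controls = rng.normal(1.0, 0.5, 200)   # invented marker levels, controls
cases = rng.normal(1.8, 0.6, 120)      # invented marker levels, cancer cases
y = np.r_[np.zeros(controls.size), np.ones(cases.size)]
x = np.r_[controls, cases]

fpr, tpr, thr = roc_curve(y, x)        # candidate cut-offs and their error rates
auc = roc_auc_score(y, x)
j = np.argmax(tpr - fpr)               # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.2f}; cut-off = {thr[j]:.2f}; "
      f"sensitivity = {tpr[j]:.2f}; specificity = {1 - fpr[j]:.2f}")
```

Patients above the chosen cut-off would be labelled "high expression"; reporting sensitivity and specificity alongside the AUC is what makes the dichotomization defensible.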
16. Developments in Surrogating Methods Directory of Open Access Journals (Sweden) Hans van Dormolen 2005-11-01 Full Text Available In this paper, I would like to talk about the developments in surrogating methods for preservation. My main focus will be on the technical aspects of preservation surrogates. This means that I will tell you something about my job as Quality Manager Microfilming for the Netherlands’ national preservation program, Metamorfoze, which is coordinated by the National Library. I am responsible for the quality of the preservation microfilms, which are produced for Metamorfoze. Firstly, I will elaborate on developments in preservation methods in relation to the following subjects: · Preservation microfilms · Scanning of preservation microfilms · Preservation scanning · Computer Output Microfilm. In the closing paragraphs of this paper, I would like to tell you something about the methylene blue test. This is an important test for long-term storage of preservation microfilms. Also, I will give you a brief report on the Cellulose Acetate Microfilm Conference that was held in the British Library in London, May 2005. 17. Establishing a group of endpoints to support collective operations without specifying unique identifiers for any endpoints Energy Technology Data Exchange (ETDEWEB) Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong 2016-02-02 A parallel computer executes a number of tasks; each task includes a number of endpoints and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification. 18. Biomarkers in Airway Diseases Directory of Open Access Journals (Sweden) Janice M Leung 2013-01-01 Full Text Available The inherent limitations of spirometry and clinical history have prompted clinicians and scientists to search for surrogate markers of airway diseases. Although few biomarkers have been widely accepted into the clinical armamentarium, the authors explore three sources of biomarkers that have shown promise as indicators of disease severity and treatment response. In asthma, exhaled nitric oxide measurements can predict steroid responsiveness and sputum eosinophil counts have been used to titrate anti-inflammatory therapies. In chronic obstructive pulmonary disease, inflammatory plasma biomarkers, such as fibrinogen, club cell secretory protein-16 and surfactant protein D, can denote greater severity and predict the risk of exacerbations. While the multitude of disease phenotypes in respiratory medicine makes biomarker development especially challenging, these three may soon play key roles in the diagnosis and management of airway diseases. 19. Objective biomarkers of balance and gait for Parkinson's disease using body-worn sensors. Science.gov (United States) Horak, Fay B; Mancini, Martina 2013-09-15 Balance and gait impairments characterize the progression of Parkinson's disease (PD), predict the risk of falling, and are important contributors to reduced quality of life.
Advances in the technology of small, body-worn inertial sensors have made it possible to develop quick, objective measures of balance and gait impairments in the clinic for research trials and clinical practice. Objective balance and gait metrics may eventually provide useful biomarkers for PD. In fact, objective balance and gait measures are already being used as surrogate endpoints for demonstrating clinical efficacy of new treatments, in place of counting falls from diaries, using stop-watch measures of gait speed, or clinical balance rating scales. This review summarizes the types of objective measures available from body-worn sensors. The metrics are organized based on the neural control system for mobility affected by PD: postural stability in stance, postural responses, gait initiation, gait (temporal-spatial lower and upper body coordination and dynamic equilibrium), postural transitions, and freezing of gait. However, the explosion of metrics derived by wearable sensors during prescribed balance and gait tasks, which are abnormal in individuals with PD, does not yet qualify as behavioral biomarkers, because many balance and gait impairments observed in PD are not specific to the disease, nor have they been related to specific pathophysiologic biomarkers. In the future, the most useful balance and gait biomarkers for PD will be those that are sensitive and specific for early PD and are related to the underlying disease process. 20. Differences in surrogate threshold effect estimates between original and simplified correlation-based validation approaches. Science.gov (United States) Schürmann, Christoph; Sieben, Wiebke 2016-03-30 Surrogate endpoint validation has been well established by the meta-analytical correlation-based approach as outlined in the seminal work of Buyse et al. (Biostatistics, 2000). Surrogacy can be assumed if strong associations on individual and study levels can be demonstrated. Alternatively, if an effect on a true endpoint is to be predicted from a surrogate endpoint in a new study, the surrogate threshold effect (STE, Burzykowski and Buyse, Pharmaceutical Statistics, 2006) can be used. In practice, as individual patient data (IPD) are hard to obtain, some authors use only aggregate data and perform simplified regression analyses. We are interested in the extent to which such simplified analyses are biased compared with the ones from a full model with IPD. To this end, we conduct a simulation study with IPD and compute STEs from full and simplified analyses for varying data situations in terms of number of studies, correlations, variances and so on. In the scenarios considered, we show that, for normally distributed patient data, STEs derived from ordinary (weighted) linear regression generally underestimate STEs derived from the original model, whereas meta-regression often results in overestimation. Therefore, if individual data cannot be obtained, STEs from meta-regression may be used as conservative alternatives, but ordinary (weighted) linear regression should not be used for surrogate endpoint validation. Copyright © 2015 John Wiley & Sons, Ltd.
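For readers meeting the surrogate threshold effect (STE) for the first time, here is a minimal sketch of the quantity compared in the record above. Everything in it is an assumption for illustration: invented study-level effects and a plain unweighted regression (one of the simplified analyses the paper warns about), with the STE taken as the least extreme surrogate effect whose 95% prediction interval for the true-endpoint effect still excludes zero.

```python
# Toy STE computation from aggregate (study-level) treatment effects.
import numpy as np
from scipy import stats

surr = np.array([-0.9, -0.6, -0.5, -0.3, -0.2, 0.0, 0.1])   # invented surrogate effects
true = np.array([-0.7, -0.5, -0.35, -0.2, -0.1, 0.05, 0.1]) # invented true-endpoint effects

n = surr.size
b, a = np.polyfit(surr, true, 1)                 # slope, intercept of the trend line
s = np.sqrt(np.sum((true - (a + b * surr))**2) / (n - 2))   # residual standard error
sxx = np.sum((surr - surr.mean())**2)
tcrit = stats.t.ppf(0.975, n - 2)

def upper_pred_limit(x0):
    """Upper 95% prediction limit for the true effect in a new study."""
    se = s * np.sqrt(1 + 1/n + (x0 - surr.mean())**2 / sxx)
    return a + b * x0 + tcrit * se

grid = np.linspace(-2.0, 0.0, 2001)              # negative effects mean benefit here
mask = np.array([upper_pred_limit(x) < 0 for x in grid])
print("STE ~", grid[mask].max() if mask.any() else "not reached on this grid")
```

The paper's point is precisely that swapping this naive fit for a proper meta-regression (or the full IPD model) moves the STE, so the regression choice is not innocent.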
1. Surrogate Modeling for Geometry Optimization DEFF Research Database (Denmark) Rojas Larrazabal, Marielba de la Caridad; Abraham, Yonas; Holzwarth, Natalie 2009-01-01 A new approach for optimizing the nuclear geometry of an atomic system is described. Instead of the original expensive objective function (energy functional), a small number of simpler surrogates is used. 2. Behavioral endpoints for radiation injury Science.gov (United States) Rabin, B. M.; Joseph, J. A.; Hunt, W. A.; Dalton, T. B.; Kandasamy, S. B.; Harris, A. H.; Ludewig, B. 1994-10-01 The relative behavioral effectiveness of heavy particles was evaluated. Using the taste aversion paradigm in rats, the behavioral toxicity of most types of radiation (including 20Ne and 40Ar) was similar to that of 60Co photons. Only 56Fe and 93Nb particles and fission neutrons were significantly more effective. Using emesis in ferrets as the behavioral endpoint, 56Fe particles and neutrons were again the most effective; however, 60Co photons were significantly more effective than 18 MeV electrons. These results suggest that LET does not completely predict behavioral effectiveness. Additionally, exposing rats to 10 cGy of 56Fe particles attenuated amphetamine-induced taste aversion learning. This behavior is one of a broad class of behaviors which depends on the integrity of the dopaminergic system and suggests the possibility of alterations in these behaviors following exposure to heavy particles in a space radiation environment. 3. Reduction of adverse effects by a mushroom product, active hexose correlated compound (AHCC) in patients with advanced cancer during chemotherapy--the significance of the levels of HHV-6 DNA in saliva as a surrogate biomarker during chemotherapy. Science.gov (United States) Ito, Toshinori; Urushima, Hayato; Sakaue, Miki; Yukawa, Sayoko; Honda, Hatsumi; Hirai, Kei; Igura, Takumi; Hayashi, Noriyuki; Maeda, Kazuhisa; Kitagawa, Toru; Kondo, Kazuhiro 2014-01-01 Chemotherapy improves the outcome of cancer treatment, but patients are sometimes forced to discontinue chemotherapy or drop out of a clinical trial due to adverse effects, such as gastrointestinal disturbances and suppression of bone marrow function. The objective of this study was to evaluate the safety and effectiveness of a mushroom product, active hexose correlated compound (AHCC), on chemotherapy-induced adverse effects and quality of life (QOL) in patients with cancer. Twenty-four patients with cancer received their first cycle of chemotherapy without AHCC and then received their second cycle with AHCC. During chemotherapy, we weekly evaluated adverse effects and QOL via a blood test, EORTC QLQ-C30 questionnaire, and DNA levels of herpes virus type 6 (HHV-6) in saliva. The DNA levels of HHV-6 were significantly increased after chemotherapy. Interestingly, administration of AHCC significantly decreased the levels of HHV-6 in saliva during chemotherapy and improved not only QOL scores in the EORTC QLQ-C30 questionnaire but also hematotoxicity and hepatotoxicity. These findings suggest that salivary HHV-6 levels may be a good biomarker of QOL in patients during chemotherapy, and that AHCC may have a beneficial effect on chemotherapy-associated adverse effects and QOL in patients with cancer undergoing chemotherapy. 4.
Carotid intimal-media thickness as a surrogate for cardiovascular disease events in trials of HMG-CoA reductase inhibitors Directory of Open Access Journals (Sweden) Morgan Timothy 2005-03-01 Full Text Available Abstract Background Surrogate measures for cardiovascular disease events have the potential to increase greatly the efficiency of clinical trials. A leading candidate for such a surrogate is the progression of intima-media thickness (IMT) of the carotid artery; much experience has been gained with this endpoint in trials of HMG-CoA reductase inhibitors (statins). Methods and Results We examine two separate systems of criteria that have been proposed to define surrogate endpoints, based on clinical and statistical arguments. We use published results and a formal meta-analysis to evaluate whether progression of carotid IMT meets these criteria for HMG-CoA reductase inhibitors (statins). IMT meets clinical-based criteria to serve as a surrogate endpoint for cardiovascular events in statin trials, based on relative efficiency, linkage to endpoints, and congruency of effects. Results from a meta-analysis and post-trial follow-up from a single published study suggest that IMT meets established statistical criteria by accounting for intervention effects in regression models. Conclusion Carotid IMT progression meets accepted definitions of a surrogate for cardiovascular disease endpoints in statin trials. This does not, however, establish that it may serve universally as a surrogate marker in trials of other agents. 5. ENDPOINT PROTECTION SECURITY SYSTEM FOR AN ENTERPRISE OpenAIRE Ruotsalainen, Petri 2013-01-01 The thesis was commissioned by Metso Shared Services Ltd. The objective was to find out if Microsoft Forefront Endpoint Protection 2010 (FEP) would be a sufficiently secure and cost-effective system to fulfill the requirements of the company's endpoint protection security system. Microsoft FEP was compared and benchmarked with some of the other most significant endpoint protection products based on the requirements and definitions of the commissioner. The comparison and evaluation were based on investigation a... 6. Biological surrogate end-points in cancer trials: potential uses, benefits and pitfalls. NARCIS (Netherlands) Cooper, R.; Kaanders, J.H.A.M. 2005-01-01 New technologies have led to the development of an increasing number of targeted therapies and interest in combining these with conventional therapy to provide individualised patient treatments. New drug or treatment regimens must, however, undergo rigorous testing under strictly controlled conditio 7. COPD association and repeatability of blood biomarkers in the ECLIPSE cohort Directory of Open Access Journals (Sweden) Dickens Jennifer A 2011-11-01 Full Text Available Abstract Background There is a need for biomarkers to better characterise individuals with COPD and to aid with the development of therapeutic interventions. A panel of putative blood biomarkers was assessed in a subgroup of the Evaluation of COPD Longitudinally to Identify Surrogate Endpoints (ECLIPSE) cohort. Methods Thirty-four blood biomarkers were assessed in 201 subjects with COPD, 37 ex-smoker controls with normal lung function and 37 healthy non-smokers selected from the ECLIPSE cohort. Biomarker repeatability was assessed using baseline and 3-month samples.
Intergroup comparisons were made using analysis of variance, repeatability was assessed through Bland-Altman plots, and correlations between biomarkers and clinical characteristics were assessed using Spearman correlation coefficients. Results Fifteen biomarkers were significantly different in individuals with COPD when compared to former or non-smoker controls. Some biomarkers, including tumor necrosis factor-α and interferon-γ, were measurable in only a minority of subjects whilst others such as C-reactive protein showed wide variability over the 3-month replication period. Fibrinogen was the most repeatable biomarker and exhibited a weak correlation with 6-minute walk distance, exacerbation rate, BODE index and MRC dyspnoea score in COPD subjects. 33% (66/201) of the COPD subjects reported at least 1 exacerbation over the 3-month study with 18% (36/201) reporting the exacerbation within 30 days of the 3-month visit. CRP, fibrinogen, interleukin-6 and surfactant protein-D were significantly elevated in those COPD subjects with exacerbations within 30 days of the 3-month visit compared with those individuals that did not exacerbate or whose exacerbations had resolved. Conclusions Only a few of the biomarkers assessed may be useful in diagnosis or management of COPD where the diagnosis is based on airflow obstruction (GOLD). Further analysis of more promising biomarkers may reveal 8. A proposed panel of biomarkers of healthy ageing. Science.gov (United States) Lara, Jose; Cooper, Rachel; Nissan, Jack; Ginty, Annie T; Khaw, Kay-Tee; Deary, Ian J; Lord, Janet M; Kuh, Diana; Mathers, John C 2015-09-15 studies of human ageing, in health surveys of older people and as outcomes in intervention studies that aim to promote healthy ageing. Further, the inclusion of the same common panel of measures of healthy ageing in diverse study designs and populations may enhance the value of those studies by allowing the harmonisation of surrogate endpoints or outcome measures, thus facilitating less equivocal comparisons between studies and the pooling of data across studies. 9. Trends in Qualifying Biomarkers in Drug Safety. Consensus of the 2011 Meeting of the Spanish Society of Clinical Pharmacology Science.gov (United States) Agúndez, José A. G.; del Barrio, Jaime; Padró, Teresa; Stephens, Camilla; Farré, Magí; Andrade, Raúl J.; Badimon, Lina; García-Martín, Elena; Vilahur, Gemma; Lucena, M. Isabel 2012-01-01 In this paper we discuss the consensus view on the use of qualifying biomarkers in drug safety, raised within the frame of the XXIV meeting of the Spanish Society of Clinical Pharmacology held in Málaga (Spain) in October, 2011. The widespread use of biomarkers as surrogate endpoints is a goal that scientists have long been pursuing. Thirty years ago, when molecular pharmacogenomics evolved, we anticipated that these genetic biomarkers would soon obviate the routine use of drug therapies in a way that patients should adapt to the therapy rather than the opposite. This expected revolution in routine clinical practice never took place as quickly nor with the intensity as initially expected. The concerted action of operating multicenter networks holds great promise for future studies to identify biomarkers related to drug toxicity and to provide better insight into the underlying pathogenesis.
Today some pharmacogenomic advances are already widely accepted, but pharmacogenomics still needs further development to elaborate more precise algorithms, and many barriers to implementing individualized medicine exist. We briefly discuss our view about these barriers and we provide suggestions and areas of focus to advance in the field. PMID:22294980 10. Blood-based biomarkers for Parkinson's disease. Science.gov (United States) Chahine, Lama M; Stern, Matthew B; Chen-Plotkin, Alice 2014-01-01 There is a pressing need for biomarkers to diagnose Parkinson's disease (PD), assess disease severity, and prognosticate course. Various types of biologic specimens are potential candidates for identifying biomarkers--defined here as surrogate indicators of physiological or pathophysiological states--but blood has the advantage of being minimally invasive to obtain. There are, however, several challenges to identifying biomarkers in blood. Several candidate biomarkers identified in other diseases or in other types of biological fluids are being pursued as blood-based biomarkers in PD. In addition, unbiased discovery is underway using techniques including metabolomics, proteomics, and gene expression profiling. In this review, we summarize these techniques and discuss the challenges and successes of blood-based biomarker discovery in PD. Blood-based biomarkers that are discussed include α-synuclein, DJ-1, uric acid, epidermal growth factor, apolipoprotein-A1, and peripheral inflammatory markers. 11. Compton scattering in the Endpoint Model CERN Document Server Dagaonkar, Sumeet 2016-01-01 We use the Endpoint model for exclusive hadronic processes to study Compton scattering of the proton. The parameters of the Endpoint model are fixed using the data for $F_1$ and the ratio of Pauli and Dirac form factors ($F_2/F_1$) and then used to get numerical predictions for the differential scattering cross section. We studied the Compton scattering at fixed $\theta_{CM}$ in the $s \sim t \gg \Lambda_{QCD}$ limit and in the limit of fixed $s$ much larger than $t$. We observed that the calculations in the Endpoint Model give a good fit with experimental data in both regions. 12. Speech endpoint detection in real noise environments Institute of Scientific and Technical Information of China (English) GUO Yanmeng; FU Qiang; YAN Yonghong 2007-01-01 A method of speech endpoint detection in environments of complicated additive noise is presented. Based on the analysis of noise, an adaptive model of stationary noise is proposed to detect the section where the signal is nonstationary. Then the voice is detected in this section by its harmonic structure, and the accurate endpoint is then located using energy. Compared with the typical algorithms, this algorithm operates reliably in most real noise environments. 13. Critical endpoint behavior: A Wang Landau study Science.gov (United States) Landau, D. P.; Wang, Fugao; Tsai, Shan-Ho 2008-07-01 We study the critical endpoint behavior using an asymmetric Ising model with two- and three-body interactions on a triangular lattice, in the presence of an external field. The simulation method we use is Wang-Landau sampling in a two-dimensional parameter space. We observe a clear divergence of the curvature of the spectator phase boundary and of the magnetization coexistence diameter derivative at the critical endpoint, and the exponents for both divergences agree well with previous theoretical predictions.
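Since the record above leans on it, here is a minimal illustration of Wang-Landau sampling: a flat-histogram random walk that iteratively builds the density of states g(E). This sketch is deliberately far simpler than the two-dimensional parameter-space study summarized above; the small 2D Ising model, lattice size, flatness rule, and stopping tolerance are all arbitrary demonstration choices.

```python
# Toy Wang-Landau estimate of ln g(E) for a periodic 2D Ising model.
import numpy as np

L = 8
N = L * L
rng = np.random.default_rng(42)
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    return -int(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))

levels = {e: i for i, e in enumerate(range(-2 * N, 2 * N + 1, 4))}
lng = np.zeros(len(levels))        # running estimate of ln g(E)
hist = np.zeros(len(levels))       # visit histogram for the flatness test
lnf = 1.0                          # modification factor, halved when flat
E = energy(spins)

while lnf > 1e-3:                  # production runs push this much lower (~1e-8)
    for _ in range(20000):
        i, j = rng.integers(0, L, 2)
        dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        # accept with probability min(1, g(E)/g(E')) to flatten the walk in E
        if rng.random() < np.exp(lng[levels[E]] - lng[levels[E + dE]]):
            spins[i, j] *= -1
            E += dE
        lng[levels[E]] += lnf
        hist[levels[E]] += 1
    seen = hist > 0
    if hist[seen].min() > 0.8 * hist[seen].mean():   # crude flatness criterion
        hist[:] = 0
        lnf /= 2.0
```

From ln g(E) one can then compute thermodynamic averages at any temperature, which is what makes the method attractive for mapping phase boundaries such as the critical endpoint studied above.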
14. Revisiting algorithms for generating surrogate time series CERN Document Server Raeth, C; Papadakis, I E; Brinkmann, W 2011-01-01 The method of surrogates is one of the key concepts of nonlinear data analysis. Here, we demonstrate that commonly used algorithms for generating surrogates often fail to generate truly linear time series. Rather, they create surrogate realizations with Fourier phase correlations leading to non-detections of nonlinearities. We argue that reliable surrogates can only be generated if one tests separately for static and dynamic nonlinearities.
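The record above concerns exactly the kind of algorithm sketched here: a Fourier-transform surrogate, which keeps a series' power spectrum but randomizes its phases so that any nonlinear structure is destroyed. This is the textbook baseline, not the authors' corrected procedure; refined schemes such as (iterative) amplitude-adjusted surrogates also match the amplitude distribution.

```python
# Phase-randomized (FT) surrogate of a real-valued time series.
import numpy as np

def ft_surrogate(x, rng=None):
    rng = rng or np.random.default_rng()
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = 0.0               # keep the zero-frequency (mean) term real
    if n % 2 == 0:
        phases[-1] = 0.0          # the Nyquist component must stay real too
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

x = np.sin(np.linspace(0, 30, 500)) ** 3        # a mildly nonlinear toy signal
s = ft_surrogate(x, np.random.default_rng(1))
# s has (almost) the same power spectrum as x but scrambled phases
```

Comparing a nonlinearity statistic on the data against its distribution over many such surrogates is the standard hypothesis test; the paper's warning is that imperfect surrogates bias exactly this comparison.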
15. Establishing a group of endpoints in a parallel computer Energy Technology Data Exchange (ETDEWEB) Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong 2016-02-02 A parallel computer executes a number of tasks; each task includes a number of endpoints and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification. 16. Novel biomarkers for cancer detection and prognostication NARCIS (Netherlands) Mehra, N. 2007-01-01 In this thesis we used a variety of approaches for biomarker discovery; in Part I we assessed whether we could identify a non-invasive surrogate marker of angiogenesis, as new vessel formation plays critical roles in the growth and metastatic spread of tumors. Moreover, many agents targeting the va 17. Forecasting interest rates with shifting endpoints DEFF Research Database (Denmark) Van Dijk, Dick; Koopman, Siem Jan; Wel, Michel van der 2014-01-01 We consider forecasting the term structure of interest rates with the assumption that factors driving the yield curve are stationary around a slowly time-varying mean or ‘shifting endpoint’. The shifting endpoints are captured using either (i) time series methods (exponential smoothing), (ii) long-range survey forecasts of either interest rates or inflation and output growth, or (iii) exponentially smoothed realizations of these macro variables. Allowing for shifting endpoints in yield curve factors provides substantial and significant gains in out-of-sample predictive accuracy, relative to stationary and random walk benchmarks. Forecast improvements are largest for long-maturity interest rates and for long-horizon forecasts. 18. Polymorphic Endpoint Types for Copyless Message Passing Directory of Open Access Journals (Sweden) Viviana Bono 2011-07-01 Full Text Available We present PolySing#, a calculus that models process interaction based on copyless message passing, in the style of Singularity OS. We equip the calculus with a type system that accommodates polymorphic endpoint types, which are a variant of polymorphic session types, and we show that well-typed processes are free from faults, leaks, and communication errors. The type system is essentially linear, although linearity alone may leave room for scenarios where well-typed processes leak memory. We identify a condition on endpoint types that prevents these leaks from occurring. 19. Quantum Endpoint Detection Based on QRDA Science.gov (United States) Wang, Jian; Wang, Han; Song, Yan 2017-08-01 Speech recognition technology is widely used in many applications for man-machine interaction. To face more and more speech data, the computation of speech processing needs new approaches. Quantum computation is an emerging computation technology and has been seen as a useful computation model. So we focus on the basic operation of speech recognition processing, voice activity detection, to present a quantum endpoint detection algorithm. In order to achieve this algorithm, an n-bit quantum comparator circuit is given first. Then, based on QRDA (Quantum Representation of Digital Audio), a quantum endpoint detection algorithm is presented. These quantum circuits could efficiently process the audio data on a quantum computer. 1. Use of nutrigenomics endpoints in dietary interventions NARCIS (Netherlands) Hendriks, H.F.J. 2013-01-01 In this paper, the nutrigenomics approach is discussed as a research tool to study the physiological effects of nutrition and consequently how nutrition affects health and disease (endpoints). Nutrigenomics is the study of the effects of foods and food constituents on gene expression; the analyses i 2. Shift endpoint trace selection algorithm and wavelet analysis to detect the endpoint using optical emission spectroscopy Science.gov (United States) Ben Zakour, Sihem; Taleb, Hassen 2016-06-01 Endpoint detection (EPD) is a very important undertaking for understanding and determining whether a plasma etching process has run correctly. It is a crucial part of delivering repeatable results for every wafer. When the film to be etched has been completely erased, the endpoint is reached. In order to ensure the desired device performance on the produced integrated circuit, many sensors are used to detect the endpoint, such as optical, electrical, acoustical/vibrational, thermal, and frictional sensors. But, except for the optical sensor, the other ones show weaknesses due to the environmental conditions, which affect the exactness of reaching the endpoint. Unfortunately, in some processes the exposed area of the film to be etched is very low, weakening the signal and showing the incapacity of the traditional endpoint detection method to determine the wind-up of the etch process.
This work provides a means to improve endpoint detection sensitivity by collecting full spectral data (1201 spectra for each run); a new, deliberately unsophisticated algorithm, named shift endpoint trace selection (SETS), is then proposed to select the important endpoint traces. Then a sensitivity analysis of the linear methods, principal component analysis (PCA) and factor analysis (FA), and of the nonlinear method, wavelet analysis (WA), applied to both approximation and detail coefficients, is carried out to compare the performance of these methods. The signal-to-noise ratio (SNR) is computed based not only on the main etch (ME) period but also on the over etch (OE) period. Moreover, a statistic previously unused for EPD, the coefficient of variation (CV), is proposed to reach the endpoint in the plasma etch process.
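The coefficient-of-variation statistic proposed above can be illustrated in a few lines. The sketch is a loose, hypothetical reading of the idea: a synthetic optical-emission trace with a step at the endpoint, and an arbitrary window and threshold. It is not the authors' SETS/wavelet pipeline.

```python
# Flag a plasma-etch endpoint when the sliding-window CV of an OES trace jumps.
import numpy as np

def cv_endpoint(trace, window=25, threshold=0.05):
    for t in range(window, trace.size):
        seg = trace[t - window:t]
        if seg.std() / seg.mean() > threshold:   # CV = sigma / mu
            return t                             # first sample with a CV jump
    return None

rng = np.random.default_rng(7)
t = np.arange(500)
trace = np.where(t < 300, 1.0, 0.6) + rng.normal(0, 0.005, t.size)  # step near 300
print(cv_endpoint(trace))   # reports a sample shortly after 300 on this toy trace
```

The appeal of CV over a raw intensity threshold is that it is dimensionless, so the same trigger level can survive run-to-run drifts in absolute emission intensity.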
3. Guidelines for time-to-event end-point definitions in trials for pancreatic cancer. Results of the DATECAN initiative (Definition for the Assessment of Time-to-event End-points in CANcer trials). Science.gov (United States) Bonnetain, Franck; Bonsing, Bert; Conroy, Thierry; Dousseau, Adelaide; Glimelius, Bengt; Haustermans, Karin; Lacaine, François; Van Laethem, Jean Luc; Aparicio, Thomas; Aust, Daniela; Bassi, Claudio; Berger, Virginie; Chamorey, Emmanuel; Chibaudel, Benoist; Dahan, Laeticia; De Gramont, Aimery; Delpero, Jean Robert; Dervenis, Christos; Ducreux, Michel; Gal, Jocelyn; Gerber, Erich; Ghaneh, Paula; Hammel, Pascal; Hendlisz, Alain; Jooste, Valérie; Labianca, Roberto; Latouche, Aurelien; Lutz, Manfred; Macarulla, Teresa; Malka, David; Mauer, Muriel; Mitry, Emmanuel; Neoptolemos, John; Pessaux, Patrick; Sauvanet, Alain; Tabernero, Josep; Taieb, Julien; van Tienhoven, Geertjan; Gourgou-Bourgade, Sophie; Bellera, Carine; Mathoulin-Pélissier, Simone; Collette, Laurence 2014-11-01 Using potential surrogate end-points for overall survival (OS) such as Disease-Free- (DFS) or Progression-Free Survival (PFS) is increasingly common in randomised controlled trials (RCTs). However, end-points are too often imprecisely defined, which largely contributes to a lack of homogeneity across trials, hampering comparison between them. The aim of the DATECAN (Definition for the Assessment of Time-to-event End-points in CANcer trials)-Pancreas project is to provide guidelines for standardised definition of time-to-event end-points in RCTs for pancreatic cancer. Time-to-event end-points currently used were identified from a literature review of pancreatic RCT trials (2006-2009). Academic research groups were contacted for participation in order to select clinicians and methodologists to participate in the pilot and scoring groups (>30 experts). A consensus was built after 2 rounds of the modified Delphi formal consensus approach with the Rand scoring methodology (range: 1-9). For pancreatic cancer, 14 time-to-event end-points and 25 distinct event types applied to two settings (detectable disease and/or no detectable disease) were considered relevant and included in the questionnaire sent to 52 selected experts. Thirty experts answered both scoring rounds. A total of 204 events distributed over the 14 end-points were scored. After the first round, consensus was reached for 25 items; after the second, consensus was reached for 156 items; and after the face-to-face meeting, for 203 items. The formal consensus approach led to the elaboration of guidelines for standardised definitions of time-to-event end-points allowing cross-comparison of RCTs in pancreatic cancer. Copyright © 2014 Elsevier Ltd. All rights reserved. 4. Embryotoxicity assessment of developmental neurotoxicants using a neuronal endpoint in the embryonic stem cell test. Science.gov (United States) Baek, Dae Hyun; Kim, Tae Gyun; Lim, Hwa Kyung; Kang, Jin Wook; Seong, Su Kyoung; Choi, Seung Eun; Lim, So Yun; Park, Sung Hee; Nam, Bong-hyun; Kim, Eun Hee; Kim, Mun Sin; Park, Kui Lea 2012-08-01 The embryonic stem cell test (EST) is a validated in vitro embryotoxicity test; however, as the inhibition of cardiac differentiation alone is used as a differentiation endpoint in the EST, it may not be a useful test to screen embryotoxic chemicals that affect the differentiation of noncardiac tissues. Previously, methylmercury (MeHg), cadmium and arsenic compounds, which are heavy metals that induce developmental neurotoxicity in vivo, were misclassified as nonembryotoxic with the EST. The aim of this study was to improve the EST to correctly screen such developmental neurotoxicants. We developed a neuronal endpoint (Tuj-1 ID₅₀) using flow cytometry analysis of Tuj-1-positive cells to screen developmental neurotoxicants (MeHg, valproic acid, sodium arsenate and sodium arsenite) correctly using an adherent monoculture differentiation method. Using Tuj-1 ID₅₀ in the EST instead of cardiac ID₅₀, all of the tested chemicals were classified as embryotoxic, while the negative controls were correctly classified as nonembryotoxic. To support the validity of Tuj-1 ID₅₀, we compared the results from two experimenters who independently tested MeHg using our modified EST. An additional neuronal endpoint (MAP2 ID₅₀), obtained by analyzing the relative quantity of MAP2 mRNA, was used to classify the same chemicals. There were no significant differences in the three endpoint values of the two experimenters or in the classification results, except for isoniazid. In conclusion, our results indicate that Tuj-1 ID₅₀ can be used as a surrogate endpoint of the traditional EST to screen developmental neurotoxicants correctly and it can also be applied to other chemicals.
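As a companion to the EST record above, here is a hedged sketch of how an ID₅₀-type endpoint can be read off a concentration-response curve. The Hill-type model and the data points are invented; the study derived Tuj-1 ID₅₀ from flow cytometry of differentiated cultures, and only the curve-fitting step is illustrated here.

```python
# Fit a logistic concentration-response curve and report the ID50.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, id50, slope):
    # fraction of Tuj-1-positive cells relative to untreated control
    return 1.0 / (1.0 + (c / id50) ** slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])       # arbitrary units
frac = np.array([0.98, 0.93, 0.80, 0.48, 0.20, 0.05])   # invented responses
(id50, slope), _ = curve_fit(hill, conc, frac, p0=(0.3, 1.0))
print(f"ID50 ~ {id50:.2f} (Hill slope {slope:.2f})")
```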
5. Fluid biomarkers in multiple system atrophy DEFF Research Database (Denmark) Laurens, Brice; Constantinescu, Radu; Freeman, Roy 2015-01-01 Despite growing research efforts, no reliable biomarker currently exists for the diagnosis and prognosis of multiple system atrophy (MSA). Such biomarkers are urgently needed to improve diagnostic accuracy, prognostic guidance and also to serve as efficacy measures or surrogates of target engagement for future clinical trials. We here review candidate fluid biomarkers for MSA and provide considerations for further developments and harmonization of standard operating procedures. A PubMed search was performed until April 24, 2015 to review the literature with regard to candidate blood and cerebrospinal fluid (CSF) biomarkers for MSA. Abstracts of 1760 studies were retrieved and screened for eligibility. The final list included 60 studies assessing fluid biomarkers in patients with MSA. Most studies have focused on alpha-synuclein, markers of axonal degeneration or catecholamines. Their results... 6. Scaling for interfacial tensions near critical endpoints. Science.gov (United States) Zinn, Shun-Yong; Fisher, Michael E 2005-01-01 Parametric scaling representations are obtained and studied for the asymptotic behavior of interfacial tensions in the full neighborhood of a fluid (or Ising-type) critical endpoint, i.e., as a function both of temperature and of density/order parameter or chemical potential/ordering field. Accurate nonclassical critical exponents and reliable estimates for the universal amplitude ratios are included naturally on the basis of the "extended de Gennes-Fisher" local-functional theory. Serious defects in previous scaling treatments are rectified and complete wetting behavior is represented; however, quantitatively small, but unphysical residual nonanalyticities on the wetting side of the critical isotherm are smoothed out "manually." Comparisons with the limited available observations are presented elsewhere but the theory invites new, searching experiments and simulations, e.g., for the vapor-liquid interfacial tension on the two sides of the critical endpoint isotherm for which an amplitude ratio -3.25 +/- 0.05 is predicted. 7. Surrogate Analysis and Index Developer (SAID) tool Science.gov (United States) Domanski, Marian M.; Straub, Timothy D.; Landers, Mark N. 2015-10-01 The use of acoustic and other parameters as surrogates for suspended-sediment concentrations (SSC) in rivers has been successful in multiple applications across the Nation. Tools to process and evaluate the data are critical to advancing the operational use of surrogates along with the subsequent development of regression models from which real-time sediment concentrations can be made available to the public. Recent developments in both areas are having an immediate impact on surrogate research and on surrogate monitoring sites currently (2015) in operation. 8. The Timing of Endpoints in Movement Science.gov (United States) 1981-11-01 ...order of keystroke initiation can differ in repeated typings of the same word (Gentner, Grudin, & Conway, 1980). In piano playing, the music itself... theories of timing is discussed. ... The Timing of Endpoints in Movement, Michael I. Jordan ...formal lessons or playing professionally. Four of these subjects were drummers and one was a piano player. Apparatus. The metronome beeps were 9. Precision mass measurements utilizing beta endpoints CERN Document Server Moltz, D M; Kern, B D; Noma, H; Ritchie, B G; Toth, K S 1981-01-01 A technique for precise determination of beta endpoints with an intrinsic germanium detector has been developed. The energy calibration is derived from gamma-ray photopeak measurements. This analysis procedure has been checked with a 27Si source produced in a (p, n) reaction on a 27Al target and subsequently applied to mass separated samples of 76Rb, 77Rb and 78Rb. Results indicate errors <50 keV are obtainable. (29 refs). 10. A generic operational strategy to qualify translational safety biomarkers.
Science.gov (United States) Matheis, Katja; Laurie, David; Andriamandroso, Christiane; Arber, Nadir; Badimon, Lina; Benain, Xavier; Bendjama, Kaïdre; Clavier, Isabelle; Colman, Peter; Firat, Hüseyin; Goepfert, Jens; Hall, Steve; Joos, Thomas; Kraus, Sarah; Kretschmer, Axel; Merz, Michael; Padro, Teresa; Planatscher, Hannes; Rossi, Annamaria; Schneiderhan-Marra, Nicole; Schuppe-Koistinen, Ina; Thomann, Peter; Vidal, Jean-Marc; Molac, Béatrice 2011-07-01 The importance of using translational safety biomarkers that can predict, detect and monitor drug-induced toxicity during human trials is becoming increasingly recognized. However, suitable processes to qualify biomarkers in clinical studies have not yet been established. There is a need to define clear scientific guidelines to link biomarkers to clinical processes and clinical endpoints. To help define the operational approach for the qualification of safety biomarkers the IMI SAFE-T consortium has established a generic qualification strategy for new translational safety biomarkers that will allow early identification, assessment and management of drug-induced injuries throughout R&D. Copyright © 2011 Elsevier Ltd. All rights reserved. 11. Filament eruption with apparent reshuffle of endpoints CERN Document Server Filippov, Boris 2014-01-01 Filament eruption on 30 April - 1 May 2010, which shows the reconnection of one filament leg with a region far away from its initial position, is analyzed. Observations from three viewpoints are used for as precise as possible measurements of endpoint coordinates. The northern leg of the erupting prominence loop 'jumps' laterally to the latitude lower than the latitude of the originally southern endpoint. Thus, the endpoints reshuffled their positions in the limb view. Although this behaviour could be interpreted as the asymmetric zipping-like eruption, it does not look very likely. It seems more likely to be reconnection of the flux-rope field lines in its northern leg with ambient coronal magnetic field lines rooted in a quiet region far from the filament. From calculations of coronal potential magnetic field, we found that the filament before the eruption was stable for vertical displacements, but was liable to violation of the horizontal equilibrium. This is unusual initiation of an eruption with combinat... 12. Imaging Seeker Surrogate for IRCM evaluation NARCIS (Netherlands) Schleijpen, H.M.A.; Carpenter, S.R.; Mellier, B.; Dimmeler, A. 2007-01-01 NATO-SCI-139 and its predecessor groups have more than a decade of history in the evaluation and recommendation of EO and IR Countermeasures against anti-aircraft missiles. Surrogate Seekers have proven to be a valuable tool for this work. The use of surrogate seekers in international co-operations 13. 77 FR 34788 - Surrogate Foreign Corporations Science.gov (United States) 2012-06-12 ... Internal Revenue Service 26 CFR Part 1 RIN 1545-BF47 Surrogate Foreign Corporations AGENCY: Internal... regulations regarding whether a foreign corporation is treated as a surrogate foreign corporation. The final regulations affect certain domestic corporations and partnerships (and certain parties related thereto),... 14. Surrogate Guderley Test Problem Definition Energy Technology Data Exchange (ETDEWEB) Ramsey, Scott D. [Los Alamos National Laboratory; Shashkov, Mikhail J.
[Los Alamos National Laboratory 2012-07-06 The surrogate Guderley problem (SGP) is a 'spherical shock tube' (or 'spherical driven implosion') designed to ease the notoriously subtle initialization of the true Guderley problem, while still maintaining a high degree of fidelity. In this problem (similar to the Guderley problem), an infinitely strong shock wave forms and converges in one-dimensional (1D) cylindrical or spherical symmetry through a polytropic gas with arbitrary adiabatic index {gamma}, uniform density {rho}{sub 0}, zero velocity, and negligible pre-shock pressure and specific internal energy (SIE). This shock proceeds to focus on the point or axis of symmetry at r = 0 (resulting in ostensibly infinite pressure, velocity, etc.) and reflect back out into the incoming perturbed gas. 15. On Using Surrogates with Genetic Programming. Science.gov (United States) Hildebrandt, Torsten; Branke, Jürgen 2015-01-01 One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP. 16. Endpoint of the hot electroweak phase transition CERN Document Server Csikor, Ferenc; Heitger, J 1999-01-01 We give the nonperturbative phase diagram of the four-dimensional hot electroweak phase transition. The Monte-Carlo analysis is done on lattices with different lattice spacings ($a$). A systematic extrapolation $a \\to 0$ is done. Our results show that the finite temperature SU(2)-Higgs phase transition is of first order for Higgs-boson masses $m_H<66.5 \\pm 1.4$ GeV. At this endpoint the phase transition is of second order, whereas above it only a rapid cross-over can be seen. The full four-dimensional result agrees completely with that of the dimensional reduction approximation. This fact is of particular importance, because it indicates that the fermionic sector of the Standard Model can be included perturbatively. We obtain that the Higgs-boson endpoint mass in the Standard Model is $72.4 \\pm 1.7$ GeV. Taking into account the LEP Higgs-boson mass lower bound excludes any electroweak phase transition in the Standard Model. 17. Parents and children: "surrogate" paradigm of modernity. Science.gov (United States) 2011-06-01 The article provides an overview of surrogate motherhood--one of many currently available forms of Assisted Reproductive Technologies for couples who find themselves unable to conceive a child on their own. Within the years of its existence surrogate motherhood managed to accumulate lots of bioethical problems, paradoxes, dilemmas and collisions. Author represents some of them. 
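The key idea in the abstract just above, replacing the genotype with a phenotypic characterization so that standard distance-based surrogates become applicable to GP trees, can be illustrated in a few lines of Python. This is a hedged sketch, not the authors' implementation: the probe-input phenotype and the one-nearest-neighbour model are assumptions standing in for whatever characterization and surrogate model they actually used.

    import numpy as np

    def phenotype(individual, probe_inputs):
        """Hypothetical phenotypic characterization: evaluate the GP individual
        on a fixed set of probe inputs and use the output vector as its
        'phenotype', independent of its tree structure."""
        return np.array([individual(x) for x in probe_inputs])

    class NearestNeighborSurrogate:
        """Estimate fitness of a new individual from its most similar,
        already fully evaluated neighbour in phenotype space."""
        def __init__(self):
            self.phenotypes, self.fitnesses = [], []

        def add(self, pheno, fitness):
            # Archive a fully (expensively) evaluated solution
            self.phenotypes.append(pheno)
            self.fitnesses.append(fitness)

        def predict(self, pheno):
            d = [np.linalg.norm(pheno - q) for q in self.phenotypes]
            return self.fitnesses[int(np.argmin(d))]

    # Toy usage: individuals are plain callables standing in for GP trees
    probes = np.linspace(-1.0, 1.0, 8)
    surrogate = NearestNeighborSurrogate()
    surrogate.add(phenotype(lambda x: x * x, probes), fitness=0.9)
    surrogate.add(phenotype(lambda x: 2 * x, probes), fitness=0.4)
    print(surrogate.predict(phenotype(lambda x: x * x + 0.1, probes)))  # -> 0.9

Only candidates that the cheap surrogate rates as promising are then sent to the expensive simulation, which is where the reported speed-up comes from.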
16. Endpoint of the hot electroweak phase transition
CERN Document Server
Csikor, Ferenc; Heitger, J.
1999-01-01
We give the nonperturbative phase diagram of the four-dimensional hot electroweak phase transition. The Monte Carlo analysis is done on lattices with different lattice spacings ($a$). A systematic extrapolation $a \to 0$ is done. Our results show that the finite-temperature SU(2)-Higgs phase transition is of first order for Higgs-boson masses $m_H < 66.5 \pm 1.4$ GeV. At this endpoint the phase transition is of second order, whereas above it only a rapid cross-over can be seen. The full four-dimensional result agrees completely with that of the dimensional reduction approximation. This fact is of particular importance, because it indicates that the fermionic sector of the Standard Model can be included perturbatively. We obtain that the Higgs-boson endpoint mass in the Standard Model is $72.4 \pm 1.7$ GeV. Taking into account the LEP Higgs-boson mass lower bound excludes any electroweak phase transition in the Standard Model.

17. Parents and children: "surrogate" paradigm of modernity.
Science.gov (United States)
2011-06-01
The article provides an overview of surrogate motherhood--one of many currently available forms of Assisted Reproductive Technologies for couples who find themselves unable to conceive a child on their own. Over the years of its existence, surrogate motherhood has accumulated many bioethical problems, paradoxes, dilemmas and collisions, some of which the author presents. The legal, moral and religious implications of surrogacy are also addressed, with religious perspectives from the Orthodox Christian, Catholic, Jewish, Hindu, and Islamic points of view. The author concludes that surrogate motherhood is not only an answer to childlessness but also transforms traditional attitudes toward such a human value as the family.

18. Extracting gluino endpoints with event topology patterns
Energy Technology Data Exchange (ETDEWEB)
Pietsch, N. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik]; Reuter, J.; Sakurai, K.; Wiesler, D. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]
2012-06-15
In this paper we study the gluino dijet mass edge measurement at the LHC in a realistic situation including both SUSY and combinatorial backgrounds, together with effects of initial- and final-state radiation as well as a finite detector resolution. Three benchmark scenarios are examined in which the dominant SUSY production process and also the decay modes are different. Several new kinematical variables are proposed to minimize the impact of SUSY and combinatorial backgrounds on the measurement. By selecting events with a particular number of jets and leptons, we attempt to measure two distinct gluino dijet mass edges, originating from the wino $\tilde{g} \to jj\tilde{W}$ and bino $\tilde{g} \to jj\tilde{B}$ decay modes, separately. We determine the endpoints of distributions of proposed and existing variables and show that those two edges can be disentangled and measured with good accuracy, irrespective of the presence of ISR, FSR, and detector effects.
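Both this abstract and the 5-body cascade paper further down the listing rest on the same elementary fact: the endpoint of an invariant-mass distribution is a known function of the unknown masses, so a measured edge constrains the spectrum. As a sketch, here are the two textbook short-chain cases in Python; the mass values are hypothetical, not the paper's benchmark points.

    import math

    def edge_three_body(m_parent, m_invisible):
        """Dijet edge for a direct three-body decay parent -> j j + invisible:
        the dijet invariant mass is maximal when the invisible particle is
        produced at rest in the parent frame, m_jj_max = m_parent - m_inv."""
        return m_parent - m_invisible

    def edge_two_step(m_c, m_b, m_a):
        """Visible-pair edge for the sequential two-body chain
        C -> x1 B, B -> x2 A (x1, x2 massless visibles):
        m_max^2 = (m_c^2 - m_b^2) * (m_b^2 - m_a^2) / m_b^2."""
        return math.sqrt((m_c**2 - m_b**2) * (m_b**2 - m_a**2)) / m_b

    # Hypothetical spectrum (GeV): gluino 800, intermediate state 400, LSP 100
    print(edge_three_body(800.0, 400.0))       # 400.0 GeV
    print(edge_two_step(800.0, 400.0, 100.0))  # ~670.8 GeV

Inverting several such edge measurements simultaneously is what pins down the individual masses rather than just mass differences.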
19. Surrogate mothers: whose baby is it?
Science.gov (United States)
Cohen, B.
1984-01-01
Advances in medical technology offer infertile couples who wish to raise children alternatives to adoption. The increasing number of surrogate mother contracts creates a myriad of legal issues surrounding the rights of the natural mother, the natural father and the child that is produced. In this Article, the Author discusses the legal issues and rights of the parties under the Constitution, the surrogate contract and family law principles. The Author proposes that courts should consider a surrogate contract as a revocable prebirth agreement which allows the natural mother to keep the child if she chooses. In addition, the Author advocates an interpretation of the statutes forbidding baby selling that would prohibit surrogate contracts in which the mother is paid a fee for the child.

20. Neutron capture cross sections from Surrogate measurements
Directory of Open Access Journals (Sweden)
Scielzo, N.D.
2010-03-01
The prospects for determining cross sections for compound-nuclear neutron-capture reactions from Surrogate measurements are investigated. Calculations as well as experimental results are presented that test the Weisskopf-Ewing approximation, which is employed in most analyses of Surrogate data. It is concluded that, in general, one has to go beyond this approximation in order to obtain (n,γ) cross sections of sufficient accuracy for most astrophysical and nuclear-energy applications.

1. Cancer Biomarkers
OpenAIRE
Kamel, Hala Fawzy Mohamed; Al-Amodi, Hiba Saeed Bagader
2016-01-01
Biomarkers have many potential applications in oncology, including risk assessment, screening, differential diagnosis, determination of prognosis, prediction of response to treatment, and monitoring of progression of disease. Because of the critical role that biomarkers play at all stages of disease, it is important that they undergo rigorous evaluation, including analytical validation, clinical validation, and assessment of clinical utility, prior to incorporation into routine clinical care....

2. Optimal allocation of resources in a biomarker setting.
Science.gov (United States)
Rosner, Bernard; Hendrickson, Sara; Willett, Walter
2015-01-30
Nutrient intake is often measured with substantial error, both in commonly used surrogate instruments such as a food frequency questionnaire (FFQ) and in gold standard-type instruments such as a diet record (DR). If there is correlated error between the FFQ and DR, then standard measurement error correction methods based on regression calibration can produce biased estimates of the regression coefficient (λ) of true intake on surrogate intake. However, if a biomarker exists and the error in the biomarker is independent of the error in the FFQ and DR, then the method of triads can be used to obtain unbiased estimates of λ, provided that there are replicate biomarker data on at least a subsample of validation study subjects. Because biomarker measurements are expensive, for a fixed budget one can use either a design where a large number of subjects have one biomarker measure and only a small subsample is replicated, or a design that has a smaller number of subjects and has most or all subjects validated. The purpose of this paper is to optimize the proportion of subjects with replicated biomarker measures, where optimization is with respect to minimizing the variance of ln(λ̂). The methodology is illustrated using vitamin C intake data from the European Prospective Investigation into Cancer and Nutrition study, where plasma vitamin C is the biomarker. In this example, the optimal validation study design is to have 21% of subjects with replicated biomarker measures.
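The method of triads mentioned in this abstract has a closed form worth recording: with three measures Q (e.g., FFQ), R (reference instrument) and M (biomarker) whose errors are assumed mutually independent, the validity coefficient of Q with unobserved true intake T follows from the three pairwise correlations. Below is a minimal Python sketch with made-up correlations; the paper's actual contribution, optimizing the biomarker replication fraction to minimize the variance of ln(λ̂), needs the full variance formula and is not reproduced here.

    import math

    def triad_validity(r_qr, r_qm, r_rm):
        """Method-of-triads estimate of the validity coefficient rho(Q, T):
        rho(Q, T) = sqrt(r_QR * r_QM / r_RM), valid when the errors of
        Q, R and M are mutually independent."""
        return math.sqrt(r_qr * r_qm / r_rm)

    # Hypothetical pairwise correlations between FFQ, diet record and biomarker
    print(round(triad_validity(r_qr=0.45, r_qm=0.30, r_rm=0.35), 3))  # -> 0.621

The same expression with the roles of Q, R and M permuted yields the validity coefficients of the other two instruments from the same three correlations.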
3. Population-scale assessment endpoints in ecological risk assessment part II: selection of assessment endpoint attributes.
Science.gov (United States)
Landis, Wayne G.; Kaminski, Laurel A.
2007-07-01
Because ecological services often are tied to specific species, the risk to populations is a critical endpoint and an important feature of ecological risk assessments. In Part I of this series it was demonstrated that population-scale assessment endpoints are important expressions of the valued components of ecological structures. This commentary reviews several of the characteristics of populations that can be evaluated and used in population-scale risk assessments. Two attributes are evaluated as promising. The first is the change in potential productivity of the population over a specified time period. The second is the change in the age structure of a population, expressed graphically or as a normalized effects vector (NEV). The NEV is a description of the change in age structure due to a toxicant or other stressor and appears to be characteristic of specific stressor effects.

4. Development of biomarkers for Huntington's disease.
Science.gov (United States)
Weir, David W.; Sturrock, Aaron; Leavitt, Blair R.
2011-06-01
Huntington's disease is an autosomal dominant, progressive neurodegenerative disorder for which there is no disease-modifying treatment. By use of predictive genetic testing, it is possible to identify individuals who carry the gene defect before the onset of symptoms, providing a window of opportunity for intervention aimed at preventing or delaying disease onset. However, without robust and practical measures of disease progression (i.e., biomarkers), the efficacy of therapeutic interventions in this premanifest Huntington's disease population cannot be readily assessed. Current progress in the development of biomarkers might enable evaluation of disease progression in individuals at the premanifest stage of the disease; these biomarkers could be useful in defining endpoints in clinical trials in this population. Clinical, cognitive, neuroimaging, and biochemical biomarkers are being investigated for their potential in clinical use and their value in the development of future treatments for patients with Huntington's disease.

5. Ecosystem services as assessment endpoints for ecological risk assessment.
Science.gov (United States)
Munns, Wayne R.; Rea, Anne W.; Suter, Glenn W.; Martin, Lawrence; Blake-Hedges, Lynne; Crk, Tanja; Davis, Christine; Ferreira, Gina; Jordan, Steve; Mahoney, Michele; Barron, Mace G.
2016-07-01
Ecosystem services are defined as the outputs of ecological processes that contribute to human welfare or have the potential to do so in the future. Those outputs include food and drinking water, clean air and water, and pollinated crops. The need to protect the services provided by natural systems has been recognized previously, but ecosystem services have not been formally incorporated into ecological risk assessment practice in a general way in the United States. Endpoints used conventionally in ecological risk assessment, derived directly from the state of the ecosystem (e.g., biophysical structure and processes), and endpoints based on ecosystem services serve different purposes. Conventional endpoints are ecologically important and susceptible entities and attributes that are protected under US laws and regulations. Ecosystem service endpoints are a conceptual and analytical step beyond conventional endpoints and are intended to complement conventional endpoints by linking and extending endpoints to goods and services with more obvious benefit to humans. Conventional endpoints can be related to ecosystem services even when the latter are not considered explicitly during problem formulation. To advance the use of ecosystem service endpoints in ecological risk assessment, the US Environmental Protection Agency's Risk Assessment Forum has added generic endpoints based on ecosystem services (ES-GEAEs) to the original 2003 set of generic ecological assessment endpoints (GEAEs). Like conventional GEAEs, ES-GEAEs are defined by an entity and an attribute. Also like conventional GEAEs, ES-GEAEs are broadly described and will need to be made specific when applied to individual assessments. Adoption of ecosystem services as a type of assessment endpoint is intended to improve the value of risk assessment to environmental decision making, linking ecological risk to human well-being and providing an improved means of communicating those risks. Integr Environ Assess Manag

6. Microvascular structure as a prognostically relevant endpoint.
Science.gov (United States)
Agabiti-Rosei, Enrico; Rizzoni, Damiano
2017-05-01
Remodelling of subcutaneous small resistance arteries, as indicated by an increased media-to-lumen ratio, is frequently present in hypertensive, obese, or diabetic patients. The increased media-to-lumen ratio may impair organ flow reserve. This may be important in the maintenance and, probably, also in the progressive worsening of hypertensive disease. The presence of structural alterations represents a prognostically relevant factor in terms of the development of target organ damage or cardiovascular events, thus allowing prediction of complications in hypertension. In fact, the media-to-lumen ratio of small arteries at baseline, and possibly its changes during treatment, may have strong prognostic significance. However, new, non-invasive techniques are needed before extensive application of the evaluation of small-artery remodelling for cardiovascular risk stratification in hypertensive patients can be recommended. Some new techniques for the evaluation of microvascular morphology in the retina, currently under clinical investigation, seem to represent a promising and interesting future perspective. The evaluation of microvascular structure is progressively moving from bench to bedside, and it could represent, in the near future, an evaluation to be performed in all hypertensive patients to obtain a better stratification of cardiovascular risk; possibly, it might be considered as an intermediate endpoint in the evaluation of the effects of antihypertensive therapy, provided that a demonstration of the prognostic value of non-invasive measures of microvascular structure is made available.

7. Ordered Kinematic Endpoints for 5-body Cascade Decays
CERN Document Server
Klimek, Matthew D.
2016-01-01
We present expressions for the kinematic endpoints of 5-body cascade decay chains proceeding through all possible combinations of 2-body and 3-body decays, with one stable invisible particle in the final decay stage. When an invariant mass can be formed in multiple ways by choosing different final-state particles from a common vertex, we introduce techniques for finding the sub-leading endpoints for all indistinguishable versions of the invariant mass. In contrast to short decay chains, where sub-leading endpoints are linearly related to the leading endpoints, we find that in 5-body decays they provide additional independent constraints on the mass spectrum.

8. Surrogates for herbicide removal in stormwater biofilters.
Science.gov (United States)
Zhang, Kefeng; Deletic, Ana; Page, Declan; McCarthy, David T.
2015-09-15
Real-time monitoring of suitable surrogate parameters is critical to the validation of any water treatment process, and is of particularly high importance for validation of natural stormwater treatment systems. In this study, potential surrogates for herbicide removal in stormwater biofilters (also known as stormwater bio-retention systems or rain gardens) were assessed using field challenge tests and matched laboratory column experiments. Differential UV absorbance at 254 nm (ΔUVA254), total phosphorus (ΔTP), dissolved phosphorus (ΔDP), total nitrogen (ΔTN), ammonia (ΔNH3), nitrate and nitrite (ΔNO3+NO2), dissolved organic carbon (ΔDOC) and total suspended solids (ΔTSS) were compared with glyphosate, atrazine, simazine and prometryn removal rates. The influence of different challenge conditions on the performance of each surrogate was studied. Differential TP was significantly and linearly related to glyphosate reduction (R² = 0.75-0.98), and correlations between ΔUVA254 and removal of the triazine herbicides were reliable under normal and challenge dry conditions, but weaker correlations were observed under challenge wet conditions. Of those tested, ΔTP is the most promising surrogate for glyphosate removal and ΔUVA254 is a suitable surrogate for triazine removal in stormwater biofilters. Copyright © 2015 Elsevier Ltd. All rights reserved.
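Operationally, a surrogate of this kind is just a calibrated regression: an easily monitored quantity (here ΔTP) is fitted against the expensive target quantity (glyphosate removal) and then used for real-time prediction. A short Python sketch with invented numbers, chosen only to be consistent with the range of R² values reported in the abstract:

    import numpy as np

    # Hypothetical paired observations from a challenge test:
    # change in total phosphorus (mg/L) vs. glyphosate removal (%)
    delta_tp = np.array([0.05, 0.12, 0.20, 0.28, 0.35])
    glyphosate_removal = np.array([22.0, 41.0, 60.0, 78.0, 95.0])

    # Calibrate the surrogate relationship on paired lab/field data
    slope, intercept = np.polyfit(delta_tp, glyphosate_removal, 1)
    r = np.corrcoef(delta_tp, glyphosate_removal)[0, 1]
    print(f"removal ~ {slope:.1f} * dTP + {intercept:.1f}, R^2 = {r**2:.3f}")

    # Once calibrated, the cheap surrogate predicts removal in real time
    print("predicted removal at dTP = 0.25:", slope * 0.25 + intercept)

The validation question the paper addresses is whether such a calibration stays stable under challenge conditions (wet vs. dry), which is why the weaker wet-weather correlations matter.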
9. Systems biology and biomarker discovery
Energy Technology Data Exchange (ETDEWEB)
Rodland, Karin D.
2010-12-01
Medical practitioners have always relied on surrogate markers of inaccessible biological processes to make their diagnoses, whether it was the pallor of shock, the flush of inflammation, or the jaundice of liver failure. Obviously, the current implementation of biomarkers for disease is far more sophisticated, relying on highly reproducible, quantitative measurements of molecules that are often mechanistically associated with the disease in question, as in glycated hemoglobin for the diagnosis of diabetes [1] or the presence of cardiac troponins in the blood for confirmation of myocardial infarcts [2]. In cancer, where the initial symptoms are often subtle and the consequences of delayed diagnosis often drastic for disease management, the impetus to discover readily accessible, reliable, and accurate biomarkers for early detection is compelling. Yet despite years of intense activity, the stable of clinically validated, cost-effective biomarkers for early detection of cancer is pathetically small and still dominated by a handful of markers (CA-125, CEA, PSA) first discovered decades ago. It is time, one could argue, for a fresh approach to the discovery and validation of disease biomarkers, one that takes full advantage of the revolution in genomic technologies and in the development of computational tools for the analysis of large complex datasets. This issue of Disease Markers is dedicated to one such new approach, loosely termed the 'Systems Biology of Biomarkers'. What sets the Systems Biology approach apart from other, more traditional approaches is both the types of data used and the tools used for data analysis, and both reflect the revolution in high-throughput analytical methods and high-throughput computing that has characterized the start of the twenty-first century.

10. Imaging Biomarkers or Biomarker Imaging?
Directory of Open Access Journals (Sweden)
Mitterhauser, Markus
2014-06-01
Since biomarker imaging is traditionally understood as imaging of molecular probes, we highly recommend avoiding any confusion with the previously defined term "imaging biomarkers" and, therefore, only using "molecular probe imaging (MPI)" in that context. Molecular probes (MPs) comprise all kinds of molecules administered to an organism which inherently carry a signalling moiety. This review highlights the basic concepts of, and differences between, molecular probe imaging and the use of specific biomarkers. In particular, PET radiopharmaceuticals are discussed in more detail. Specific radiochemical and radiopharmacological aspects as well as some legal issues are presented.

11. Study to Understand Cervical Cancer Early Endpoints and Determinants (SUCCEED)
Science.gov (United States)
A study to comprehensively assess biomarkers of risk for progressive cervical neoplasia, and thus develop a new set of biomarkers that can distinguish those at highest risk of cervical cancer from those with benign infection.

12. Nitrate Salt Surrogate Blending Scoping Test Plan
Energy Technology Data Exchange (ETDEWEB)
Anast, Kurt Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
2015-11-13
Test the blending equipment identified in the "Engineering Options Assessment Report: Nitrate Salt Waste Stream Processing". Determine whether the equipment will provide adequate mixing of zeolite and the surrogate salt/Swheat stream; optimize equipment type and operational sequencing; assess the impact of baffles and inserts on mixing performance; and establish means of validating mixing performance.

13. Videotrees: Improving video surrogate presentation using hierarchy
NARCIS (Netherlands)
Jansen, Michel; Heeren, Willemijn; van Dijk, Betsy
2008-01-01
As the amount of available video content increases, so does the need for better ways of browsing all this material. Because the nature of video makes it hard to process, the need arises for adequate surrogates for video that can readily be skimmed and browsed. In this paper, the effects of the use o...

14. Fluid biomarkers in multiple system atrophy: A review of the MSA Biomarker Initiative.
Science.gov (United States)
Laurens, Brice; Constantinescu, Radu; Freeman, Roy; Gerhard, Alexander; Jellinger, Kurt; Jeromin, Andreas; Krismer, Florian; Mollenhauer, Brit; Schlossmacher, Michael G.; Shaw, Leslie M.; Verbeek, Marcel M.; Wenning, Gregor K.; Winge, Kristian; Zhang, Jing; Meissner, Wassilios G.
2015-08-01
Despite growing research efforts, no reliable biomarker currently exists for the diagnosis and prognosis of multiple system atrophy (MSA). Such biomarkers are urgently needed to improve diagnostic accuracy and prognostic guidance, and also to serve as efficacy measures or surrogates of target engagement for future clinical trials. We here review candidate fluid biomarkers for MSA and provide considerations for further developments and harmonization of standard operating procedures. A PubMed search was performed up to April 24, 2015 to review the literature with regard to candidate blood and cerebrospinal fluid (CSF) biomarkers for MSA. Abstracts of 1760 studies were retrieved and screened for eligibility. The final list included 60 studies assessing fluid biomarkers in patients with MSA. Most studies have focused on alpha-synuclein, markers of axonal degeneration or catecholamines. Their results suggest that combining several CSF biomarkers may be more successful than using single markers, at least for diagnosis. Currently, the clinically most useful markers may comprise a combination of the light chain of neurofilament (which is consistently elevated in MSA compared to controls and Parkinson's disease), metabolites of the catecholamine pathway and proteins such as α-synuclein, DJ-1 and total tau. Beyond future efforts in biomarker discovery, the harmonization of standard operating procedures will be crucial for future success.

15. Combustion Kinetic Studies of Gasolines and Surrogates
KAUST Repository
Javed, Tamour
2016-11-01
Future thrusts for gasoline engine development can be broadly summarized into two categories: (i) efficiency improvements in conventional spark-ignition engines, and (ii) development of advanced compression ignition (ACI) concepts. Efficiency improvements in conventional spark-ignition engines require downsizing (and turbocharging), which may be achieved by using high-octane gasolines, whereas low-octane gasolines are anticipated for ACI concepts. The current work provides the essential combustion kinetic data, targeting both thrusts, needed to develop high-fidelity gasoline surrogate mechanisms and surrogate complexity guidelines. Ignition delay times of a wide range of certified gasolines and surrogates are reported here. These measurements were performed in shock tubes and rapid compression machines over a wide range of experimental conditions (650-1250 K, 10-40 bar) relevant to internal combustion engines. Using the measured data and chemical kinetic analyses, the surrogate complexity requirements for these gasolines in homogeneous environments are specified. For the discussions presented here, gasolines are classified into three categories: (i) low-octane gasolines, including Saudi Aramco's light naphtha fuel (anti-knock index, AKI = (RON + MON)/2 = 64; sensitivity, S = RON - MON = 1), certified FACE (Fuels for Advanced Combustion Engines) gasolines I and J (AKI ~ 70, S = 0.7 and 3, respectively), and their primary reference fuel (PRF, mixtures of n-heptane and iso-octane) and multi-component surrogates; (ii) mid-octane gasolines, including FACE A and C (AKI ~ 84, S ~ 0 and 1, respectively) and their PRF surrogates, for which laser absorption measurements of intermediate and product species formed during gasoline/surrogate oxidation are also reported; (iii) a wide range of n-heptane/iso-octane/toluene (TPRF) blends to adequately represent the octane and sensitivity requirements of high-octane gasolines, including FACE gasoline F and G
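The octane descriptors used throughout this abstract are simple arithmetic on the research and motor octane numbers, exactly as the abstract defines them. A tiny Python helper makes the definitions concrete; the RON/MON values below are illustrative, not measured values for any of the fuels named above.

    def octane_metrics(ron, mon):
        """Anti-knock index and sensitivity, as defined in the abstract:
        AKI = (RON + MON) / 2, S = RON - MON."""
        return (ron + mon) / 2.0, ron - mon

    # Illustrative values for a light-naphtha-like fuel (RON 64.5, MON 63.5 assumed)
    aki, s = octane_metrics(64.5, 63.5)
    print(f"AKI = {aki}, S = {s}")   # AKI = 64.0, S = 1.0

A sensitivity near zero is what makes a fuel well represented by a two-component PRF surrogate; sensitive fuels need a third component such as toluene, which is why the TPRF blends appear in category (iii).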
16. A novel surrogate index for hepatic insulin resistance.
LENUS (Irish Health Repository)
Vangipurapu, J.
2011-03-01
In epidemiological and genetic studies, surrogate indices are needed to investigate insulin resistance in different insulin-sensitive tissues. Our objective was to develop a surrogate index for hepatic insulin resistance.

17. Use of endpoint adjudication to improve the quality and validity of endpoint assessment for medical device development and post-marketing evaluation: Rationale and best practices. A report from the Cardiac Safety Research Consortium.
Science.gov (United States)
Seltzer, Jonathan H.; Heise, Ted; Carson, Peter; Canos, Daniel; Hiatt, Jo Carol; Vranckx, Pascal; Christen, Thomas; Cutlip, Donald E.
2017-08-01
This white paper provides a summary of presentations, discussions and conclusions of a Think Tank entitled "The Role of Endpoint Adjudication in Medical Device Clinical Trials". The Think Tank was cosponsored by the Cardiac Safety Research Consortium, MDEpiNet and the US Food and Drug Administration (FDA) and was convened at the FDA's White Oak headquarters on March 11, 2016. Attention was focused on tailoring best practices for evaluation of endpoints in medical device clinical trials, practical issues in endpoint adjudication of therapeutic, diagnostic, biomarker and drug-device combinations, and the role of adjudication in regulatory and reimbursement issues throughout the device lifecycle. Attendees included representatives from medical device companies, the FDA, the Centers for Medicare and Medicaid Services (CMS), endpoint adjudication specialist groups, clinical research organizations, and active, academically based adjudicators. The manuscript presents recommendations from the Think Tank regarding (1) the rationale for when adjudication is appropriate, (2) best practices for the establishment and operation of a medical device adjudication committee, and (3) the role of endpoint adjudication for post-market evaluation in the emerging era of real-world evidence. Copyright © 2017. Published by Elsevier Inc.

18. Time to Review the Role of Surrogate End Points in Health Policy: State of the Art and the Way Forward.
Science.gov (United States)
Ciani, Oriana; Buyse, Marc; Drummond, Michael; Rasi, Guido; Saad, Everardo D.; Taylor, Rod S.
2017-03-01
The efficacy of medicines, medical devices, and other health technologies should be proved in trials that assess final patient-relevant outcomes such as survival or morbidity. Market access and coverage decisions are, however, often based on surrogate end points, biomarkers, or intermediate end points, which aim to substitute for and predict patient-relevant outcomes that are unavailable because of methodological, financial, or practical constraints. We provide a summary of the present use of surrogate end points in health care policy, discussing the case for and against their adoption and reviewing validation methods. We introduce a three-step framework for policymakers to handle surrogates, which involves establishing the level of evidence, assessing the strength of the association, and quantifying relations between surrogates and final outcomes. Although the use of surrogates can be problematic, they can, when selected and validated appropriately, offer important opportunities for more efficient clinical trials and faster access to new health technologies that benefit patients and health care systems. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
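One common way to carry out the third step of such a framework, quantifying the relation between surrogate and final outcomes, is a trial-level analysis: regress the treatment effects on the final outcome against the treatment effects on the surrogate across historical trials and inspect the trial-level R². The Python sketch below uses invented effect estimates and plain least squares; validated approaches (for example, weighting by trial precision or full bivariate meta-analysis) are more involved, and nothing here is taken from the paper itself.

    import numpy as np

    # Hypothetical per-trial treatment effects (e.g., log hazard ratios):
    # on the surrogate end point (x) and on the final outcome (y)
    effect_surrogate = np.array([-0.30, -0.10, -0.25, 0.05, -0.40, -0.15])
    effect_outcome = np.array([-0.22, -0.05, -0.20, 0.02, -0.31, -0.08])

    slope, intercept = np.polyfit(effect_surrogate, effect_outcome, 1)
    r2_trial = np.corrcoef(effect_surrogate, effect_outcome)[0, 1] ** 2
    print(f"outcome effect ~ {slope:.2f} * surrogate effect + {intercept:.2f}")
    print(f"trial-level R^2 = {r2_trial:.2f}")   # high R^2 supports surrogacy

    # Predicted final-outcome effect for a new trial observing only the surrogate
    print("predicted outcome effect:", slope * (-0.20) + intercept)

A high trial-level R² is evidence that effects on the surrogate translate into effects on the outcome patients care about; a low one is exactly the situation the authors caution against.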
19. Quality of Documentation as a Surrogate Marker for Awareness and Training Effectiveness of PHTLS-Courses. Part of the Prospective Longitudinal Mixed-Methods EPPTC-Trial
OpenAIRE
Häske, David; Beckers, Stefan K.; Hofmann, Marzellus; Lefering, Rolf; Gliwitzky, Bernhard; Wölfl, Christoph C.; Grützner, Paul; Stöckle, Ulrich; Dieroff, Marc; Münzberg, Matthias
2017-01-01
Objective: Care for severely injured patients requires multidisciplinary teamwork. A decrease in the number of accident victims ultimately affects routine and skills. PHTLS ("Pre-Hospital Trauma Life Support") courses are established two-day courses for medical and non-medical rescue service personnel, aimed at improving the pre-hospital care of trauma patients worldwide. The study aims to examine the quality of documentation before and after PHTLS courses as a surrogate endpoint o...

20. A qualitative investigation of selecting surrogate decision-makers
NARCIS (Netherlands)
Edwards, S.J.L.; Brown, P.; Twyman, M.A.; Christie, D.; Rakow, T.
2011-01-01
Background: Empirical studies of surrogate decision-making tend to assume that surrogates should make only a 'substituted judgement'—that is, judge what the patient would want if they were mentally competent. Objectives: To explore what people want in a surrogate decision-maker whom they themselves se...

1. System Reliability Analysis Capability and Surrogate Model Application in RAVEN
Energy Technology Data Exchange (ETDEWEB)
Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli; Gleicher, Frederick; Wang, Bei; Adbel-Khalik, Hany S.; Pascucci, Valerio; Smith, Curtis L.
2015-11-01
This report collects the work performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

2. Biomarkers for intracellular pathogens: establishing tools as vaccine and therapeutic endpoints for visceral leishmaniasis.
Science.gov (United States)
Vallur, A. C.; Duthie, M. S.; Reinhart, C.; Tutterrow, Y.; Hamano, S.; Bhaskar, K. R. H.; Coler, R. N.; Mondal, D.; Reed, S. G.
2014-06-01
Visceral leishmaniasis in South Asia is a serious disease affecting children and adults. Acute visceral leishmaniasis develops in only a fraction of infected individuals, the majority being asymptomatic, with the potential to transmit infection and develop disease. We followed 56 individuals characterized as asymptomatic by seropositivity with the rK39 rapid diagnostic test in a hyperendemic district of Bangladesh to define the utility of Leishmania-specific antibodies and DNA in identifying infection. At baseline, 54 of the individuals were seropositive with one or more quantitative antibody assays, and antibody levels persisted at follow-up. Most seropositive individuals (47/54) tested positive by quantitative PCR at baseline, but only 16 tested positive at follow-up. The discrepancies among the different tests may shed light on the dynamics of asymptomatic infection with Leishmania donovani, as well as underscore the need for standard diagnostic tools for active surveillance and for assessing the effectiveness of prophylactic and therapeutic interventions. ©2013 Infectious Disease Research Institute. Clinical Microbiology and Infection ©2013 European Society of Clinical Microbiology and Infectious Diseases.

3. Biomarkers in T cell therapy clinical trials
Directory of Open Access Journals (Sweden)
Kalos, Michael
2011-08-01
T cell therapy represents an emerging and promising modality for the treatment of both infectious disease and cancer. Data from recent clinical trials have highlighted the potential for this therapeutic modality to effect potent anti-tumor activity. Biomarkers, operationally defined as biological parameters measured from patients that provide information about treatment impact, play a central role in the development of novel therapeutic agents. In the absence of information about primary clinical endpoints, biomarkers can provide critical insights that allow investigators to guide the clinical development of the candidate product. In the context of cell therapy trials, the definition of biomarkers can be extended to include a description of parameters of the cell product that are important for product bioactivity. This review will focus on biomarker studies as they relate to T cell therapy trials, and more specifically: (i) an overview and description of categories and classes of biomarkers that are specifically relevant to T cell therapy trials, and (ii) insights into future directions and challenges for the appropriate development of biomarkers to evaluate both product bioactivity and treatment efficacy of T cell therapy trials.

4. Accurate measurement method for tube's endpoints based on machine vision
Science.gov (United States)
Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng
2017-01-01
Tubes are used widely in aerospace vehicles, and their accurate assembly can directly affect assembly reliability and product quality. It is important to measure a processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm feasibility, 11 tubes were processed to remove reflected light, and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm and the absolute accuracy is 0.328 mm, with each measurement taking less than 1 min. The proposed method can measure a tube's endpoints without any surface treatment or tools and can realize on-line measurement.
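Once corresponding image points of an endpoint have been found in two calibrated views, the stereo-matching step in a method like the one above reduces to standard linear triangulation. Here is a Python sketch with an idealized rectified camera pair; the intrinsics, baseline, and endpoint coordinates are made up, and the paper's full pipeline additionally removes surface reflections by photometric linearization first.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3-D point from two views.
        P1, P2: 3x4 camera projection matrices; x1, x2: pixel coords (u, v)."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)      # least-squares null vector of A
        X = vt[-1]
        return X[:3] / X[3]              # dehomogenize

    # Idealized rectified stereo pair: focal length 1000 px, baseline 0.1 m
    K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 480.0], [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    X_true = np.array([0.05, -0.02, 1.5])                    # a tube endpoint (m)
    x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]    # project into view 1
    x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]    # project into view 2
    print(triangulate(P1, P2, x1, x2))                        # ~[0.05, -0.02, 1.5]

The relative distance between a tube's two endpoints is then simply the Euclidean distance between the two triangulated points.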
7. Surrogate decision making and intellectual virtue.
Science.gov (United States)
Bock, Gregory L.
2014-01-01
Patients can be harmed by a religiously motivated surrogate decision maker whose decisions are contrary to the standard of care; therefore, surrogate decision making should be held to a high standard. Stewart Eskew and Christopher Meyers proposed a two-part rule for deciding which religiously based decisions to honor: (1) a secular reason condition and (2) a rationality condition. The second condition is based on a coherence theory of rationality, which they claim is accessible, generous, and culturally sensitive. In this article, I propose strengthening the rationality condition by grounding it in a theory of intellectual virtue, which is both rigorous and culturally sensitive. Copyright 2014 The Journal of Clinical Ethics. All rights reserved.

8. Biomarkers in the Management of Difficult Asthma.
Science.gov (United States)
Schleich, Florence; Demarche, Sophie; Louis, Renaud
2016-01-01
Difficult asthma is a heterogeneous disease of the airways including various types of bronchial inflammation and various degrees of airway remodeling. The therapeutic response of severe asthmatics can be predicted by the use of biomarkers of Type 2-high or Type 2-low inflammation. Based on sputum cell analysis, four inflammatory phenotypes have been described. As induced sputum is a time-consuming and expensive technique, surrogate biomarkers are useful in clinical practice. The eosinophilic phenotype is likely to reflect ongoing adaptive immunity in response to allergen. Several biomarkers of eosinophilic asthma are easily available in clinical practice (blood eosinophils, serum IgE, exhaled nitric oxide, serum periostin). Neutrophilic asthma is thought to reflect innate immune system activation in response to pollutants or infectious agents, while paucigranulocytic asthma is thought to be non-inflammatory and characterized by smooth muscle dysfunction. We currently lack user-friendly biomarkers of neutrophilic asthma and airway remodeling. In this review, we summarize the biomarkers available for the management of difficult asthma.

9. The Surrogate Method: Past, Present and Future
Energy Technology Data Exchange (ETDEWEB)
Lesher, S. R.; Bernstein, L. A.; Burke, J. T.; Lyles, B. F.; Clark, R. M.; Fallon, P.; Phair, L.
2008-01-09
The STARS/LiBerACE collaboration has been exploring the surrogate technique with success in the actinide region. This method uses a direct reaction to measure the decay probability of the same compound nucleus produced via a neutron-induced channel. This paper serves as an overview of these activities. Using the STARS array at the 88-Inch Cyclotron at Lawrence Berkeley National Laboratory, we have explored the following surrogate reactions: ²³⁴U(α,α′f), ²³⁵U(³He,αf), ²³⁶U(α,α′f), ²³⁸U(α,α′f), ²³⁸U(³He,αf), ²³⁸U(³He,tf) as surrogates for ²³³U(n,f), ²³³U(n,f), ²³⁵U(n,f), ²³⁷U(n,f), ²³⁶U(n,f), and ²³⁷Np(n,f), respectively.

10. [Biomedical Perspective of Surrogate Motherhood].
Science.gov (United States)
Jouve de la Barreda, Nicolás
2017-01-01
Surrogate motherhood takes place when an embryo created by in vitro fertilization (IVF) technology is implanted in a surrogate, sometimes called a gestational mother, by means of a contract with her. It can involve natural families (woman and man) with or without infertility problems, or monoparental or biparental families of the same sex. Depending on the origin of the gametes used in the IVF, different implications emerge for the genetic relationship of the resulting child with the surrogate and the future parents. Surrogate motherhood was initially considered an option to solve infertility problems. Nevertheless, this practice has become a possible and attractive source of economic resources for poor women. Cases in which a pregnancy is carried for another's benefit without a mediating contract are exceptional; they are not properly cases of "surrogate motherhood" but of "altruistic motherhood" and must be considered as heterologous in vitro fertilization. In this article, the medical, genetic and bioethical aspects of this new derivation of in vitro fertilization are analyzed. The following questions are considered as points of special attention: Is surrogate motherhood used primarily to solve infertility problems? Is it not actually a new form of exploitation of women? Does it not represent an attack on the natural family? Does it not also represent an attack on the dignity of the human being?

11. Surrogate formulations for thermal treatment of low-level mixed waste. Part 1: Radiological surrogates
Energy Technology Data Exchange (ETDEWEB)
Stockdale, J.A.D.; Bostick, W.D.; Hoffmann, D.P. [Martin Marietta Energy Systems, Inc., Oak Ridge, TN (United States)]; Lee, H.T. [Oak Ridge Associated Universities, TN (United States)]
1994-01-01
The evaluation and comparison of proposed thermal treatment systems for mixed wastes can be expedited by tests in which the radioactive components of the wastes are replaced by surrogate materials chosen to mimic, as far as is possible, the chemical and physical properties of the radioactive materials of concern. In this work, sponsored by the Mixed Waste Integrated Project of the US Department of Energy, the authors have examined reported experience with such surrogates and suggest a simplified standard list of materials for use in tests of thermal treatment systems. The chief radioactive nuclides of concern in the treatment of mixed wastes are ²³⁹Pu, ²³⁸U, ²³⁵U, ¹³⁷Cs, ¹⁰³Ru, ⁹⁹Tc, and ⁹⁰Sr. These nuclides are largely by-products of uranium enrichment, reactor fuel reprocessing, and weapons program activities. Cs, Ru, and Sr all have stable isotopes that can be used as perfect surrogates for the radioactive forms. Technetium exists only in radioactive form, as do plutonium and uranium. If one wishes to preclude radioactive contamination of the thermal treatment system under trial burn, surrogate elements must be chosen for these three. For technetium, the authors suggest the use of natural ruthenium, and for both plutonium and uranium, they recommend cerium. The seven radionuclides listed can therefore be simulated by a surrogate package containing stable isotopes of ruthenium, strontium, cesium, and cerium.

12. Comparative endpoint sensitivity of in vitro estrogen agonist assays.
Science.gov (United States)
Dreier, David A.; Connors, Kristin A.; Brooks, Bryan W.
2015-07-01
Environmental and human health implications of endocrine disrupting chemicals (EDCs), particularly xenoestrogens, have received extensive study. In vitro assays are increasingly employed as diagnostic tools to comparatively evaluate chemicals, whole effluent toxicity and surface water quality, and to identify causative EDCs during toxicity identification evaluations. Recently, the U.S. Environmental Protection Agency (USEPA) initiated ToxCast under the Tox21 program to generate novel bioactivity data through high-throughput screening. This information is useful for prioritizing chemicals requiring additional hazard information, including endocrine-active chemicals. Though multiple in vitro and in vivo techniques have been developed to assess estrogen agonist activity, the relative endpoint sensitivity of these approaches and the agreement of their conclusions remain unclear during environmental diagnostic applications. Probabilistic hazard assessment (PHA) approaches, including chemical toxicity distributions (CTD), are useful for understanding the relative sensitivity of endpoints associated with in vitro and in vivo toxicity assays by predicting the likelihood of chemicals eliciting undesirable outcomes at or above environmentally relevant concentrations. In the present study, PHAs were employed to examine the comparative endpoint sensitivity of 16 in vitro assays for estrogen agonist activity using a diverse group of compounds from the USEPA ToxCast dataset. Reporter gene assays were generally observed to possess greater endpoint sensitivity than other assay types, and the Tox21 ERa LUC BG1 Agonist assay was identified as the most sensitive in vitro endpoint for detecting an estrogenic response. When the sensitivity of this most sensitive ToxCast in vitro endpoint was compared to the human MCF-7 cell proliferation assay, a common in vitro model for biomedical and environmental monitoring applications, the ERa LUC BG1 assay was several orders of magnitude less
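A chemical toxicity distribution of the kind used in the abstract above can be approximated by fitting a log-normal distribution to the potency values an assay produces across many chemicals, then reading off the probability of a response at or below an environmentally relevant concentration; comparing that probability across assays ranks their endpoint sensitivity. The Python sketch below uses fabricated EC50 values purely for illustration.

    import numpy as np
    from scipy import stats

    # Hypothetical EC50s (ug/L) for one assay endpoint across many chemicals
    ec50 = np.array([0.3, 1.2, 4.5, 9.8, 22.0, 51.0, 130.0, 400.0])

    # Fit a log-normal chemical toxicity distribution (CTD) on the log10 scale
    mu = np.mean(np.log10(ec50))
    sigma = np.std(np.log10(ec50), ddof=1)

    # Probability that a randomly drawn chemical is active at or below 10 ug/L
    p = stats.norm.cdf((np.log10(10.0) - mu) / sigma)
    print(f"P(EC50 <= 10 ug/L) = {p:.2f}")

An assay whose CTD puts more probability mass at low concentrations is the more sensitive endpoint, which is the sense in which the reporter gene assays above outrank the others.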
13. Urinary Sugars--A Biomarker of Total Sugars Intake.
Science.gov (United States)
Tasevska, Natasha
2015-07-01
Measurement error in self-reported sugars intake may explain the lack of consistency in the epidemiologic evidence on the association between sugars and disease risk. This review describes the development and applications of a biomarker of sugars intake, informs its future use and recommends directions for future research. Recently, 24 h urinary sucrose and fructose were suggested as a predictive biomarker for total sugars intake, based on findings from three highly controlled feeding studies conducted in the United Kingdom. From this work, a calibration equation for the biomarker that provides an unbiased measure of sugars intake was generated, and it has since been used in two US-based studies with free-living individuals to assess measurement error in dietary self-reports and to develop regression calibration equations that could be used in future diet-disease analyses. Further applications of the biomarker include its use as a surrogate measure of intake in diet-disease association studies. Although this biomarker has great potential and exhibits favorable characteristics, the available data come from a few controlled studies with limited sample sizes conducted in the UK. Larger feeding studies conducted in different populations are needed to further explore biomarker characteristics and the stability of its biases, compare its performance, and generate unique or population-specific biomarker calibration equations to be applied in future studies. A validated sugars biomarker is critical for informed interpretation of sugars-disease association studies.

14. Sample size determination in clinical trials with multiple endpoints
CERN Document Server
Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R.
2015-01-01
This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently, many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...
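For co-primary endpoints, the trial succeeds only if every endpoint is individually significant, so power is a joint probability over correlated test statistics; that joint calculation is the kind the book systematizes. Below is a hedged Python sketch for the simplest case of two normally distributed endpoints compared between two equal arms. The effect sizes, correlation, and alpha are illustrative assumptions, not values from the book.

    import numpy as np
    from scipy.stats import multivariate_normal, norm

    def coprimary_power(d1, d2, rho, n_per_arm, alpha=0.025):
        """Power to show BOTH endpoints significant (one-sided tests at alpha)
        in a two-arm trial; d1, d2 are standardized effect sizes and rho is
        the correlation between the two endpoints."""
        c = norm.ppf(1 - alpha)                 # common critical value
        m1 = d1 * np.sqrt(n_per_arm / 2.0)      # means of the two test statistics
        m2 = d2 * np.sqrt(n_per_arm / 2.0)
        # P(Z1 > c, Z2 > c) = P(-Z1 < -c, -Z2 < -c)
        mvn = multivariate_normal(mean=[-m1, -m2], cov=[[1.0, rho], [rho, 1.0]])
        return mvn.cdf([-c, -c])

    # Smallest n per arm giving at least 80% power for both endpoints
    for n in range(50, 500, 5):
        if coprimary_power(0.3, 0.3, rho=0.5, n_per_arm=n) >= 0.80:
            print("n per arm:", n)
            break

Note that the required n exceeds the single-endpoint sample size, and the penalty shrinks as the correlation between endpoints grows, which is the central trade-off in co-primary designs.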
Science.gov (United States) Marlatt, Vicki L; Sherrard, Ryan; Kennedy, Chris J; Elphick, James R; Martyniuk, Christopher J 2016-04-01 Molecular endpoints can enhance existing whole animal bioassays by more fully characterizing the biological impacts of aquatic pollutants. Laboratory and field studies were used to examine the utility of adopting molecular endpoints for a well-developed in situ early life stage (eyed embryo to onset of swim-up fry) salmonid bioassay to improve diagnostic assessments of water quality in the field. Coastal cutthroat trout (Oncorhynchus clarki clarki) were exposed in the laboratory to the model metal (zinc, 40μg/L) and the polycyclic aromatic hydrocarbon (pyrene, 100μg/L) in water to examine the resulting early life stage salmonid responses. In situ field exposures and bioassays were conducted in parallel to evaluate the water quality of three urban streams in British Columbia (two sites with anthropogenic inputs and one reference site). The endpoints measured in swim-up fry included survival, deformities, growth (weight and length), vitellogenin (vtg) and metallothionein (Mt) protein levels, and hepatic gene expression (e.g., metallothioneins [mta and mtb], endocrine biomarkers [vtg and estrogen receptors, esr] and xenobiotic-metabolizing enzymes [cytochrome P450 1A3, cyp1a3 and glutathione transferases, gstk]). No effects were observed in the zinc treatment, however exposure of swim-up fry to pyrene resulted in decreased survival, deformities and increased estrogen receptor alpha (er1) mRNA levels. In the field exposures, xenobiotic-metabolizing enzymes (cyp1a3, gstk) and zinc transporter (zntBigM103) mRNA were significantly increased in swim-up fry deployed at the sites with more anthropogenic inputs compared to the reference site. Cluster analysis revealed that gene expression profiles in individuals from the streams receiving anthropogenic inputs were more similar to each other than to the reference site. Collectively, the results obtained in this study suggest that molecular endpoints may be useful, and potentially more sensitive, indicators of site 17. Simultaneous inference of a binary composite endpoint and its components DEFF Research Database (Denmark) Große Ruse, Mareile; Ritz, Christian; Hothorn, Ludwig A. 2017-01-01 Binary composite endpoints offer some advantages as a way to succinctly combine evidence from a number of related binary endpoints recorded in the same clinical trial into a single outcome. However, as some concerns about the clinical relevance as well as the interpretation of such composite endp......). The method is compared to the gatekeeping approach and results are provided in the Supplementary Material. In two data examples we show how the procedure may be adapted to handle local significance levels specified through a priori given weights.... 18. Biomarkers in Pediatric ARDS: Future Directions Directory of Open Access Journals (Sweden) Benjamin E Orwoll 2016-06-01 Full Text Available Acute respiratory distress syndrome (ARDS is common among mechanically ventilated children, and accompanies up to 30% of all PICU deaths. Though ARDS diagnosis is based on clinical criteria, biological markers of acute lung damage have been extensively studied in adults and children. 
Biomarkers of inflammation, alveolar epithelial and capillary endothelial disruption, disordered coagulation, and associated derangements measured in the circulation and other body fluids such as brochoalveolar lavage have improved our understanding of pathobiology of ARDS. The biochemical signature of ARDS has been increasingly well described in adult populations, and this has led to the identification of molecular phenotypes to augment clinical classifications. However, there is a paucity of data from pediatric ARDS patients. Biomarkers and molecular phenotypes have the potential to identify patients at high risk of poor outcomes, and perhaps inform the development of targeted therapies for specific groups of patients. Additionally, because of the lower incidence of and mortality from ARDS in pediatric patients relative to adults and lack of robust clinical predictors of outcome, there is an ongoing interest in biological markers as surrogate outcome measures. The recent definition of pediatric ARDS (pARDS provides additional impetus for measurement of established and novel biomarkers in future pediatric studies in order to further characterize this disease process. This chapter will review the currently available literature and discuss potential future directions for investigation into biomarkers in ARDS among children. 19. Priority wetland invertebrates as conservation surrogates. Science.gov (United States) Ormerod, S J; Durance, Isabelle; Terrier, Aurelie; Swanson, Alisa M 2010-04-01 Invertebrates are important functionally in most ecosystems, but seldom appraised as surrogate indicators of biological diversity. Priority species might be good candidates; thus, here we evaluated whether three freshwater invertebrates listed in the U.K. Biodiversity Action Plan indicated the richness, composition, and conservation importance of associated wetland organisms as defined respectively by their alpha diversity, beta diversity, and threat status. Sites occupied by each of the gastropods Segmentina nitida, Anisus vorticulus, and Valvata macrostoma had greater species richness of gastropods and greater conservation importance than other sites. Each also characterized species assemblages associated with significant variations between locations in alpha or beta diversity among other mollusks and aquatic macrophytes. Because of their distinct resource requirements, conserving the three priority species extended the range of wetland types under management for nature conservation by 18% and the associated gastropod niche-space by around 33%. Although nonpriority species indicated variations in richness, composition, and conservation importance among other organisms as effectively as priority species, none characterized such a wide range of high-quality wetland types. We conclude that priority invertebrates are no more effective than nonpriority species as indicators of alpha and beta diversity or conservation importance among associated organisms. Nevertheless, conserving priority species can extend the array of distinct environments that are protected for their specialized biodiversity and environmental quality. We suggest that this is a key role for priority species and conservation surrogates more generally, and, on our evidence, can best be delivered through multiple species with contrasting habitat requirements. 20. 
Estimating Predictability: Redundancy and Surrogate Data Method CERN Document Server Pecen, L 1995-01-01 A method for estimating the theoretical predictability of time series is presented, based on information-theoretic functionals (redundancies) and the surrogate data technique. The redundancy, designed for a chosen model and a prediction horizon, evaluates the amount of information between a model input (e.g., lagged versions of the series) and a model output (i.e., a series lagged by the prediction horizon from the model input) in numbers of bits. This value, however, is influenced by the method and precision of the redundancy estimation and therefore it is a) normalized by the maximum possible redundancy (given by the precision used), and b) compared to the redundancies obtained from two types of surrogate data in order to obtain a reliable classification of a series as either unpredictable or predictable. The type of predictability (linear or nonlinear) and its level can be further evaluated. The method is demonstrated using a numerically generated time series as well as high-frequency foreign exchange data and the theoretical ... 1. Developing a Cognition Endpoint for Traumatic Brain Injury Clinical Trials. Science.gov (United States) Silverberg, Noah D; Crane, Paul K; Dams-O'Connor, Kristen; Holdnack, James; Ivins, Brian J; Lange, Rael T; Manley, Geoffrey T; McCrea, Michael; Iverson, Grant L 2017-01-15 Cognitive impairment is a core clinical feature of traumatic brain injury (TBI). After TBI, cognition is a key determinant of post-injury productivity, outcome, and quality of life. As a final common pathway of diverse molecular and microstructural TBI mechanisms, cognition is an ideal endpoint in clinical trials involving many candidate drugs and nonpharmacological interventions. Cognition can be reliably measured with performance-based neuropsychological tests that have greater granularity than crude rating scales, such as the Glasgow Outcome Scale-Extended, which remain the standard for clinical trials. Remarkably, however, there is no well-defined, widely accepted, and validated cognition endpoint for TBI clinical trials. A single cognition endpoint that has excellent measurement precision across a wide functional range and is sensitive to the detection of small improvements (and declines) in cognitive functioning would enhance the power and precision of TBI clinical trials and accelerate drug development research. We outline methodologies for deriving a cognition composite score and a research program for validation. Finally, we discuss regulatory issues and the limitations of a cognition endpoint. 2. An Endpoint Estimate for the Commutator of Singular Integrals Institute of Scientific and Technical Information of China (English) Yong Zhong SUN; Wei Yi SU 2005-01-01 It is well known that the commutator T_b of the singular integral operator T with a BMO function b is bounded on L^p(R^n), 1 < p < ∞. In this paper, we consider the endpoint estimates for a kind of commutator of singular integrals. A BMO-type estimate for T_b is obtained under the assumption b ∈ LMO.
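The redundancy-and-surrogates test described in the predictability abstract above (entry 20) can be sketched as follows: estimate a lagged redundancy (here a histogram-based mutual information, in bits) for the original series, then compare it against two surrogate null distributions — shuffled surrogates (no temporal structure) and FFT phase-randomized surrogates (linear structure only). This is a generic reconstruction of the idea, not Pecen's exact estimator; the toy series, bin count, and surrogate count are arbitrary choices.

```python
import numpy as np

def lagged_mi_bits(x, lag=1, bins=16):
    """Histogram estimate of the redundancy (mutual information, in bits)
    between x(t) and x(t + lag)."""
    pxy, _, _ = np.histogram2d(x[:-lag], x[lag:], bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

def phase_surrogate(x, rng):
    """FFT surrogate: keeps the power spectrum (linear structure) but
    randomizes the phases."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0  # leave the mean component untouched
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(0)
t = np.arange(4096)
x = np.sin(0.07 * t) + 0.3 * rng.standard_normal(t.size)  # toy series

mi = lagged_mi_bits(x)
mi_shuffled = max(lagged_mi_bits(rng.permutation(x)) for _ in range(19))
mi_phase = max(lagged_mi_bits(phase_surrogate(x, rng)) for _ in range(19))

print(f"original: {mi:.3f} bits, shuffled max: {mi_shuffled:.3f}, "
      f"phase-randomized max: {mi_phase:.3f}")
# The series is classified as predictable if the original redundancy beats
# the shuffled surrogates, and nonlinearly predictable only if it also
# beats the phase-randomized surrogates.
```

3.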
Chloride and sulphate toxicity to Hydropsyche exocellata (Trichoptera, Hydropsychidae): Exploring intraspecific variation and sub-lethal endpoints Energy Technology Data Exchange (ETDEWEB) Sala, Miquel [Centre Tecnològic Forestal de Catalunya - CTFC, Solsona, Catalunya (Spain); Faria, Melissa [CESAM, Departamento de Biologia, Universidade de Aveiro, 3810-193 Aveiro (Portugal); Sarasúa, Ignacio [Technische Universität München, Munich, Bayern (Germany); Barata, Carlos [Institute of Environmental Assessment and Water Research (IDAEA-CSIC), Barcelona (Spain); Bonada, Núria [Grup de Recerca Freshwater Ecology and Management (FEM), Departament d'Ecologia, Facultat de Biologia, Universitat de Barcelona (UB), Diagonal 643, 08028 Barcelona, Catalonia (Spain); Grup de Recerca Freshwater Ecology and Management (FEM), Departament d'Ecologia, Facultat de Biologia, Institut de Recerca de la Biodiversitat (IRBio), Universitat de Barcelona - UB, Diagonal 643, 08028 Barcelona, Catalonia (Spain); Brucet, Sandra [Aquatic Ecology Group, BETA Tecnio Centre, University of Vic - Central University of Catalonia, Vic, Catalonia (Spain); Catalan Institution for Research and Advanced Studies, ICREA, Barcelona 08010 (Spain); Llenas, Laia; Ponsá, Sergio [Aquatic Ecology Group, BETA Tecnio Centre, University of Vic - Central University of Catalonia, Vic, Catalonia (Spain); Prat, Narcís [Grup de Recerca Freshwater Ecology and Management (FEM), Departament d'Ecologia, Facultat de Biologia, Universitat de Barcelona (UB), Diagonal 643, 08028 Barcelona, Catalonia (Spain); Soares, Amadeu M.V.M. [CESAM, Departamento de Biologia, Universidade de Aveiro, 3810-193 Aveiro (Portugal); and others 2016-10-01 The rivers and streams of the world are becoming saltier due to human activities. In spite of the potential damage that salt pollution can cause on freshwater ecosystems, this is an issue that is currently poorly managed. Here we explored intraspecific differences in the sensitivity of freshwater fauna to two major ions (Cl⁻ and SO₄²⁻) using the net-spinning caddisfly Hydropsyche exocellata Dufour 1841 (Trichoptera, Hydropsychidae) as a model organism. We exposed H. exocellata to saline solutions (reaching a conductivity of 2.5 mS cm⁻¹) with Cl⁻:SO₄²⁻ ratios similar to those occurring in effluents coming from the meat, mining and paper industries, which release dissolved salts to rivers and streams in Spain. We used two different populations, coming from low and high conductivity streams. To assess toxicity, we measured sub-lethal endpoints: locomotion, symmetry of the food-capturing nets and oxidative stress biomarkers. According to biomarkers and net building, the population historically exposed to lower conductivities (B10) showed higher levels of stress than the population historically exposed to higher conductivities (L102). However, the differences between populations were not strong. For example, net symmetry was lower in the B10 than in the L102 only 48 h after treatment was applied, and biomarkers showed a variety of responses, with no discernable pattern. Also, treatment effects were rather weak, i.e. only some endpoints, and in most cases only in the B10 population, showed a significant response to treatment. The lack of consistent differences between populations and treatments could be related to the high salt tolerance of H. exocellata, since both populations were collected from streams with relatively high conductivities.
The sub-lethal effects tested in this study can offer an interesting and promising tool to monitor freshwater salinization by combining physiological and behavioural bioindicators. 4. Modeling hard clinical end-point data in economic analyses. Science.gov (United States) Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V 2013-11-01 The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states (...). The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data is reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data. 5. Detection of colorectal neoplasia: Combination of eight blood-based, cancer-associated protein biomarkers. Science.gov (United States) Wilhelmsen, Michael; Christensen, Ib J; Rasmussen, Louise; Jørgensen, Lars N; Madsen, Mogens R; Vilandt, Jesper; Hillig, Thore; Klaerke, Michael; Nielsen, Knud T; Laurberg, Søren; Brünner, Nils; Gawel, Susan; Yang, Xiaoqing; Davis, Gerard; Heijboer, Annemieke; Martens, Frans; Nielsen, Hans J 2017-03-15 Serological biomarkers may be an option for early detection of colorectal cancer (CRC). The present study assessed eight cancer-associated protein biomarkers in plasma from subjects undergoing first time ever colonoscopy due to symptoms attributable to colorectal neoplasia. Plasma AFP, CA19-9, CEA, hs-CRP, CyFra21-1, Ferritin, Galectin-3 and TIMP-1 were determined in EDTA-plasma using the Abbott ARCHITECT® automated immunoassay platform. Primary endpoints were detection of (i) CRC and high-risk adenoma and (ii) CRC. Logistic regression was performed. Final reduced models were constructed selecting the four biomarkers with the highest likelihood scores. Subjects (N = 4,698) were consecutively included during 2010-2012.
Colonoscopy detected 512 CRC patients, 319 colonic cancer and 193 rectal cancer. Extra colonic malignancies were detected in 177 patients, 689 had adenomas of which 399 were high-risk, 1,342 had nonneoplastic bowel disease and 1,978 subjects had 'clean' colorectum. Univariable analysis demonstrated that all biomarkers were statistically significant. Multivariate logistic regression demonstrated that the blood-based biomarkers in combination significantly predicted the endpoints. The reduced model resulted in the selection of CEA, hs-CRP, CyFra21-1 and Ferritin for the two endpoints; AUCs were 0.76 and 0.84, respectively. The positive predictive value at 90% sensitivity was 25% for endpoint 1 and the negative predictive value was 93%. For endpoint 2, the positive predictive value was 18% and the negative predictive value was 97%. Combinations of serological protein biomarkers provided a significant identification of subjects with high risk of the presence of colorectal neoplasia. The present set of biomarkers could become an important adjunct in early detection of CRC. © 2016 UICC. 6. A Method of Surrogate Model Construction which Leverages Lower-Fidelity Information using Space Mapping Techniques Science.gov (United States) 2014-03-27 [Only figure captions were indexed for this record: errors found using the least-squares polynomial response surrogate (LS PRM) overlaid on the data from the space-mapped (SM) surrogate; nonlinear space-mapped surrogate responses, with the least-squares PRM surrogate response plotted for comparison; percent error comparison between the least-squares space-mapping and the PRM surrogate models derived from samples in the second dataset.] 7. Biomarkers in the management of ulcerative colitis: a brief review Directory of Open Access Journals (Sweden) Hussain, Shabnum 2011-01-01 Full Text Available Several attempts have been made in the last two decades to investigate ulcerative colitis (UC) patients during the natural course of the disease so as to identify appropriate surrogate markers of disease activity. Most patients with quiescent inflammatory bowel disease have low grade inflammation and it is possible that relapse occurs only once the inflammatory process crosses a critical intensity. Since inflammation is a continuous process, its direct assessment may provide us a quantitative pre-symptomatic measure of imminent relapse. If substantial, it may allow targeted treatment early, to avert relapse or formulate newer therapeutic strategies to maintain symptomatic remission. It is clinically very important to identify these patients at a subclinical stage, noninvasively, by various biomarkers. Biomarkers help to gain an objective measurement of disease activity as symptoms are often subjective. Biomarkers also help to avoid invasive procedures which are often a burden to the patient and the health care system. If an ideal biomarker existed for UC, it would greatly facilitate the work of the gastroenterologist treating these patients. Both “classical” and “emerging” biomarkers of relevance for UC have been studied, but the quest for an ideal biomarker still continues. In this brief review we describe various biomarkers of clinical importance. 8. Patients’ preferences for selection of endpoints in cardiovascular clinical trials Directory of Open Access Journals (Sweden) Robert D. Chow 2014-02-01 Full Text Available Background: To reduce the duration and overall costs of cardiovascular trials, use of combined endpoints in trial design has become commonplace.
Though this methodology may serve the needs of investigators and trial sponsors, the preferences of patients or potential trial subjects in the trial design process have not been studied. Objective: To determine the preferences of patients in the design of cardiovascular trials. Design: Participants were surveyed in a pilot study regarding preferences among various single endpoints commonly used in cardiovascular trials, preference for single vs. composite endpoints, and the likelihood of compliance with a heart medication if patients similar to them participated in the trial design process. Participants: One hundred adult English-speaking patients, 38% male, from a primary care ambulatory practice located in an urban setting. Key results: Among single endpoints, participants rated heart attack as significantly more important than death from other causes (4.53 vs. 3.69, p=0.004, on a scale of 1–6). Death from heart disease was rated as significantly more important than chest pain (4.73 vs. 2.47, p<0.001), angioplasty/PCI/CABG (4.73 vs. 2.43, p<0.001), and stroke (4.73 vs. 2.43, p<0.001). Participants also expressed a slight preference for combined endpoints over a single endpoint (43% vs. 57%), incorporation of the opinions of the study patient population into the design of trials (48% vs. 41% for researchers), and a greater likelihood of medication compliance if patient preferences were considered during trial design (67% indicated a significant to major effect). Conclusions: Patients are able to make judgments and express preferences regarding trial design. They prefer that the opinions of the study population rather than the general population be incorporated into the design of the study. This novel approach to study design would not only incorporate patient preferences into medical decision making, but 9. Surrogate Modeling for Geometry Optimization in Material Design DEFF Research Database (Denmark) Rojas Larrazabal, Marielba de la Caridad; Abraham, Yonas B.; Holzwarth, Natalie A.W.; 2007-01-01 We propose a new approach based on surrogate modeling for geometry optimization in material design. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) 10. Human surrogate neck response to +Gz vertical impact NARCIS (Netherlands) Rooij, L. van; Uittenbogaard, J. 2011-01-01 For the evaluation of impact scenarios with a substantial vertical component, the performance of current human surrogates - the RID 3D hardware dummy and two numerical human models - was evaluated. Volunteer tests with 10G and 6G pulses were compared to reconstructed tests with human surrogates. 11. Term clouds as surrogates for user generated speech NARCIS (Netherlands) M. Tsagkias; M. Larson; M. de Rijke 2008-01-01 User generated spoken audio remains a challenge for Automatic Speech Recognition (ASR) technology and content-based audio surrogates derived from ASR-transcripts must be error robust. An investigation of the use of term clouds as surrogates for podcasts demonstrates that ASR term clouds closely appr 12.
INTEC SBW Solid Sludge Surrogate Recipe and Validation Energy Technology Data Exchange (ETDEWEB) Maio, Vince; Janikowski, Stuart; Johnson, Jim; Maio, Vince; Pao, Jenn-Hai 2004-06-01 A nonhazardous INTEC tank farm sludge surrogate that incorporated metathesis reactions to generate solids from solutions of known elements present in the radioactive INTEC tank farm sodium-bearing waste sludges was formulated. Elemental analyses, physical property analyses, and filtration testing were performed on waste surrogate and tank farm waste samples, and the results were compared. For testing physical systems associated with moving the tank farm solids, the surrogate described in this report is the best currently available choice. No other available surrogate exhibits the noted similarities in behavior to the sludges. The chemical morphology, particle size distribution, and settling and flow characteristics of the surrogate were similar to those exhibited by the waste sludges. Nonetheless, there is a difference in chemical makeup of the surrogate and the tank farm waste. If a chemical treatment process were to be evaluated for final treatment and disposition of the waste sludges, the surrogate synthesis process would likely require modification to yield a surrogate with a closer matching chemical composition. 13. Inactivation of Tulane virus, a novel surrogate for human norovirus Science.gov (United States) Human noroviruses (HuNoVs) are the major cause of non-bacterial epidemics of gastroenteritis. Due to the inability to cultivate HuNoVs and the lack of an efficient small animal model, surrogates are used to study HuNoV biology. Two such surrogates, the feline calicivirus (FCV) and the murine norovir... 14. Human surrogate neck response to +Gz vertical impact NARCIS (Netherlands) Rooij, L. van; Uittenbogaard, J. 2011-01-01 For the evaluation of impact scenarios with a substantial vertical component, the performance of current human surrogates - the RID 3D hardware dummy and two numerical human models - was evaluated. Volunteer tests with 10G and 6G pulses were compared to reconstructed tests with human surrogates. 15. Space Mapping Optimization of Microwave Circuits Exploiting Surrogate Models DEFF Research Database (Denmark) Bakr, M. H.; Bandler, J. W.; Madsen, Kaj 2000-01-01 A powerful new space-mapping (SM) optimization algorithm is presented in this paper. It draws upon recent developments in both surrogate model-based optimization and modeling of microwave devices; SM optimization is formulated as a general optimization problem of a surrogate model. This model... 16. Preclinical and human surrogate models of itch DEFF Research Database (Denmark) Hoeck, Emil August; Marker, Jens Broch; Gazerani, Parisa; 2016-01-01 Pruritus, or simply itch, is a debilitating symptom that significantly decreases the quality of life in a wide range of clinical conditions. While histamine remains the most studied mediator of itch in humans, treatment options for chronic itch, in particular antihistamine-resistant itch, are limited.
Relevant preclinical and human surrogate models of non-histaminergic itch are needed to accelerate the development of novel antipruritics and diagnostic tools. Advances in basic itch research have facilitated the development of diverse models of itch and associated dysesthesiae. While... 17. Surrogates of plutonium for detection equipment testing Science.gov (United States) Peerani, Paolo; Tomanin, Alice 2011-10-01 The fight against illicit trafficking of nuclear material relies on the possibility to detect nuclear material concealed in vehicles, people or cargo containers. This is done by equipping and training law enforcement and security staff in border stations or other points of access to strategic places and critical infrastructures with radiation detection equipment. The design, development, testing and evaluation of these instruments ideally require the use of real nuclear material to assess, verify and certify their detection performance. Availability of special nuclear material may be an issue, especially for industry, since only few specialized laboratories are licensed for such material. This paper analyses and describes the possibility of using suitable surrogates that may replace the use of real nuclear material in testing the detection capabilities of instruments used in nuclear security. 18. [Surrogate maternity--literature review and practice]. Science.gov (United States) Pilka, L; Rumpík, D; Pilka, R; Koudelka, M; Prudil, L 2009-04-01 This review summarizes opinions on surrogacy, including the attitudes of international and governmental organizations, as well as some religious concerns. Literature review. Reprofit International, Brno, Reproductive medicine and gynecology centre, Zlin, Department of obstetrics and gynecology, Palacky University, Olomouc. The developments in the field of assisted reproduction during the last twenty years have attracted unexpected public interest in some of its ethical and moral aspects. It is very difficult to find a uniform attitude to the ethical concerns of assisted conception in a plural society. A surrogate mother is defined as a woman who bears and relinquishes a child for another person. The European congress on human reproduction in Barcelona 2008 adopted the following résumé on surrogacy: Public opinion has shifted to a position where surrogacy is recognized as an appropriate response to infertility in some circumstances, and it is to be expected that this approach will be further strengthened, with stress on the positive aspects of family life. 19. Tractable Experiment Design via Mathematical Surrogates Energy Technology Data Exchange (ETDEWEB) Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States) 2016-02-29 This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design.
This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems. 20. Biomarkers in mood disorders research: developing new and improved therapeutics Directory of Open Access Journals (Sweden) MARK J. NICIU 2014-01-01 Full Text Available Background Recently, surrogate neurobiological biomarkers that correlate with target engagement and therapeutic response have been developed and tested in early phase studies of mood disorders. Objective The identification of biomarkers could help develop personalized psychiatric treatments that may impact public health. Methods These biomarkers, which are associated with clinical response post-treatment, can be directly validated using multimodal approaches including genetic tools, proteomics/metabolomics, peripheral measures, neuroimaging, biostatistical predictors, and clinical predictors. Results To date, early phase biomarker studies have sought to identify measures that can serve as “biosignatures”, or biological patterns of clinical response. These studies have also sought to identify clinical predictors and surrogate outcomes associated with pathophysiological domains consistently described in the National Institute of Mental Health’s (NIMH) new Research Domain Criteria (RDoC). Using the N-methyl-D-aspartate (NMDA) antagonist ketamine as an example, we identified changes in several domains (clinical, cognitive, and neurophysiological) that predicted ketamine’s rapid and sustained antidepressant effects in individuals with treatment-resistant major depressive disorder (MDD) or bipolar depression. Discussion These approaches may ultimately provide clues into the neurobiology of psychiatric disorders and may have an enormous impact on the development of novel therapeutics. 1. Maximizing biomarker discovery by minimizing gene signatures Directory of Open Access Journals (Sweden) Chang Chang 2011-12-01 Full Text Available Abstract Background The use of gene signatures can potentially be of considerable value in the field of clinical diagnosis. However, gene signatures defined with different methods can differ considerably even when applied to the same disease and the same endpoint. Previous studies have shown that the correct selection of subsets of genes from microarray data is key for the accurate classification of disease phenotypes, and a number of methods have been proposed for the purpose. However, these methods refine the subsets by only considering each single feature, and they do not confirm the association between the genes identified in each gene signature and the phenotype of the disease. We proposed an innovative new method termed Minimize Feature's Size (MFS) based on multiple level similarity analyses and association between the genes and disease for breast cancer endpoints by comparing classifier models generated from the second phase of MicroArray Quality Control (MAQC-II), trying to develop effective meta-analysis strategies to transform the MAQC-II signatures into a robust and reliable set of biomarkers for clinical applications.
Results We analyzed the similarity of the multiple gene signatures in an endpoint and between the two endpoints of breast cancer at the probe and gene levels; the results indicate that disease-related genes can preferably be selected as the components of a gene signature, and that the gene signatures for the two endpoints could be interchangeable. The minimized signatures were built at the probe level by using MFS for each endpoint. By applying the approach, we generated a much smaller gene signature with similar predictive power compared with the gene signatures from MAQC-II. Conclusions Our results indicate that gene signatures of both large and small sizes could perform equally well in clinical applications. Besides, consistency and biological significance can be detected among different gene signatures, reflecting the 2. Sheet metal forming optimization by using surrogate modeling techniques Science.gov (United States) Wang, Hu; Ye, Fan; Chen, Lei; Li, Enying 2017-01-01 Surrogate-assisted optimization has been widely applied in sheet metal forming design due to its efficiency. Therefore, to improve the efficiency of design and reduce the product development cycle, it is important for scholars and engineers to have some insight into the performance of each surrogate-assisted optimization method and to make them more flexible in practice. For this purpose, the state-of-the-art surrogate-assisted optimizations are investigated. Furthermore, in view of the bottlenecks and development of surrogate-assisted optimization and sheet metal forming design, some important issues on surrogate-assisted optimization in support of sheet metal forming design are analyzed and discussed, involving the description of the sheet metal forming design, off-line and online sampling strategies, the space mapping algorithm, high dimensional problems, robust design, some challenges and potential feasible methods. Generally, this paper provides insightful observations into the performance and potential development of these methods in sheet metal forming design. 3. The Asthma Control Questionnaire as a clinical trial endpoint DEFF Research Database (Denmark) Barnes, P J; Casale, T B; Dahl, Ronald; 2014-01-01 The goal of asthma treatment is to control the disease according to guidelines issued by bodies such as the Global Initiative for Asthma. Effective control is dependent upon evaluation of symptoms, initiation of appropriate treatment and minimization of the progressive adverse effects...... of the disease and its therapies. Although individual outcome measures have been shown to correlate with asthma control, composite endpoints are preferred to enable more accurate and robust monitoring of the health of the individual patient. A number of validated instruments are utilized to capture...... these component endpoints; however, there is no consensus on the optimal instrument for use in clinical trials. The Asthma Control Questionnaire (ACQ) has been shown to be a valid, reliable instrument that allows accurate and reproducible assessment of asthma control that compares favourably with other commonly... 4. Internal bremsstrahlung endpoint energy of ⁵⁴Mn Energy Technology Data Exchange (ETDEWEB) Hindi, M. M. [Physics Department, Tennessee Technological University, Cookeville, Tennessee 38505 (United States); Larimer, R.-M. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Norman, E. B.
[Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Rech, G. A. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States) 2000-05-01 For ⁵⁴Mn there is a discrepancy between the Q_EC obtained from the endpoint energy of the internal bremsstrahlung (IB) spectrum which accompanies the electron capture decay (Q_EC = 1353 ± 8 keV) and that obtained from the accepted mass differences (Q_EC = 1377 ± 1 keV). This Q value is needed to deduce the partial half-life of the astrophysically interesting β⁻ decay of ⁵⁴Mn from the recently measured β⁺ partial half-life. To resolve this discrepancy, we have remeasured the endpoint energy of the IB spectrum, by recording coincidences between the IB and the 835-keV γ ray, both detected in Compton-suppressed Ge detectors. The Q_EC we deduce is 1379 ± 8 keV, in agreement with the accepted mass differences. (c) 2000 The American Physical Society. 5. Environmental diversity as a surrogate for species representation. Science.gov (United States) Beier, Paul; de Albuquerque, Fábio Suzart 2015-10-01 Because many species have not been described and most species ranges have not been mapped, conservation planners often use surrogates for conservation planning, but evidence for surrogate effectiveness is weak. Surrogates are well-mapped features such as soil types, landforms, occurrences of an easily observed taxon (discrete surrogates), and well-mapped environmental conditions (continuous surrogate). In the context of reserve selection, the idea is that a set of sites selected to span diversity in the surrogate will efficiently represent most species. Environmental diversity (ED) is a rarely used surrogate that selects sites to efficiently span multivariate ordination space. Because it selects across continuous environmental space, ED should perform better than discrete surrogates (which necessarily ignore within-bin and between-bin heterogeneity). Despite this theoretical advantage, ED appears to have performed poorly in previous tests of its ability to identify 50 × 50 km cells that represented vertebrates in Western Europe. Using an improved implementation of ED, we retested ED on Western European birds, mammals, reptiles, amphibians, and combined terrestrial vertebrates. We also tested ED on data sets for plants of Zimbabwe, birds of Spain, and birds of Arizona (United States). Sites selected using ED represented European mammals no better than randomly selected cells, but they represented species in the other 7 data sets with 20% to 84% effectiveness. This far exceeds the performance in previous tests of ED, and exceeds the performance of most discrete surrogates. We believe ED performed poorly in previous tests because those tests considered only a few candidate explanatory variables and used suboptimal forms of ED's selection algorithm. We suggest future work on ED focus on analyses at finer grain sizes more relevant to conservation decisions, explore the effect of selecting the explanatory variables most associated with species turnover, and investigate
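The ED selection idea above — choosing sites that span a multivariate ordination of environmental conditions — can be sketched as a greedy farthest-point ("maxi-min") pick in PCA space. This is one plausible reading of the approach under simplifying assumptions (standardized variables, PCA ordination, Euclidean distance); the data are synthetic and the study's actual selection algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
env = rng.normal(size=(500, 6))    # hypothetical sites x environmental variables

# Standardize and ordinate with PCA (via SVD) as a stand-in for the
# multivariate ordination space that ED is meant to span.
z = (env - env.mean(axis=0)) / env.std(axis=0)
u, s, _ = np.linalg.svd(z, full_matrices=False)
coords = u[:, :3] * s[:3]          # site scores on the first three components

def greedy_maximin(points, k):
    """Greedily pick k sites that spread out across ordination space."""
    # Seed with the site farthest from the centroid, then repeatedly add
    # the site farthest from everything already chosen.
    chosen = [int(np.argmax(np.linalg.norm(points - points.mean(axis=0), axis=1)))]
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

print("selected site indices:", greedy_maximin(coords, k=10))
```

6.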
Using patient management as a surrogate for patient health outcomes in diagnostic test evaluation Directory of Open Access Journals (Sweden) Staub Lukas P 2012-02-01 Full Text Available Abstract Background Before a new test is introduced in clinical practice, evidence is needed to demonstrate that its use will lead to improvements in patient health outcomes. Studies reporting test accuracy may not be sufficient, and clinical trials of tests that measure patient health outcomes are rarely feasible. Therefore, the consequences of testing on patient management are often investigated as an intermediate step in the pathway. There is a lack of guidance on the interpretation of this evidence, and patient management studies often neglect a discussion of the limitations of measuring patient management as a surrogate for health outcomes. Methods We discuss the rationale for measuring patient management, describe the common study designs and provide guidance about how this evidence should be reported. Results Interpretation of patient management studies relies on the condition that patient management is a valid surrogate for downstream patient benefits. This condition presupposes two critical assumptions: the test improves diagnostic accuracy; and the measured changes in patient management improve patient health outcomes. The validity of this evidence depends on the certainty around these critical assumptions and the ability of the study design to minimise bias. Three common designs are test RCTs that measure patient management as a primary endpoint, diagnostic before-after studies that compare planned patient management before and after testing, and accuracy studies that are extended to report on the actual treatment or further tests received following a positive and negative test result. Conclusions Patient management can be measured as a surrogate outcome for test evaluation if its limitations are recognised. The potential consequences of a positive and negative test result on patient management should be pre-specified and the potential patient benefits of these management changes clearly stated. Randomised comparisons will provide 7. Gene expression profiling reveals multiple toxicity endpoints induced by hepatotoxicants Energy Technology Data Exchange (ETDEWEB) Huang Qihong; Jin Xidong; Gaillard, Elias T.; Knight, Brian L.; Pack, Franklin D.; Stoltz, James H.; Jayadev, Supriya; Blanchard, Kerry T 2004-05-18 Microarray technology continues to gain increased acceptance in the drug development process, particularly at the stage of toxicology and safety assessment. In the current study, microarrays were used to investigate gene expression changes associated with hepatotoxicity, the most commonly reported clinical liability with pharmaceutical agents. Acetaminophen, methotrexate, methapyrilene, furan and phenytoin were used as benchmark compounds capable of inducing specific but different types of hepatotoxicity. The goal of the work was to define gene expression profiles capable of distinguishing the different subtypes of hepatotoxicity. Sprague-Dawley rats were orally dosed with acetaminophen (single dose, 4500 mg/kg for 6, 24 and 72 h), methotrexate (1 mg/kg per day for 1, 7 and 14 days), methapyrilene (100 mg/kg per day for 3 and 7 days), furan (40 mg/kg per day for 1, 3, 7 and 14 days) or phenytoin (300 mg/kg per day for 14 days). Hepatic gene expression was assessed using toxicology-specific gene arrays containing 684 target genes or expressed sequence tags (ESTs). 
Principal component analysis (PCA) of gene expression data was able to provide a clear distinction of each compound, suggesting that gene expression data can be used to discern different hepatotoxic agents and toxicity endpoints. Gene expression data were applied to the multiplicity-adjusted permutation test and significantly changed genes were categorized and correlated to hepatotoxic endpoints. Repression of enzymes involved in lipid oxidation (acyl-CoA dehydrogenase, medium chain, enoyl CoA hydratase, very long-chain acyl-CoA synthetase) was associated with microvesicular lipidosis. Likewise, subsets of genes associated with hepatocellular necrosis, inflammation, hepatitis, bile duct hyperplasia and fibrosis have been identified. The current study illustrates that expression profiling can be used to: (1) distinguish different hepatotoxic endpoints; (2) predict the development of toxic endpoints; and 8. A filament eruption with an apparent reshuffle of endpoints Science.gov (United States) Filippov, Boris 2014-08-01 A filament eruption during 2010 April 30-May 1, which shows the reconnection of one filament leg with a region far away from its initial position, is analysed. Observations from three viewpoints are used for measurements of endpoint coordinates as precise as possible. The northern leg of the erupting prominence loop 'jumps' laterally to a latitude lower than the latitude of the original southern endpoint. Thus, the endpoints have reshuffled their positions in the limb view. Although this behaviour could be interpreted as an asymmetric 'zipping-like' eruption, it does not look very likely. It seems more likely to represent reconnection of the flux-rope field lines in the northern leg with ambient coronal magnetic field lines rooted in a quiet region far from the filament. From calculations of the coronal potential magnetic field, we found that the filament before the eruption was stable to vertical displacements, but was liable to violation of horizontal equilibrium. This is an unusual initiation of an eruption, with a combination of initial horizontal and vertical flux-rope displacements, showing a new and unexpected possibility for the start of an eruptive event. 9. Challenges assessing clinical endpoints in early Huntington disease Science.gov (United States) Paulsen, Jane S.; Wang, Chiachi; Duff, Kevin; Barker, Roger; Nance, Martha; Beglinger, Leigh; Moser, David; Williams, Janet K.; Simpson, Sheila; Langbehn, Douglas; van Kammen, Daniel P. 2010-01-01 The primary aim of this study was to evaluate the current accepted standard clinical endpoint for the earliest-studied HD participants likely to be recruited into clinical trials. Since the advent of genetic testing for HD, it is possible to identify gene carriers prior to the diagnosis of disease, which opens up the possibility of clinical trials of disease-modifying treatments in clinically asymptomatic persons. Current accepted standard clinical endpoints were examined as part of a multi-national, 32-site, longitudinal, observational study of 786 research participants currently in the HD prodrome (gene-positive but not clinically diagnosed). Clinical signs and symptoms were used to prospectively predict functional loss as assessed by current accepted standard endpoints over 8 years of follow up. Functional capacity measures were not sensitive for HD in the prodrome; over 88% scored at ceiling. Prospective evaluation revealed that the first functional loss was in their accustomed work.
In a survival analysis, motor, cognitive, and psychiatric measures were all predictors of job change. To our knowledge, this is the first prospective study ever conducted on the emergence of functional loss secondary to brain disease. We conclude that future clinical trials designed for very early disease will require the development of new and more sensitive measures of real-life function. PMID:20623772 10. Combination of biomarkers DEFF Research Database (Denmark) Thurfjell, Lennart; Lötjönen, Jyrki; Lundqvist, Roger 2012-01-01 The New National Institute on Aging-Alzheimer's Association diagnostic guidelines for Alzheimer's disease (AD) incorporate biomarkers in the diagnostic criteria and suggest division of biomarkers into two categories: Aβ accumulation and neuronal degeneration or injury.... 11. The handbook of biomarkers CERN Document Server Jain, Kewal K 2010-01-01 This handbook describes many different types of biomarkers and their discovery. It also presents the background information needed for the evaluation of biomarkers as well as the essential procedures for their validation and use in clinical trials. 12. Biomarkers in Veterinary Medicine. Science.gov (United States) Myers, Michael J; Smith, Emily R; Turfle, Phillip G 2017-02-08 This article summarizes the relevant definitions related to biomarkers; reviews the general processes related to biomarker discovery and ultimate acceptance and use; and finally summarizes and reviews, to the extent possible, examples of the types of biomarkers used in animal species within veterinary clinical practice and human and veterinary drug development. We highlight opportunities for collaboration and coordination of research within the veterinary community and leveraging of resources from human medicine to support biomarker discovery and validation efforts for veterinary medicine. 13. Evidence-based medical perspectives: the evolving role of PSA for early detection, monitoring of treatment response, and as a surrogate end point of efficacy for interventions in men with different clinical risk states for the prevention and progression of prostate cancer. Science.gov (United States) Lieberman, Ronald 2004-01-01 Following FDA approval and introduction into the clinic in the mid-1980s, PSA testing has become arguably the most versatile serum tumor marker in urologic oncology with clinical use for early detection (screening) of prostate cancer (PC), risk stratification for clinical staging, prognosis, intermediate biomarker for monitoring tumor recurrence, and more recently as an intermediate biomarker for assessing therapeutic response to antiandrogens, radiation therapy, and chemotherapy. PSA now routinely guides health care providers for the clinical management of PC over a wide range of clinical risk states for men at risk of PC, after local definitive therapy and after systemic therapy to prevent progression to metastatic bone disease, and to palliate men with hormone refractory prostate cancer (HRPC). 
To further assess the evidence that supports these clinical applications, this commentary reviews and critically evaluates the emerging body of new data, focusing on several recently published seminal articles by D'Amico et al and Thompson et al, the new National Comprehensive Cancer Network 2004 recommendations for starting PSA testing at the age of 40 years, the latest results from 2 phase 3 randomized, controlled trials of taxane-based regimens showing improved survival for men with HRPC, and the recent US FDA Public Workshop on Clinical Trial Endpoints in Prostate Cancer that helped to distill and synthesize the current state of the art and the progress toward validation of PSA metrics (e.g., PSA velocity) as a surrogate end point (SE) for treatment efficacy with taxane-based regimens. Furthermore, several randomized, controlled chemoprevention trials in progress evaluating agents such as selenium and vitamin E in high-risk cohorts are well poised to confirm the validity of PSA as an SE for clinical efficacy for the prevention and progression of PC. Although there continues to be a need to validate better biomarkers before diagnosis of PC (more sensitive and specific 14. Surrogate motherhood as a medical treatment procedure for women's infertility. Science.gov (United States) Jovic, Olga S 2011-03-01 This work examines the consequences of surrogate motherhood as a process of assisted procreation, which represents a way to parenthood in cases when parenthood cannot be realized naturally. Surrogate motherhood is a process in which a woman (the surrogate mother) agrees to carry a pregnancy with the intent to give the child, after the birth, to the couple with whom she has made a contract on surrogate maternity. This process of conception and birth makes the child's origin on its mother's side hard to determine, because the genetic and gestational roles are divided between two women. Surrogate motherhood appears in two forms, depending on the existence or non-existence of a genetic link between the surrogate mother and the child she gives birth to. There are gestational (full) and genetic (partial) surrogates, each with different modalities and legal and ethical implications. In Serbia, the Infertility Treatment and Bio-medically Assisted Procreation Act of 2009 explicitly forbids surrogate motherhood, despite the fact that an infertile couple decides to use it, as a rule, only after having tried all other treatment procedures, in cases when there is a diagnosis but the conventional treatment applied has not produced the desired results. Given that no one has the right to ignore the suffering of people who cannot procreate naturally, medical practice and legal science in our country plead for the formulation of a legal framework in which to apply surrogate motherhood as an infertility treatment, under particular conditions. 15. Cerebrospinal fluid biomarkers in trials for Alzheimer and Parkinson diseases.
Science.gov (United States) Lleó, Alberto; Cavedo, Enrica; Parnetti, Lucilla; Vanderstichele, Hugo; Herukka, Sanna Kaisa; Andreasen, Niels; Ghidoni, Roberta; Lewczuk, Piotr; Jeromin, Andreas; Winblad, Bengt; Tsolaki, Magda; Mroczko, Barbara; Visser, Pieter Jelle; Santana, Isabel; Svenningsson, Per; Blennow, Kaj; Aarsland, Dag; Molinuevo, José Luis; Zetterberg, Henrik; Mollenhauer, Brit 2015-01-01 Alzheimer disease (AD) and Parkinson disease (PD) are the most common neurodegenerative disorders. For both diseases, early intervention is thought to be essential to the success of disease-modifying treatments. Cerebrospinal fluid (CSF) can reflect some of the pathophysiological changes that occur in the brain, and the number of CSF biomarkers under investigation in neurodegenerative conditions has grown rapidly in the past 20 years. In AD, CSF biomarkers are increasingly being used in clinical practice, and have been incorporated into the majority of clinical trials to demonstrate target engagement, to enrich or stratify patient groups, and to find evidence of disease modification. In PD, CSF biomarkers have not yet reached the clinic, but are being studied in patients with parkinsonism, and are being used in clinical trials either to monitor progression or to demonstrate target engagement and downstream effects of drugs. CSF biomarkers might also serve as surrogate markers of clinical benefit after a specific therapeutic intervention, although additional data are required. It is anticipated that CSF biomarkers will have an important role in trials aimed at disease modification in the near future. In this Review, we provide an overview of CSF biomarkers in AD and PD, and discuss their role in clinical trials. 16. Mother-daughter in vitro fertilization triplet surrogate pregnancy. Science.gov (United States) Michelow, M C; Bernstein, J; Jacobson, M J; McLoughlin, J L; Rubenstein, D; Hacking, A I; Preddy, S; Van der Wat, I J 1988-02-01 A successful triplet pregnancy has been established in a surrogate gestational mother following the transfer of five embryos fertilized in vitro. The oocytes were donated by her biological daughter, and the sperm obtained from the daughter's husband. The daughter's infertility followed a total abdominal hysterectomy performed for a postpartum hemorrhage as a result of a placenta accreta. Synchronization of both their menstrual cycles was obtained using oral contraceptive suppression for 2 months, followed by stimulation of both the surrogate gestational mother and her daughter such that embryo transfer would occur at least 48 hr after the surrogate gestational mother's own ovulation. This case raises a number of medical, social, psychological, and ethical issues. 17. Harnessing Cerebrospinal Fluid Biomarkers in Clinical Trials for Treating Alzheimer's and Parkinson's Diseases: Potential and Challenges. Science.gov (United States) Kim, Dana; Kim, Young Sam; Shin, Dong Wun; Park, Chang Shin; Kang, Ju Hee 2016-10-01 No disease-modifying therapies (DMT) for neurodegenerative diseases (NDs) have been established, particularly for Alzheimer's disease (AD) and Parkinson's disease (PD). It is unclear why candidate drugs that successfully demonstrate therapeutic effects in animal models fail to show disease-modifying effects in clinical trials. To overcome this hurdle, patients with homogeneous pathologies should be detected as early as possible. 
The early detection of AD patients using sufficiently tested biomarkers could demonstrate the potential usefulness of combining biomarkers with clinical measures as a diagnostic tool. Cerebrospinal fluid (CSF) biomarkers for NDs are being incorporated in clinical trials designed with the aim of detecting patients earlier, evaluating target engagement, collecting homogeneous patients, facilitating prevention trials, and testing the potential of surrogate markers relative to clinical measures. In this review we summarize the latest information on CSF biomarkers in NDs, particularly AD and PD, and their use in clinical trials. The large number of issues related to CSF biomarker measurements and applications has resulted in relatively few clinical trials on CSF biomarkers being conducted. However, the available CSF biomarker data obtained in clinical trials support the advantages of incorporating CSF biomarkers in clinical trials, even though the data have mostly been obtained in AD trials. We describe the current issues with and ongoing efforts for the use of CSF biomarkers in clinical trials and the plans to harness CSF biomarkers for the development of DMT and clinical routines. This effort requires nationwide, global, and multidisciplinary efforts in academia, industry, and regulatory agencies to facilitate a new era. 18. Harnessing Cerebrospinal Fluid Biomarkers in Clinical Trials for Treating Alzheimer's and Parkinson's Diseases: Potential and Challenges Science.gov (United States) Kim, Dana; Kim, Young-Sam; Shin, Dong Wun; Park, Chang-Shin 2016-01-01 No disease-modifying therapies (DMT) for neurodegenerative diseases (NDs) have been established, particularly for Alzheimer's disease (AD) and Parkinson's disease (PD). It is unclear why candidate drugs that successfully demonstrate therapeutic effects in animal models fail to show disease-modifying effects in clinical trials. To overcome this hurdle, patients with homogeneous pathologies should be detected as early as possible. The early detection of AD patients using sufficiently tested biomarkers could demonstrate the potential usefulness of combining biomarkers with clinical measures as a diagnostic tool. Cerebrospinal fluid (CSF) biomarkers for NDs are being incorporated in clinical trials designed with the aim of detecting patients earlier, evaluating target engagement, collecting homogeneous patients, facilitating prevention trials, and testing the potential of surrogate markers relative to clinical measures. In this review we summarize the latest information on CSF biomarkers in NDs, particularly AD and PD, and their use in clinical trials. The large number of issues related to CSF biomarker measurements and applications has resulted in relatively few clinical trials on CSF biomarkers being conducted. However, the available CSF biomarker data obtained in clinical trials support the advantages of incorporating CSF biomarkers in clinical trials, even though the data have mostly been obtained in AD trials. We describe the current issues with and ongoing efforts for the use of CSF biomarkers in clinical trials and the plans to harness CSF biomarkers for the development of DMT and clinical routines. This effort requires nationwide, global, and multidisciplinary efforts in academia, industry, and regulatory agencies to facilitate a new era. 19. 
New sepsis biomarkers Directory of Open Access Journals (Sweden) Dolores Limongi 2016-06-01 Full Text Available Sepsis remains a leading cause of death in the intensive care units and in all age groups worldwide. Early recognition and diagnosis are key to achieving improved outcomes. Therefore, novel biomarkers that might better inform clinicians treating such patients are surely needed. The main attributes of successful biomarkers would be high sensitivity, specificity, possibility of bedside monitoring and financial accessibility. A panel of sepsis biomarkers, used along with current laboratory tests to facilitate earlier diagnosis, timely treatment and improved outcome, may be more effective than single biomarkers. In this review, we summarize the most recent advances on sepsis biomarkers evaluated in clinical and experimental studies. 20. New sepsis biomarkers Institute of Scientific and Technical Information of China (English) Dolores Limongi; Cartesio D'Agostini; Marco Ciotti 2016-01-01 Sepsis remains a leading cause of death in the intensive care units and in all age groups worldwide. Early recognition and diagnosis are key to achieving improved outcomes. Therefore, novel biomarkers that might better inform clinicians treating such patients are surely needed. The main attributes of successful biomarkers would be high sensitivity, specificity, possibility of bedside monitoring and financial accessibility. A panel of sepsis biomarkers, used along with current laboratory tests to facilitate earlier diagnosis, timely treatment and improved outcome, may be more effective than single biomarkers. In this review, we summarize the most recent advances on sepsis biomarkers evaluated in clinical and experimental studies. 1. New sepsis biomarkers Institute of Scientific and Technical Information of China (English) Dolores Limongi; Cartesio D'Agostini; Marco Ciotti 2016-01-01 Sepsis remains a leading cause of death in the intensive care units and in all age groups worldwide. Early recognition and diagnosis are key to achieving improved outcomes. Therefore, novel biomarkers that might better inform clinicians treating such patients are surely needed. The main attributes of successful biomarkers would be high sensitivity, specificity, possibility of bedside monitoring and financial accessibility. A panel of sepsis biomarkers, used along with current laboratory tests to facilitate earlier diagnosis, timely treatment and improved outcome, may be more effective than single biomarkers. In this review, we summarize the most recent advances on sepsis biomarkers evaluated in clinical and experimental studies. 2. SURROGATE MOTHER DALAM PERSPEKTIF HUKUM PIDANA INDONESIA [Surrogate motherhood in the perspective of Indonesian criminal law] Directory of Open Access Journals (Sweden) Mr. Muntaha 2013-04-01 Full Text Available The development of science and technology, in particular in the field of health, has recently brought both great advantages and problems to human life. An example of a technological marvel that requires not only deep legal thought but also, at the same time, a solution is the bio-medical technology of surrogacy. Surrogacy deals with the human inclination towards reproductive activity. However, it opens up legal complications, in particular with regard to the potential commission of a criminal action as well as to the question of the doctor's liability. The increasingly rapid development of science and technology in the field of health has brought a variety of benefits and problems to human life today.
3. Polynomial Chaos Surrogates for Bayesian Inference KAUST Repository Le Maitre, Olivier 2016-01-06 Bayesian inference is a popular probabilistic method for solving inverse problems, such as the identification of a field parameter in a PDE model. The inference relies on the Bayes rule to update the prior density of the sought field from observations and to derive its posterior distribution. In most cases the posterior distribution has no explicit form and has to be sampled, for instance using a Markov-Chain Monte Carlo method. In practice the prior field parameter is decomposed and truncated (e.g. by means of Karhunen-Loève decomposition) to recast the inference problem into the inference of a finite number of coordinates. Although proved effective in many situations, the Bayesian inference as sketched above faces several difficulties requiring improvements. First, sampling the posterior can be an extremely costly task, as it requires multiple resolutions of the PDE model for different values of the field parameter. Second, when the observations are not very informative, the inferred parameter field can depend strongly on its prior, which can be somewhat arbitrary. These issues have motivated the introduction of reduced models or surrogates for the (approximate) determination of the parametrized PDE solution and of hyperparameters in the description of the prior field. Our contribution focuses on recent developments in these two directions: the acceleration of posterior sampling by means of Polynomial Chaos expansions and the efficient treatment of parametrized covariance functions for the prior field. We also discuss the possibility of making the approach adaptive to further improve its efficiency.
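A minimal sketch of the acceleration idea in the item above: a polynomial chaos expansion is fitted offline to a stand-in forward model, then used in place of the expensive model inside a Metropolis sampler. The one-parameter forward model, Gaussian prior, noise level, and polynomial order below are illustrative assumptions, not details from the paper.

```python
# Sketch: accelerating Bayesian inference with a polynomial chaos (PC) surrogate.
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite basis

rng = np.random.default_rng(0)

def forward_model(theta):
    """Hypothetical stand-in for an expensive PDE solve."""
    return np.sin(theta) + 0.1 * theta**2

# --- Offline: fit PC coefficients by least squares on a small design ---
order = 6
train = rng.standard_normal(200)          # nodes drawn from the Gaussian prior
V = He.hermevander(train, order)          # Vandermonde matrix in the Hermite basis
coef, *_ = np.linalg.lstsq(V, forward_model(train), rcond=None)

def surrogate(theta):
    return He.hermeval(theta, coef)       # cheap to evaluate

# --- Online: Metropolis sampling of the posterior using the surrogate ---
y_obs, sigma = 0.9, 0.05                  # invented observation and noise level
def log_post(theta):
    return -0.5 * theta**2 - 0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2

theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
print("posterior mean ~", np.mean(chain[2000:]))
```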
4. A Large-Scale Study of Surrogate Physicality and Gesturing on Human–Surrogate Interactions in a Public Space Directory of Open Access Journals (Sweden) Kangsoo Kim 2017-07-01 Full Text Available Technological human surrogates, including robotic and virtual humans, have been popularly used in various scenarios, including training, education, and entertainment. Prior research has investigated the effects of a surrogate's physicality and gesturing on human perceptions and the social influence of the surrogate. However, those studies were carried out in research laboratories, where the participants were aware that it was an experiment, and the participant demographics were typically relatively narrow (e.g., college students). In this paper, we describe and share results from a large-scale exploratory user study involving 7,685 people in a public space, where they were unaware of the experimental nature of the setting, to investigate the effects of surrogate physicality and gesturing on their behavior during human–surrogate interactions. We evaluate human behaviors using several variables, such as proactivity, reactivity, and proximity. We have identified several interesting phenomena that could lead to hypotheses for future hypothesis-based studies. Based on the measurements of the variables, we believe people are more likely to be engaged in a human–surrogate interaction when the surrogate is physically present, but movements and gesturing with its body parts have not shown the expected benefits for interaction engagement. Regarding the demographics of the people in the study, we found higher overall engagement for females than males, and higher reactivity for younger than older people. We discuss implications for practitioners aiming to design a technological surrogate that will directly interact with real humans.

5. Testing of the OMERACT 8 draft validation criteria for a soluble biomarker reflecting structural damage in rheumatoid arthritis: a systematic literature search on 5 candidate biomarkers DEFF Research Database (Denmark) Syversen, Silje W; Landewe, Robert; van der Heijde, Désirée 2009-01-01 OBJECTIVE: To test the OMERACT 8 draft validation criteria for soluble biomarkers by assessing the strength of literature evidence in support of 5 candidate biomarkers. METHODS: A systematic literature search was conducted on the 5 soluble biomarkers RANKL, osteoprotegerin (OPG), matrix metalloprotease (MMP-3), and urine C-telopeptide of types I and II collagen (U-CTX-I and U-CTX-II), focusing on the 14 OMERACT 8 criteria. Two electronic voting exercises were conducted to address: (1) the strength of evidence for each biomarker as reflecting structural damage according to each individual criterion, and the importance of each individual criterion; (2) the overall strength of evidence in support of each of the 5 candidate biomarkers as reflecting structural damage endpoints in rheumatoid arthritis (RA), and identification of omissions to the criteria set. RESULTS: The search identified 111 articles. The strength…

6. Carotenoid status in man: effects on biomarkers of eye, skin and cardiovascular health NARCIS (Netherlands) Broekmans, W.M.R. 2002-01-01 Observational epidemiological studies have consistently shown that a diet rich in carotenoid-containing fruit and vegetables is associated with a reduced risk of chronic diseases. Because intervention studies with hard endpoints are time-consuming and costly, the use of biomarkers cou…

7. Clinical research and methodology: What usage and what hierarchical order for secondary endpoints? Science.gov (United States) Laporte, Silvy; Diviné, Marine; Girault, Danièle 2016-02-01 In a randomised clinical trial, when the result of the primary endpoint shows a significant benefit, the secondary endpoints are scrutinised to identify additional effects of the treatment. However, this approach entails a risk of concluding that there is a benefit for one of these endpoints when no such benefit exists (inflation of the type I error risk). Two main methods are used to control the risk of drawing erroneous conclusions for secondary endpoints. The first consists of distributing the risk over several co-primary endpoints, so as to maintain an overall risk of 5%. The second is the hierarchical test procedure, which consists of first establishing a hierarchy of the endpoints, then evaluating each endpoint in succession according to this hierarchy for as long as the endpoints continue to show statistical significance. This simple method makes it possible to show the additional advantages of treatments and to identify the factors that differentiate them.
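A minimal sketch of the fixed-sequence (hierarchical) procedure described in the item above; the endpoint names and p-values are invented for illustration.

```python
# Sketch: fixed-sequence (hierarchical) testing of secondary endpoints.
# Endpoints are tested in a prespecified order, each at full alpha; testing
# stops at the first non-significant result, which controls the familywise
# type I error rate without splitting alpha across endpoints.
def hierarchical_test(p_values_in_order, alpha=0.05):
    """p_values_in_order: (name, p) pairs sorted by the prespecified hierarchy."""
    significant = []
    for name, p in p_values_in_order:
        if p <= alpha:
            significant.append(name)   # claim benefit and move down the list
        else:
            break                      # stop: no further endpoints may be claimed
    return significant

# Hypothetical trial: primary endpoint first, then the secondary endpoints.
endpoints = [("overall survival", 0.012),
             ("progression-free survival", 0.030),
             ("quality of life", 0.080),    # first failure: testing stops here
             ("response rate", 0.010)]      # not claimed despite p < alpha
print(hierarchical_test(endpoints))
# -> ['overall survival', 'progression-free survival']
```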
8. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer Science.gov (United States) Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E. 2014-08-12 Endpoint-based parallel data processing in a parallel active messaging interface (PAMI) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI; including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI, including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

9. AN ENDPOINT ESTIMATE FOR MAXIMAL MULTILINEAR SINGULAR INTEGRAL OPERATORS Institute of Scientific and Technical Information of China (English) 2007-01-01 A weak type endpoint estimate for the maximal multilinear singular integral operator
$$T_A^*f(x)=\sup_{\varepsilon>0}\left|\int_{|x-y|>\varepsilon}\frac{\Omega(x-y)}{|x-y|^{n+1}}\bigl(A(x)-A(y)-\nabla A(y)(x-y)\bigr)f(y)\,dy\right|$$
is established, where $\Omega$ is homogeneous of degree zero, integrable on the unit sphere and has vanishing moment of order one, and $A$ has derivatives of order one in $\mathrm{BMO}(\mathbb{R}^n)$. A regularity condition on $\Omega$ which implies an $L\log L$ type estimate for $T_A^*$ is given.

10. Establishing maintenance intervals based on measurement reliability of engineering endpoints. Science.gov (United States) James, P J 2000-01-01 Methods developed by the metrological community and principles used by the research community were integrated to provide a basis for a periodic maintenance interval analysis system. Engineering endpoints are used as measurement attributes on which to base two primary quality indicators: accuracy and reliability. Also key to establishing appropriate maintenance intervals is the ability to recognize two primary failure modes: random failure and time-related failure. The primary objective of the maintenance program is to avert predictable and preventable device failure, and understanding time-related failures enables service personnel to set intervals accordingly.
12. Two-temperature LATE-PCR endpoint genotyping Science.gov (United States) Sanchez, J Aquiles; Abramowitz, Jessica D; Salk, Jesse J; Reis, Arthur H; Rice, John E; Pierce, Kenneth E; Wangh, Lawrence J 2006-01-01 Background In conventional PCR, total amplicon yield becomes independent of starting template number as amplification reaches plateau and varies significantly among replicate reactions. This paper describes a strategy for reconfiguring PCR so that the signal intensity of a single fluorescent detection probe after PCR thermal cycling reflects genomic composition. The resulting method corrects for product yield variations among replicate amplification reactions, permits resolution of homozygous and heterozygous genotypes based on endpoint fluorescence signal intensities, and readily identifies imbalanced allele ratios equivalent to those arising from gene/chromosomal duplications. Furthermore, the use of only a single colored probe for genotyping enhances the multiplex detection capacity of the assay. Results Two-Temperature LATE-PCR endpoint genotyping combines Linear-After-The-Exponential (LATE)-PCR (an advanced form of asymmetric PCR that efficiently generates single-stranded DNA) and mismatch-tolerant probes capable of detecting allele-specific targets at high temperature and total single-stranded amplicons at a lower temperature in the same reaction. The method is demonstrated here for genotyping single-nucleotide alleles of the human HEXA gene responsible for Tay-Sachs disease and for genotyping SNP alleles near the human p53 tumor suppressor gene. In each case, the final probe signals were normalized against total single-stranded DNA generated in the same reaction. Normalization reduces the coefficient of variation among replicates from 17.22% to as little as 2.78% and permits endpoint genotyping with >99.7% accuracy. These assays are robust because they are consistent over a wide range of input DNA concentrations and give the same results regardless of how many cycles of linear amplification have elapsed. The method is also sufficiently powerful to distinguish samples with a 1:1 ratio of two alleles from samples comprised of 2:1 and 1:2 ratios of the…
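The normalization step in the item above can be illustrated with a toy calculation: allele-specific probe signals are divided by the total single-stranded DNA signal from the same reaction, which shrinks replicate-to-replicate scatter. All numbers below are made up for illustration.

```python
# Sketch: endpoint-genotyping normalization and its effect on the coefficient
# of variation (CV) across replicate reactions. Values are invented.
import numpy as np

probe = np.array([1.00, 1.35, 0.80, 1.20])   # allele-specific probe fluorescence
total = np.array([1.05, 1.40, 0.82, 1.22])   # total ssDNA signal, same reactions

def cv(x):
    return 100 * np.std(x, ddof=1) / np.mean(x)

normalized = probe / total
print(f"raw CV: {cv(probe):.1f}%  normalized CV: {cv(normalized):.1f}%")
# Genotype call: compare the normalized ratio against homozygote/heterozygote
# reference windows established from control samples.
```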
13. Hepatology may have problems with putative surrogate outcome measures DEFF Research Database (Denmark) Gluud, Christian; Brok, Jesper; Gong, Yan 2007-01-01 …hepatitis C, serum bilirubin concentration following ursodeoxycholic acid or immunosuppressants for patients with primary biliary cirrhosis, and nutritional outcomes following artificial nutrition for liver patients may not be valid surrogates for morbidity or mortality. The challenge is to develop reliable…

14. Fernald Silos 1 & 2 Accelerated Waste Retrieval Program Surrogate Development Energy Technology Data Exchange (ETDEWEB) Mullen, O Dennis; Erian, Fadel F. 2002-09-01 Whitepaper describing the rationale and methodology for the development of surrogates to be used for testing retrieval and processing systems for the DOE Fernald Silos 1 & 2 wastes. One significant update/revision is expected.

15. THE SURROGATE COLONIZATION OF PALESTINE, 1917-1939 OpenAIRE Atran, Scott 1989-01-01 The "surrogate colonization" of Palestine had a foreign power giving to a nonnative group rights over land occupied by an indigenous people. It thus brought into play the complementary and conflicting agendas of three culturally distinguishable parties: British, Jews and Arabs. Each party had both "externalist" [those with no sustained practical experience of day-to-day life in Palestine] and "internalist" representatives. The surrogate idea was based on a "strategic consensus" involving each…

16. Surrogate nutrition markers, malnutrition, and adequacy of nutrition support. Science.gov (United States) Seres, David S 2005-06-01 Surrogate nutrition markers are used to assess adequacy of nourishment and to define malnutrition, despite evidence that fails to link nourishment, surrogate markers, and outcomes. Markers such as serum levels of albumin, prealbumin, transferrin, and IGF-1, delayed hypersensitivity, and total lymphocyte count may be valid for helping to stratify risk. However, it is not appropriate to consider these as markers of adequacy of nourishment in the sick patient.

17. Emotional experiences in surrogate mothers: A qualitative study OpenAIRE Hoda Ahmari Tehran; Shohreh Tashi; Nahid Mehran; Narges Eskandari; Tahmineh Dadkhah Tehrani 2014-01-01 Background: Surrogacy is one of the newer techniques of assisted reproduction technology, in which a woman carries and bears a child for another woman. In Iran, many Shia clerics and jurists have considered it permissible, so there is no religious prohibition against it. In addition to the risk of physical complications for complete surrogate mothers, the possibility of psychological complications resulting from emotional attachment to a living creature in the surrogate mother as another injury requires co…

18. Human surrogate models of neuropathic pain: validity and limitations. Science.gov (United States) Binder, Andreas 2016-02-01 Human surrogate models of neuropathic pain in healthy subjects are used to study symptoms, signs, and the hypothesized underlying mechanisms. Different models are available, and different spontaneous and evoked symptoms and signs are inducible; two key questions need to be answered: are human surrogate models conceptually valid, i.e., do they share the sensory phenotype of neuropathic pain states, and are they sufficiently reliable to allow consistent translational research?
19. Magnetometer Response of Commonly Found Munitions Items and Munitions Surrogates Science.gov (United States) 2012-01-12 Predicted minimum magnetometer anomaly strength for a variety of munitions and surrogate items at a burial depth corresponding to 11x their respective diameter. The sensor is assumed to be deployed as part…

20. Biomarkers in clinical medicine. Science.gov (United States) Chen, Xiao-He; Huang, Shuwen; Kerr, David 2011-01-01 Biomarkers have been used in clinical medicine for decades. With the rise of genomics and other advances in molecular biology, biomarker studies have entered a whole new era and hold promise for the early diagnosis and effective treatment of many diseases. A biomarker is a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention (1). Biomarkers can be classified into five categories based on their application in different disease stages: 1) antecedent biomarkers to identify the risk of developing an illness, 2) screening biomarkers to screen for subclinical disease, 3) diagnostic biomarkers to recognize overt disease, 4) staging biomarkers to categorise disease severity, and 5) prognostic biomarkers to predict future disease course, including recurrence, response to therapy, and monitoring of therapeutic efficacy (1). Biomarkers can indicate a variety of health or disease characteristics, including the level or type of exposure to an environmental factor, genetic susceptibility, genetic responses to environmental exposures, markers of subclinical or clinical disease, or indicators of response to therapy. This chapter will focus on how these biomarkers have been used in preventive medicine, diagnostics, therapeutics and prognostics, as well as in public health, and on their current status in clinical practice.

1. Proteomic Approaches in Biomarker Discovery: New Perspectives in Cancer Diagnostics Directory of Open Access Journals (Sweden) Petra Hudler 2014-01-01 Full Text Available Despite remarkable progress in proteomic methods, including improved detection limits and sensitivity, these methods have not yet been established in routine clinical practice. The main limitations, which prevent their integration into clinics, are the high cost of equipment, the need for highly trained personnel, and, last but not least, the establishment of reliable and accurate protein biomarkers or panels of protein biomarkers for the detection of neoplasms. Furthermore, the complexity and heterogeneity of most solid tumours present obstacles to the discovery of specific protein signatures which could be used for early detection of cancers, for prediction of disease outcome, and for determining the response to specific therapies. However, the cancer proteome, as the end-point of pathological processes that underlie cancer development and progression, could represent an important source for the discovery of new biomarkers and molecular targets for tailored therapies.
2. Cortical plasticity as a new endpoint measurement for chronic pain Directory of Open Access Journals (Sweden) Zhuo Min 2011-07-01 Full Text Available Abstract Animal models of chronic pain are widely used to investigate basic mechanisms of chronic pain and to evaluate potential novel drugs for treating chronic pain. Among the different criteria used to measure chronic pain, behavioral responses are commonly used as the endpoint measurements. However, not all chronic pain conditions can be easily measured by behavioral responses, such as headache, phantom pain and pain related to spinal cord injury. Here I propose that cortical indexes that indicate neuronal plastic changes in pain-related cortical areas can be used as endpoint measurements for chronic pain. Such cortical indexes are not only useful for those chronic pain conditions where a suitable animal model is lacking, but also serve as additional screening methods for potential drugs to treat chronic pain in humans. These cortical indexes are activity-dependent immediate early genes, electrophysiologically identified plastic changes, and biochemical assays of signaling proteins. They can be used to evaluate novel analgesic compounds that may act at peripheral or spinal sites. I hope that these new cortical endpoint measurements will facilitate our search for new, and more effective, pain medicines, and help to reduce false lead drug targets.

3. Design and analysis of crossover trials for absorbing binary endpoints. Science.gov (United States) Nason, Martha; Follmann, Dean 2010-09-01 The crossover is a popular and efficient trial design used in the context of patient heterogeneity to assess the effect of treatments that act relatively quickly and whose benefit disappears with discontinuation. Each patient can serve as her own control, as within-individual treatment and placebo responses are compared. Conventional wisdom is that these designs are not appropriate for absorbing binary endpoints, such as death or HIV infection. We explore the use of crossover designs in the context of these absorbing binary endpoints and show that they can be more efficient than the standard parallel group design when there is heterogeneity in individuals' risks. We also introduce a new two-period design where first-period "survivors" are rerandomized for the second period. This design combines the crossover design with the parallel design and achieves some of the efficiency advantages of the crossover design while ensuring that the second-period groups are comparable by randomization. We discuss the validity of the new designs and evaluate both a mixture model and a modified Mantel-Haenszel test for inference. The mixture model assumes no carryover or period effects, while the Mantel-Haenszel approach conditions out period effects. Simulations are used to compare the different designs, and an example is provided to explore practical issues in implementation.
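A rough simulation sketch of the rerandomized two-period design in the item above. Event probabilities, the frailty distribution, and the treatment effect are hypothetical choices, not the authors' settings; a real analysis would apply the mixture model or modified Mantel-Haenszel test they propose.

```python
# Sketch: toy simulation of a two-period design for an absorbing binary
# endpoint; first-period "survivors" are rerandomized for period two.
# Heterogeneity enters through a per-subject frailty multiplying the risk.
import numpy as np

rng = np.random.default_rng(1)
n, p_placebo, effect = 2000, 0.20, 0.5       # treatment halves the event probability

def simulate_period(treated, frailty):
    p = p_placebo * frailty * np.where(treated, effect, 1.0)
    return rng.uniform(size=len(frailty)) < np.clip(p, 0, 1)

frailty = rng.gamma(2.0, 0.5, size=n)        # heterogeneous individual risk
arm1 = rng.uniform(size=n) < 0.5             # period-1 randomization
event1 = simulate_period(arm1, frailty)

survivors = ~event1                          # absorbing endpoint: events drop out
arm2 = rng.uniform(size=survivors.sum()) < 0.5   # rerandomize the survivors
event2 = simulate_period(arm2, frailty[survivors])

# Pool the two periods, treating period as a stratum (Mantel-Haenszel style).
for label, (t, e) in {"period 1": (arm1, event1),
                      "period 2": (arm2, event2)}.items():
    print(label, "event rate treated %.3f vs control %.3f"
          % (e[t].mean(), e[~t].mean()))
```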
4. Critical endpoint for deconfinement in matrix and other effective models CERN Document Server 2012-01-01 We consider the position of the deconfining critical endpoint, where the first-order transition for deconfinement is washed out by the presence of massive, dynamical quarks. We use an effective matrix model, employed previously to analyze the transition in the pure glue theory. If the parameters of the pure glue theory are unaffected by the presence of dynamical quarks, and if the quarks only contribute perturbatively, then for three colors and three degenerate quark flavors this quark mass is very heavy, m_de ~ 2.5 GeV, while the critical temperature, T_de, barely changes, lying ~1% below that in the pure glue theory. The location of the deconfining critical endpoint is a sensitive test for differentiating between effective models. For example, models with a logarithmic potential for the Polyakov loop give much smaller values of the quark mass, m_de ~ 1 GeV, and a large shift in T_de, ~10% lower than in the pure glue theory.

5. Financial Surrogate Decision Making: Lessons from Applied Experimental Philosophy. Science.gov (United States) 2016-09-20 An estimated 1 in 4 elderly Americans need a surrogate to make decisions at least once in their lives. With an aging population, that number is almost certainly going to increase. This paper focuses on financial surrogate decision making. To illustrate some of the empirical and moral implications associated with financial surrogate decision making, two experiments suggest that default choice settings can predictably influence some surrogate financial decision making. Experiment 1 suggested that when making hypothetical financial decisions, surrogates tended to stay with default settings (OR = 4.37, 95% CI 1.52, 12.48). Experiment 2 replicated and extended this finding in a different context (OR = 2.27, 95% CI 1.1, 4.65). Experiment 2 also suggested that those who were more numerate were less likely to be influenced by default settings than the less numerate, but only when the decision was whether to "opt in" (p = .05). These data highlight the importance of a recent debate about "nudging." Defaults are common methods of nudging people to make desirable choices while allowing them the liberty to choose otherwise. Some of the ethics of using default settings to nudge surrogate decision makers are discussed.

6. Reliability-based design optimization with progressive surrogate models Science.gov (United States) Kanakasabai, Pugazhendhi; Dhingra, Anoop K. 2014-12-01 Reliability-based design optimization (RBDO) has traditionally been solved as a nested (bilevel) optimization problem, which is a computationally expensive approach. Unilevel and decoupled approaches for solving the RBDO problem have also been suggested in the past to improve the computational efficiency. However, these approaches also require a large number of response evaluations during optimization. To alleviate the computational burden, surrogate models have been used for reliability evaluation. These approaches involve the construction of surrogate models for the reliability computation at each point visited by the optimizer in the design variable space. In this article, a novel approach to solving the RBDO problem is proposed based on a progressive sensitivity surrogate model. The sensitivity surrogate models are built in the design variable space outside the optimization loop, using the kriging method or the moving least squares (MLS) method, based on sample points generated from low-discrepancy sampling (LDS), to estimate the most probable point of failure (MPP). During the iterative deterministic optimization, the MPP is estimated from the surrogate model for each design point visited by the optimizer. The surrogate sensitivity model is also progressively updated for each new iteration of deterministic optimization by adding new points and their responses. Four example problems are presented, showing the relative merits of the kriging and MLS approaches and the overall accuracy and improved efficiency of the proposed approach.
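A minimal sketch of the progressive-surrogate idea in the item above, using a Gaussian-process (kriging) model that is refitted as the optimizer visits new designs. The limit-state function, bounds, and visited points are invented stand-ins; a real RBDO loop would search the surrogate for the MPP at each step.

```python
# Sketch: progressive kriging surrogate for RBDO. The surrogate of a
# limit-state function g() is fitted on a low-discrepancy (Sobol) design
# and updated with the response at each new point the optimizer visits.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def g(x):                                   # hypothetical limit state: failure when g < 0
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

sampler = qmc.Sobol(d=2, seed=0)            # low-discrepancy sample points
X = qmc.scale(sampler.random_base2(m=5), [-3, -3], [3, 3])
y = g(X)
gp = GaussianProcessRegressor().fit(X, y)

for it in range(3):                         # outer deterministic optimization loop
    x_new = np.array([[1.0 - 0.2 * it, 0.5]])   # design point visited by the optimizer
    # an MPP search on the cheap surrogate would go here; we just query it
    print("iter", it, "surrogate g:", gp.predict(x_new))
    X = np.vstack([X, x_new]); y = np.append(y, g(x_new))
    gp.fit(X, y)                            # progressive update with the new response
```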
7. Cystatin C: a candidate biomarker for amyotrophic lateral sclerosis. Directory of Open Access Journals (Sweden) Meghan E Wilson Full Text Available Amyotrophic lateral sclerosis (ALS) is a fatal neurologic disease characterized by progressive motor neuron degeneration. Clinical disease management is hindered by both a lengthy diagnostic process and the absence of effective treatments. Reliable panels of diagnostic, surrogate, and prognostic biomarkers are needed to accelerate disease diagnosis and expedite drug development. The cysteine protease inhibitor cystatin C has recently gained interest as a candidate diagnostic biomarker for ALS, but further studies are required to fully characterize its biomarker utility. We used a quantitative enzyme-linked immunosorbent assay (ELISA) to assess initial and longitudinal cerebrospinal fluid (CSF) and plasma cystatin C levels in 104 ALS patients and controls. Cystatin C levels in ALS patients were significantly elevated in plasma and reduced in CSF compared to healthy controls, but did not differ significantly from neurologic disease controls. In addition, the direction of longitudinal change in CSF cystatin C levels correlated with the rate of ALS disease progression, and initial CSF cystatin C levels were predictive of patient survival, suggesting that cystatin C may function as a surrogate marker of disease progression and survival. These data verify prior results for reduced cystatin C levels in the CSF of ALS patients, identify increased cystatin C levels in the plasma of ALS patients, and reveal correlations between CSF cystatin C levels and both ALS disease progression and patient survival.

8. SITDEM: A simulation tool for disease/endpoint models of association studies based on single nucleotide polymorphism genotypes Science.gov (United States) Oh, Jung Hun; Deasy, Joseph O. 2016-01-01 The association analysis between single nucleotide polymorphisms (SNPs) and disease or endpoint in genome-wide association studies (GWAS) has been considered a powerful strategy for investigating genetic susceptibility and for identifying significant biomarkers. Statistical analysis approaches with simulated data have been widely used to evaluate experimental designs and performance measurements. In recent years, a number of authors have proposed methods for the simulation of biological data in the genomic field. However, these methods use large-scale genomic data as a reference to simulate experiments, which may limit their use in cases where the data for specific studies are not available. Few methods use experimental results or observed parameters for simulation. The goal of this study was to develop a Web application called SITDEM to simulate disease/endpoint models with three different approaches based only on parameters observed in GWAS. In our simulation, a key task is to compute the probability of genotypes; based on that, we randomly sample simulation data. Simulation results are shown as a function of p-value against odds ratio or relative risk of a SNP in dominant and recessive models. Our simulation results show the potential of SITDEM for simulating genotype data. SITDEM could be particularly useful for investigating the relationship among observed parameters for target SNPs and for estimating the number of variables (SNPs) required to produce significant p-values in multiple comparisons. The proposed simulation tool is freely available at http://www.snpmodel.com. PMID:24480173
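In the spirit of the item above, genotype data for an association test can be simulated from observed parameters alone (minor allele frequency, odds ratio, genetic model). This is a hedged illustration, not SITDEM's actual algorithm; all parameter values are invented.

```python
# Sketch: simulate case-control SNP genotypes under a dominant model from
# observed parameters only, then test the association.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
maf, odds_ratio, n = 0.3, 1.8, 2000           # hypothetical observed parameters

# Hardy-Weinberg genotype probabilities in controls (0/1/2 risk alleles).
p_controls = np.array([(1 - maf) ** 2, 2 * maf * (1 - maf), maf ** 2])
# Dominant model: carrying >=1 risk allele multiplies the odds of disease.
odds = np.where(np.arange(3) > 0, odds_ratio, 1.0)
p_cases = p_controls * odds
p_cases /= p_cases.sum()

controls = rng.choice(3, size=n, p=p_controls)
cases = rng.choice(3, size=n, p=p_cases)

table = np.array([[np.sum(cases > 0), np.sum(cases == 0)],
                  [np.sum(controls > 0), np.sum(controls == 0)]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"dominant-model test: chi2={chi2:.1f}, p={p:.2e}")
```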
9. Sepsis biomarkers: a review Science.gov (United States) 2010-01-01 Introduction Biomarkers can be useful for identifying or ruling out sepsis, identifying patients who may benefit from specific therapies, or assessing the response to therapy. Methods We used an electronic search of the PubMed database using the key words "sepsis" and "biomarker" to identify clinical and experimental studies which evaluated a biomarker in sepsis. Results The search retrieved 3370 references covering 178 different biomarkers. Conclusions Many biomarkers have been evaluated for use in sepsis. Most of the biomarkers had been tested clinically, primarily as prognostic markers in sepsis; relatively few have been used for diagnosis. None has sufficient specificity or sensitivity to be routinely employed in clinical practice. PCT and CRP have been most widely used, but even these have limited ability to distinguish sepsis from other inflammatory conditions or to predict outcome. PMID:20144219

10. Biomarkers in sarcoidosis. Science.gov (United States) Chopra, Amit; Kalkanis, Alexandros; Judson, Marc A 2016-11-01 Numerous biomarkers have been evaluated for the diagnosis, assessment of disease activity, prognosis, and response to treatment in sarcoidosis. In this report, we discuss the clinical and research utility of several biomarkers used to evaluate sarcoidosis. Areas covered: The sarcoidosis biomarkers discussed include serologic tests, imaging studies, identification of inflammatory cells, and genetic analyses. Literature was obtained from medical databases including PubMed and Web of Science. Expert commentary: Most of the biomarkers examined in sarcoidosis are not adequately specific or sensitive to be used in isolation to make clinical decisions. However, several sarcoidosis biomarkers have an important role in the clinical management of sarcoidosis when they are coupled with clinical data, including the results of other biomarkers.

11. Beyond multi-fractals: surrogate time series and fields Science.gov (United States) Venema, V.; Simmer, C. 2007-12-01 Most natural complex systems are characterised by variability on a large range of temporal and spatial scales. The two main methodologies for generating such structures are Fourier/FARIMA-based algorithms and multifractal methods. The former is restricted to Gaussian data, whereas the latter requires the structure to be self-similar. This work presents so-called surrogate data as an alternative that works with any (empirical) distribution and power spectrum. The best-known surrogate algorithm is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm. We have studied six different geophysical time series (two clouds, runoff of a small and a large river, temperature and rain) and their surrogates. The power spectra, and consequently the second-order structure functions, were replicated accurately. Even the fourth-order structure function was reproduced more accurately by the surrogates than would be possible with a fractal method, because the measured structure deviated too strongly from fractal scaling. Only in the case of the daily rain sums could a fractal method have been more accurate. Just as Fourier and multifractal methods, the current surrogates are not able to model the asymmetric increment distributions observed for runoff, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found differences in the structure functions on small scales. Surrogate methods are especially valuable for empirical studies, because the time series and fields that are generated are able to mimic measured variables accurately. Our main application is radiative transfer through structured clouds. Like many geophysical fields, clouds can only be sampled sparsely, e.g. with in-situ airborne instruments. However, for radiative transfer calculations we need full 3-dimensional cloud fields. A first study relates the measured properties of the cloud droplets and the radiative properties of the cloud field by generating surrogate cloud…
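The core tool of the item above, the IAAFT algorithm, is compact enough to sketch: the surrogate alternately takes on the measured power spectrum and the measured amplitude distribution until both are approximately satisfied. The toy input series below is arbitrary.

```python
# Sketch: iterative amplitude adjusted Fourier transform (IAAFT) surrogates.
# The result shares the power spectrum and the empirical value distribution
# of the measured series, but with randomized phases.
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    sorted_x = np.sort(x)                    # target amplitude distribution
    target_amp = np.abs(np.fft.rfft(x))      # target Fourier amplitudes
    s = rng.permutation(x)                   # start from a random shuffle
    for _ in range(n_iter):
        # impose the target power spectrum, keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=len(x))
        # impose the target value distribution by rank ordering
        s = sorted_x[np.argsort(np.argsort(s))]
    return s

x = np.cumsum(np.random.default_rng(3).standard_normal(1024))  # toy "measurement"
s = iaaft(x)
print(np.allclose(np.sort(s), np.sort(x)))   # identical value distribution: True
```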
12. Biomarkers for Parkinson's disease. Science.gov (United States) Sherer, Todd B 2011-04-20 Biomarkers for detecting the early stages of Parkinson's disease (PD) could accelerate the development of new treatments. Such biomarkers could be used to identify individuals at risk of developing PD, to improve early diagnosis, to track disease progression with precision, and to test the efficacy of new treatments. Although some progress has been made, there are many challenges associated with developing biomarkers for detecting PD in its earliest stages.

13. On the application of an environmental radiological assessment system to an anthropomorphic surrogate. Science.gov (United States) Brown, Justin E; Hosseini, Ali; Dowdall, Mark 2014-01-01 Recent developments have seen the expansion of the system of radiological protection for humans to one including protection of the environment against detrimental effects of radiation exposure, although a fully developed framework for the integration of human and ecological risk assessment for radionuclides is only at an early stage. In the context of integration, significant differences exist between assessment methodologies for humans and the environment in terms of transfer, exposure, and dosimetry. The aim of this study was to explore possible implications of the simplifications made within the system of environmental radiological protection in terms of the efficacy and robustness of dose-rate predictions. A comparison was conducted between a human radiological assessment and an environmental radiological assessment for an anthropomorphic surrogate; the results produced by the environmental and the human-oriented risk assessment systems were critically compared and contrasted. The adopted approach split the calculations into several parts, these being 1) physical transfer in an ecosystem, 2) transfer to humans, 3) internal doses to humans, and 4) external doses to humans. The calculations were carried out using both a human radiological assessment and an ecological risk assessment system for the same surrogate. The results of this comparison provided indications as to where the 2 systems are amenable to possible integration and where such integration may prove difficult. Initial-stage transport models seem an obvious component amenable to integration, although complete integration is arguably unattainable, as the differences between endpoints mean that the relevant outputs from the models will not be the same. For the transfer and dosimetry components of the 2 typical methodologies, it seems that the efficacy of the environmental system is radionuclide-dependent, the predictions given by the environmental system for 90Sr and 60Co being…
14. Computation of eigenfunctions and eigenvalues for the Sturm-Liouville problem with Dirichlet boundary conditions at the left endpoint and Neumann conditions at the right endpoint Science.gov (United States) Khapaev, M. M.; Khapaeva, T. M. 2016-10-01 A functional-based variational method is proposed for finding the eigenfunctions and eigenvalues in the Sturm-Liouville problem with Dirichlet boundary conditions at the left endpoint and Neumann conditions at the right endpoint. Computations are performed for three potentials: sin((x−π)²/π), cos(4x), and a high non-isosceles triangle.
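As a cross-check on the item above, the same eigenvalue problem can be approximated by finite differences (not the paper's variational method); a ghost point enforces the Neumann condition at the right endpoint. The interval and grid size are assumptions.

```python
# Sketch: finite-difference eigenpairs for -u'' + V(x) u = lambda u on [0, pi],
# with u(0) = 0 (Dirichlet, left) and u'(pi) = 0 (Neumann, right).
import numpy as np

N = 500
x, h = np.linspace(0.0, np.pi, N + 2, retstep=True)
V = np.cos(4 * x)                          # one of the three potentials above

# Unknowns: u at the interior nodes and at the right endpoint. A ghost point
# mirrored across x = pi enforces the Neumann condition there.
A = (np.diag(2.0 / h**2 + V[1:])
     + np.diag(-np.ones(N) / h**2, 1)
     + np.diag(-np.ones(N) / h**2, -1))
A[-1, -2] = -2.0 / h**2                    # mirrored neighbor at the boundary

evals = np.sort(np.linalg.eigvals(A).real)  # matrix is similar to a symmetric one
print("lowest eigenvalues:", np.round(evals[:4], 4))
# For V = 0 the exact values are (k - 1/2)^2 = 0.25, 2.25, 6.25, ...
```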
15. Surrogate modeling of deformable joint contact using artificial neural networks Science.gov (United States) Eskinazi, Ilan; Fregly, Benjamin J. 2016-01-01 Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse-grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse-grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. PMID:26220591

16. Versatile Endpoint Storage Security with Trusted Integrity Modules DEFF Research Database (Denmark) Gonzalez, Javier; Bonnet, Philippe 2014-01-01 …Net), or because they are constrained to specific hardware/software combinations (e.g., McAfee's DeepSafe). In this paper, we propose a solution for personal devices equipped with a Trusted Execution Environment and a Secure Element. We propose Trusted Integrity Modules, separated by hardware from the operating system and applications, that guarantee the durability, confidentiality and integrity of a configurable subset of the filesystem data and metadata. While we detail our design with the Linux virtual file system, we expect that our results can be applied to a range of different file systems. As Trusted Execution Environments also become available in Cloud environments, we envisage that Trusted Integrity Modules could constitute the solution of choice for endpoint storage security on both clients and servers…

17. Biodegradation of naphthenic acid surrogates by axenic cultures. Science.gov (United States) Yue, Siqing; Ramsay, Bruce A; Ramsay, Juliana A 2015-07-01 This is the first study to report that bacteria from the genera Ochrobactrum, Brevundimonas and Bacillus can be isolated by growth on naphthenic acids (NAs) extracted from oil sands process water (OSPW). These pure cultures were screened for their ability to use a range of aliphatic, cyclic and aromatic NA surrogates in 96-well microtiter plates, using water-soluble tetrazolium redox dyes (Biolog Redox Dye H) as the indicator of metabolic activity. Of the three cultures, Ochrobactrum showed the most metabolic activity on the widest range of NA surrogates. Brevundimonas, and especially Ochrobactrum, had higher metabolic activity on polycyclic aromatic compounds than on other classes of NA surrogates. Bacillus also oxidized a wide range of NA surrogates, but not as well as Ochrobactrum. Using this method to characterize NA utilisation, one can identify which NAs or NA classes in OSPW are more readily degraded. Since aromatic NAs have been shown to have an estrogenic effect, and polycyclic monoaromatic compounds have been suggested to pose the greatest environmental threat among the NAs, these bacterial genera may play an important role in the detoxification of OSPW. Furthermore, this study demonstrates that bacteria belonging to the genera Ochrobactrum and Bacillus can also degrade surrogates of tricyclic NAs.

18. Searching for Dynamical Earthquake Precursors with Surrogate Data Science.gov (United States) Lynch, J.; Revenaugh, J.; Georgopoulos, A. 2007-12-01 Surrogate data methods are resampling techniques related to the modern statistical bootstrap. The nonlinear dynamics community has promoted surrogate data as a useful tool for establishing the presence of nonlinear dynamics in experimental observations before applying more specific techniques such as nonlinear prediction. We propose to use surrogate data tests to search for evidence of transient nonlinear dynamics in seismographic data that act as a proxy for earthquake triggering mechanisms, such as fluid flow in the fault zone, failure cascades, and slow prefatory slip, that signal changes in the coupling between geological boundaries. We will analyze the vertical component of broadband seismographic data recorded at 20 Hz by the CI network of approximately 100 stations located throughout Southern California. We will focus on a period of six hours prior to seismic events of magnitude 4-5 located inside the CI network. Each seismographic record will be scanned for short, non-overlapping segments that pass a moderate stationarity criterion. We will then apply surrogate tests to each qualifying segment using three discriminating statistics: time-reversal asymmetry, delay vector variance, and zeroth-order nonlinear prediction error. We will correlate the results with known seismic activity and examine the spatial and temporal distribution of the surrogate test results for potential dynamical earthquake precursors.
19. Interim decision-making strategies in adaptive designs for population selection using time-to-event endpoints. Science.gov (United States) 2017-01-01 Adaptive designs in oncology clinical trials with interim analyses for population selection could be used in the development of targeted therapies if a predefined biomarker hypothesis exists. In this article, we consider an interim analysis using overall survival (OS), progression-free survival (PFS), or both OS and PFS, to determine whether the whole population or only the biomarker-positive population should continue into the subsequent stage of the trial, whereas the final decision is made based on OS data only. In order to increase the probability of selecting the most appropriate population at the interim analysis, we propose an interim decision-making strategy for adaptive designs with correlated endpoints that takes the magnitude of post-progression survival (PPS) into account. In our approach, the interim decision is made on the basis of predictive power, incorporating information on OS as well as PFS to supplement the incomplete OS data. Simulation studies assuming a targeted therapy demonstrated that our interim decision-making procedure performs well in terms of selecting the proper population, especially under scenarios in which PPS affects the correlation between OS and PFS.

20. Respiratory Toxicity Biomarkers Science.gov (United States) The advancement of high-throughput genomic, proteomic and metabolomic techniques has accelerated the pace of lung biomarker discovery. Recent growth in the discovery of new lung toxicity/disease biomarkers has led to significant advances in our understanding of pathological proce…

1. Transcutaneous oxygen pressure as a surrogate index of lower limb amputation. Science.gov (United States) Nishio, Hiroomi; Minakata, Kenji; Kawaguchi, Atsushi; Kumagai, Motoyuki; Ikeda, Takafumi; Shimizu, Akira; Yokode, Masayuki; Morita, Satoshi; Sakata, Ryuzo 2016-12-01 A large number of clinical trials of therapeutic angiogenesis in patients with critical limb ischemia have been conducted in recent years. However, limb amputation, which is used as a primary endpoint in such studies, is not often required in Japan, which can make it difficult to carry out related clinical trials. Transcutaneous oxygen pressure (TcPO2) is widely used to evaluate the severity of limb ischemia, to decide the level of amputation, and to predict wound healing after limb amputation. The aim of the present study was to elucidate whether TcPO2 can serve as a surrogate index of limb ischemia, and to define appropriate cutoff values for wound healing after limb amputation using meta-analysis. A computer search was performed to identify studies describing the association between TcPO2 and limb ischemic events. From these, studies focused on wound healing after limb amputation were combined and analyzed. Eleven studies were identified for inclusion in this analysis. The analysis demonstrated that a TcPO2 of 20 mmHg is a valid cutoff value for limb amputation and that a TcPO2 of 30 mmHg is an appropriate value for wound healing after limb amputation.
2. Biomarkers of Reflux Disease. Science.gov (United States) Kia, Leila; Pandolfino, John E; Kahrilas, Peter J 2016-06-01 Gastroesophageal reflux disease (GERD) encompasses an array of disorders unified by the reflux of gastric contents. Because there are many potential disease manifestations, esophageal and extraesophageal, there is no single biomarker of the entire disease spectrum; a set of GERD biomarkers, each quantifying specific aspects of GERD-related pathology, might be needed. We review recent reports of biomarkers of GERD, specifically in relation to endoscopically negative esophageal disease and excluding conventional pH-impedance monitoring. We consider histopathologic biomarkers, baseline impedance, and serologic assays, and find that most markers are based on manifestations of impaired esophageal mucosal integrity, reflected in increased ionic and molecular permeability and/or destruction of tight junctions. Impaired mucosal integrity, quantified by baseline mucosal impedance, proteolytic fragments of junctional proteins, or histopathologic features, has emerged as a promising GERD biomarker.

3. Biomarkers in Parkinson's disease. Science.gov (United States) Morgan, John C; Mehta, Shyamal H; Sethi, Kapil D 2010-11-01 Biomarkers are objectively measured characteristics that are indicators of normal biological processes, pathogenic processes, or responses to therapeutic interventions. To date, clinical assessment remains the gold standard in the diagnosis of Parkinson's disease (PD), and clinical rating scales are well established as the gold standard for tracking progression of PD. Researchers have identified numerous potential biomarkers that may aid in the differential diagnosis of PD and/or in tracking disease progression. Clinical, genetic, blood and cerebrospinal fluid (proteomics, transcriptomics, metabolomics), and neuroimaging biomarkers may provide useful tools in the diagnosis of PD and in measuring disease progression and response to therapies. Some potential biomarkers are inexpensive and do not require much technical expertise, whereas others are expensive or require specialized equipment and technical skills. Many potential biomarkers in PD show great promise; however, they need to be assessed for their sensitivity and specificity over time in large and varied samples of patients with and without PD.

4. On consensus biomarker selection Directory of Open Access Journals (Sweden) Gambin Anna 2007-05-01 Full Text Available Abstract Background Recent developments in mass spectrometry technology have enabled the analysis of complex peptide mixtures. A lot of effort is currently devoted to the identification of biomarkers in human body fluids like serum or plasma, based on which new diagnostic tests for different diseases could be constructed. Various biomarker selection procedures have been exploited in recent studies. It has been noted that they often lead to different biomarker lists and, as a consequence, the patient classification may also vary. Results Here we propose a new approach to the biomarker selection problem: apply several competing feature-ranking procedures and compute a consensus list of features based on their outcomes. We validate our methods on two proteomic datasets for the diagnosis of ovarian and prostate cancer. Conclusion The proposed methodology can improve the classification results and at the same time provide a unified biomarker list for further biological examination and interpretation.
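A minimal sketch of the consensus idea in the item above: aggregate several competing feature rankings into one list, here with simple Borda counting (one of many possible aggregation rules); the feature names and rankings are fabricated.

```python
# Sketch: consensus biomarker ranking by Borda-style rank aggregation across
# several competing feature-ranking procedures.
features = ["m/z 4966", "m/z 6542", "m/z 8937", "m/z 2111"]
# Each row: one ranking procedure's ordering of the features (best first).
rankings = [["m/z 6542", "m/z 4966", "m/z 2111", "m/z 8937"],   # e.g. t-test
            ["m/z 4966", "m/z 6542", "m/z 8937", "m/z 2111"],   # e.g. random forest
            ["m/z 6542", "m/z 2111", "m/z 4966", "m/z 8937"]]   # e.g. ReliefF

scores = {f: 0 for f in features}
for ranking in rankings:
    for pos, f in enumerate(ranking):
        scores[f] += len(features) - pos      # Borda points: higher is better

consensus = sorted(features, key=lambda f: -scores[f])
print("consensus list:", consensus)
```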
5. Modulation of cigarette smoke-related end-points in mutagenesis and carcinogenesis Energy Technology Data Exchange (ETDEWEB) De Flora, Silvio; D'Agostini, Francesco; Balansky, Roumen; Camoirano, Anna; Bennicelli, Carlo; Bagnasco, Maria; Cartiglia, Cristina; Tampa, Elena; Longobardi, Maria Grazia; Lubet, Ronald A.; Izzotti, Alberto 2003-03-01 The epidemic of lung cancer and the increase of other tumours and chronic degenerative diseases associated with tobacco smoking have represented one of the most dramatic catastrophes of the 20th century. The control of this plague is one of the major challenges of preventive medicine for the next decades. The imperative goal is to refrain from smoking. However, chemoprevention by dietary and/or pharmacological agents provides a complementary strategy, which can be targeted not only at current smokers but also at former smokers and passive smokers. This article summarises the results of studies performed in our laboratories during the last 10 years, and provides new data generated in vitro, in experimental animals, and in humans. We compared the ability of 63 putative chemopreventive agents to inhibit the bacterial mutagenicity of mainstream cigarette smoke. Modulation by ethanol and the mechanisms involved were also investigated, both in vitro and in vivo. Several studies evaluated the effects of dietary chemopreventive agents on smoke-related intermediate biomarkers in various cells, tissues and organs of rodents. The investigated end-points included metabolic parameters, adducts to haemoglobin, bulky adducts to nuclear DNA, oxidative DNA damage, adducts to mitochondrial DNA, apoptosis, cytogenetic damage in alveolar macrophages, bone marrow and peripheral blood erythrocytes, proliferation markers, and histopathological alterations. The agents tested in vivo included N-acetyl-L-cysteine, 1,2-dithiole-3-thione, oltipraz, phenethyl isothiocyanate, 5,6-benzoflavone, and sulindac. We have started applying multigene expression analysis to chemoprevention research, and postulate that an optimal agent should not excessively alter per se the physiological background of gene expression but should be able to attenuate the alterations produced by cigarette smoke or other carcinogens. We are working to develop an animal model for the induction of lung tumours following exposure…

6. Love as a regulative ideal in surrogate decision making. Science.gov (United States) Stonestreet, Erica Lucast 2014-10-01 This discussion aims to give a normative theoretical basis for a "best judgment" model of surrogate decision making rooted in a regulative ideal of love. Currently, there are two basic models of surrogate decision making for incompetent patients: the "substituted judgment" model and the "best interests" model. The former draws on the value of autonomy and responds with respect; the latter draws on the value of welfare and responds with beneficence. It can be difficult to determine which of these two models is more appropriate for a given patient, and both approaches may seem inadequate for a surrogate who loves the patient. The proposed "best judgment" model effectively draws on the values incorporated in each of the traditional standards, but does so because these values are important to someone who loves the patient, since love responds to the patient as the specific person she is.
7. Surrogate modeling for initial rotational stiffness of welded tubular joints Directory of Open Access Journals (Sweden) M.R. Garifullin 2016-10-01 Full Text Available Recently, buildings and structures erected in Russia and abroad have had to comply with stringent economic requirements. Buildings should not only be reliable and safe and have a beautiful architectural design, but also meet criteria of rationality and energy efficiency. In practice, this usually means the need for additional comparative analysis in order to determine the optimal solution to the engineering task. Usually such an analysis is time-consuming and requires huge computational effort. In this regard, surrogate modeling can be an effective tool for solving such problems. This article provides a brief description of surrogate models and the basic techniques of their construction, and describes the construction process of a surrogate model for calculating the initial rotational stiffness of welded RHS joints made of high-strength steel (HSS).

8. Analysis of biomarker utility using a PBPK/PD model for carbaryl Directory of Open Access Journals (Sweden) Martin Blake Phillips 2014-11-01 There are many types of biomarkers; the two most common are biomarkers of exposure and biomarkers of effect. The utility of a biomarker for estimating exposures or predicting risks depends on the strength of the correlation between biomarker concentrations and exposure/effects. In the current study, a combined exposure and physiologically-based pharmacokinetic/pharmacodynamic (PBPK/PD) model of carbaryl was used to demonstrate the use of computational modeling for providing insight into the selection of biomarkers for different purposes. The Cumulative and Aggregate Risk Evaluation System (CARES) was used to generate exposure profiles, including magnitude and timing, for use as inputs to the PBPK/PD model. The PBPK/PD model was then used to predict blood concentrations of carbaryl and urine concentrations of its principal metabolite, 1-naphthol (1-N), as biomarkers of exposure. The PBPK/PD model also predicted acetylcholinesterase (AChE) inhibition in red blood cells (RBC) as a biomarker of effect. The correlations of these simulated biomarker concentrations with intake doses or brain AChE inhibition (as a surrogate of effects) were analyzed using a linear regression model. Results showed that 1-N in urine is a better biomarker of exposure than carbaryl in blood, and that 1-N in urine is correlated with the dose averaged over the last two days of the simulation. They also showed that RBC AChE inhibition is an appropriate biomarker of effect. This computational approach can be applied to a wide variety of chemicals to facilitate quantitative analysis of biomarker utility.

9. Disinfection byproduct regulatory compliance surrogates and bromide-associated risk. Science.gov (United States) Kolb, Chelsea; Francis, Royce A; VanBriesen, Jeanne M 2017-08-01 Natural and anthropogenic factors can alter bromide concentrations in drinking water sources. Increasing source water bromide concentrations increases the formation and alters the speciation of disinfection byproducts (DBPs) formed during drinking water treatment. Brominated DBPs are more toxic than their chlorinated analogs, and thus have a greater impact on human health. However, DBPs are regulated based on the mass sum of DBPs within a given class (e.g., trihalomethanes and haloacetic acids), not based on species-specific risk or the extent of bromine incorporation.
9. Disinfection byproduct regulatory compliance surrogates and bromide-associated risk. Science.gov (United States) Kolb, Chelsea; Francis, Royce A; VanBriesen, Jeanne M 2017-08-01 Natural and anthropogenic factors can alter bromide concentrations in drinking water sources. Increasing source water bromide concentrations increases the formation and alters the speciation of disinfection byproducts (DBPs) formed during drinking water treatment. Brominated DBPs are more toxic than their chlorinated analogs, and thus have a greater impact on human health. However, DBPs are regulated based on the mass sum of DBPs within a given class (e.g., trihalomethanes and haloacetic acids), not based on species-specific risk or extent of bromine incorporation. The regulated surrogate measures are intended to protect not only against the species they directly represent, but also against unregulated DBPs that are not routinely measured. Surrogates that do not incorporate the effects of increasing bromide may not adequately capture the human health risk associated with drinking water when source water bromide is elevated. The present study analyzes trihalomethanes (THMs), measured as TTHM, at varying source water bromide concentrations, and assesses the correlation of TTHM with brominated THM concentrations, TTHM risk, and species-specific THM concentrations and associated risk. Alternative potential surrogates are evaluated to assess their ability to capture THM risk under different source water bromide concentration conditions. The results of the present study indicate that TTHM does not adequately capture the risk of the regulated species when source water bromide concentrations are elevated, and thus would also likely be an inadequate surrogate for many unregulated brominated species. Alternative surrogate measures, including THM3 and the bromodichloromethane concentration, are more robust surrogates for species-specific THM risk at varying source water bromide concentrations. Copyright © 2017. Published by Elsevier B.V.
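The core distinction in item 9, between a mass-sum surrogate and a risk-reflecting measure, can be made concrete with a few lines of arithmetic. In the sketch below all concentrations and potency weights are invented for illustration; the study's actual species-level risk factors are not given in the abstract.

```python
# Hypothetical concentrations (ug/L) of the four regulated THM species.
thm = {
    "chloroform": 30.0,
    "bromodichloromethane": 12.0,
    "dibromochloromethane": 6.0,
    "bromoform": 2.0,
}

tthm = sum(thm.values())  # the regulated mass-sum surrogate (TTHM)

# Illustrative, made-up potency weights: brominated species weighted higher,
# reflecting the abstract's point that they are more toxic than chlorinated analogs.
potency = {
    "chloroform": 1.0,
    "bromodichloromethane": 3.0,
    "dibromochloromethane": 4.0,
    "bromoform": 5.0,
}

risk_weighted = sum(c * potency[s] for s, c in thm.items())

print(f"TTHM = {tthm:.1f} ug/L")
print(f"risk-weighted index = {risk_weighted:.1f}")
# Two waters with identical TTHM can have very different risk-weighted indices
# when elevated bromide shifts speciation toward the brominated THMs.
```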
10. Can biomarkers help us hit targets in difficult-to-treat asthma? Science.gov (United States) Fricker, Michael; Heaney, Liam G; Upham, John W 2017-04-01 Biomarkers may be a key foundation for the precision medicine of the future. In this article, we review current knowledge regarding biomarkers in difficult-to-treat asthma and their ability to guide the use of both conventional asthma therapies and novel (targeted) therapies. Biomarkers (as measured by tests including prednisolone and cortisol assays and the fractional exhaled nitric oxide (NO) suppression test) show promise in the assessment and management of non-adherence to inhaled and oral corticosteroids. Multiple markers of type 2 inflammation have been developed, including eosinophils in sputum and blood, exhaled NO, serum IgE and periostin. Although these show potential in guiding the selection of novel interventions for refractory type 2 inflammation in asthma, and in determining whether the desired response is being achieved, it is becoming clear that different biomarkers reflect distinct components of the complex type 2 inflammatory pathways. Less progress has been made in identifying biomarkers for use in difficult-to-treat asthma that is not associated with type 2 inflammation. The future is likely to see further biomarker discovery, direct measurements of individual cytokines rather than surrogates of their activity, and the increasing use of biomarkers in combination. If the promise of biomarkers is to be fulfilled, they will need to provide useful information that aids clinical decision-making, rather than being 'just another test' for clinicians to order. 11. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models Directory of Open Access Journals (Sweden) Scott E. Field 2014-07-01 Full Text Available We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant both for real-time applications and for more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced-order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm, from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m·c_fit) online operations, where c_fit denotes the fitting-function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate than the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in
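The three offline steps of item 11 (greedy reduced basis, empirical interpolation nodes, parameter-space fits) can be demonstrated at toy scale. The sketch below, under the assumption of a deliberately simple "fiducial family" of damped sinusoids standing in for expensive waveforms, reproduces the offline/online split and the O(mL + m·c_fit) online cost; basis size, polynomial degree and parameter ranges are choices made for the toy problem, not values from the paper.

```python
import numpy as np

def waveform(q, t):
    """Toy fiducial family: a damped sinusoid parameterized by q."""
    return np.exp(-0.1 * q * t) * np.sin(q * t)

t = np.linspace(0.0, 10.0, 1000)        # L time samples
train_q = np.linspace(1.0, 2.0, 200)    # training parameter values
train = np.array([waveform(q, t) for q in train_q])

def greedy_basis(train, m):
    """Offline step 1: greedy selection of m orthonormal basis waveforms."""
    basis = train[[int(np.argmax(np.linalg.norm(train, axis=1)))]]
    basis = basis / np.linalg.norm(basis)
    for _ in range(m - 1):
        proj = train @ basis.T @ basis                  # projection onto current basis
        k = int(np.argmax(np.linalg.norm(train - proj, axis=1)))
        r = train[k] - (basis @ train[k]) @ basis       # Gram-Schmidt residual
        basis = np.vstack([basis, r / np.linalg.norm(r)])
    return basis

m = 12
B = greedy_basis(train, m)              # shape (m, L), orthonormal rows

# Offline step 2: empirical interpolation -- choose m time nodes so a waveform
# is determined by its values at those nodes.
nodes = [int(np.argmax(np.abs(B[0])))]
for j in range(1, m):
    A = B[:j, nodes]                                    # (j, j) node values
    coef = np.linalg.solve(A.T, B[j, nodes])
    resid = B[j] - coef @ B[:j]                         # zero at existing nodes
    nodes.append(int(np.argmax(np.abs(resid))))

# Offline step 3: fit the parameter dependence of the waveform at each node.
deg = 12
fits = [np.polyfit(train_q, train[:, n], deg) for n in nodes]

A_full = B[:, nodes]                                    # (m, m) interpolation matrix

def surrogate(q):
    """Online evaluation: m*c_fit fit evaluations, then O(m*L) reconstruction."""
    v = np.array([np.polyval(f, q) for f in fits])      # values at the m nodes
    c = np.linalg.solve(A_full.T, v)                    # O(m^2); LU is precomputable
    return c @ B                                        # O(m*L) -- dominant online cost

q_test = 1.618                                          # an unseen parameter
h_true = waveform(q_test, t)
err = np.linalg.norm(surrogate(q_test) - h_true) / np.linalg.norm(h_true)
print(f"relative surrogate error at q={q_test}: {err:.2e}")
```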
12. Optimization using surrogate models - by the space mapping technique DEFF Research Database (Denmark) Søndergaard, Jacob 2003-01-01 mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three...... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space... 14. [Biomarkers in Alzheimer's disease]. Science.gov (United States) García-Ribas, G; López-Sendón Moreno, J L; García-Caldentey, J 2014-04-01 The new diagnostic criteria for Alzheimer's disease (AD) include brain imaging and cerebrospinal fluid (CSF) biomarkers, with the aim of increasing the certainty of whether a patient has an ongoing AD neuropathologic process or not. Three CSF biomarkers, Aβ42, total tau, and phosphorylated tau, reflect the core pathological features of AD. It is already known that these pathological processes of AD start decades before the first symptoms, so these biomarkers may provide a means of early disease detection. At least three stages of AD can be identified: preclinical AD, mild cognitive impairment due to AD, and dementia due to AD. In this review, we aim to summarize the CSF biomarker data available for each of these stages. We also review current research on blood-based biomarkers. Recent studies on healthy elderly subjects and on carriers of dominantly inherited AD mutations have also found biomarker changes that allow groups to be separated at these preclinical stages. These studies may aid in segregating populations in clinical trials and in objectively evaluating whether there are changes in the pathological processes of AD. A limit to the widespread use of CSF biomarkers, apart from the invasive nature of the procedure itself, is the high coefficient of variation for the analyses between centres. It requires strict pre-analytical and analytical procedures to make multi-centre studies and global cut-off points for the different stages of AD feasible. 15. A Few Endpoint Geodesic Restriction Estimates for Eigenfunctions Science.gov (United States) Chen, Xuehua; Sogge, Christopher D. 2014-07-01 We prove a couple of new endpoint geodesic restriction estimates for eigenfunctions. In the case of general 3-dimensional compact manifolds, after a TT* argument, simply by using the L^2-boundedness of the Hilbert transform on ℝ, we are able to improve the corresponding L^2-restriction bounds of Burq, Gérard and Tzvetkov (Duke Math J 138:445-486, 2007) and Hu (Forum Math 6:1021-1052, 2009). Also, in the case of 2-dimensional compact manifolds with nonpositive curvature, we obtain improved L^4-estimates for restrictions to geodesics which, by Hölder's inequality and interpolation, imply improved L^p-bounds for all exponents p ≥ 2. We do this by using oscillatory integral theorems of Hörmander (Ark Mat 11:1-11, 1973), Greenleaf and Seeger (J Reine Angew Math 455:35-56, 1994) and Phong and Stein (Int Math Res Notices 4:49-60, 1991), along with a simple geometric lemma (Lemma 3.2) about properties of the mixed Hessian of the Riemannian distance function restricted to pairs of geodesics in Riemannian surfaces. We are also able to get further improvements beyond our new results in three dimensions under the assumption of constant nonpositive curvature by exploiting the fact that, in this case, there are many totally geodesic submanifolds. 16. Responsiveness of endpoints in osteoporosis clinical trials--an update. Science.gov (United States) Cranney, A; Welch, V; Tugwell, P; Wells, G; Adachi, J D; McGowan, J; Shea, B 1999-01-01 As an update of our earlier paper, published as part of the Outcome Measures in Rheumatology Clinical Trials (OMERACT 3) proceedings in 1996, we surveyed the types of outcomes incorporated in recent clinical trials. A literature search was conducted on MEDLINE and Current Contents, from January 1996 to March 1998, using the search strategy recommended by the Cochrane Collaboration for the identification of randomized controlled trials (RCT). Two independent reviewers selected trials according to inclusion criteria. The same reviewers extracted data on clinical and radiographic fractures, pain, quality of life, and bone mineral density (BMD). Seventy-four RCT conducted on bone loss in postmenopausal women were identified.
Most trials incorporated biochemical markers and BMD as outcome measures. Fewer trials included vertebral fractures, pain, height, and quality of life. The responsiveness is presented in terms of the sample size needed per group to show a statistically significant difference. The most responsive outcomes were pain, BMD, and biochemical markers. The number needed to treat to prevent one vertebral fracture ranged from 13 to 54, depending on the intervention and population. Investigators should examine the characteristics of the patient population and the nature of the intervention in determining the sample size required to demonstrate a significant effect. The selection of endpoints should be based on their responsiveness, feasibility, and the importance of using standardized outcomes. Standardized outcomes greatly facilitate the synthesis of available information into systematic reviews by groups such as the Cochrane Collaboration. 17. Critical endpoint in the presence of a chiral chemical potential CERN Document Server Cui, Zhu-Fang; Lu, Ya; Roberts, Craig D; Schmidt, Sebastian M; Xu, Shu-Sheng; Zong, Hong-Shi 2016-01-01 A class of Polyakov-loop-modified Nambu--Jona-Lasinio (PNJL) models have been used to support a conjecture that numerical simulations of lattice-regularized quantum chromodynamics (QCD) defined with a chiral chemical potential can provide information about the existence and location of a critical endpoint in the QCD phase diagram drawn in the plane spanned by baryon chemical potential and temperature. That conjecture is challenged by conflicts between the model results and analyses of the same problem using simulations of lattice-regularized QCD (lQCD) and well-constrained Dyson-Schwinger equation (DSE) studies. We find the conflict is resolved in favor of the lQCD and DSE predictions when both a physically-motivated regularization is employed to suppress the contribution of high-momentum quark modes in the definition of the effective potential connected with the PNJL models and the four-fermion coupling in those models does not react strongly to changes in the mean-field that is assumed to mock-up Polyakov l... 18. Commentary: statistics for biomarkers. Science.gov (United States) Lovell, David P 2012-05-01 This short commentary discusses Biomarkers' requirements for the reporting of statistical analyses in submitted papers. It is expected that submitters will follow the general instructions of the journal, the more detailed guidance given by the International Committee of Medical Journal Editors, the specific guidelines developed by the EQUATOR network, and those of various specialist groups. Biomarkers expects that the study design and subsequent statistical analyses are clearly reported and that the data reported can be made available for independent assessment. The journal recognizes that there is continuing debate about different approaches to statistical science. Biomarkers appreciates that the field continues to develop rapidly and encourages the use of new methodologies. 19. Metabolic products as biomarkers Science.gov (United States) Melancon, M.J.; Alscher, R.; Benson, W.; Kruzynski, G.; Lee, R.F.; Sikka, H.C.; Spies, R.B.; Huggett, Robert J.; Kimerle, Richard A.; Mehrle, Paul M.; Bergman, Harold L.
1992-01-01 Ideally, endogenous biomarkers would indicate both exposure and environmental effects of toxic chemicals; however, such comprehensive biochemical and physiological indices are currently being developed and, at the present time, are unavailable for use in environmental monitoring programs. Continued work is required to validate the use of biochemical and physiological stress indices as useful components of monitoring programs. Of the compounds discussed, only phytochelatins and porphyrins are currently useful as biomarkers; however, glutathione, metallothioneins, stress ethylene, and polyamines are promising as biomarkers in environmental monitoring. 20. Drop-out from cardiovascular magnetic resonance in a randomized controlled trial of ST-elevation myocardial infarction does not cause selection bias on endpoints. Science.gov (United States) Laursen, Peter Nørkjær; Holmvang, L; Kelbæk, H; Vejlstrup, N; Engstrøm, T; Lønborg, J 2017-07-01 The extent of selection bias due to drop-out in clinical trials of ST-elevation myocardial infarction (STEMI) using cardiovascular magnetic resonance (CMR) as a surrogate endpoint is unknown. We sought to interrogate the characteristics and prognosis of patients who dropped out before acute CMR assessment compared to CMR-participants in a previously published double-blinded, placebo-controlled all-comer trial with CMR outcome as the primary endpoint. Baseline characteristics and the composite endpoint of all-cause mortality, heart failure and re-infarction after 30 days and 5 years of follow-up were assessed and compared between CMR-drop-outs and CMR-participants using the trial screening log and the Eastern Danish Heart Registry. The drop-out rate from acute CMR was 28% (n = 92). These patients had a significantly worse clinical risk profile upon admission as evaluated by the TIMI-risk score (3.7 (± 2.1) vs 4.0 (± 2.6), p = 0.043) and by left ventricular ejection fraction (43 (± 9) vs. 47 (± 10), p = 0.029). CMR drop-outs had a higher incidence of known hypertension (39% vs. 35%, p = 0.043), known diabetes (14% vs. 7%, p = 0.025), known cardiac disease (11% vs. 3%, p = 0.013) and known renal disease (5% vs. 0%, p = 0.007). However, the 30-day and 5-year composite endpoint rates were not significantly higher among the CMR drop-outs (HR 1.43, 95%-CI 0.5 to 3.97, p = 0.5; and HR 1.31, 95%-CI 0.84 to 2.05, p = 0.24, respectively). CMR-drop-outs had a higher incidence of cardiovascular risk factors at baseline and a worse clinical risk profile upon admission. However, no significant difference was observed in the clinical endpoints between the groups. 1. A combined superiority and non-inferiority approach to multiple endpoints in clinical trials. Science.gov (United States) Bloch, Daniel A; Lai, Tze Leung; Su, Zheng; Tubert-Bitter, Pascale 2007-03-15 Treatment comparisons in clinical trials often involve multiple endpoints. By making use of bootstrap tests, we develop a new non-parametric approach to multiple-endpoint testing that can be used to demonstrate non-inferiority of a new treatment for all endpoints and superiority for some endpoint when it is compared to an active control. It is shown that this approach does not incur a large multiplicity cost in sample size to achieve reasonable power and that it can incorporate complex dependencies in the multivariate distributions of all outcome variables for the two treatments via bootstrap resampling. Copyright (c) 2006 John Wiley & Sons, Ltd. 2.
A signal processing method for the friction-based endpoint detection system of a CMP process Energy Technology Data Exchange (ETDEWEB) Xu Chi; Guo Dongming; Jin Zhuji; Kang Renke, E-mail: xuchi_dut@163.com [Key Laboratory for Precision and Non-Traditional Machining Technology of Ministry of Education, Dalian University of Technology, Dalian 116024 (China) 2010-12-15 A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The signal processing method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the features of the Kalman filter innovation sequence during the CMP process. Applying the signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the signal processing method can identify the endpoint of the Cu CMP process. (semiconductor technology)
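The pipeline described in item 2 (wavelet threshold denoising, then Kalman filter innovations as the endpoint feature) can be sketched in a few dozen lines. The version below assumes the PyWavelets package for the wavelet step and uses a scalar random-walk Kalman filter; the synthetic friction trace, noise levels and detection thresholds are all invented for illustration and are not the paper's actual tuning.

```python
import numpy as np
import pywt  # PyWavelets -- assumed available

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate, finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def kalman_innovations(z, q=1e-5, r=1e-2):
    """Scalar random-walk Kalman filter; returns the innovation sequence."""
    x, p = z[0], 1.0
    innov = np.zeros_like(z)
    for k in range(1, len(z)):
        p = p + q                      # predict
        innov[k] = z[k] - x            # innovation: measurement minus prediction
        g = p / (p + r)                # Kalman gain
        x = x + g * innov[k]           # update state estimate
        p = (1.0 - g) * p
    return innov

# Synthetic motor-torque-like trace: friction drops when the Cu layer clears.
t = np.arange(3000)
rng = np.random.default_rng(1)
signal = np.where(t < 2000, 1.0, 0.7) + 0.03 * rng.normal(size=t.size)

den = wavelet_denoise(signal)
innov = kalman_innovations(den)

# Endpoint call: innovation exceeding several standard deviations of its
# early (pre-endpoint) baseline. The multiplier is illustrative only.
baseline = innov[100:1000].std()
endpoint = int(np.argmax(np.abs(innov) > 5.0 * baseline))
print(f"detected endpoint near sample {endpoint}")
```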
3. Opportunities and challenges of clinical trials in cardiology using composite primary endpoints Institute of Scientific and Technical Information of China (English) Geraldine Rauch; Bernhard Rauch; Svenja Schüler; Meinhard Kieser 2015-01-01 In clinical trials, the primary efficacy endpoint often corresponds to a so-called "composite endpoint". Composite endpoints combine several events of interest within a single outcome variable. The intention is to enlarge the expected effect size and thereby increase the power of the study. However, composite endpoints also come with serious challenges and problems. On the one hand, composite endpoints may lead to difficulties during the planning phase of a trial with respect to the sample size calculation, as the expected clinical effect of an intervention on the composite endpoint depends on the effects on its single components and their correlations. This may lead to wrong assumptions about the sample size needed. Too optimistic assumptions about the expected effect may lead to an underpowered trial, whereas a too conservatively estimated effect results in an unnecessarily high sample size. On the other hand, the interpretation of composite endpoints may be difficult, as the observed effect of the composite does not necessarily reflect the effects of the single components. Therefore the demonstration of the clinical efficacy of a new intervention by exclusively evaluating the composite endpoint may be misleading. The present paper summarizes results and recommendations of the latest research addressing the above-mentioned problems in the planning, analysis and interpretation of clinical trials with composite endpoints, thereby providing practical guidance for users. 4. Impact of weighted composite compared to traditional composite endpoints for the design of randomized controlled trials. Science.gov (United States) Bakal, Jeffrey A; Westerhout, Cynthia M; Armstrong, Paul W 2015-12-01 Composite endpoints are commonly used in cardiovascular clinical trials. When using a composite endpoint, a subject is considered to have an event when the first component endpoint occurs. The use of composite endpoints offers the ability to incorporate several clinically important endpoint events, thereby augmenting the event rate and increasing the statistical power for a given sample size. One assumption of the composite is that all component events are of equal clinical importance. This assumption is rarely satisfied given the diversity of the component endpoints included. One means of adjusting for this diversity is to weight the outcomes using severity weights determined a priori. The use of a weighted endpoint also allows for the incorporation of multiple endpoints per patient. Although weighting the outcomes lowers the effective number of events, it offers additional information that reduces the variance of the estimate. We created a series of simulation studies to examine the effect on power as the individual components of a typical composite were changed. In one study, we noted that the weighted composite was able to offer discriminative power when the component outcomes were altered, while the traditional method was not. In the other study, we noted that the weighted composite offered a similar level of power to the traditional composite when the change was driven by the more severe endpoints. 5. Blood plasma clinical-chemical parameters as biomarker endpoints for organohalogen contaminant exposure in Norwegian raptor nestlings DEFF Research Database (Denmark) Sonne, Christian; Bustnes, Jan O; Herzke, Dorte 2012-01-01 Raptors are exposed to biomagnifying and toxic organohalogenated compounds (OHCs) such as organochlorines, brominated flame retardants and perfluorinated compounds. To investigate how OHC exposure may affect biochemical pathways we collected blood plasma from Norwegian northern goshawk (n=56), golden eagle (n=12) and white-tailed eagle (n=36) nestlings during three consecutive breeding seasons. We found that blood plasma concentrations of calcium, sodium, creatinine, cholesterol, albumin, total protein, urea, inorganic phosphate, protein:creatinine, urea:creatinine and uric acid...... were also negatively correlated to PCBs and PFCs, respectively. The most significant relationships were found for the highly contaminated northern goshawks and white-tailed eagles. The statistical relationships between OHCs and BCCPs indicate that biochemical pathways could be influenced while...... it is uncertain if such changes have any health effects. The OHC concentrations were below concentrations causing reproductive toxicity in adults of other raptor species but similar to those of concern for endocrine disruption of thyroid hormones in e.g., bald eagles.... 7. Summary of Remediated Nitrate Salt Surrogate Formulation and Testing Energy Technology Data Exchange (ETDEWEB) Brown, Geoffrey Wayne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Leonard, Philip [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hartline, Ernest Leon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tian, Hongzhao [Los Alamos National Lab.
(LANL), Los Alamos, NM (United States) 2016-05-05 High Explosives Science and Technology (M-7) completed all required formulation and testing of Remediated Nitrate Salt (RNS) surrogates on April 27, 2016 as specified in PLAN-TA9-2443 Rev B, "Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing Standard Procedure", released February 16, 2016. This report summarizes the results of the work and also includes additional documentation required in that test plan. All formulation and testing was carried out according to PLAN-TA9-2443 Rev B. The work was carried out in three rounds, with the full matrix of samples formulated and tested in each round. Results from the first round of formulation and testing were documented in memorandum M7-J6-6042, "Results from First Round of Remediated Nitrate Salt Surrogate Formulation and Testing." Results from the second round of formulation and testing were documented in M7-16-6053, "Results from the Second Round of Remediated Nitrate Salt Surrogate Formulation and Testing." Initial results from the third round were documented in M7-16-6057, "Initial Results from the Third Round of Remediated Nitrate Salt Formulation and Testing." 8. Frequency response as a surrogate eigenvalue problem in topology optimization DEFF Research Database (Denmark) Andreassen, Erik; Ferrari, Federico; Sigmund, Ole 2017-01-01 This article discusses the use of frequency response surrogates for eigenvalue optimization problems in topology optimization that may be used to avoid solving the eigenvalue problem. The motivation is to avoid complications that arise from multiple eigenvalues and the computational complexity as... 9. GENERATING SOPHISTICATED SPATIAL SURROGATES USING THE MIMS SPATIAL ALLOCATOR Science.gov (United States) The Multimedia Integrated Modeling System (MIMS) Spatial Allocator is open-source software for generating spatial surrogates for emissions modeling, changing the map projection of Shapefiles, and performing other types of spatial allocation that does not require the use of a comm... 10. Strength Reliability Analysis of Turbine Blade Using Surrogate Models Directory of Open Access Journals (Sweden) Wei Duan 2014-05-01 Full Text Available There are many stochastic parameters that have an effect on the reliability of steam turbine blade performance in practical operation. In order to improve the reliability of blade design, it is necessary to take these stochastic parameters into account. In this study, a variable cross-section twisted blade is investigated, and geometrical parameters, material parameters and load parameters are considered as random variables. A reliability analysis method combining a Finite Element Method (FEM), a surrogate model and Monte Carlo Simulation (MCS) is applied to the blade reliability analysis. Based on the blade finite element parametrical model and the experimental design, two kinds of surrogate models, Polynomial Response Surface (PRS) and Artificial Neural Network (ANN), are applied to construct approximate analytical expressions between the blade responses (including maximum stress and deflection) and the random input variables, which act as a surrogate for the finite element solver to drastically reduce the number of simulations required. The surrogate is then used for most of the samples needed in the Monte Carlo method, and the statistical parameters and cumulative distribution functions of the maximum stress and deflection are obtained by Monte Carlo simulation.
Finally, a probabilistic sensitivity analysis, which combines the magnitude of the gradient and the width of the scatter range of the random input variables, is applied to evaluate how much the maximum stress and deflection of the blade are influenced by the random nature of the input parameters.
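The FEM-surrogate-Monte Carlo workflow of item 10 reduces, in outline, to fitting a cheap response surface on a small design of experiments and then sampling it heavily. The sketch below uses a quadratic polynomial response surface (the PRS variant); the "FEM solver" is a made-up analytic function standing in for the expensive simulation, and the input distributions and allowable stress are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def fem_max_stress(load, modulus):
    """Stand-in for the expensive FEM solver: max blade stress vs. two inputs."""
    return 120.0 * load**1.8 / np.sqrt(modulus) + 5.0 * load * modulus

# Design of experiments: a small sample where the "expensive" model is actually run.
n_doe = 50
X = np.column_stack([rng.uniform(0.8, 1.2, n_doe),   # load factor
                     rng.uniform(0.9, 1.1, n_doe)])  # normalized modulus
y = fem_max_stress(X[:, 0], X[:, 1])

def phi(X):
    """Quadratic response-surface basis: 1, l, m, l*m, l^2, m^2."""
    l, m = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(l), l, m, l * m, l**2, m**2])

beta, *_ = np.linalg.lstsq(phi(X), y, rcond=None)    # fit the PRS surrogate

# Monte Carlo through the cheap surrogate; 10^6 FEM runs would be infeasible.
n_mc = 1_000_000
Xmc = np.column_stack([rng.normal(1.0, 0.05, n_mc),
                       rng.normal(1.0, 0.03, n_mc)])
s = phi(Xmc) @ beta

allowable = 150.0                                    # assumed allowable stress
print(f"mean stress {s.mean():.1f}, std {s.std():.1f}")
print(f"P(stress > {allowable}) = {(s > allowable).mean():.2e}")
```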
11. Biomarker time out. Science.gov (United States) Petzold, Axel; Bowser, Robert; Calabresi, Paolo; Zetterberg, Henrik; Uitdehaag, Bernard M J 2014-10-01 The advancement of knowledge relies on scientific investigations. The timing between asking a question and data collection defines whether a study is prospective or retrospective. Prospective studies look forward from a point in time, are less prone to bias and are considered superior to retrospective studies. This conceptual framework conflicts with the nature of biomarker research. New candidate biomarkers are discovered in a retrospective manner. There are neither the resources nor the time for prospective testing in all cases. Relevant sources of bias are not covered. Ethical questions arise through the time penalty of an overly dogmatic concept. The timing of sample collection can be separated from the testing of biomarkers. Therefore the moment of formulating a hypothesis may come after sample collection has been completed. A conceptual framework permissive to asking research questions without the obligation to bow to the human concept of calendar time would simplify biomarker research, but will require new safeguards against bias. 12. amphibian_biomarker_data Data.gov (United States) U.S. Environmental Protection Agency — Amphibian metabolite data used in Snyder, M.N., Henderson, W.M., Glinski, D.G., Purucker, S. T., 2017. Biomarker analysis of American toad (Anaxyrus americanus) and... 13. Biomarkers in the pathogenesis, diagnosis, and treatment of psoriasis Directory of Open Access Journals (Sweden) Molteni S 2012-09-01 Full Text Available Silvia Molteni, Eva Reali (Laboratory of Translational Immunology, Istituto Ortopedico Galeazzi, Milan, Italy) Abstract: Development of psoriasis results from a complex interplay between genetically predisposing factors and environmental triggers that give rise to a self-sustaining pathogenic cycle involving T cells, dendritic cells, connective tissue, and skin epithelium. From 5% to 40% of patients with psoriasis also develop psoriatic arthritis, and increasing evidence indicates an association with other systemic manifestations, including cardiovascular disease and the metabolic syndrome. In psoriatic disease, there is a need for the development of biomarkers for the assessment of disease severity, for the prediction of the outcome of therapeutic interventions, and for distinction between the different clinical variants of the disease. A field of great importance is the identification of biomarkers for predicting the development of comorbidities, such as arthritis, cardiovascular disease, and the metabolic syndrome. Genetic determinants of psoriasis and their products not only give an important insight into the pathogenesis of the disease, but may also function as markers of risk for developing cutaneous psoriasis or psoriatic arthritis. So far, there are limited validation data to support the use of candidate biomarkers in clinical practice. Here we review the data from several studies on some of the most promising candidate biomarkers for cutaneous psoriasis and psoriatic arthritis, for the detection of systemic inflammation, and for use as endpoints for therapeutic interventions. Attention is focused on the molecules that take part in the interplay giving rise to psoriasis and on gene products that may represent a link between predisposing genetic factors and the immune and inflammatory processes involved in pathogenesis of the disease. Finally, we provide an overview on how biomarkers can offer insights into the pathogenesis and natural history of psoriasis. 14. Recent Progress in the Development of Diesel Surrogate Fuels Energy Technology Data Exchange (ETDEWEB) Pitz, W J 2009-09-04 There has been much recent progress in the area of surrogate fuels for diesel. In the last few years, experiments and modeling have been performed on higher molecular weight components of relevance to diesel fuel such as n-hexadecane (n-cetane) and 2,2,4,4,6,8,8-heptamethylnonane (iso-cetane). Chemical kinetic models have been developed for all the n-alkanes up to 16 carbon atoms. Also, there has been much experimental and modeling work on lower molecular weight surrogate components such as n-decane and n-dodecane which are most relevant to jet fuel surrogates, but are also relevant to diesel surrogates where simulation of the full boiling point range is desired. For the cycloalkanes, experimental work on decalin and tetralin recently has been published. For multi-component surrogate fuel mixtures, recent work on modeling of these mixtures and comparisons to real diesel fuel is reviewed. Detailed chemical kinetic models for surrogate fuels are very large in size. Significant progress also has been made in improving the mechanism reduction tools that are needed to make these large models practicable in multidimensional reacting flow simulations of diesel combustion. Nevertheless, major research gaps remain. In the case of iso-alkanes, there are experiments and modeling work on only one of relevance to diesel: iso-cetane. Also, the iso-alkanes in diesel are lightly branched and no detailed chemical kinetic models or experimental investigations are available for such compounds. More components are needed to fill out the iso-alkane boiling point range. For the aromatic class of compounds, there has been no new work for compounds in the boiling point range of diesel. Most of the new work has been on alkyl aromatics that are of the range C7 to C8, below the C10 to C20 range that is needed. For the chemical class of cycloalkanes, experiments and modeling on higher molecular weight components are warranted. Finally for multi-component surrogates needed to treat real diesel 15. Recent Progress in the Development of Diesel Surrogate Fuels Energy Technology Data Exchange (ETDEWEB) Pitz, W J; Mueller, C J 2009-12-09 There has been much recent progress in the area of surrogate fuels for diesel. In the last few years, experiments and modeling have been performed on higher molecular weight components of relevance to diesel fuel such as n-hexadecane (n-cetane) and 2,2,4,4,6,8,8-heptamethylnonane (iso-cetane). Chemical kinetic models have been developed for all the n-alkanes up to 16 carbon atoms. Also, there has been much experimental and modeling work on lower molecular weight surrogate components such as n-decane and n-dodecane that are most relevant to jet fuel surrogates, but are also relevant to diesel surrogates where simulation of the full boiling point range is desired. For two-ring compounds, experimental work on decalin and tetralin recently has been published.
For multi-component surrogate fuel mixtures, recent work on modeling of these mixtures and comparisons to real diesel fuel is reviewed. Detailed chemical kinetic models for surrogate fuels are very large in size. Significant progress also has been made in improving the mechanism reduction tools that are needed to make these large models practicable in multi-dimensional reacting flow simulations of diesel combustion. Nevertheless, major research gaps remain. In the case of iso-alkanes, there are experiments and modeling work on only one of relevance to diesel: iso-cetane. Also, the iso-alkanes in diesel are lightly branched and no detailed chemical kinetic models or experimental investigations are available for such compounds. More components are needed to fill out the iso-alkane boiling point range. For the aromatic class of compounds, there has been no new work for compounds in the boiling point range of diesel. Most of the new work has been on alkyl aromatics that are of the range C7 to C8, below the C10 to C20 range that is needed. For the chemical class of cycloalkanes, experiments and modeling on higher molecular weight components are warranted. Finally for multi-component surrogates needed to treat real 16. Development of a Human Cranial Bone Surrogate for Impact Studies. Science.gov (United States) Roberts, Jack C; Merkle, Andrew C; Carneal, Catherine M; Voo, Liming M; Johannes, Matthew S; Paulson, Jeff M; Tankard, Sara; Uy, O Manny 2013-01-01 In order to replicate the fracture behavior of the intact human skull under impact it becomes necessary to develop a material having the mechanical properties of cranial bone. The most important properties to replicate in a surrogate human skull were found to be the fracture toughness and tensile strength of the cranial tables as well as the bending strength of the three-layer (inner table-diploë-outer table) architecture of the human skull. The materials selected to represent the surrogate cranial tables consisted of two different epoxy resin systems with random milled glass fiber to enhance the strength and stiffness, and the materials to represent the surrogate diploë consisted of three low density foams. Forty-one three-point bending fracture toughness tests were performed on nine material combinations. The materials that best represented the fracture toughness of cranial tables were then selected and formed into tensile samples and tested. These materials were then used with the two surrogate diploë foam materials to create the three-layer surrogate cranial bone samples for three-point bending tests. Drop tower tests were performed on flat samples created from these materials and the fracture patterns were very similar to the linear fractures in pendulum impacts of intact human skulls, previously reported in the literature. The surrogate cranial tables had a quasi-static fracture toughness and tensile strength of 2.5 MPa√m and 53 ± 4.9 MPa, respectively, while the same properties of human compact bone were 3.1 ± 1.8 MPa√m and 68 ± 18 MPa, respectively. The cranial surrogate had a quasi-static bending strength of 68 ± 5.7 MPa, while that of cranial bone was 82 ± 26 MPa. This material/design is currently being used to construct spherical shell samples for drop tower and ballistic tests. 17.
Current advances in biomarkers for targeted therapy in triple-negative breast cancer Directory of Open Access Journals (Sweden) Fleisher B 2016-10-01 Full Text Available Brett Fleisher (1), Charlotte Clarke (2), Sihem Ait-Oudhia (1); (1) Department of Pharmaceutics, Center for Pharmacometrics and Systems Pharmacology, College of Pharmacy, University of Florida, Orlando, FL; (2) Department of Translational Research, UT MD Anderson Cancer Center, Houston, TX, USA Abstract: Triple-negative breast cancer (TNBC) is a complex heterogeneous disease characterized by the absence of three hallmark receptors: human epidermal growth factor receptor 2, estrogen receptor, and progesterone receptor. Compared to other breast cancer subtypes, TNBC is more aggressive, has a higher prevalence in African-Americans, and more frequently affects younger patients. Currently, TNBC lacks clinically accepted targets for tailored therapy, warranting the need for candidate biomarkers. BiomarkerBase, an online platform used to find biomarkers reported in clinical trials, was utilized to screen all potential biomarkers for TNBC and select only the ones registered in completed TNBC trials through clinicaltrials.gov. The selected candidate biomarkers were classified as surrogate, prognostic, predictive, or pharmacodynamic (PD) and organized by location in the blood, on the cell surface, in the cytoplasm, or in the nucleus. Blood biomarkers include vascular endothelial growth factor/vascular endothelial growth factor receptor and interleukin-8 (IL-8); cell surface biomarkers include EGFR, insulin-like growth factor binding protein, c-Kit, c-Met, and PD-L1; cytoplasm biomarkers include PIK3CA, pAKT/S6/p4E-BP1, PTEN, ALDH1, and the PIK3CA/AKT/mTOR-related metabolites; and nucleus biomarkers include BRCA1, the glucocorticoid receptor, TP53, and Ki67. Candidate biomarkers were further organized into a "cellular protein network" that demonstrates potential connectivity. This review provides an inventory and reference point for promising biomarkers for breakthrough targeted therapies in TNBC. Keywords: anti-cancer directed pharmacotherapy, difficult 18. Theranostic Biomarkers for Schizophrenia Science.gov (United States) Nikolac Perkovic, Matea; Nedic Erjavec, Gordana; Svob Strac, Dubravka; Uzun, Suzana; Kozumplik, Oliver; Pivac, Nela 2017-01-01 Schizophrenia is a highly heritable, chronic, severe, disabling neurodevelopmental brain disorder with a heterogeneous genetic and neurobiological background, which is still poorly understood. To allow better diagnostic procedures and therapeutic strategies in schizophrenia patients, use of easily accessible biomarkers is suggested. The most frequently used biomarkers in schizophrenia are those associated with the neuroimmune and neuroendocrine systems, metabolism, different neurotransmitter systems and neurotrophic factors. However, there are still no validated and reliable biomarkers in clinical use for schizophrenia. This review will address potential biomarkers in schizophrenia. It will discuss biomarkers in schizophrenia and propose the use of specific blood-based panels that will include a set of markers associated with immune processes, metabolic disorders, and neuroendocrine/neurotrophin/neurotransmitter alterations. The combination of different markers, or complex multi-marker panels, might help in the discrimination of patients with different underlying pathologies and in the better classification of more homogeneous groups.
Therefore, the development of the diagnostic, prognostic and theranostic biomarkers is an urgent and an unmet need in psychiatry, with the aim of improving diagnosis, therapy monitoring, prediction of treatment outcome and focus on the personal medicine approach in order to improve the quality of life in patients with schizophrenia and decrease health costs worldwide. PMID:28358316 20. Biomarkers for neuromyelitis optica. Science.gov (United States) Chang, Kuo-Hsuan; Ro, Long-Sun; Lyu, Rong-Kuo; Chen, Chiung-Mei 2015-02-02 Neuromyelitis optica (NMO) is an acquired, heterogeneous inflammatory disorder, which is characterized by recurrent optic neuritis and longitudinally extensive spinal cord lesions. The discovery of the serum autoantibody marker, anti-aquaporin 4 (anti-AQP4) antibody, revolutionizes our understanding of pathogenesis of NMO. In addition to anti-AQP4 antibody, other biomarkers for NMO are also reported. These candidate biomarkers are particularly involved in T helper (Th)17 and astrocytic damages, which play a critical role in the development of NMO lesions. Among them, IL-6 in the peripheral blood is associated with anti-AQP4 antibody production. Glial fibrillary acidic protein (GFAP) in CSF demonstrates good correlations with clinical severity of NMO relapses. Detecting these useful biomarkers may be useful in the diagnosis and evaluation of disease activity of NMO. Development of compounds targeting these biomarkers may provide novel therapeutic strategies for NMO. This article will review the related biomarker studies in NMO and discuss the potential therapeutics targeting these biomarkers. 1. Hall et al., 2016 Artificial Turf Surrogate Surface Methods Paper Data File Data.gov (United States) U.S.
Environmental Protection Agency — Mercury dry deposition data quantified via static water surrogate surface (SWSS) and artificial turf surrogate surface (ATSS) collectors. This dataset is associated... 2. Is Doubling of Serum Creatinine a Valid Clinical 'Hard' Endpoint in Clinical Nephrology Trials? NARCIS (Netherlands) Lambers Heerspink, H. J.; Perkovic, V.; de Zeeuw, D. 2011-01-01 The composite of end stage renal disease (ESRD), doubling of serum creatinine and (renal) death is a frequently used endpoint in randomized clinical trials in nephrology. Doubling of serum creatinine is a well-accepted part of this endpoint because a doubling of serum creatinine reflects a large su... 3. Sequential optimization of strip bending process using multiquadric radial basis function surrogate models NARCIS (Netherlands) Havinga, Gosse Tjipte; van den Boogaard, Antonius H.; Klaseboer, G. 2013-01-01 Surrogate models are used within the sequential optimization strategy for forming processes. A sequential improvement (SI) scheme is used to refine the surrogate model in the optimal region. One of the popular surrogate modeling methods for SI is Kriging. However, the global response of Kriging mode... 4. Multi-Toxic Endpoints of the Foodborne Mycotoxins in Nematode Caenorhabditis elegans. Science.gov (United States) Yang, Zhendong; Xue, Kathy S; Sun, Xiulan; Tang, Lili; Wang, Jia-Sheng 2015-12-02 Aflatoxin B₁ (AFB₁), deoxynivalenol (DON), fumonisin B₁ (FB₁), T-2 toxin (T-2), and zearalenone (ZEA) are the major foodborne mycotoxins of public health concern. In the present study, the multiple toxic endpoints of these naturally occurring mycotoxins were evaluated in the Caenorhabditis elegans model for their lethality, toxic effects on growth and reproduction, and influence on lifespan. We found that the lethality endpoint was most sensitive for T-2 toxicity, with an EC50 of 1.38 mg/L; the growth endpoint was relatively sensitive to the toxic effects of AFB₁; and the reproduction endpoint was more sensitive to the toxicities of AFB₁, FB₁, and ZEA. Moreover, the lifespan endpoint was sensitive to the toxic effects of all five tested mycotoxins. Data obtained from this study may serve as an important contribution to knowledge on the assessment of mycotoxin toxic effects, especially developmental and reproductive toxic effects, using the C. elegans model.
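An EC50 like the one reported for T-2 in item 4 is typically extracted by fitting a sigmoidal dose-response curve to observed lethality fractions. The sketch below fits a two-parameter log-logistic (Hill) model with SciPy; the concentration-response data are invented, and the abstract does not specify which dose-response model the study actually used.

```python
import numpy as np
from scipy.optimize import curve_fit  # SciPy assumed available

# Hypothetical lethality data for one mycotoxin: concentration (mg/L) vs.
# fraction of nematodes dead. Values are made up for illustration.
conc = np.array([0.1, 0.3, 0.5, 1.0, 2.0, 4.0, 8.0])
dead = np.array([0.02, 0.08, 0.20, 0.42, 0.65, 0.88, 0.97])

def hill(c, ec50, slope):
    """Two-parameter log-logistic dose-response curve."""
    return 1.0 / (1.0 + (ec50 / c) ** slope)

(ec50, slope), _ = curve_fit(hill, conc, dead, p0=[1.0, 1.0])
print(f"EC50 = {ec50:.2f} mg/L, Hill slope = {slope:.2f}")
# An analogous fit over the measured lethality data would yield the kind of
# EC50 value (1.38 mg/L for T-2) quoted in the abstract.
```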
5. An endpoint damage oriented model for life cycle environmental impact assessment of buildings in China Institute of Scientific and Technical Information of China (English) GU LiJing; LIN BoRong; GU DaoJin; ZHU YingXin 2008-01-01 The midpoint impact assessment methodology, and the several weighting methods that are currently used by most building life cycle assessment (LCA) researchers in China, still have some shortcomings. In order to give the evaluation results better temporal and spatial applicability, the endpoint impact assessment methodology was adopted in this paper. Based on the endpoint damage oriented concept, four endpoints (resource exhaustion, energy exhaustion, human health damage and ecosystem damage) were selected according to the situation in China and the particular characteristics of the building industry. Subsequently, the formula for calculating each endpoint, the background values for normalization and the weighting factors were defined. Following that, an endpoint damage oriented model to evaluate the life cycle environmental impact of buildings in China was established. This model can produce a single integrated indicator for environmental impact, and consequently provides a reference for directing sustainable building design.
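The aggregation scheme of item 5 (per-endpoint calculation, normalization against a background value, weighting, and summation into one indicator) amounts to a short computation. The sketch below is a minimal illustration of that structure; all impact values, background values and weights are invented, since the paper's actual numbers are not in the abstract.

```python
# Hypothetical life cycle impacts for a building, with made-up normalization
# backgrounds and weights. The abstract defines the scheme (four endpoints,
# normalization, weighting) but not the numbers used here.
endpoints = {
    #                       (impact, background, weight)
    "resource_exhaustion": (3.2e3, 1.1e4, 0.20),
    "energy_exhaustion":   (8.5e5, 2.9e6, 0.30),
    "human_health_damage": (4.0e-2, 1.5e-1, 0.30),
    "ecosystem_damage":    (6.0e-3, 2.0e-2, 0.20),
}

# Each endpoint is normalized against its background value, then weighted;
# the weighted sum is the single score used to compare design alternatives.
indicator = sum(weight * (impact / background)
                for impact, background, weight in endpoints.values())
print(f"integrated environmental indicator = {indicator:.3f}")
```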
6. Validation of New Cancer Biomarkers DEFF Research Database (Denmark) Duffy, Michael J; Sturgeon, Catherine M; Söletormos, Georg 2015-01-01 BACKGROUND: Biomarkers are playing increasingly important roles in the detection and management of patients with cancer. Despite an enormous number of publications on cancer biomarkers, few of these biomarkers are in widespread clinical use. CONTENT: In this review, we discuss the key steps...... in advancing a newly discovered cancer candidate biomarker from pilot studies to clinical application. Four main steps are necessary for a biomarker to reach the clinic: analytical validation of the biomarker assay, clinical validation of the biomarker test, demonstration of clinical value from performance...... initiation of the study. SUMMARY: Application of the methodology outlined above should result in a more efficient and effective approach to the development of cancer biomarkers as well as the reporting of cancer biomarker studies. With rigorous application, all stakeholders, and especially patients, would... 7. Systematic Review and Meta-analysis of the Association Between Exposure to Environmental Tobacco Smoke and Periodontitis Endpoints Among Nonsmokers. Science.gov (United States) 2016-11-01 A systematic review was conducted to summarize the epidemiological evidence on environmental tobacco smoke (ETS) exposure and prevalent periodontitis endpoints among nonsmokers. We searched PubMed, EMBASE, Web of Science, ProQuest dissertations, and conference proceedings of a dental research association. We included studies from which prevalence odds ratios (POR) could be extracted for periodontitis determined by examiner measurements of clinical attachment level (CAL) and/or probing pocket depth (PD) or self-report of missing teeth. Studies determined ETS exposure by self-report or biomarker (cotinine) levels. For studies reporting CAL and/or PD (n = 6), associations were stronger with cotinine-measured exposure (n = 3; random effects POR [95% prediction interval] = 1.63 (0.90, 2.96)) than self-reported exposure (n = 3; random effects POR = 1.15 (0.68, 1.96)). There was no meaningful difference in summary estimate for studies reporting the CAL and/or PD endpoint (n = 6; random effects POR = 1.34 (0.93, 1.94)) as opposed to tooth loss (n = 2; random effects POR = 1.33 (0.52, 3.40)). There appears to be a positive association between exposure to ETS and prevalent periodontitis endpoints among nonsmokers, the magnitude of which depended mostly on the method of ETS assessment. The notoriety of ETS is often discussed in terms of its associations with cancer, chronic conditions like cardiovascular diseases, and respiratory illnesses in children. However, very little attention is paid to its association with oral diseases, especially periodontitis. Periodontitis affects a large proportion of the population and is a major cause of tooth loss. This study summarized the epidemiologic association between exposure to ETS and periodontitis among nonsmokers. Although the findings are consistent with a positive association, methodological weaknesses relating to study design, assessment of ETS, periodontitis, and adjustment covariates were highlighted and recommendations for 8. Biomarkers of sepsis Science.gov (United States) 2013-01-01 Sepsis is an unusual systemic reaction to what is sometimes an otherwise ordinary infection, and it probably represents a pattern of response by the immune system to injury. A hyper-inflammatory response is followed by an immunosuppressive phase during which multiple organ dysfunction is present and the patient is susceptible to nosocomial infection. Biomarkers to diagnose sepsis may allow early intervention which, although primarily supportive, can reduce the risk of death. Although lactate is currently the most commonly used biomarker to identify sepsis, other biomarkers may help to enhance lactate's effectiveness; these include markers of the hyper-inflammatory phase of sepsis, such as pro-inflammatory cytokines and chemokines; proteins such as C-reactive protein and procalcitonin which are synthesized in response to infection and inflammation; and markers of neutrophil and monocyte activation. Recently, markers of the immunosuppressive phase of sepsis, such as anti-inflammatory cytokines, and alterations of the cell surface markers of monocytes and lymphocytes have been examined. Combinations of pro- and anti-inflammatory biomarkers in a multi-marker panel may help identify patients who are developing severe sepsis before organ dysfunction has advanced too far. Combined with innovative approaches to treatment that target the immunosuppressive phase, these biomarkers may help to reduce the mortality rate associated with severe sepsis which, despite advances in supportive measures, remains high. PMID:23480440 9. Mass spectrometry for biomarker development Energy Technology Data Exchange (ETDEWEB) Wu, Chaochao; Liu, Tao; Baker, Erin Shammel; Rodland, Karin D.; Smith, Richard D. 2015-06-19 Biomarkers potentially play a crucial role in early disease diagnosis, prognosis and targeted therapy. In the past decade, mass spectrometry based proteomics has become increasingly important in biomarker development due to large advances in technology and associated methods. This chapter mainly focuses on the application of broad (e.g. shotgun) proteomics in biomarker discovery and the utility of targeted proteomics in biomarker verification and validation. A range of mass spectrometry methodologies are discussed emphasizing their efficacy in the different stages in biomarker development, with a particular emphasis on blood biomarker development. 10. Biomarkers intersect with the exposome. Science.gov (United States) Rappaport, Stephen M 2012-09-01 The exposome concept promotes use of omic tools for discovering biomarkers of exposure and biomarkers of disease in studies of diseased and healthy populations. A two-stage scheme is presented for profiling omic features in serum to discover molecular biomarkers and then for applying these biomarkers in follow-up studies. The initial component, referred to as an exposome-wide-association study (EWAS), employs metabolomics and proteomics to interrogate the serum exposome and, ultimately, to identify, validate and differentiate biomarkers of exposure and biomarkers of disease. Follow-up studies employ knowledge-driven designs to explore disease causality, prevention, diagnosis, prognosis and treatment. 11. Very Short Literature Survey From Supervised Learning To Surrogate Modeling CERN Document Server Brusan, Altay 2012-01-01 The past century was the era of linear systems. Either systems (especially industrial ones) were simple (quasi)linear, or linear approximations were accurate enough.
In addition, it was only in the closing decades of the century that computing devices became plentiful; before then, a lack of computational resources made it difficult to evaluate the nonlinear system studies that were available. Both of these conditions have now changed: systems are highly complex, and abundant computational power is cheap and easy to obtain. For the present era, a new branch of supervised learning known as surrogate modeling (also called meta-modeling or surface modeling) has been devised to answer the new needs of the modeling realm. This short literature survey introduces surrogate modeling to readers familiar with the concepts of supervised learning. The necessity, challenges and visions of the topic are considered. 12. A Parallel and Distributed Surrogate Model Implementation for Computational Steering KAUST Repository Butnaru, Daniel 2012-06-01 Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.
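Item 12 gives no implementation details beyond naming the sparse grid technique, so the following is only a one-dimensional hierarchical-basis sketch of the idea it builds on: an expensive simulation is sampled at hierarchically refined points (the "extension" task), and the stored surpluses then allow cheap real-time evaluation (the task a steering front-end performs). Real sparse grids combine such hierarchies across many parameter dimensions; the test function here is invented.

```python
import numpy as np

def hat(level, index, x):
    """Hierarchical hat basis function on [0, 1] at the given level/index."""
    h = 2.0 ** (-level)
    return max(0.0, 1.0 - abs(x / h - index))

def build_surrogate(f, max_level):
    """Compute hierarchical surpluses level by level (model 'extension')."""
    surplus = {}
    for level in range(1, max_level + 1):
        for index in range(1, 2 ** level, 2):        # odd indices are new points
            x = index * 2.0 ** (-level)
            interp = sum(v * hat(l, i, x) for (l, i), v in surplus.items())
            surplus[(level, index)] = f(x) - interp  # surplus = residual at new point
    return surplus

def evaluate(surplus, x):
    """Cheap evaluation -- what the steering front-end would call in real time.
    (Naive sum over all basis functions; production codes descend only one
    basis chain per level.)"""
    return sum(v * hat(l, i, x) for (l, i), v in surplus.items())

def expensive_simulation(x):
    """Stand-in for the costly simulation; zero at both boundaries."""
    return np.sin(2.0 * np.pi * x) * np.exp(-x)

s = build_surrogate(expensive_simulation, max_level=8)
xs = np.linspace(0.0, 1.0, 101)
err = max(abs(evaluate(s, x) - expensive_simulation(x)) for x in xs)
print(f"{len(s)} stored points, max interpolation error {err:.1e}")
```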
In these new criteria, progress in biomarker identification and amyloid imaging studies over the past 10 years has added critical information. Huge contributions from basic and clinical studies have established clinical evidence supporting these markers. Based on this progress, an essential curative therapy for AD is urgently awaited. 15. Inflammatory biomarkers and cancer DEFF Research Database (Denmark) Rasmussen, Line Jee Hartmann; Schultz, Martin; Gaardsting, Anne 2017-01-01 In Denmark, patients with serious nonspecific symptoms and signs of cancer (NSSC) are referred to the diagnostic outpatient clinics (DOCs), where an accelerated cancer diagnostic program is initiated. Various immunological and inflammatory biomarkers have been associated with cancer, including...... soluble urokinase plasminogen activator receptor (suPAR) and the pattern recognition receptors (PRRs) pentraxin-3, mannose-binding lectin, ficolin-1, ficolin-2 and ficolin-3. We aimed to evaluate these biomarkers and compare their diagnostic ability to classical biomarkers for diagnosing cancer...... in patients with NSSC. Patients were included from the DOC, Department of Infectious Diseases, Copenhagen University Hospital Hvidovre. Patients were given a final diagnosis based on the combined results from scans, blood work and physical examination. Weight loss, Charlson score and previous cancer were... 16. The use of protozoa in ecotoxicology: application of multiple endpoint tests of the ciliate E. crassus for the evaluation of sediment quality in coastal marine ecosystems. Science.gov (United States) Gomiero, A; Dagnino, A; Nasci, C; Viarengo, A 2013-01-01 Despite an increasing number of surveys describing adverse effects of contaminated sediments on marine organisms, few studies have addressed protists. In this study, the free-crawling marine ciliate Euplotes crassus was evaluated as the test organism for the screening of sediment toxicity using sediments from both coastal and estuarine sites of the Venice Lagoon (Marghera harbour [MH], Valle Millecampi [MV], Murano island [MI] and Lido inlet [LI]). Two endpoints of high ecological value, mortality (Mry) and replication rate (RpR), were assessed in combination with the two sublethal biomarkers of stress, endocytotic rate (Ecy) and lysosomal membrane stability (NRRT). The results showed a significant inhibition of RpR, Ecy and NRRT, paralleled by a small, non-significant increase in the Mry of the exposed specimens. Our results thus demonstrate that only a combination of mortality and sublethal biomarkers was able to characterise an exposure-related stress syndrome. The suite of biomarkers described here was also able to detect and resolve a pollution-induced stress syndrome at an early stage of pollution. The contamination level of the sediments was assessed using chemical analysis, by estimating bioavailability and by computing a toxic pressure coefficient (TPC) to account for potential additive effects of different pollutants. The observed biological responses were consistent with the contamination levels in sediments, suggesting a high potential for using Protozoa in bioassays to assess environmental risk in coastal marine systems. 17. Commercial agencies and surrogate motherhood: a transaction cost approach. Science.gov (United States) Galbraith, Mhairi; McLachlan, Hugh V; Swales, J Kim 2005-03-01 In this paper we investigate the legal arrangements involved in UK surrogate motherhood from a transaction-cost perspective.
We outline the specific forms the transaction costs take and critically comment on the way in which the UK institutional and organisational arrangements at present adversely influence transaction costs. We then focus specifically on the potential role of surrogacy agencies and look at UK and US evidence on commercial and voluntary agencies. Policy implications follow. 18. Quantification of the Relationship between Surrogate Fuel Structure and Performance Science.gov (United States) 2012-07-31 cycloperoxy-5-yl (BICYC5.O2) and bicyclo[2,2,1]hexene peroxy (C2O2H221) radicals. The latter route leads to the formation of vinyl ketene and the formyl ... selection of stable molecules and radicals. The adopted calculation method for the determination of such data is outlined in Appendix 1 ... chemistry of aromatic fuel components used in surrogate fuels and the importance of the cyclopentadienyl radical in poly-aromatic hydrocarbon (PAH) ... 20. Regression calibration with more surrogates than mismeasured variables KAUST Repository Kipnis, Victor 2012-06-29 In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach, consisting of substituting an estimated conditional expectation of the true covariate given observed data into the logistic regression. The other is a novel two-stage approach in which the logistic regression is fitted to the multiple surrogates, and a linear combination of the estimated slopes is then formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted the superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method.
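For readers unfamiliar with the standard regression-calibration step discussed in the record above, here is a minimal simulated sketch: the unobserved exposure is replaced by its estimated conditional expectation given the surrogates before fitting the logistic outcome model. The data-generating values, the use of the simulated truth as the calibration sample, and the scikit-learn calls are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# True exposure X, observed only through two noisy surrogates W1, W2.
x = rng.normal(size=n)
w = np.column_stack([x + rng.normal(scale=0.8, size=n),
                     x + rng.normal(scale=0.5, size=n)])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x))))

# Calibration step: estimate E[X | W].  In a real study this fit would come
# from a validation subsample in which X is actually measured.
calib = LinearRegression().fit(w, x)
x_hat = calib.predict(w)

# Substitute E[X | W] into the outcome model as if it were the true exposure.
outcome = LogisticRegression().fit(x_hat.reshape(-1, 1), y)
print("calibrated log-odds slope:", outcome.coef_[0][0])  # near the true 1.0
```

The two-stage alternative debated in the record would instead fit the logistic model to both surrogates directly and then combine the two estimated slopes.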
We also discuss extensions to different data structures. 2. Diaphragm as an anatomic surrogate for lung tumor motion CERN Document Server Cervino, Laura I; Sandhu, Ajay; Jiang, Steve B 2009-01-01 Lung tumor motion due to respiration poses a challenge in the application of modern three-dimensional conformal radiotherapy. Direct tracking of the lung tumor during radiation therapy is very difficult without implanted fiducial markers. Indirect tracking relies on the correlation of the tumor's motion and the surrogate's motion. This paper presents an analysis of the correlation between tumor motion and diaphragm motion in order to evaluate the potential use of the diaphragm as a surrogate for tumor motion. We have analyzed the correlation between diaphragm motion and superior-inferior lung tumor motion in 32 fluoroscopic image sequences from 10 lung cancer patients. A simple linear model and a more complex linear model that accounts for phase delays between the two motions have been used. Results show that the diaphragm is a good surrogate for tumor motion prediction for most patients, resulting in average correlation factors of 0.94 and 0.98 for the two models, respectively. The model that accoun... 3. The Confucian bioethics of surrogate decision making: its communitarian roots. Science.gov (United States) Fan, Ruiping 2011-10-01 The family is the exemplar community of Chinese society. This essay explores how Chinese communitarian norms, expressed in thick commitments to the authority and autonomy of the family, are central to contemporary Chinese bioethics. In particular, it focuses on the issue of surrogate decision making to illustrate the Confucian family-grounded communitarian bioethics. The essay first describes the way in which the family, in Chinese bioethics, functions as a whole to provide consent for significant medical and surgical interventions when a patient has lost decision-making capacity. It is argued that the practice of not having an established order for surrogate decision makers (e.g., spouse, children, and then parents), as is done in the United States, reflects the acknowledgment that the family as a social reality cannot be reduced to a stereotype of the appropriate order of default decision makers. This description of the family as being in authority to make surrogate decisions for an incompetent family member is enriched by an elaboration of the differences among the concepts of patient autonomy, family autonomy, and moral autonomy. The Chinese model, as well as the Confucian communitarian life of families, engages a family autonomy that is supported by a Confucian understanding of moral autonomy, rather than individual autonomy. Finally, the issue of possible conflicts between patient and family interests in relation to a patient's past wishes in the Chinese model is addressed in light of the role of the physician. 4.
Evaluation of bone surrogates for indirect and direct ballistic fractures. Science.gov (United States) Bir, Cynthia; Andrecovich, Chris; DeMaio, Marlene; Dougherty, Paul J 2016-04-01 The mechanism of injury for fractures to long bones has been studied for both direct and indirect ballistic loading. However, the majority of these studies have been conducted on both post-mortem human subjects (PMHS) and animal surrogates, which have constraints in terms of storage, preparation and testing. The identification of a validated bone surrogate for use in forensic, medical and engineering testing would provide the ability to investigate ballistic loading without these constraints. Two specific bone surrogates, Sawbones and Synbone, were evaluated in comparison to PMHS for both direct and indirect ballistic loading. For the direct loading, the mean velocity to produce fracture was 121 ± 19 m/s for the PMHS, which was statistically different from the Sawbones (140 ± 7 m/s) and Synbone (146 ± 3 m/s). The average distance to fracture in the indirect loading was 0.70 cm for the PMHS. The Synbone had a statistically similar average distance to fracture (0.61 cm, p=0.54); however, the Sawbones average distance to fracture was statistically different (0.41 cm). A validated bone surrogate for ballistic testing was not identified, and future work is warranted. 5. Surrogate Assisted Design Optimization of an Air Turbine Directory of Open Access Journals (Sweden) 2014-01-01 Full Text Available Surrogates are cheaper to evaluate and assist in designing systems in less time. On the other hand, surrogates are problem-dependent, and each problem requires its own evaluation to find a suitable surrogate. The Kriging variants (ordinary, universal, and blind), along with the commonly used response surface approximation (RSA) model, were used in the present problem to optimize the performance of an air impulse turbine used for ocean wave energy harvesting by CFD analysis. A three-level full factorial design was employed to find sample points in the design space for two design variables. A Reynolds-averaged Navier-Stokes solver was used to evaluate the objective function responses, and these responses along with the design variables were used to construct the Kriging variants and RSA functions. A hybrid genetic algorithm was used to find the optimal point in the design space. It was found that the best design was produced by universal Kriging, while blind Kriging produced the worst. The present approach is suggested for renewable energy applications. 6. SU-F-BRF-10: Deformable MRI to CT Validation Employing Same Day Planning MRI for Surrogate Analysis Energy Technology Data Exchange (ETDEWEB) Padgett, K; Stoyanova, R; Johnson, P; Dogan, N; Pollack, A [University of Miami School of Medicine, Miami, FL (United States); Piper, J; Javorek, A [MIM Software, Inc., Beachwood, OH (United States) 2014-06-15 Purpose: To compare rigid and deformable registrations of the prostate in the multi-modality setting (diagnostic-MRI to planning-CT) by utilizing a planning-MRI as a surrogate. The surrogate allows for direct quantitative analysis, which can be difficult in the multi-modality domain where intensity mappings differ. Methods: For ten subjects, T2 fast-spin-echo images were acquired at two different time points, the first several weeks prior to planning (diagnostic-MRI) and the second on the same day on which the planning CT was collected (planning-MRI).
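The air-turbine record above follows a classic surrogate-assisted optimization loop: a designed sample of solver runs, a fitted response surface, and an optimizer run on the cheap surface. Below is a deliberately simplified sketch of the RSA branch of that loop, with a dense grid search standing in for the hybrid genetic algorithm and a hypothetical cfd_response function standing in for the Navier-Stokes solver; both substitutions are assumptions for illustration.

```python
import itertools
import numpy as np

# Hypothetical objective standing in for CFD-evaluated turbine efficiency.
def cfd_response(x1, x2):
    return -((x1 - 0.3)**2 + 1.5 * (x2 + 0.2)**2) + 1.0

# Three-level full factorial design over two design variables in [-1, 1].
levels = [-1.0, 0.0, 1.0]
samples = np.array(list(itertools.product(levels, levels)))
responses = np.array([cfd_response(a, b) for a, b in samples])

# Quadratic response surface: f ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2
def basis(pts):
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(samples), responses, rcond=None)

# Search the cheap surface for the optimum (grid search replaces the GA here).
grid = np.array(list(itertools.product(np.linspace(-1, 1, 201), repeat=2)))
best = grid[np.argmax(basis(grid) @ coef)]
print("surrogate optimum near:", best)
```

Only nine solver runs are spent on the design; every optimization query afterwards hits the fitted surface, which is what makes the approach attractive when each CFD run is expensive.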
Significant effort in patient positioning and bowel/bladder preparation was undertaken to minimize distortion of the prostate in all datasets. The diagnostic-MRI was deformed to the planning-CT utilizing a commercially available deformable registration algorithm synthesized from local registrations. The deformed MRI was then rigidly aligned to the planning MRI, which was used as the surrogate for the planning-CT. Agreement between the two MRI datasets was scored using intensity-based metrics, including Pearson correlation and normalized mutual information (NMI). A local analysis was performed by looking only within the prostate, proximal seminal vesicles, penile bulb and combined areas. A similar method was used to assess a rigid registration between the diagnostic-MRI and planning-CT. Results: Utilizing the NMI, the deformable registrations were superior to the rigid registrations in 9 of 10 cases, demonstrating a 15.94% improvement (p-value < 0.001) within the combined area. The Pearson correlation showed similar results, with the deformable registration superior in the same number of cases and demonstrating a 6.97% improvement (p-value < 0.011). Conclusion: Validating deformable multi-modality registrations using spatial intensity-based metrics is difficult due to the inherent differences in intensity mapping. This population provides an ideal testing ground for MRI to CT deformable registrations by obviating the need 7. Biomarkers for systemic lupus erythematosus. Science.gov (United States) Ahearn, Joseph M; Liu, Chau-Ching; Kao, Amy H; Manzi, Susan 2012-04-01 The urgent need for lupus biomarkers was demonstrated in September 2011 during a Workshop sponsored by the Food and Drug Administration: Potential Biomarkers Predictive of Disease Flare. After 2 days of discussion and more than 2 dozen presentations from thought leaders in both industry and academia, it became apparent that highly sought biomarkers to predict lupus flare have not yet been identified. Even short of the elusive biomarker of flare, few biomarkers for systemic lupus erythematosus (SLE) diagnosis, monitoring, and stratification have been validated and employed for making clinical decisions. This lack of reliable, specific biomarkers for SLE hampers proper clinical management of patients with SLE and impedes development of new lupus therapeutics. As such, the intensity of investigation to identify lupus biomarkers is climbing a steep trajectory, lending cautious optimism that a validated panel of biomarkers for lupus diagnosis, monitoring, stratification, and prediction of flare may soon be in hand. 8. A Dynamical Modeling Approach for Analysis of Longitudinal Clinical Trials in the Presence of Missing Endpoints. Science.gov (United States) Banks, H T; Hu, Shuhua; Rosenberg, Eric 2017-01-01 Randomized longitudinal clinical trials are the gold standard to evaluate the effectiveness of interventions among different patient treatment groups. However, analysis of such clinical trials becomes difficult in the presence of missing data, especially when the study endpoints become difficult to measure because subject dropout rates and/or the times to discontinuation of the assigned interventions differ among the patient groups. Here we report on using a validated mathematical model combined with an inverse problem approach to predict the values of the missing endpoints. A small randomized HIV clinical trial, in which endpoints for most patients are missing, is used to demonstrate this approach. 9.
Effectiveness of biological surrogates for predicting patterns of marine biodiversity: a global meta-analysis. Directory of Open Access Journals (Sweden) Camille Mellin Full Text Available The use of biological surrogates as proxies for biodiversity patterns is gaining popularity, particularly in marine systems where field surveys can be expensive and species richness high. Yet, uncertainty regarding their applicability remains because of inconsistent definitions, a lack of standard methods for estimating effectiveness, and the variable spatial scales considered. We present a Bayesian meta-analysis of the effectiveness of biological surrogates in marine ecosystems. Surrogate effectiveness was defined both as the proportion of surrogacy tests where predictions based on surrogates were better than random (i.e., a low probability of making a Type I error; P) and as the predictability of targets using surrogates (R²). A total of 264 published surrogacy tests, combined with prior probabilities elicited from eight international experts, demonstrated that the habitat, spatial scale, type of surrogate and statistical method used all influenced surrogate effectiveness, at least according to either P or R². The type of surrogate used (higher-taxa, cross-taxa or subset taxa) was the best predictor of P, with the higher-taxa surrogates outperforming all others. The marine habitat was the best predictor of R², with particularly low predictability in tropical reefs. Surrogate effectiveness was greatest for higher-taxa surrogates at a <10-km spatial scale, in low-complexity marine habitats such as soft bottoms, and using multivariate-based methods. Comparisons with terrestrial studies in terms of the methods used to study surrogates revealed that marine applications still ignore some problems with several widely used statistical approaches to surrogacy. Our study provides a benchmark for the reliable use of biological surrogates in marine ecosystems, and highlights directions for future development of biological surrogates in predicting biodiversity. 10. Sensitivity of the sea snail Gibbula umbilicalis to mercury exposure--linking endpoints from different biological organization levels. Science.gov (United States) Cabecinhas, Adriana S; Novais, Sara C; Santos, Sílvia C; Rodrigues, Andreia C M; Pestana, João L T; Soares, Amadeu M V M; Lemos, Marco F L 2015-01-01 Mercury contamination is a common phenomenon in the marine environment, and for this reason it is important to develop cost-effective and relevant tools to assess its toxic effects on a number of different species. To evaluate the possible effects of Hg in the sea snail Gibbula umbilicalis, animals were exposed to increasing concentrations of the contaminant in the ionic form for 96 h. After this exposure period, mortality, feeding and flipping behavior, the activity of the biomarkers glutathione S-transferase, superoxide dismutase, catalase, lactate dehydrogenase and cholinesterase, the levels of lipid peroxidation and cellular energy allocation were measured. After 96 h of exposure to the highest Hg concentration (≈LC20), there was a significant inhibition of cholinesterase activity as well as impairment of the flipping behavior and post-exposure feeding of the snails. Cholinesterase inhibition was correlated with the impairment of behavioral responses also caused by exposure to Hg.
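The marine meta-analysis above scores surrogate effectiveness two ways: the proportion of surrogacy tests that beat a random predictor (P) and the predictability of targets from surrogates (R²). A toy computation of those two summary statistics, with entirely made-up test results, makes the definitions concrete:

```python
import numpy as np

# Hypothetical outcomes of ten surrogacy tests: whether the surrogate beat
# a random predictor, and the R^2 of target ~ surrogate in each test.
beat_random = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1], dtype=bool)
r_squared = np.array([0.42, 0.61, 0.05, 0.38, 0.71,
                      0.55, 0.12, 0.47, 0.66, 0.50])

# Effectiveness as defined in the record above.
print("P  (proportion better than random):", beat_random.mean())
print("R2 (mean predictability)          :", r_squared.mean())
```

The published analysis goes further, placing these quantities in a Bayesian model with expert-elicited priors and covariates (habitat, scale, surrogate type); the sketch only shows what the two raw metrics measure.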
These endpoints, including the novel flipping test, revealed sensitivity to Hg and might be used as relevant early warning indicators of prospective effects at higher biological organization levels, making these parameters potential tools for environmental risk assessment. The proposed test species showed sensitivity to Hg and proved to be a suitable and resourceful species for use in ecotoxicological testing to assess the effects of other contaminants in marine ecosystems. 11. Statistical methods for down-selection of treatment regimens based on multiple endpoints, with application to HIV vaccine trials. Science.gov (United States) Huang, Ying; Gilbert, Peter B; Fu, Rong; Janes, Holly 2016-09-20 Biomarker endpoints measuring vaccine-induced immune responses are essential to HIV vaccine development because of their potential to predict the effect of a vaccine in preventing HIV infection. A vaccine's immune response profile observed in phase I immunogenicity studies is a key factor in determining whether it is advanced for further study in phase II and III efficacy trials. The multiplicity of immune variables and scientific uncertainty in their relative importance, however, pose great challenges to the development of formal algorithms for selecting vaccines to study further. Motivated by the practical need to identify a set of promising vaccines from a pool of candidate regimens for inclusion in an upcoming HIV vaccine efficacy trial, we propose a new statistical framework for the selection of vaccine regimens based on their immune response profiles. In particular, we propose superiority and non-redundancy criteria to be achieved in down-selection, and develop novel statistical algorithms that integrate hypothesis testing and ranking for selecting vaccine regimens satisfying these criteria. The performance of the proposed selection algorithms is evaluated through extensive numerical studies. We demonstrate the application of the proposed methods through the comparison of immune responses between several HIV vaccine regimens. The methods are applicable to general down-selection applications in clinical trials. 12. Emerging Biomarkers in Glioblastoma Energy Technology Data Exchange (ETDEWEB) McNamara, Mairéad G.; Sahebjam, Solmaz; Mason, Warren P., E-mail: warren.mason@uhn.ca [Pencer Brain Tumor Centre, Princess Margaret Cancer Centre, 610 University Avenue, Toronto, Ontario M5G 2M9 (Canada) 2013-08-22 Glioblastoma, the most common primary brain tumor, has few available therapies providing significant improvement in survival. Molecular signatures associated with tumor aggressiveness as well as with disease progression, and their relation to differences in signaling pathways implicated in gliomagenesis, have recently been described. A number of biomarkers which have potential in diagnosis, prognosis and prediction of response to therapy have been identified and, along with imaging modalities, could contribute to the clinical management of GBM.
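The down-selection record above combines hypothesis testing with ranking to keep regimens that are superior and mutually non-redundant. The sketch below is a deliberately crude reading of that idea, not the authors' algorithm: superiority is a one-sided Welch test against a comparator, and non-redundancy is a toy mean-difference rule; all data and thresholds are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical immune-response readouts for a comparator and three regimens.
comparator = rng.normal(0.0, 1.0, size=50)
candidates = {"A": rng.normal(0.8, 1.0, 50),
              "B": rng.normal(0.7, 1.0, 50),
              "C": rng.normal(0.1, 1.0, 50)}

# Superiority: one-sided Welch t-test against the comparator.
superior = {k: stats.ttest_ind(v, comparator, equal_var=False,
                               alternative="greater").pvalue < 0.05
            for k, v in candidates.items()}

# Non-redundancy (toy rule): skip a regimen whose mean response is within
# 0.05 of an already-selected one, ranking candidates by mean response.
selected = []
for name, vals in sorted(candidates.items(), key=lambda kv: -np.mean(kv[1])):
    if not superior[name]:
        continue
    if all(abs(np.mean(vals) - np.mean(candidates[s])) > 0.05
           for s in selected):
        selected.append(name)
print("down-selected regimens:", selected)
```

The real framework works with multivariate response profiles and controls error rates across the joint testing-and-ranking procedure; the sketch only shows how the two criteria interact.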
Molecular biomarkers including O(6)-methylguanine-DNA methyltransferase (MGMT) promoter and deoxyribonucleic acid (DNA) methylation, loss of heterozygosity (LOH) of chromosomes 1p and 19q, loss of heterozygosity 10q, isocitrate dehydrogenase (IDH) mutations, epidermal growth factor receptor (EGFR), epidermal growth factor, latrophilin, and 7 transmembrane domain-containing protein 1 on chromosome 1 (ELTD1), vascular endothelial growth factor (VEGF), tumor suppressor protein p53, phosphatase and tensin homolog (PTEN), the p16INK4a gene, cytochrome c oxidase (CcO), phospholipid metabolites, telomerase messenger expression (hTERT messenger ribonucleic acid [mRNA]), microRNAs (miRNAs), cancer stem cell markers and imaging modalities as potential biomarkers are discussed. Inclusion of emerging biomarkers in prospective clinical trials is warranted in an effort toward more effective personalized therapy in the future. 13. Biomarkers for anorexia nervosa DEFF Research Database (Denmark) Sjøgren, Jan Magnus 2017-01-01 Biomarkers for anorexia nervosa (AN), which reflect the pathophysiology and relate to the aetiology of the disease, are warranted and could bring us one step closer to targeted treatment of AN. Some leads may be found in the biochemistry, which is often disturbed in AN, although normalization... 14. Neuroimaging Biomarkers for Psychosis Science.gov (United States) Hager, Brandon M. 2015-01-01 Background Biomarkers provide clinicians with a predictable model for the diagnosis, treatment and follow-up of medical ailments. Psychiatry has lagged behind other areas of medicine in the identification of biomarkers for clinical diagnosis and treatment. In this review, we investigated the current state of neuroimaging as it pertains to biomarkers for psychosis. Methods We reviewed systematic reviews and meta-analyses of the structural (sMRI), functional (fMRI), diffusion-tensor (DTI), positron emission tomography (PET) and spectroscopy (MRS) studies of subjects at risk or those with an established schizophrenic illness. Only articles reporting effect sizes and confidence intervals were included in an assessment of robustness. Results Out of the identified meta-analyses and systematic reviews, 21 studies met the inclusion criteria for assessment. There were 13 sMRI, 4 PET, 3 MRS, and 1 DTI studies. The search terms included in the current review encompassed familial high risk (FHR), clinical high risk (CHR), first episode (FES), chronic schizophrenia (CSZ), schizophrenia spectrum disorders (SSD), and healthy controls (HC). Conclusions Currently, few neuroimaging biomarkers can be considered ready for diagnostic use in patients with psychosis. At least in part, this may be related to the challenges inherent in the current symptom-based approach to classifying these disorders. While available studies suggest a possible value of imaging biomarkers for monitoring disease progression, more systematic research is needed. To date, the best value of imaging data in psychoses has been to shed light on questions of disease pathophysiology, especially through the characterization of endophenotypes. PMID:25883891 15. Metabolomics in diagnosis and biomarker discovery of colorectal cancer. Science.gov (United States) Zhang, Aihua; Sun, Hui; Yan, Guangli; Wang, Ping; Han, Ying; Wang, Xijun 2014-04-01 Colorectal cancer (CRC), a major public health concern, is the second leading cause of cancer death in developed countries.
There is a need for better preventive strategies to improve patient outcomes, which are substantially influenced by cancer stage at the time of diagnosis. Patients with early-stage colorectal cancer have significantly higher 5-year survival rates than patients diagnosed at a late stage. Although traditional colonoscopy remains an effective means to diagnose CRC, this approach generally suffers from poor patient compliance. Thus, it is important to develop more effective methods for early diagnosis of this disease process; there is also an urgent need for biomarkers to diagnose CRC, assess disease severity, and prognosticate its course. The increasing availability of high-throughput methodologies opens up new possibilities for screening potential biomarker candidates. Fortunately, metabolomics, the study of all metabolites produced in the body and considered most closely related to a patient's phenotype, can provide clinically useful biomarkers for CRC and may now open new avenues for diagnostics. It has a largely untapped potential in the field of oncology through the analysis of the cancer metabolome to identify marker metabolites, defined here as surrogate indicators of physiological or pathophysiological states. In this review we take a closer look at the metabolomics used within the field of colorectal cancer. Further, we highlight the most interesting metabolomics publications and discuss these in detail; additional studies are mentioned as a reference for the interested reader. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved. 16. 78 FR 73199 - Draft Guidance for Industry on Bioequivalence Studies With Pharmacokinetic Endpoints for Drugs... Science.gov (United States) 2013-12-05 ... HUMAN SERVICES Food and Drug Administration Draft Guidance for Industry on Bioequivalence Studies With Pharmacokinetic Endpoints for Drugs Submitted Under an Abbreviated New Drug Application; Availability AGENCY: Food... guidances to industry on Bioavailability and Bioequivalence Studies for Orally Administered Drug... 17. A survey of immunohistochemical biomarkers for basal-like breast cancer against a gene expression profile gold standard. Science.gov (United States) Won, Jennifer R; Gao, Dongxia; Chow, Christine; Cheng, Jinjin; Lau, Sherman Y H; Ellis, Matthew J; Perou, Charles M; Bernard, Philip S; Nielsen, Torsten O 2013-11-01 Gene expression profiling of breast cancer delineates a particularly aggressive subtype referred to as 'basal-like', which comprises ∼15% of all breast cancers, afflicts younger women and is refractory to endocrine and anti-HER2 therapies. Immunohistochemical surrogate definitions for basal-like breast cancer, such as the clinical ER/PR/HER2 triple-negative phenotype and models incorporating positive expression for CK5 (CK5/6) and/or EGFR, are heavily cited. However, many additional biomarkers for basal-like breast cancer have been described in the literature. A parallel comparison of 46 proposed immunohistochemical biomarkers of basal-like breast cancer was performed against a gene expression profile gold standard on a tissue microarray containing 42 basal-like and 80 non-basal-like breast cancer cases. Ki67 and PPH3 were the most sensitive biomarkers (both 92%) positively expressed in the basal-like subtype, whereas CK14, IMP3 and NGFR were the most specific (100%).
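The immunohistochemistry survey above reports each candidate marker's sensitivity, specificity, and odds ratio against the expression-profile gold standard. For reference, the arithmetic from a 2x2 confusion table is sketched below; the counts are illustrative values chosen only to roughly mimic a strongly associated marker on a 42-versus-80 case array, not the study's actual data.

```python
# Illustrative 2x2 table against the gold standard (hypothetical counts).
tp, fn = 26, 16   # basal-like cases: marker positive / marker negative
fp, tn = 1, 79    # non-basal-like cases: marker positive / marker negative

sensitivity = tp / (tp + fn)          # fraction of basal-like cases detected
specificity = tn / (tn + fp)          # fraction of non-basal-like cases excluded
odds_ratio = (tp * tn) / (fp * fn)    # cross-product ratio of the table

print(f"sensitivity={sensitivity:.2f}, "
      f"specificity={specificity:.2f}, OR={odds_ratio:.0f}")
```

A high odds ratio, as reported for INPP4B in the continuation below, reflects exactly this cross-product: few false positives and a usable detection rate jointly drive the association measure up.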
Among the biomarkers surveyed, loss of INPP4B (a negative regulator of phosphatidylinositol signaling) was 61% sensitive and 99% specific, with the highest odds ratio (OR) at 108, indicating the strongest association with basal-like breast cancer. Expression of nestin, a common marker of neural progenitor cells that is also associated with the triple-negative/basal-like phenotype and poor breast cancer prognosis, possessed the second highest OR at 29 among the 46 biomarkers surveyed, as well as 54% sensitivity and 96% specificity. As a positively expressed biomarker, nestin possesses technical advantages over INPP4B that make it a more suitable biomarker for identification of basal-like breast cancer. The comprehensive immunohistochemical biomarker survey presented in this study is a necessary step for determining an optimized surrogate immunopanel that best defines basal-like breast cancer in a practical and clinically accessible way. 18. Finite-size effects and the search for the critical endpoint in heavy ion collisions CERN Document Server Palhares, Leticia F; Kodama, Takeshi 2009-01-01 We discuss how the finiteness of the system created in a heavy-ion collision affects possible signatures of the QCD critical endpoint. We show sizable modifications of the chiral phase diagram at the volume scales typically encountered in current heavy-ion collisions and address the applicability of finite-size scaling as a tool in the experimental search for the critical endpoint. 19. Autonomous Sub-Pixel Satellite Track Endpoint Determination for Space Based Images Energy Technology Data Exchange (ETDEWEB) Simms, L M 2011-03-07 An algorithm for determining satellite track endpoints with sub-pixel resolution in space-based images is presented. The algorithm allows for significant curvature in the imaged track due to rotation of the spacecraft capturing the image. The motivation behind the sub-pixel endpoint determination is first presented, followed by a description of the methodology used. Results from running the algorithm on real ground-based and simulated space-based images are shown to highlight its effectiveness. 20. SpEnD: Linked Data SPARQL Endpoints Discovery Using Search Engines OpenAIRE Yumusak, Semih; Dogdu, Erdogan; KODAZ, Halife; Kamilaris, Andreas 2016-01-01 In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system, named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a "search keyword" discovery process for finding relevant keywords for the linked data domain and specifically SPARQL endpoints. Then, these search keywords are utilized to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). By using this ... 1. Development of Pain Endpoint Models for Use in Prostate Cancer Clinical Trials and Drug Approval Science.gov (United States) 2015-10-01 Award Number: W81XWH-11-1-0639. Title: Development of Pain Endpoint Models for Use in Prostate Cancer Clinical Trials and Drug Approval. Reporting period: SEP 2014 – 29 SEP 2015. ... standard methods for measuring pain palliation and pain progression in prostate cancer clinical trials that are feasible, methodologically rigorous, and 2.
Surrogate model based iterative ensemble smoother for subsurface flow data assimilation Science.gov (United States) Chang, Haibin; Liao, Qinzhuo; Zhang, Dongxiao 2017-02-01 Subsurface geological formation properties often involve some degree of uncertainty. Thus, for most conditions, uncertainty quantification and data assimilation are necessary for predicting subsurface flow. The surrogate-model-based method is one common type of uncertainty quantification method, in which a surrogate model is constructed to approximate the relationship between model output and model input. Owing to its predictive ability, the constructed surrogate model can then be utilized for performing data assimilation. In this work, we develop an algorithm for implementing an iterative ensemble smoother (ES) using a surrogate model. We first derive an iterative ES scheme using a standard routine. In order to utilize surrogate models, we then borrow the idea of Chen and Oliver (2013) to modify the Hessian, and further develop an independent-parameter-based iterative ES formula. Finally, we establish the algorithm for the implementation of the iterative ES using surrogate models. Two surrogate models, the PCE surrogate and the interpolation surrogate, are introduced for illustration. The performance of the proposed algorithm is tested on synthetic cases. The results show that satisfactory data assimilation results can be obtained by using surrogate models that have sufficient accuracy. 3. Endpoints in adjuvant treatment trials: a systematic review of the literature in colon cancer and proposed definitions for future trials. NARCIS (Netherlands) Punt, C.J.A.; Buyse, M.; Kohne, C.H.; Hohenberger, P.; Labianca, R.; Schmoll, H.J.; Pahlman, L.; Sobrero, A.; Douillard, J.Y. 2007-01-01 Disease-free survival is increasingly being used as the primary endpoint of most trials testing adjuvant treatments in cancer. Other frequently used endpoints include overall survival, recurrence-free survival, and time to recurrence. These endpoints are often defined differently in different trials. 4. Physiological and lavage fluid cytological and biochemical endpoints of toxicity in the rat Energy Technology Data Exchange (ETDEWEB) Lehnert, B.E. 1992-12-31 Exposure of the respiratory tract to toxic materials can result in a variety of physiologic disturbances that can serve as endpoints of toxicity. In addition to a brief review of commonly assessed physiologic endpoints, attention is given in the first component of this report to the use of both nose-breathing and mouth-breathing rats in toxicity studies that involve measurements of ventilatory functional changes in response to test atmospheres. Additionally, the usefulness of maximum oxygen consumption, or VO2max, as a physiologic endpoint of toxicity that uses exercising rats after exposure to test atmospheres is described, along with an introduction to post-exposure exercise as an important behavioral activity that can markedly impact the severity of acute lung injury caused by pneumoedematogenic materials. The second component of this report focuses on bronchoalveolar lavage and the cytological and biochemical endpoints that can be assessed in investigations of the toxicities of test materials.
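The data-assimilation record above runs an iterative ensemble smoother on a surrogate instead of the full flow simulator. A single, non-iterative ensemble-smoother update is sketched below to show the mechanics; the linear surrogate_forward map, the observation values, and the noise levels are invented, and a real implementation would iterate such updates with damping or inflation, as the record describes.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, n_par, n_obs = 200, 3, 5

# Prior ensemble of uncertain model parameters (e.g., log-permeabilities).
m = rng.normal(size=(n_ens, n_par))

# Cheap surrogate of the forward model g(m); a fixed linear map stands in
# for the PCE/interpolation surrogates of the record (an assumption).
G = rng.normal(size=(n_obs, n_par))
def surrogate_forward(m):
    return m @ G.T

d_obs = np.array([0.5, -1.0, 0.2, 0.8, -0.3])   # observed data (invented)
sigma = 0.1                                      # observation-error std dev

# One Kalman-type ensemble-smoother update using ensemble covariances.
d_pred = surrogate_forward(m)
C_md = np.cov(m.T, d_pred.T)[:n_par, n_par:]     # cov(parameters, predictions)
C_dd = np.cov(d_pred.T) + sigma**2 * np.eye(n_obs)
K = C_md @ np.linalg.inv(C_dd)                   # gain matrix
d_pert = d_obs + sigma * rng.normal(size=(n_ens, n_obs))
m_post = m + (d_pert - d_pred) @ K.T

print("prior means    :", m.mean(axis=0))
print("posterior means:", m_post.mean(axis=0))
```

Because every forward evaluation hits the surrogate rather than the simulator, the ensemble can be made large enough for stable covariance estimates at negligible cost, which is the practical payoff the record reports.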
As will be shown herein, some of the biochemical endpoints of toxicity in particular can sensitively detect subtle injury to the lower respiratory tract that may escape detection by changes in some other conventional endpoints of toxicity, including lung gravimetric increases and histopathological alterations. 6. Quality of Documentation as a Surrogate Marker for Awareness and Training Effectiveness of PHTLS-Courses. Part of the Prospective Longitudinal Mixed-Methods EPPTC-Trial. Science.gov (United States) Häske, David; Beckers, Stefan K; Hofmann, Marzellus; Lefering, Rolf; Gliwitzky, Bernhard; Wölfl, Christoph C; Grützner, Paul; Stöckle, Ulrich; Dieroff, Marc; Münzberg, Matthias 2017-01-01 Care for severely injured patients requires multidisciplinary teamwork. A decrease in the number of accident victims ultimately affects providers' routine and skills. PHTLS ("Pre-Hospital Trauma Life Support") courses are established two-day courses for medical and non-medical rescue service personnel, aimed at improving the pre-hospital care of trauma patients worldwide. The study aims to examine the quality of documentation before and after PHTLS courses as a surrogate endpoint for training effectiveness and awareness. This was a prospective pre-post intervention trial and was part of the mixed-methods longitudinal EPPTC (Effect of Paramedic Training on Pre-Hospital Trauma Care) study, evaluating subjective and objective changes among participants and in real patient care as a result of PHTLS courses. The courses provide an overview of the SAMPLE approach to obtaining anamnestic information, within which "Allergies," "Medication," and "Patient history" (AMP), among others, are considered relevant to patient safety. Documentation itself is not the focus of the course. In total, 320 protocols were analyzed before and after the training. Protocols completed after the PHTLS course showed a significant increase in information content.
In summary, we showed that PHTLS training improves documentation quality, which we used as a surrogate endpoint for learning effectiveness and awareness. In this regard, we demonstrated that participants use certain parts of the training in real life, thereby suggesting that the learning methods of PHTLS training are effective. These results, however, do not indicate whether patient care has changed. 7. Uncertainty quantification of squeal instability via surrogate modelling Science.gov (United States) Nobari, Amir; Ouyang, Huajiang; Bannister, Paul 2015-08-01 One of the major issues that car manufacturers are facing is the noise and vibration of brake systems. Of the different sorts of noise and vibration a brake system may generate, squeal, an irritating high-frequency noise, costs the manufacturers significantly. Despite considerable research that has been conducted on brake squeal, the root cause of squeal is still not fully understood. The most common assumption, however, is mode-coupling. Complex eigenvalue analysis is the most widely used approach to the analysis of brake squeal problems. One of the major drawbacks of this technique, nevertheless, is that the effects of variability and uncertainty are not included in the results. Uncertainty and variability are two inseparable parts of any brake system. Uncertainty is mainly caused by friction, contact, wear and thermal effects, while variability mostly stems from the manufacturing process, material properties and component geometries. Evaluating the effects of uncertainty and variability in the complex eigenvalue analysis improves the predictability of noise propensity and helps produce a more robust design. The biggest hurdle in the uncertainty analysis of brake systems is the computational cost and time. Most uncertainty analysis techniques rely on the results of many deterministic analyses. A full finite element model of a brake system typically consists of millions of degrees-of-freedom and many load cases. The running time of such models is so long that the automotive industry is reluctant to do many deterministic analyses. This paper, instead, proposes an efficient method of uncertainty propagation via surrogate modelling. A surrogate model of a brake system is constructed in order to reproduce the outputs of the large-scale finite element model and overcome the issue of computational workloads. The probability distribution of the real part of an unstable mode can then be obtained by using the surrogate model with a massive saving of 8. Compaction behavior of surrogate degraded emplaced WIPP waste. Energy Technology Data Exchange (ETDEWEB) Broome, Scott Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bronowski, David R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kuthakun, Souvanny James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Herrick, Courtney Grant [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pfeifle, Thomas W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)] 2014-03-01 The present study focuses on laboratory testing of surrogate waste materials. The surrogate wastes correspond to a conservative estimate of degraded Waste Isolation Pilot Plant (WIPP) containers and TRU waste materials at the end of the 10,000 year regulatory period. Testing consists of hydrostatic, triaxial, and uniaxial strain tests performed on surrogate waste recipes that were previously developed by Hansen et al. (1997).
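The squeal record above motivates surrogate modelling precisely because Monte Carlo uncertainty propagation is unaffordable on a multi-million-DOF finite element model, yet trivial on a fitted surrogate. A hypothetical sketch of that final propagation step follows; the polynomial surrogate and the input distributions are invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical surrogate for the real part of an unstable brake mode as a
# function of friction coefficient and contact stiffness, assumed to have
# been fitted offline to a handful of complex-eigenvalue FE analyses.
def surrogate_re_eig(mu, k):
    return 120.0 * (mu - 0.35) + 0.8 * (k - 50.0) + 200.0 * (mu - 0.35)**2

# Monte Carlo propagation of input uncertainty through the cheap surrogate.
mu = rng.normal(0.38, 0.03, size=100_000)   # friction coefficient
k = rng.normal(50.0, 4.0, size=100_000)     # contact stiffness (assumed units)
re = surrogate_re_eig(mu, k)

# A positive real part flags a squeal-prone unstable mode.
print("P(Re > 0) ~", np.mean(re > 0.0))
print("mean, std of Re:", re.mean(), re.std())
```

One hundred thousand surrogate evaluations run in milliseconds, whereas the same sample on the full FE model would require as many complete complex-eigenvalue analyses, which is the saving the record's truncated final sentence refers to.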
These recipes can be divided into materials that simulate 50% and 100% degraded waste by weight. The percent degradation indicates the anticipated amount of iron corrosion, as well as the decomposition of cellulosics, plastics, and rubbers (CPR). Axial, lateral, and volumetric strain and axial, lateral, and pore stress measurements were made. Two unique testing techniques were developed during the course of the experimental program. The first involves the use of dilatometry to measure sample volumetric strain under a hydrostatic condition. Bulk moduli of the samples measured using this technique were consistent with those measured using more conventional methods. The second technique involved performing triaxial tests under lateral strain control. By limiting the lateral strain to zero through control of the applied confining pressure while loading the specimen axially in compression, one can maintain a right-circular cylindrical geometry even under large deformations. This technique is preferred over standard triaxial testing methods, which result in inhomogeneous deformation or "barreling." Manifestations of the inhomogeneous deformation included non-uniform stress states, as well as unrealistic Poisson's ratios (> 0.5) or ratios that vary significantly along the length of the specimen. Zero-lateral-strain controlled tests yield a more uniform stress state, and admissible and uniform values of Poisson's ratio. 9. Biopolicies and biotechnologies: reflections on surrogate maternity in India Directory of Open Access Journals (Sweden) 2010-07-01 Full Text Available This article explores the impact of biotechnology, particularly on assisted reproductive technologies such as surrogate motherhood. The study is based on interviews and field work conducted in the city of Hyderabad in India within the frame of the seminar on “Research Methodology” given by Dr. Rohan D´Souza at the Centre for Studies in Science Policy at the Jawaharlal Nehru University in India. The theoretical framework of this analysis focuses on exploring concepts such as cyborg (Haraway, 1991) and subaltern subject (Spivak, 1998) in the context of biotechnological production in India. 10. Surrogate based approaches to parameter inference in ocean models KAUST Repository Knio, Omar 2016-01-06 This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer wind drag, bottom drag, and internal mixing coefficients. 11. Multi-level assessment of chronic toxicity of estuarine sediments with the amphipod Gammarus locusta: II. Organism and population-level endpoints. Science.gov (United States) Costa, Filipe O; Neuparth, Teresa; Correia, Ana D; Costa, Maria Helena 2005-07-01 This study aimed to test the performance of the amphipod Gammarus locusta (L.) in chronic sediment toxicity tests.
It constitutes part of a multi-level assessment of chronic toxicity of estuarine sediments, integrating organism- and population-level endpoints with biochemical marker responses. Here we account for organism- and population-level effects, while biomarker responses were reported in a companion article. Five moderately contaminated sediments from the Sado and Tagus estuaries were tested, comprising 3 muddy and 2 sandy sediments. These sediments either did not show acute toxicity or were diluted with control sediment as much as required to remove acute toxicity. Subsequent chronic tests consisted of 28-day exposures with survival, individual growth and reproductive traits as endpoints. Two of the muddy sediments induced higher growth rates in the amphipods and improved reproductive traits. This was understood to be a consequence of the amount of organic matter in the sediment, which was nutritionally beneficial to the amphipods while concurrently decreasing contaminant bioavailability. Biomarker responses did not reveal toxicant-induced stress in amphipods exposed to these sediments. One of the sandy sediments was acutely toxic at 50% dilution, but in contrast stimulated amphipod growth when diluted 75%. This was presumed to be an indication of a hormetic response. Finally, the two remaining contaminated sediments showed pronounced chronic toxicity, affecting survival and reproduction. The sex ratio of survivors was highly biased towards females, and offspring production was severely impaired. The particulars of the responses of this amphipod were examined, as well as the strengths versus limitations of the sediment test. This study illustrates the utility of this chronic test for toxicity assessment of contaminated estuarine sediments, with potential application all along Atlantic Europe. 12. Lung Cancer Biomarkers. Science.gov (United States) Villalobos, Pamela; Wistuba, Ignacio I 2017-02-01 The molecular characterization of lung cancer has changed the classification and treatment of these tumors, becoming an essential component of pathologic diagnosis and oncologic therapy decisions. Through the recognition of novel biomarkers, such as epidermal growth factor receptor mutations and anaplastic lymphoma kinase translocations, it is possible to identify subsets of patients who benefit from targeted molecular therapies. The success of targeted anticancer therapies and new immunotherapy approaches has created a new paradigm of personalized therapy and has led to accelerated development of new drugs for lung cancer treatment. This article focuses on clinically relevant cancer biomarkers as targets for therapy and potential new targets for drug development. Copyright © 2016 Elsevier Inc. All rights reserved. 13. [New effect biomarkers]. Science.gov (United States) De Palma, G; Corradi, M; Mutti, A; Baccarelli, A; Pesatori, A; Bertazzi, P A 2004-01-01 The major goals for researchers developing biomarkers of effect are the development and validation of biomarkers that permit the prediction of disease risk in individuals and groups. One important objective is to prevent human cancer. This article reviews the most recent analytical methodologies, validation studies and field trials, together with auditing and quality assessment of the necessary data on scientific grounds. Consideration is given to new developments in the relatively young field of toxicogenomics, possibly leading to the identification of early changes that may lead to both cancer and non-cancer endpoints.
Although the creation and development of reliable databases integrating information from genomic and proteomic research programmes should contribute to the prediction of risks and the prevention of diseases related to chemical exposure, the most promising future application of these technologies lies in the molecular diagnosis of diseases whose nosography will probably be redefined. 14. Diesel Surrogate Fuels for Engine Testing and Chemical-Kinetic Modeling: Compositions and Properties Science.gov (United States) Mueller, Charles J.; Cannella, William J.; Bays, J. Timothy; Bruno, Thomas J.; DeFabio, Kathy; Dettman, Heather D.; Gieleciak, Rafal M.; Huber, Marcia L.; Kweon, Chol-Bum; McConnell, Steven S.; Pitz, William J.; Ratcliff, Matthew A. 2016-01-01 The primary objectives of this work were to formulate, blend, and characterize a set of four ultralow-sulfur diesel surrogate fuels in quantities sufficient to enable their study in single-cylinder-engine and combustion-vessel experiments. The surrogate fuels feature increasing levels of compositional accuracy (i.e., increasing exactness in matching hydrocarbon structural characteristics) relative to the single target diesel fuel upon which the surrogate fuels are based. This approach was taken to assist in determining the minimum level of surrogate-fuel compositional accuracy that is required to adequately emulate the performance characteristics of the target fuel under different combustion modes. For each of the four surrogate fuels, an approximately 30 L batch was blended, and a number of the physical and chemical properties were measured. This work documents the surrogate-fuel creation process and the results of the property measurements. PMID:27330248 15. Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models CERN Document Server Blackman, Jonathan; Galley, Chad R; Szilagyi, Bela; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A 2015-01-01 Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from 1 to 10 and durations corresponding to about 15 orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were not used for the surrogate's training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second dependin... 16. Biomarkers of Selenium Status Directory of Open Access Journals (Sweden) Gerald F. Combs, Jr. 2015-03-01 Full Text Available The essential trace element selenium (Se) has multiple biological activities, which depend on the level of Se intake. Relatively low Se intakes determine the expression of selenoenzymes in which it serves as an essential constituent. Higher intakes have been shown to have anti-tumorigenic potential, and very high Se intakes can produce adverse effects.
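The numerical-relativity record above builds its waveform surrogate with reduced order modeling. The toy sketch below shows the core mechanic under strong simplifying assumptions: an SVD of a set of training waveforms yields a small basis, and a new parameter value is handled by interpolating the basis coefficients. The damped-chirp waveform family is invented for illustration and bears no relation to actual NR waveforms.

```python
import numpy as np

# Toy "training waveforms": damped chirps parameterized by a scalar q
# (standing in, loosely, for the mass ratio).
t = np.linspace(0.0, 1.0, 500)
def waveform(q):
    return np.exp(-2.0 * t) * np.sin(2 * np.pi * (10 + 5 * q) * t**2)

qs = np.linspace(1.0, 10.0, 40)
training = np.array([waveform(q) for q in qs])      # shape (40, 500)

# Reduced basis via SVD: a handful of modes capture the whole training set.
U, s, Vt = np.linalg.svd(training, full_matrices=False)
rank = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
basis = Vt[:rank]                                   # (rank, 500)
coeffs = training @ basis.T                         # (40, rank)

# Surrogate evaluation at an unseen parameter: interpolate each coefficient
# in q, then reconstruct the waveform from the small basis.
q_new = 3.37
c_new = np.array([np.interp(q_new, qs, coeffs[:, j]) for j in range(rank)])
h_new = c_new @ basis

print("basis size:", rank,
      " max abs error:", np.max(np.abs(h_new - waveform(q_new))))
```

Evaluating the surrogate costs a few small dot products, which is why the record can claim millisecond-to-second evaluation of a waveform that otherwise takes months of supercomputing to simulate.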
This hierarchy of biological activities calls for biomarkers informative at different levels of Se exposure. Some Se-biomarkers, such as the selenoproteins and particularly GPX3 and SEPP1, provide information about function directly and are of value in identifying nutritional Se deficiency and tracking responses of deficient individuals to Se-treatment. They are useful under conditions of Se intake within the range of regulated selenoprotein expression, e.g., for humans <55 μg/day and for animals <20 μg/kg diet. Other Se-biomarkers provide information indirectly through inferences based on Se levels of foods, tissues, urine or feces. They can indicate the likelihood of deficiency or adverse effects, but they do not provide direct evidence of either condition. Their value is in providing information about Se status over a wide range of Se intake, particularly from food forms. There is a need for additional Se biomarkers, particularly for assessing Se status in non-deficient individuals for whom the prospects of cancer risk reduction and adverse effects risk are the primary health considerations. This would include determining whether supranutritional intakes of Se may be required for maximal selenoprotein expression in immune surveillance cells. It would also include developing methods to determine low molecular weight Se-metabolites, i.e., selenoamino acids and methylated Se-metabolites, which to date have not been detectable in biological specimens. Recent analytical advances using tandem liquid chromatography-mass spectrometry suggest prospects for detecting these metabolites. 17. Progression-free survival, post-progression survival, and tumor response as surrogate markers for overall survival in patients with extensive small cell lung cancer Directory of Open Access Journals (Sweden) Hisao Imai 2015-01-01 Full Text Available Objectives: The effects of first-line chemotherapy on overall survival (OS) might be confounded by subsequent therapies in patients with small cell lung cancer (SCLC). We examined whether progression-free survival (PFS), post-progression survival (PPS), and tumor response could be valid surrogate endpoints for OS after first-line chemotherapies for patients with extensive SCLC, using individual-level data. Methods: Between September 2002 and November 2012, we analyzed 49 cases of patients with extensive SCLC who were treated with cisplatin and irinotecan as first-line chemotherapy. The relationships of PFS, PPS, and tumor response with OS were analyzed at the individual level. Results: Spearman rank correlation analysis and linear regression analysis showed that PPS was strongly correlated with OS (r = 0.97, p < 0.05, R² = 0.94), PFS was moderately correlated with OS (r = 0.58, p < 0.05, R² = 0.24), and tumor shrinkage was weakly correlated with OS (r = 0.37, p < 0.05, R² = 0.13). The best response to second-line treatment and the number of regimens employed after progression beyond first-line chemotherapy were both significantly associated with PPS (p ≤ 0.05). Conclusion: PPS is a potential surrogate for OS in patients with extensive SCLC. Our findings also suggest that subsequent treatment after disease progression following first-line chemotherapy may greatly influence OS. 18. Biomarkers of Ovarian Reserve Directory of Open Access Journals (Sweden) William E. Roudebush 2008-01-01 Full Text Available The primary function of the female ovary is the production of a mature and viable oocyte capable of fertilization and subsequent embryo development and implantation.
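The surrogate-endpoint record above validates PFS and PPS against overall survival with Spearman rank correlation and linear regression at the individual patient level. A minimal sketch of that analysis follows, with hypothetical per-patient survival times (not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient survival data in months (illustrative only).
os_m = np.array([8, 14, 22, 6, 18, 11, 27, 9, 16, 13])   # overall survival
pfs_m = np.array([4, 6, 9, 3, 7, 5, 11, 4, 6, 5])        # progression-free
pps_m = os_m - pfs_m                                      # post-progression

for name, x in [("PFS", pfs_m), ("PPS", pps_m)]:
    rho, p = stats.spearmanr(x, os_m)            # rank correlation with OS
    slope, intercept, r, *_ = stats.linregress(x, os_m)
    print(f"{name}: Spearman r={rho:.2f} (p={p:.3f}), R^2={r**2:.2f}")
```

A high correlation for PPS, as the record reports, is partly mechanical (PPS is a component of OS), which is why the authors also examine what drives PPS, such as second-line response and the number of subsequent regimens.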
At birth, the ovary contains a finite number of oocytes available for folliculogenesis. This finite number of available oocytes is termed “the ovarian reserve”. The determination of ovarian reserve is important in the assessment and treatment of infertility. As the ovary ages, the ovarian reserve will decline. Infertility affects approximately 15-20% of reproductive-aged couples. The most commonly used biomarker assay to assess ovarian reserve is the measurement of follicle stimulating hormone (FSH) on day 3 of the menstrual cycle. However, antimüllerian hormone and inhibin-B are other biomarkers of ovarian reserve that are gaining in popularity since they provide direct determination of ovarian status, whereas day 3 FSH is an indirect measurement. This review examines the physical tools and the hormone biomarkers used to evaluate ovarian reserve. 19. Combining endangered plants and animals as surrogates to identify priority conservation areas in Yunnan, China OpenAIRE Feiling Yang; Jinming Hu; Ruidong Wu 2016-01-01 Suitable surrogates are critical for identifying optimal priority conservation areas (PCAs) to protect regional biodiversity. This study explored the efficiency of using endangered plants and animals as surrogates for identifying PCAs at the county level in Yunnan, southwest China. We ran the Dobson algorithm under three surrogate scenarios at 75% and 100% conservation levels and identified four types of PCAs. Assessment of the protection efficiencies of the four types of PCAs showed that end... 20. Surrogate Mobility and Orientation Affect the Early Neurobehavioral Development of Infant Rhesus Macaques (Macaca mulatta) OpenAIRE Amanda M Dettmer; Ruggerio, Angela M.; Novak, Melinda A.; Meyer, Jerrold S.; Suomi, Stephen J. 2008-01-01 A biological mother’s movement appears necessary for optimal development in infant monkeys. However, nursery-reared monkeys are typically provided with inanimate surrogate mothers that move very little. The purpose of this study was to evaluate the effects of a novel, highly mobile surrogate mother on motor development, exploration, and reactions to novelty. Six infant rhesus macaques (Macaca mulatta) were reared on mobile hanging surrogates (MS) and compared to six infants reared on standard... 1. Potential biomarkers for monitoring therapeutic response in patients with CIDP. Science.gov (United States) Dalakas, Marinos C 2011-06-01 Although the majority of patients with CIDP variably respond to intravenous immunoglobulin (IVIg), steroids, or plasmapheresis, 30% of them are unresponsive or insufficiently responsive to these therapies. The heterogeneity in therapeutic responses necessitates a search for biomarkers to determine the most suitable therapy from the outset and explore the best means for monitoring disease activity. The ICE study, which led to the first FDA-approved indication for IVIg in CIDP, has shown that maintenance therapy prevents relapses and axonal loss. In this paper, the multiple actions exerted by IVIg on the immunoregulatory network of CIDP are discussed as potential predictors of response to therapies. Emerging molecular markers, promising in distinguishing responders to IVIg from non-responders, include modulation of FcγRIIB receptors on monocytes and genome-wide transcription studies related to inflammatory mediators, demyelination, or axonal degeneration. Skin biopsies, peripheral blood lymphocytes, CSF, and sera are accessible surrogate tissues for further exploring these molecules during therapies. 2.
Circulating DNA as Potential Biomarker for Cancer Individualized Therapy Institute of Scientific and Technical Information of China (English) Yu Shaorong; Liu Baorui; Lu Jianwei; Feng Jifeng 2013-01-01 Cancer individualized therapy often requires gene mutation analysis of tumor tissue. However, tumor tissue is not always available in clinical practice, particularly from patients with refractory or recurrent disease. Even if patients have sufficient tumor tissue for detection, as the cancer develops, the gene status and drug sensitivity of tumor tissues can also change. Hence, screening mutations from primary tumor tissues becomes useless; it is necessary to find a surrogate tumor tissue for individualized gene screening. Circulating DNA is rapidly cleared from blood, so it can provide real-time information on the released fragments and make real-time detection possible. Therefore, it is expected that circulating DNA could be a potential tumor biomarker for cancer individualized therapy. This review focuses on the biology and clinical utility of circulating DNA, with an emphasis on gene mutation detection. In addition, the current status and possible directions of this research area are summarized and discussed objectively. 3. IDBD: infectious disease biomarker database. Science.gov (United States) Yang, In Seok; Ryu, Chunsun; Cho, Ki Joon; Kim, Jin Kwang; Ong, Swee Hoe; Mitchell, Wayne P; Kim, Bong Su; Oh, Hee-Bok; Kim, Kyung Hyun 2008-01-01 Biomarkers enable early diagnosis, guide molecularly targeted therapy and monitor the activity and therapeutic responses across a variety of diseases. Despite intensified interest and research, however, the overall rate of development of novel biomarkers has been falling. Moreover, no solution is yet available that efficiently retrieves and processes biomarker information pertaining to infectious diseases. Infectious Disease Biomarker Database (IDBD) is one of the first efforts to build an easily accessible and comprehensive literature-derived database covering known infectious disease biomarkers. IDBD is a community annotation database, utilizing collaborative Web 2.0 features, providing a convenient user interface to input and revise data online. It allows users to link infectious diseases or pathogens to protein, gene or carbohydrate biomarkers through the use of search tools. It supports various types of data searches and application tools to analyze sequence and structure features of potential and validated biomarkers. Currently, IDBD integrates 611 biomarkers for 66 infectious diseases and 70 pathogens. It is publicly accessible at http://biomarker.cdc.go.kr and http://biomarker.korea.ac.kr. 4. Evaluation of the use of surrogate Laminaria digitata in eco-hydraulic laboratory experiments Institute of Scientific and Technical Information of China (English) PAUL Maike; HENRY Pierre-Yves T 2014-01-01 Inert surrogates can avoid husbandry and adaptation problems of live vegetation in laboratories. Surrogates are generally used for experiments on vegetation-hydrodynamics interactions, but it is unclear how well they replicate field conditions. Here, surrogates for the brown macroalga Laminaria digitata were developed to reproduce its hydraulic roughness. Plant shape, stiffness and buoyancy of L. digitata were evaluated and compared to the properties of inert materials. Different surrogate materials and shapes were exposed to unidirectional flow.
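As a concrete aside, the hydraulic loading that such surrogates must reproduce is usually evaluated with a quadratic drag law. The sketch below (Python) is a minimal illustration, not the study's method; the drag coefficient, density, frontal area and the reconfiguration exponent are all illustrative assumptions.

def drag_force(u, frontal_area, cd=1.0, rho=1000.0, gamma=-0.5):
    """Drag force [N] on a flexible blade at flow speed u [m/s].

    gamma < 0 crudely mimics reconfiguration: the effective drag
    coefficient drops as the blade streamlines at higher speeds.
    """
    cd_eff = cd * max(u, 0.05) ** gamma  # clamp to avoid blow-up near u = 0
    return 0.5 * rho * cd_eff * frontal_area * u ** 2

for u in (0.1, 0.3, 0.6):  # m/s, a low-flow range
    print(f"U = {u:.1f} m/s -> F = {drag_force(u, 0.02):.3f} N")

Comparing such force estimates for a live plant and a candidate surrogate material, over the same velocity range, is one simple way to judge whether the surrogate reproduces the hydraulic roughness.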
It is concluded that buoyancy is an important factor in low-flow conditions and that a basic shape may be sufficient to model complex-shaped plants, since both result in the same streamlined shape. 5. Detailed chemical kinetic oxidation mechanism for a biodiesel surrogate Energy Technology Data Exchange (ETDEWEB) Herbinet, O; Pitz, W J; Westbrook, C K 2007-09-20 A detailed chemical kinetic mechanism has been developed and used to study the oxidation of methyl decanoate, a surrogate for biodiesel fuels. This model has been built by following the rules established by Curran et al. for the oxidation of n-heptane and it includes all the reactions known to be pertinent to both low and high temperatures. Computed results have been compared with methyl decanoate experiments in an engine and oxidation of rapeseed oil methyl esters in a jet stirred reactor. An important feature of this mechanism is its ability to reproduce the early formation of carbon dioxide that is unique to biofuels and due to the presence of the ester group in the reactant. The model also predicts ignition delay times and OH profiles very close to observed values in shock tube experiments fueled by n-decane. These model capabilities indicate that large n-alkanes can be good surrogates for large methyl esters and biodiesel fuels to predict overall reactivity, but some kinetic details, including early CO2 production from biodiesel fuels, can be predicted only by a detailed kinetic mechanism for a true methyl ester fuel. The present methyl decanoate mechanism provides a realistic kinetic tool for simulation of biodiesel fuels. 7. Simultaneous Thermal Analysis of Remediated Nitrate Salt Surrogates Energy Technology Data Exchange (ETDEWEB) Wayne, David Matthew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States) 2016-05-13 The actinide engineering and science group (MET-1) has completed simultaneous thermal analysis and offgas analysis by mass spectrometry (STA-MS) of remediated nitrate salt (RNS) surrogates formulated by the high explosives science and technology group (M-7).
The 1.0 to 1.5 g surrogate samples were first analyzed as received, then a new set was analyzed with 100-200 mL of 10 M HNO3 + 0.3 M HF added, and a third set was analyzed after 200 mL of a concentrated Pu-Am spike (in 10 M HNO3 + 0.3 M HF) was added. The acid and spike solutions were formulated by the actinide analytical chemistry group (C-AAC) using reagent-grade HNO3 and HF, which were also used to dissolve a small quantity of mixed, high-fired PuO2/AmO2 oxide. 10.
Premixed flame chemistry of a gasoline primary reference fuel surrogate KAUST Repository Selim, Hatem 2017-03-10 Investigating the combustion chemistry of gasoline surrogate fuels promises to improve detailed reaction mechanisms used for simulating their combustion. In this work, the combustion chemistry of one of the simplest, but most frequently used gasoline surrogates – primary reference fuel 84 (PRF 84, 84 vol% iso-octane and 16 vol% n-heptane), has been examined in a stoichiometric premixed laminar flame. Time-of-flight mass spectrometry coupled with a vacuum ultraviolet (VUV) synchrotron light source for species photoionization was used. Reactants, major end-products, stable intermediates, free radicals, and isomeric species were detected and quantified. Numerical simulations were conducted using a detailed chemical kinetic model with the most recently available high-temperature sub-mechanisms for iso-octane and n-heptane, built on top of an updated pentane isomers model and AramcoMech 2.0 (C0-C4) base chemistry. A detailed interpretation of the major differences between the mechanistic pathways of both fuel components is given. A comparison between the experimental and numerical results is depicted and rate of production and sensitivity analyses are shown for the species with considerable disagreement between the experimental and numerical findings. 11. Bayesian Calibration of the Community Land Model using Surrogates Energy Technology Data Exchange (ETDEWEB) Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Sargsyan, K.; Swiler, Laura P. 2015-01-01 We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditioned on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that accurate surrogate models can be created for CLM in most cases. The posterior distributions lead to better prediction than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters’ distributions significantly. The structural error model reveals a correlation time-scale which can potentially be used to identify physical processes that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive. 12. Fractional flow reserve as a surrogate for inducible myocardial ischaemia. Science.gov (United States) van de Hoef, Tim P; Meuwissen, Martijn; Escaned, Javier; Davies, Justin E; Siebes, Maria; Spaan, Jos A E; Piek, Jan J 2013-08-01 Documentation of inducible myocardial ischaemia, related to the coronary stenosis of interest, is of increasing importance in lesion selection for percutaneous coronary intervention (PCI). Fractional flow reserve (FFR) is an easily understood, routine diagnostic modality that has become part of daily clinical practice, and is used as a surrogate technique for noninvasive assessment of myocardial ischaemia.
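For readers unfamiliar with the index, FFR is conventionally the ratio of mean distal coronary pressure (Pd) to mean aortic pressure (Pa) during maximal hyperemia, with values at or below about 0.80 commonly treated as ischemia-inducing. A minimal sketch in Python; the pressures are illustrative, not patient data.

def fractional_flow_reserve(pd_mmHg, pa_mmHg):
    """FFR = mean distal coronary pressure / mean aortic pressure."""
    return pd_mmHg / pa_mmHg

ffr = fractional_flow_reserve(pd_mmHg=62.0, pa_mmHg=88.0)
verdict = "ischaemia-inducing" if ffr <= 0.80 else "deferred"
print(f"FFR = {ffr:.2f} -> {verdict}")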
However, the application of a single, discrete, cut-off value for FFR-guided lesion selection for PCI, and its adoption in contemporary revascularization guidelines, has limited the requirement for a thorough understanding of the physiological basis of FFR. This limitation constitutes an obstacle for the adequate use and interpretation of this technique, and also for the understanding of new and future modalities of physiological functional intracoronary testing. In this Review, we revisit the fundamental elements of coronary physiology in the absence or presence of coronary artery disease. We provide insight into three essential characteristics of FFR as a diagnostic tool in contemporary clinical practice: the theoretical framework of FFR and its associated limitations; the characteristics and role of FFR as a surrogate for noninvasively assessed myocardial ischaemia; and the requirement and associated caveats of potent vasodilatory drugs to induce maximal vasodilatation of the coronary vascular bed. 13. The development of radioactive sample surrogates for training and exercises Energy Technology Data Exchange (ETDEWEB) Martha Finck; Bevin Brush; Dick Jansen; David Chamberlain; Don Dry; George Brooks; Margaret Goldberg 2012-03-01 Source term information is required to reconstruct the device used in a radiological dispersal event. Simulating a radioactive environment to train and exercise sampling and sample characterization methods with suitable sample materials is a continued challenge. The Idaho National Laboratory has developed and permitted a Radioactive Response Training Range (RRTR), an 800-acre test range that is approved for open air dispersal of activated KBr, for training first responders in entry to and exit from radioactively contaminated areas and for testing protocols for environmental sampling and field characterization. Members from the Department of Defense, Law Enforcement, and the Department of Energy participated in the first contamination exercise that was conducted at the RRTR in July 2011. The range was contaminated using a short-lived radioactive Br-82 isotope (activated KBr). Soil samples contaminated with KBr (dispersed as a solution) and glass particles containing activated potassium bromide that emulated dispersed radioactive materials (such as ceramic-based sealed source materials) were collected to assess environmental sampling and characterization techniques. This presentation summarizes the performance of a radioactive materials surrogate for use as a training aid for nuclear forensics. 14. Science.gov (United States) Sushma, C; Sharang, C 2005-01-01 Pan masala is a comparatively recent habit in India and is marketed with and without tobacco. Advertisements of tobacco products have been banned in India since 1st May 2004. The advertisements of plain pan masala, which continue in Indian media, have been suspected to be surrogates for the tobacco products bearing the same name. The study was carried out to assess whether these advertisements were for the intended product, or for tobacco products with the same brand name. The programming of a popular Hindi television news channel was watched for a 24-h period. Programmes on the same channel and its English counterpart were watched on different days to assess whether the advertisements were repeated.
The total duration of telecast of a popular brand of plain pan masala (Pan Parag) was multiplied by the rate charged by the channel to provide the cost of advertisement of this product. The total sale value of the company was multiplied by the proportion of usage of plain pan masala out of the combined gutka and pan masala habit, as observed in a different study, to provide the annual sale value of the plain pan masala product under reference. The annual sale value of plain Pan Parag was estimated to be Rs. 67.1 million. The annual cost of the advertisement of the same product on two television channels was estimated at Rs. 244.6 million. The advertisements of plain pan masala seen on Indian television are a surrogate for the tobacco products bearing the same name. 15. Defining useful surrogates for user participation in online medical learning. LENUS (Irish Health Repository) Beddy, Peter 2012-02-01 "School for Surgeons" is a web-based distance learning program which provides online clinical-based tutorials to surgical trainees. Our aim was to determine surrogates of active participation and to assess the efficacy of methods to improve usage. Server logs of the 82 participants in the "School for Surgeons" were assessed for the two terms of the first year of the program. Data collected included total time online, mean session time, page requests, number of sessions online and the total number of assignments. An intervention regarding comparative peer usage patterns was delivered to the cohort between terms one and two. Of the 82 trainees enrolled, 83% (85% second term) logged into the program. Of all participants, 88% (97% second term) submitted at least one assignment. Median submissions were four (eight second term) per trainee. Assignment submission closely correlated with number of sessions, total time online, downloads and page requests. Peer-based comparative feedback resulted in a significant increase in the number of assignments submitted (p < 0.01). Despite its recent introduction, "School for Surgeons" has a good participation rate. Assignment submission is a valid surrogate for usage. Students can be encouraged to move from passive observation to active participation in a virtual learning environment by providing structured comparative feedback ranking their performance. 16. Proper Orthogonal Decomposition as Surrogate Model for Aerodynamic Optimization Directory of Open Access Journals (Sweden) Valentina Dolci 2016-01-01 A surrogate model based on the proper orthogonal decomposition is developed in order to enable fast and reliable evaluations of aerodynamic fields. The proposed method is applied to subsonic turbulent flows and the proper orthogonal decomposition is based on an ensemble of high-fidelity computations. For the construction of the ensemble, fractional and full factorial planes together with central composite design-of-experiment strategies are applied. For the continuous representation of the projection coefficients in the parameter space, response surface methods are employed. Three case studies are presented. In the first case, the boundary shape of the problem is deformed and the flow past a backward-facing step with variable step slope is studied. In the second case, a two-dimensional flow past a NACA 0012 airfoil is considered and the surrogate model is constructed in the (Mach, angle of attack) parameter space. In the last case, the aerodynamic optimization of an automotive shape is considered.
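The POD surrogate idea just described can be sketched in a few lines: compress parameter-sampled snapshots with the SVD, then interpolate the projection coefficients over the parameter space as a response surface. The sketch below uses a synthetic 1-D field rather than an aerodynamic solution; the truncation tolerance and parameter grid are illustrative assumptions.

import numpy as np
from scipy.interpolate import interp1d

x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 9)                      # design of experiment
snapshots = np.stack([np.tanh(p * 10 * (x - 0.5)) for p in params], axis=1)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1            # modes for 99.9% energy
basis = U[:, :r]                                       # POD modes
coeffs = basis.T @ snapshots                           # r x n_params

# Response-surface step: interpolate each coefficient over the parameter.
models = [interp1d(params, coeffs[i], kind="cubic") for i in range(r)]

def predict(p):
    """Evaluate the surrogate field at an unseen parameter value p."""
    return basis @ np.array([m(p) for m in models])

p_new = 1.27
err = np.linalg.norm(predict(p_new) - np.tanh(p_new * 10 * (x - 0.5)))
print(f"{r} POD modes, reconstruction error at p = {p_new}: {err:.2e}")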
The results demonstrate how a reduced-order model based on the proper orthogonal decomposition applied to a small number of high-fidelity solutions can be used to generate aerodynamic data with good accuracy at a low cost. 17. Bayesian calibration of the Community Land Model using surrogates Energy Technology Data Exchange (ETDEWEB) Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton 2014-02-01 We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive. 18. PROFILEing idiopathic pulmonary fibrosis: rethinking biomarker discovery Directory of Open Access Journals (Sweden) Toby M. Maher 2013-06-01 Despite major advances in the understanding of the pathogenesis of idiopathic pulmonary fibrosis (IPF), diagnosis and management of the condition continue to pose significant challenges. Clinical management of IPF remains unsatisfactory due to limited availability of effective drug therapies, a lack of accurate indicators of disease progression, and an absence of simple short-term measures of therapeutic response. The identification of more accurate predictors of prognosis and survival in IPF would facilitate counseling of patients and their families, aid communication among clinicians, and would guide optimal timing of referral for transplantation. Improvements in molecular techniques have led to the identification of new disease pathways and a more targeted approach to the development of novel anti-fibrotic agents. However, despite an increased interest in biomarkers of IPF disease progression there is a lack of measures that can be used in early phase clinical trials. Careful longitudinal phenotyping of individuals with IPF together with the application of novel omics-based technology should provide important insights into disease pathogenesis and should address some of the major issues holding back drug development in IPF. The PROFILE (Prospective Observation of Fibrosis in the Lung Clinical Endpoints) study is a currently enrolling, prospective cohort study designed to tackle these issues. 19.
Metabolomics for Biomarker Discovery in Gastroenterological Cancer Science.gov (United States) Nishiumi, Shin; Suzuki, Makoto; Kobayashi, Takashi; Matsubara, Atsuki; Azuma, Takeshi; Yoshida, Masaru 2014-01-01 The study of the omics cascade, which involves comprehensive investigations based on genomics, transcriptomics, proteomics, metabolomics, etc., has developed rapidly and now plays an important role in life science research. Among such analyses, metabolome analysis, in which the concentrations of low molecular weight metabolites are comprehensively analyzed, has rapidly developed along with improvements in analytical technology, and hence, has been applied to a variety of research fields including the clinical, cell biology, and plant/food science fields. The metabolome represents the endpoint of the omics cascade and is also the closest point in the cascade to the phenotype. Moreover, it is affected by variations in not only the expression but also the enzymatic activity of several proteins. Therefore, metabolome analysis can be a useful approach for finding effective diagnostic markers and examining unknown pathological conditions. The number of studies involving metabolome analysis has recently been increasing year-on-year. Here, we describe the findings of studies that used metabolome analysis to attempt to discover biomarker candidates for gastroenterological cancer and discuss metabolome analysis-based disease diagnosis. PMID:25003943 20. Vertical Flume Testing of WIPP Surrogate Waste Materials Science.gov (United States) Herrick, C. G.; Schuhen, M.; Kicker, D. 2013-12-01 The Waste Isolation Pilot Plant (WIPP) is a U.S. Department of Energy geological repository for the permanent disposal of defense-related transuranic (TRU) waste. The waste is emplaced in rooms excavated in the bedded Salado salt formation at a depth of 655 m below ground surface. After emplacement of the waste, the repository will be sealed and decommissioned. The DOE demonstrates compliance with 40 CFR 194 by means of performance assessment (PA) calculations conducted by Sandia National Laboratories. WIPP PA calculations estimate the probability and consequences of radionuclide releases for a 10,000 year regulatory period. Human intrusion scenarios include cases in which a future borehole is drilled through the repository. Drilling mud flowing up the borehole will apply a hydrodynamic shear stress to the borehole wall which could result in erosion of the waste and radionuclides being carried up the borehole. WIPP PA uses the parameter TAUFAIL to represent the shear strength of the degraded waste. The hydrodynamic shear strength can only be measured experimentally by flume testing. Flume testing is typically performed horizontally, mimicking stream or ocean currents. However, in a WIPP intrusion event, the drill bit would penetrate the degraded waste and drilling mud would flow up the borehole in a predominantly vertical direction. In order to simulate this, a flume was designed and built so that the eroding fluid enters an enclosed vertical channel from the bottom and flows up past a specimen of surrogate waste material. The sample is pushed into the current by a piston attached to a step motor. A qualified data acquisition system controls and monitors the fluid's flow rate, temperature, pressure, and conductivity and the step motor's operation. The surrogate materials used correspond to a conservative estimate of degraded TRU waste at the end of the regulatory period. 
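A hedged, order-of-magnitude illustration of the quantity at stake (this is not the WIPP PA formulation): the wall shear stress exerted by mud flowing up an annulus can be estimated as tau = (f/8) * rho * U^2 and compared against a candidate TAUFAIL value. Every parameter below is an assumption chosen for illustration only.

import math

rho_mud = 1200.0                 # kg/m^3, typical drilling-mud density (assumed)
q = 0.02                         # m^3/s volumetric flow rate (assumed)
d_outer, d_inner = 0.20, 0.11    # m, annulus diameters (assumed)

area = math.pi / 4 * (d_outer**2 - d_inner**2)
u = q / area                     # bulk upward velocity in the annulus
f = 0.03                         # Darcy friction factor (assumed)
tau_wall = f / 8 * rho_mud * u**2    # wall shear stress, Pa

taufail = 1.0                    # Pa, candidate waste shear strength (assumed)
print(f"U = {u:.2f} m/s, tau = {tau_wall:.2f} Pa, "
      f"erosion {'expected' if tau_wall > taufail else 'not expected'}")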
The recipes were previously developed by SNL based on anticipated future states of the waste 1. Quantifying the improvement of surrogate indices of hepatic insulin resistance using complex measurement techniques. Directory of Open Access Journals (Sweden) John G Hattersley We evaluated the ability of simple and complex surrogate-indices to identify individuals from an overweight/obese cohort with hepatic insulin-resistance (HEP-IR). Five indices, one previously defined and four newly generated through step-wise linear regression, were created against a single-cohort sample of 77 extensively characterised participants with the metabolic syndrome (age 55.6 ± 1.0 years, BMI 31.5 ± 0.4 kg/m²; 30 males). HEP-IR was defined by measuring endogenous-glucose-production (EGP) with [6,6-2H2] glucose during fasting and euglycemic-hyperinsulinemic clamps and expressed as EGP*fasting plasma insulin. Complex measures were incorporated into the model, including various non-standard biomarkers and the measurement of body-fat distribution and liver-fat, to further improve the predictive capability of the index. Validation was performed against a data set of the same subjects after an isoenergetic dietary intervention (4 arms, diets varying in protein and fiber content versus control). All five indices produced comparable prediction of HEP-IR, explaining 39-56% of the variance, depending on regression variable combination. The validation of the regression equations showed little variation between the different proposed indices (r² = 27-32%) on a matched dataset. New complex indices encompassing advanced measurement techniques offered an improved correlation (r = 0.75, P<0.001). However, when validated against the alternative dataset all indices performed comparably with the standard homeostasis model assessment for insulin resistance (HOMA-IR) (r = 0.54, P<0.001). Thus, simple estimates of HEP-IR performed comparably to more complex indices and could be an efficient and cost-effective approach in large epidemiological investigations.
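The index-building step just described can be sketched with synthetic data. The sketch below is not the study's analysis: it fits one toy linear index to a simulated HEP-IR (EGP times fasting insulin) and compares it with HOMA-IR, computed with the standard formula glucose [mmol/L] * insulin [uU/mL] / 22.5; all data and coefficients are fabricated for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 77
glucose = rng.normal(5.6, 0.6, n)             # mmol/L, synthetic
insulin = rng.lognormal(2.3, 0.4, n)          # uU/mL, synthetic
bmi = rng.normal(31.5, 2.5, n)
hep_ir = insulin * (8 + 0.5 * bmi) + rng.normal(0, 15, n)  # toy "EGP*FPI"

homa_ir = glucose * insulin / 22.5            # standard comparator index

# One candidate surrogate index via ordinary least squares.
X = np.column_stack([np.ones(n), np.log(insulin), bmi])
beta, *_ = np.linalg.lstsq(X, hep_ir, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((hep_ir - pred) ** 2) / np.sum((hep_ir - hep_ir.mean()) ** 2)
print(f"surrogate-index R^2 = {r2:.2f}; "
      f"corr(HOMA-IR, HEP-IR) = {np.corrcoef(homa_ir, hep_ir)[0, 1]:.2f}")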
3. The contribution of physicochemical properties to multiple in vitro cytotoxicity endpoints. Science.gov (United States) Lu, Shuyan; Jessen, Bart; Strock, Christopher; Will, Yvonne 2012-06-01 Attrition due to safety reasons remains a serious problem for the pharmaceutical industry. This has prompted efforts to develop early predictive in vitro screens that can assist in selecting compounds with a more desirable safety profile early on in the drug discovery process. Here we examined the relationship between physicochemical properties, such as partition coefficient (clogP), topological polar surface area (TPSA), acid dissociation constant (pKa), and in vitro mechanistic endpoints generated using a high content imaging approach. We demonstrate in our initial analysis that compounds with clogP > 2 and pKa > 5.5 flagged more endpoints than compounds with clogP ≤ 2 and pKa ≤ 5.5. In contrast, TPSA did not stand on its own in predicting cytotoxicity. When this knowledge was applied to eight different mechanistic cytotoxicity endpoints (cell loss, apoptosis, ER stress, DNA fragmentation, mitochondrial potential, nuclear size, neutral lipids/steatosis and lysosomal mass), we found that compounds with such properties preferentially flagged in the lysosomal endpoint. We also saw a slight enrichment of such compounds in the endpoints cell loss, DNA fragmentation and nuclear size. We demonstrate that lysosomal compound accumulation is a potential contributor to cell death and possibly organ toxicity. 4. Latent variable indirect response modeling of categorical endpoints representing change from baseline. Science.gov (United States) Hu, Chuanpu; Xu, Zhenhua; Mendelsohn, Alan M; Zhou, Honghui 2013-02-01 Accurate exposure-response modeling is important in drug development. Methods are still evolving in the use of mechanistic, e.g., indirect response (IDR) models to relate discrete endpoints, mostly of the ordered categorical form, to placebo/co-medication effect and drug exposure. When the discrete endpoint is derived using change-from-baseline measurements, a mechanistic exposure-response modeling approach requires adjustment to maintain appropriate interpretation. This manuscript describes a new modeling method that integrates a latent-variable representation of IDR models with standard logistic regression. The new method also extends to general link functions that cover probit regression or continuous clinical endpoint modeling. Compared to an earlier latent variable approach that constrained the baseline probability of response to be 0, placebo effect parameters in the new model formulation are more readily interpretable and can be separately estimated from placebo data, thus allowing convenient and robust model estimation. A general inherent connection of some latent variable representations with baseline-normalized standard IDR models is derived.
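A static simplification of the latent-variable idea (not the manuscript's full indirect-response formulation, which is driven by a differential equation): a latent score combines baseline, placebo, and Emax drug effects, and a logistic link maps it to the probability of meeting a responder endpoint such as ACR20. All parameter values are illustrative.

import math

def p_response(conc, base=-1.2, placebo=0.6, emax=2.0, ec50=5.0):
    """Responder probability via a logistic link on a latent score."""
    latent = base + placebo + emax * conc / (ec50 + conc)  # Emax drug effect
    return 1.0 / (1.0 + math.exp(-latent))                 # inverse logit

for c in (0.0, 2.0, 10.0, 50.0):   # drug exposure, arbitrary units
    print(f"conc = {c:5.1f} -> P(ACR20) = {p_response(c):.2f}")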
For describing clinical response endpoints, Type I and Type III IDR models are shown to be equivalent; therefore, there are only three identifiable IDR models. This approach was applied to data from two phase III clinical trials of intravenously administered golimumab for the treatment of rheumatoid arthritis, where 20, 50, and 70% improvement in the American College of Rheumatology disease severity criteria were used as efficacy endpoints. Likelihood profiling and visual predictive checks showed reasonable parameter estimation precision and model performance. 5. Assessing multiple endpoints of atrazine ingestion on gravid Northern Watersnakes (Nerodia sipedon) and their offspring. Science.gov (United States) Neuman-Lee, Lorin A; Gaines, Karen F; Baumgartner, Kyle A; Voorhees, Jaymie R; Novak, James M; Mullin, Stephen J 2014-09-01 Ecotoxicological studies that focus on a single endpoint might not accurately and completely represent the true ecological effects of a contaminant. Exposure to atrazine, a widely used herbicide, disrupts endocrine function and sexual development in amphibians, but studies involving live-bearing reptiles are lacking. This study tracks several effects of atrazine ingestion from female Northern Watersnakes (Nerodia sipedon) to their offspring exposed in utero. Twenty-five gravid N. sipedon were fed fish dosed with one of four levels of atrazine (0, 2, 20, or 200 ppb) twice weekly for the entirety of their gestation period. Endpoints for the mothers included blood estradiol levels measured weekly and survival for more than 3 months. Endpoints for the offspring included morphometrics, clutch sex ratio, stillbirth, and asymmetry of dorsal scales and jaw length. Through these multiple endpoints, we show that atrazine ingestion can disrupt estradiol production in mothers, increase the likelihood of mortality from infection, alter clutch sex ratio, cause a higher proportion of stillborn offspring, and affect scale symmetry. We emphasize the need for additional research involving other reptile species using multiple endpoints to determine the full range of impacts of contaminant exposure. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company. 6. Multi-Toxic Endpoints of the Foodborne Mycotoxins in Nematode Caenorhabditis elegans Directory of Open Access Journals (Sweden) Zhendong Yang 2015-12-01 Aflatoxin B1 (AFB1), deoxynivalenol (DON), fumonisin B1 (FB1), T-2 toxin (T-2), and zearalenone (ZEA) are the major foodborne mycotoxins of public health concern. In the present study, the multiple toxic endpoints of these naturally-occurring mycotoxins were evaluated in the Caenorhabditis elegans model for their lethality, toxic effects on growth and reproduction, as well as influence on lifespan. We found that the lethality endpoint was more sensitive for T-2 toxicity with the EC50 at 1.38 mg/L, the growth endpoint was relatively sensitive for AFB1 toxic effects, and the reproduction endpoint was more sensitive for toxicities of AFB1, FB1, and ZEA. Moreover, the lifespan endpoint was sensitive to toxic effects of all five tested mycotoxins. Data obtained from this study may serve as an important contribution to knowledge on assessment of mycotoxin toxic effects, especially for assessing developmental and reproductive toxic effects, using the C. elegans model. 7. Early diagnosis of complex diseases by molecular biomarkers, network biomarkers, and dynamical network biomarkers.
Science.gov (United States) Liu, Rui; Wang, Xiangdong; Aihara, Kazuyuki; Chen, Luonan 2014-05-01 Many studies have been carried out for early diagnosis of complex diseases by finding accurate and robust biomarkers specific to respective diseases. In particular, the recent rapid advance of high-throughput technologies provides unprecedentedly rich information to characterize various disease genotypes and phenotypes in a global and also dynamical manner, which significantly accelerates the study of biomarkers from both theoretical and clinical perspectives. Traditionally, molecular biomarkers that distinguish disease samples from normal samples are widely adopted in clinical practice due to their ease of data measurement. However, many of them suffer from low coverage and high false-positive rates or high false-negative rates, which seriously limit their further clinical applications. To overcome those difficulties, network biomarkers (or module biomarkers) attract much attention and also achieve better performance because a network (or subnetwork) is considered to be a more robust form to characterize diseases than individual molecules. However, both molecular biomarkers and network biomarkers mainly distinguish disease samples from normal samples, and they generally cannot reliably identify predisease samples due to their static nature, and thereby lack the ability to support early diagnosis. Based on nonlinear dynamical theory and complex network theory, a new concept of dynamical network biomarkers (DNBs, or a dynamical network of biomarkers) has been developed, which is different from traditional static approaches, and the DNB is able to distinguish a predisease state from normal and disease states even with a small number of samples, and therefore has great potential to achieve "real" early diagnosis of complex diseases. In this paper, we comprehensively review the recent advances and developments on molecular biomarkers, network biomarkers, and DNBs in particular, focusing on the biomarkers for early diagnosis of complex diseases considering a small number of samples and high 8. Cancer predictive value of cytogenetic markers used in occupational health surveillance programs: a report from an ongoing study by the European Study Group on Cytogenetic Biomarkers and Health DEFF Research Database (Denmark) Hagmar, L; Bonassi, S; Strömberg, U; 1998-01-01 The cytogenetic endpoints in peripheral blood lymphocytes: chromosomal aberrations (CA), sister chromatid exchange (SCE) and micronuclei (MN) are established biomarkers of exposure for mutagens or carcinogens in the work environment. However, it is not clear whether these biomarkers also may serve...... for SCE or MN. A collaborative study between the Nordic and Italian research groups will enable a more thorough evaluation of the cancer predictivity of the cytogenetic endpoints. We here report on the establishment of a joint database comprising 5271 subjects, examined 1965-1988 for at least one...... cytogenetic biomarker. In total, 3540 subjects had been examined for CA, 2702 for SCE and 1496 for MN. These cohorts have been followed-up with respect to subsequent cancer mortality or cancer incidence, and the expected values have been calculated from rates derived from the general populations in each... 9. A critique of biomarkers in environmental toxicology: A case study in birds Energy Technology Data Exchange (ETDEWEB) Bellward, G.D. [Univ.
of British Columbia, Vancouver, British Columbia (Canada) 1995-12-31 The authors have been testing the hypothesis that exposure to elevated levels of 2,3,7,8-TCDD and similarly-acting compounds derived from pulp mill effluent adversely affects the reproductive capacity of colonies of great blue herons and double-crested cormorants in the local area. Their objectives included developing quantitative TCDD dose-response curves for various toxicologically relevant endpoints in birds, with the goal of finding an appropriate environmental biomarker of dioxin exposure and toxicity. Potential biomarkers studied included ethoxyresorufin O-deethylase (EROD) as a measure of cytochrome P-450 1-A activity, and various hormonally-relevant end-points as measures of dioxin toxicity. The animal model used was the newly hatched chick, after in ovo exposure either in the laboratory or from the environment. Because the TEQ approach is based to a large extent on the use of in vitro and in vivo biomarkers, this study provides a useful example of one of the simplest in vivo models. The authors were able to construct hepatic EROD dose-response curves from the environmentally exposed heron and cormorant chicks, and from TCDD egg injections both early and late in the incubation period. Domestic chicken and pigeons were used as control species. The EROD induction data from the late injection pigeon study was very helpful for predicting appropriate doses for use in the early injection experiments, and for the wild avian species. However, the data were too limited to use for accurately predicting such endpoints as mortality, or effects at the lower end of the dose-response curves. Using various toxic equivalency factors, TEQs for the environmental data were calculated, and compared to the laboratory-derived dose-response curves for TCDD. Using specific examples from this environmental case study, the strengths and weaknesses of the use of biomarkers and the TEQ approach will be discussed. 10. Biomarker Identification Using Text Mining Directory of Open Access Journals (Sweden) Hui Li 2012-01-01 Identifying molecular biomarkers has become one of the important tasks for scientists to assess the different phenotypic states of cells or organisms correlated to the genotypes of diseases from large-scale biological data. In this paper, we propose a text-mining-based method to discover biomarkers from PubMed. First, we constructed a database based on a dictionary, and then used a finite state machine to identify the biomarkers. Our method of text mining provides a highly reliable approach to discover the biomarkers in the PubMed database. 11. Biomarkers in Lysosomal Storage Diseases Directory of Open Access Journals (Sweden) Joaquin Bobillo Lobato 2016-12-01 A biomarker is generally an analyte that indicates the presence and/or extent of a biological process, which is in itself usually directly linked to the clinical manifestations and outcome of a particular disease. The biomarkers in the field of lysosomal storage diseases (LSDs) have particular relevance where spectacular therapeutic initiatives have been achieved, most notably with the introduction of enzyme replacement therapy (ERT). There are two main types of biomarkers. The first group is comprised of those molecules whose accumulation is directly enhanced as a result of defective lysosomal function.
These molecules represent the storage of the principal macro-molecular substrate(s) of a specific enzyme or protein, whose function is deficient in the given disease. In the second group of biomarkers, the relationship between the lysosomal defect and the biomarker is indirect. In this group, the biomarker reflects the effects of the primary lysosomal defect on cell, tissue, or organ functions. There is no “gold standard” among biomarkers used to diagnose and/or monitor LSDs, but a number exist that can be used to reasonably assess and monitor the state of certain organs or functions. A number of biomarkers have been proposed for the analysis of the most important LSDs. In this review, we will summarize the most promising biomarkers in major LSDs and discuss why these are the most promising candidates for screening systems. 12. A review of selection-based tests of abiotic surrogates for species representation. Science.gov (United States) Beier, Paul; Sutcliffe, Patricia; Hjort, Jan; Faith, Daniel P; Pressey, Robert L; Albuquerque, Fabio 2015-06-01 Because conservation planners typically lack data on where species occur, environmental surrogates, including geophysical settings and climate types, have been used to prioritize sites within a planning area. We reviewed 622 evaluations of the effectiveness of abiotic surrogates in representing species in 19 study areas. Sites selected using abiotic surrogates represented more species than an equal number of randomly selected sites in 43% of tests (55% for plants) and on average improved on random selection of sites by about 8% (21% for plants). Environmental diversity (ED) (42% median improvement on random selection) and biotically informed clusters showed promising results and merit additional testing. We suggest 4 ways to improve performance of abiotic surrogates. First, analysts should consider a broad spectrum of candidate variables to define surrogates, including rarely used variables related to geographic separation, distance from coast, hydrology, and within-site abiotic diversity. Second, abiotic surrogates should be defined at fine thematic resolution. Third, sites (the landscape units prioritized within a planning area) should be small enough to ensure that surrogates reflect species' environments and to produce prioritizations that match the spatial resolution of conservation decisions. Fourth, if species inventories are available for some planning units, planners should define surrogates based on the abiotic variables that most influence species turnover in the planning area. Although species inventories increase the cost of using abiotic surrogates, a modest number of inventories could provide the data needed to select variables and evaluate surrogates. Additional tests of nonclimate abiotic surrogates are needed to evaluate the utility of conserving nature's stage as a strategy for conservation planning in the face of climate change. © 2015 Society for Conservation Biology. 13. Chiral Biomarkers in Meteorites Science.gov (United States) Hoover, Richard B. 2010-01-01 The chirality of organic molecules with the asymmetric location of group radicals was discovered in 1848 by Louis Pasteur during his investigations of the rotation of the plane of polarization of light by crystals of sodium ammonium paratartrate. It is well established that the amino acids in proteins are exclusively levorotatory (L-aminos) and the sugars in DNA and RNA are dextrorotatory (D-sugars).
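The quantity behind the "L-excess" findings discussed next is the enantiomeric excess, ee = (L - D) / (L + D): a racemate gives ee = 0 and homochiral biology gives |ee| = 1. A minimal sketch; the abundances below are illustrative, not meteorite data.

def enantiomeric_excess(l_abundance, d_abundance):
    """ee = (L - D) / (L + D), in the range [-1, +1]."""
    return (l_abundance - d_abundance) / (l_abundance + d_abundance)

for label, l, d in [("racemate", 50.0, 50.0),
                    ("modest L-excess", 57.5, 42.5),
                    ("homochiral", 100.0, 0.0)]:
    print(f"{label:>16}: ee = {enantiomeric_excess(l, d):+.2f}")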
This phenomenon of homochirality of biological polymers is a fundamental property of all life known on Earth. Furthermore, abiotic production mechanisms typically yield racemic mixtures (i.e. equal amounts of the two enantiomers). When amino acids were first detected in carbonaceous meteorites, it was concluded that they were racemates. This conclusion was taken as evidence that they were extraterrestrial and produced abiologically. Subsequent studies by numerous researchers have revealed that many of the amino acids in carbonaceous meteorites exhibit a significant L-excess. The observed chirality is much greater than that produced by any currently known abiotic processes (e.g., linearly polarized light from neutron stars; circularly polarized ultraviolet light from faint stars; optically active quartz powders; inclusion polymerization in clay minerals; Vester-Ulbricht hypothesis of parity violations, etc.). This paper compares the measured chirality detected in the amino acids of carbonaceous meteorites with the effect of these diverse abiotic processes. It is concluded that the levels observed are inconsistent with post-arrival biological contamination or with any of the currently known abiotic production mechanisms. However, they are consistent with ancient biological processes on the meteorite parent body. This paper will consider these chiral biomarkers in view of the detection of possible microfossils found in the Orgueil and Murchison carbonaceous meteorites. Energy dispersive x-ray spectroscopy (EDS) data obtained on these morphological biomarkers will be 14. Radiation acquisition and RBF neural network analysis on BOF end-point control Science.gov (United States) Zhao, Qi; Wen, Hong-yuan; Zhou, Mu-chun; Chen, Yan-ru 2008-12-01 There are some problems in Basic Oxygen Furnace (BOF) steelmaking end-point control technology at present. A new BOF end-point control model was designed, based on the characteristics of the carbon-oxygen reaction in the BOF steelmaking process. The image capture and transformation system was established using the Video for Windows (VFW) library, a video software development package promoted by Microsoft Corporation. In this paper, a Radial Basis Function (RBF) neural network model was established using the real-time acquisition information. The input parameters can be acquired easily online, and the output parameter is the end-point time, which can be conveniently compared with the actual value. The experimental results show that the prediction is satisfactory and that the model works well in the adverse steelmaking environment. 15. The Feynman trajectories: determining the path of a protein using fixed-endpoint assays. Science.gov (United States) Ketteler, Robin 2010-03-01 Richard Feynman postulated in 1948 that the path of an electron can be best described by the sum or functional integral of all possible trajectories rather than by the notion of a single, unique trajectory. As a consequence, the position of an electron does not harbor any information about the paths that contributed to this position. This observation constitutes a classical endpoint observation. The endpoint assay is the desired type of experiment for high-throughput screening applications, mainly because of limitations in data acquisition and handling. Quite contrary to electrons, it is possible to extract information about the path of a protein using endpoint assays, and these types of applications are reviewed in this article. 16.
Cervical spinal cord injury: tailoring clinical trial endpoints to reflect meaningful functional improvements Institute of Scientific and Technical Information of China (English) Lisa M Bond; Lisa McKerracher 2014-01-01 Cervical spinal cord injury (SCI) results in partial to full paralysis of the upper and lower extremities. Traditional primary endpoints for acute SCI clinical trials are too broad to assess functional recovery in cervical subjects, raising the possibility of false-positive outcomes in trials for cervical SCI. Endpoints focused on the recovery of hand and arm control (e.g., upper extremity motor score, motor level change) show the most potential for use as primary outcomes in upcoming trials of cervical SCI. As the field moves forward, the most reliable way to ensure meaningful clinical testing in cervical subjects may be the development of a composite primary endpoint that measures both neurological recovery and functional improvement. 17. Aliskiren Trial in Type 2 Diabetes Using Cardio-Renal Endpoints (ALTITUDE): rationale and study design DEFF Research Database (Denmark) Parving, Hans-Henrik; Brenner, Barry M; McMurray, John J V; 2009-01-01 , resuscitated death, myocardial infarction, stroke, unplanned hospitalization for heart failure, onset of end-stage renal disease or doubling of baseline serum creatinine concentration. Secondary endpoints include a composite CV endpoint and a composite renal endpoint. CONCLUSION: ALTITUDE will determine...... the residual renal and cardiovascular risk still remains high. Aliskiren, a novel oral direct renin inhibitor that, unlike ACEi and ARBs, lowers plasma renin activity, angiotensin I and angiotensin II levels, may thereby provide greater benefit compared to ACEi or ARB alone. METHODS: The primary objective...... of the ALTITUDE trial is to determine whether aliskiren 300 mg once daily reduces cardiovascular and renal morbidity and mortality compared with placebo when added to conventional treatment (including ACEi or ARB). ALTITUDE is an international, randomized, double-blind, placebo-controlled, parallel-group study... 18. A Robust Algorithm for Real-time Endpoint Detection in the Noisy Mobile Environments Institute of Scientific and Technical Information of China (English) WU Bian; REN Xiaolin; LIU Chongqing; ZHANG Yaxin 2003-01-01 In speech recognition, the endpoint detection must be robust to noise. In low SNR situations, the conventional energy-based endpoint detection algorithms often fail and the performance of the speech recognizer usually degrades distinctly, especially in mobile environments, where the background noise changes dramatically. In this paper, we propose a new algorithm that improves the endpoint detection for speech recognition in low SNR and in various noisy environments. The described algorithm not only uses multiple features but also introduces a decision logic to increase the robustness in both low SNR and various noisy mobile environments. To evaluate the new algorithm, we carry out experiments in various noisy mobile environments (e.g., railway station, airport, street), and the performance of the algorithm is significantly improved, especially in low SNR situations. At the same time, the proposed algorithm has a low complexity and is suitable for real-time embedded systems. 19. Swimming speed alteration in the early developmental stages of Paracentrotus lividus sea urchin as ecotoxicological endpoint.
19. Swimming speed alteration in the early developmental stages of the Paracentrotus lividus sea urchin as an ecotoxicological endpoint.
Science.gov (United States)
Morgana, Silvia; Gambardella, Chiara; Falugi, Carla; Pronzato, Roberto; Garaventa, Francesca; Faimali, Marco
2016-04-01
Behavioral endpoints have been used for decades to assess chemical impacts at concentrations unlikely to cause mortality. With recently developed techniques, it is possible to investigate the swimming behavior of several organisms under laboratory conditions. The aims of this study were: i) to assess for the first time the feasibility of swimming speed analysis of early developmental stages of the sea urchin Paracentrotus lividus with an automatic recording system; ii) to investigate Swimming Speed Alteration (SSA) in P. lividus early stages exposed to a reference chemical; and iii) to identify the most suitable stage for the SSA test. Results show that the swimming speed of all the developmental stages was easily recorded. Swimming speed was inhibited as a function of toxicant concentration. The pluteus was the most appropriate stage for evaluating SSA in P. lividus as an ecotoxicological endpoint. Finally, swimming of sea urchin early stages represents a sensitive endpoint to be considered in ecotoxicological investigations. Copyright © 2016 Elsevier Ltd. All rights reserved.

20. Towards Improved Biomarker Research
DEFF Research Database (Denmark)
Kjeldahl, Karin
This thesis takes a look at the data analytical challenges associated with the search for biomarkers in large-scale biological data such as transcriptomics, proteomics and metabolomics data. These studies aim to identify genes, proteins or metabolites which can be associated with e.g. a diet or dis… is used both for regression and classification purposes. This method has proven its strong worth in multivariate data analysis throughout an enormous range of applications; a very classic data type is near-infrared (NIR) data, but many similar data types have also been very successful …

1. Degree of target utilization influences the location of movement endpoint distributions.
Science.gov (United States)
Slifkin, Andrew B; Eder, Jeffrey R
2017-03-01
According to dominant theories of motor control, speed and accuracy are optimized when, on average, movement endpoints are located at the target center and the variability of the movement endpoint distributions is matched to the width of the target (viz., Meyer, Abrams, Kornblum, Wright, & Smith, 1988). The current study tested those predictions. According to the speed-accuracy trade-off, expanding the range of variability to the amount permitted by the limits of the target boundaries allows movement speed to be maximized, while centering the distribution on the target center prevents movement errors that would have occurred had the distribution been off center. Here, participants (N = 20) were required to generate 100 consecutive targeted hand movements under each of 15 unique conditions: there were three movement amplitude requirements (80, 160, 320 mm), and within each there were five target widths (5, 10, 20, 40, 80 mm). According to the results, it was only at the smaller target widths (5, 10 mm) that movement endpoint distributions were centered on the target center and the range of movement endpoint variability matched the range specified by the target boundaries. As target width increased (20, 40, 80 mm), participants increasingly undershot the target center, and the range of movement endpoint variability increasingly underestimated the variability permitted by the target region.
The degree of target center undershooting was strongly predicted by the difference between the size of the target and the amount of movement endpoint variability, i.e., the amount of unused space in the target. The results suggest that participants have precise knowledge of their variability relative to that permitted by the target, and that they use this knowledge to systematically reduce the travel distance to targets. The reduction in travel distance across the larger target widths might have resulted in greater cost savings than those associated with increases in speed.

2. Biomarkers for noninvasive biochemical diagnosis of nonalcoholic steatohepatitis: Tools or decorations?
Institute of Scientific and Technical Information of China (English)
Yusuf Yilmaz; Enver Dolar
2009-01-01
In light of the growing epidemics of nonalcoholic fatty liver disease (NAFLD), identification and validation of novel biochemical surrogate markers for nonalcoholic steatohepatitis (NASH) are paramount to reduce the necessity for liver biopsy. The availability of such markers has tremendous potential to radically alter the management strategies for NAFLD patients and to monitor disease activity. Although current biomarkers do not entirely fulfill the many requirements for the identification of patients with NASH, they should not discourage our quest, but remind us that we need to recognize the challenges ahead.

3. Calculation of a velocity distribution from particle trajectory end-points.
Science.gov (United States)
Rasmussen, Lowell A.
1983-01-01
The longitudinal component of the velocity of a particle at or near a glacier surface is considered, its position as a function of time being termed its trajectory. Functional relationships are derived for obtaining the trajectory from the spatial distribution of velocity and for obtaining the velocity distribution from the trajectory. It is established that the trajectory end-points impose only an integral condition on the velocity distribution and that no individual point on the velocity distribution can be determined if only the end-points are known. (from Author)

4. Verifying Elimination Programs with a Special Emphasis on Cysticercosis Endpoints and Postelimination Surveillance
Directory of Open Access Journals (Sweden)
Sukwan Handali
2012-01-01
Methods are needed for determining program endpoints or postprogram surveillance for any elimination program. Cysticercosis has the necessary effective strategies and diagnostic tools for establishing an elimination program; however, tools to verify program endpoints have not been determined. Using a statistical approach, the present study proposed that taeniasis and porcine cysticercosis antibody assays could be used to determine with high statistical confidence whether an area is free of disease. Confidence would be improved by using secondary tests such as the taeniasis coproantigen assay and necropsy of sentinel pigs.

5. Top predators: hot or not? A call for systematic assessment of biodiversity surrogates
NARCIS (Netherlands)
Cabeza, M.; Arponen, A.; Teeffelen, van A.J.A.
2008-01-01
… argue that top predators are justified conservation surrogates based on a case study where raptor presence is associated with high species richness of birds, butterflies and trees. 2. We question the methodology as well as the applicability of their results, and clarify differences between surrogate …
6. Comparison of surrogate models with different methods in groundwater remediation process
Jiannan Luo; Wenxi Lu
2014-10-01
Surrogate modelling is an effective tool for reducing the computational burden of simulation optimization. In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging methods were compared for building surrogate models of a multiphase flow simulation model in a simplified nitrobenzene-contaminated aquifer remediation problem. In the model accuracy analysis, a 10-fold cross-validation method was adopted to evaluate the approximation accuracy of the three surrogate models (a cross-validation sketch follows entry 9 below). The results demonstrated that the RBFANN and kriging surrogate models had acceptable approximation accuracy, with the kriging model's accuracy slightly higher than the RBFANN model's, whereas the PR model's approximation accuracy was unacceptably poor. The RBFANN and kriging surrogates were therefore selected and used in the optimization process to identify the most cost-effective remediation strategy at a nitrobenzene-contaminated site. The optimal remediation costs obtained with the two surrogate-based optimization models were similar, as was their computational burden. These two surrogate-based optimization models are efficient tools for identifying optimal groundwater remediation strategies.

7. Someone to Lean on: Assessment and Implications of Social Surrogate Use in Childhood
Science.gov (United States)
Arbeau, Kimberley A.; Coplan, Robert J.; Matheson, Adrienne
2012-01-01
A social surrogate is a person who helps a shy individual deal with the stresses of a social situation. Previous research has only investigated social surrogate use in adults. The purpose of the current study was to develop and evaluate a new self-report measure of social surrogacy in middle childhood and to explore the implications of this …

8. Critical review of norovirus surrogates in food safety research: rationale for considering volunteer studies
Science.gov (United States)
The inability to propagate human norovirus (NoV) or to clearly differentiate infectious from noninfectious virus particles has led to the use of surrogate viruses, like feline calicivirus (FCV) and murine norovirus-1 (MNV), which are propagatable in cell culture. The use of surrogates is predicated …

9. Somatic coliphages as surrogates for enteroviruses in sludge hygienization treatments.
Science.gov (United States)
Martín-Díaz, Julia; Casas-Mangas, Raquel; García-Aljaro, Cristina; Blanch, Anicet R; Lucena, Francisco
2016-01-01
Conventional bacterial indicators have serious drawbacks as indicators of viral pathogen persistence during sludge hygienization treatments, which calls for a search for alternative viral indicators. The ability of somatic coliphages (SOMCPH) to act as surrogates for enteroviruses was assessed in 47 sludge samples subjected to novel treatment processes. SOMCPH, infectious enteroviruses and genome copies of enteroviruses were monitored. Only one of these groups, the bacteriophages, was present in the sludge at concentrations that allowed evaluation of treatment performance. An indicator/pathogen ratio of 4 log10 (PFU/g dw) was found between SOMCPH and infective enteroviruses, and their detection accuracy was assessed. The results obtained, and the existence of rapid and standardized methods, encourage the inclusion of SOMCPH quantification in future sludge directives. In addition, an existing real-time quantitative polymerase chain reaction (RT-qPCR) assay for enteroviruses was adapted and applied.
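The cross-validation sketch referenced in entry 6: a quadratic polynomial surrogate, an RBF-kernel model (standing in loosely for the RBFANN), and a Gaussian-process surrogate (the usual machine-learning form of kriging) compared by 10-fold cross-validation. The data here are synthetic stand-ins for the flow-simulation samples, and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.kernel_ridge import KernelRidge
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for (design variables -> simulated outcome) samples.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(60, 4))          # e.g. hypothetical pumping rates
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=60)

models = {
    "polynomial": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "rbf-kernel": KernelRidge(kernel="rbf", alpha=1e-3, gamma=2.0),
    "kriging":    GaussianProcessRegressor(normalize_y=True),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=10, scoring="r2")
    print(f"{name:10s} mean 10-fold R^2 = {r2.mean():.3f}")
```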
10. A Rigorous Framework for Optimization of Expensive Functions by Surrogates
Science.gov (United States)
Booker, Andrew J.; Dennis, J. E., Jr.; Frank, Paul D.; Serafini, David B.; Torczon, Virginia; Trosset, Michael W.
1998-01-01
The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which direct application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example.

11. Sparse polynomial surrogates for aerodynamic computations with random inputs
CERN Document Server
Savin, Eric; Peter, Jacques
2015-01-01
This paper deals with some of the methodologies used to construct polynomial surrogate models based on generalized polynomial chaos (gPC) expansions for applications to uncertainty quantification (UQ) in aerodynamic computations. A core ingredient in gPC expansions is the choice of a dedicated sampling strategy, so as to define the most significant scenarios to be considered for the construction of such metamodels. A desirable feature of the proposed rules is their ability to handle several random inputs simultaneously. Methods to identify the relative "importance" of those variables or uncertain data should ideally be considered as well. The present work is more particularly dedicated to the development of sampling strategies based on sparsity principles. Sparse multi-dimensional cubature rules based on general one-dimensional Gauss-Jacobi-type quadratures are first addressed. These sets are non-nested, but they are well adapted to probability density functions with compact support for the random inputs …
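Entry 11 builds its cubature rules from one-dimensional Gauss-Jacobi quadratures. A minimal sketch of that ingredient, assuming SciPy and purely illustrative Jacobi parameters; the sparse multi-dimensional construction itself is omitted.

```python
import numpy as np
from scipy.special import roots_jacobi

# Nodes/weights for the weight (1-x)^a (1+x)^b on [-1, 1]; with a = b = 0
# this reduces to Gauss-Legendre (a uniform input, up to normalization).
a, b, n = 1.0, 2.0, 8
x, w = roots_jacobi(n, a, b)

f = lambda x: np.exp(0.3 * x) * np.cos(x)   # stand-in aerodynamic response
mean_unnorm = np.sum(w * f(x))              # integral of f against the weight
norm = np.sum(w)                            # integral of the weight itself
print("E[f(X)] under the Beta-type density:", mean_unnorm / norm)
```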
12. High-Temperature Oxidation of Plutonium Surrogate Metals and Alloys
Energy Technology Data Exchange (ETDEWEB)
Sparks, Joshua C.; Krantz, Kelsie E.; Christian, Jonathan H.; Washington, II, Aaron L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)]
2016-07-27
The Plutonium Management and Disposition Agreement (PMDA) is a nuclear non-proliferation agreement designed to remove 34 tons of weapons-grade plutonium from Russia and the United States. While several removal options have been proposed since the agreement was first signed in 2000, processing the weapons-grade plutonium into mixed-oxide (MOX) fuel has remained the leading candidate for achieving the goals of the PMDA. However, the MOX program has received its share of criticism, which makes its future uncertain. One alternative pathway for plutonium disposition would involve oxidizing the metal, followed by impurity down-blending and burial in the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. This pathway was investigated using a hybrid microwave and a muffle furnace, with Fe and Al as surrogate materials. Oxidation occurred similarly in the microwave and the muffle furnace; however, the microwave process time was significantly shorter.

13. Protein prosthesis: β-peptides as reverse-turn surrogates.
Science.gov (United States)
Arnold, Ulrich; Huck, Bayard R; Gellman, Samuel H; Raines, Ronald T
2013-03-01
The introduction of non-natural modules could provide unprecedented control over the folding/unfolding behavior, conformational stability, and biological function of proteins. Success requires the interrogation of candidate modules in natural contexts. Here, expressed protein ligation is used to replace a reverse turn in bovine pancreatic ribonuclease (RNase A) with a synthetic β-dipeptide: β²-homoalanine-β³-homoalanine. This segment is known to adopt an unnatural reverse-turn conformation that contains a 10-membered-ring hydrogen bond, but one with a donor-acceptor pattern opposite to that in the 10-membered rings of natural reverse turns. The RNase A variant has intact enzymatic activity, but unfolds more quickly and has diminished conformational stability relative to native RNase A. These data indicate that the hydrogen-bonding pattern merits careful consideration in the selection of beneficial reverse-turn surrogates. Copyright © 2012 The Protein Society.

14. Argan oil improves surrogate markers of CVD in humans.
Science.gov (United States)
Sour, Souad; Belarbi, Meriem; Khaldi, Darine; Benmansour, Nassima; Sari, Nassima; Nani, Abdelhafid; Chemat, Farid; Visioli, Francesco
2012-06-01
Limited - though increasing - evidence suggests that argan oil might be endowed with potentially healthful properties, mostly in the areas of CVD and prostate cancer. We sought to comprehensively determine the effects of argan oil supplementation on the plasma lipid profile and antioxidant status of a group of healthy Algerian subjects, compared with matched controls. A total of twenty healthy subjects consumed 15 g/d of argan oil - with toasted bread - for breakfast for 4 weeks (intervention group), whereas twenty matched controls followed their habitual diet and did not consume argan oil. The study lasted 30 d. At the end of the study, argan oil-supplemented subjects exhibited higher plasma vitamin E concentrations, lower total and LDL-cholesterol, lower TAG and an improved plasma and cellular antioxidant profile compared with controls. In conclusion, we showed that Algerian argan oil is able to positively modulate some surrogate markers of CVD, through mechanisms which warrant further investigation.

15. Cholesterol paradox: a correlate does not a surrogate make.
Science.gov (United States)
DuBroff, Robert
2017-03-01
The global campaign to lower cholesterol by diet and drugs has failed to thwart the developing pandemic of coronary heart disease around the world. Some experts believe this failure is due to the explosive rise in obesity and diabetes, but it is equally plausible that the cholesterol hypothesis, which posits that lowering cholesterol prevents cardiovascular disease, is incorrect.
The recently presented ACCELERATE trial dumbfounded many experts by failing to demonstrate any cardiovascular benefit of evacetrapib despite dramatically lowering low-density lipoprotein cholesterol and raising high-density lipoprotein cholesterol in high-risk patients with coronary disease. This clinical trial adds to a growing volume of knowledge that challenges the validity of the cholesterol hypothesis and the utility of cholesterol as a surrogate end point. Inadvertently, the cholesterol hypothesis may even have contributed to this pandemic. This perspective critically reviews this evidence and our reluctance to acknowledge contradictory information.

16. Biomarkers in Vasculitis
Science.gov (United States)
Monach, Paul A.
2014-01-01
Purpose of review: Better biomarkers are needed for guiding the management of patients with vasculitis. Large cohorts and technological advances have led to an increase in pre-clinical studies of potential biomarkers. Recent findings: The most interesting markers described recently include a gene expression signature in CD8+ T cells that predicts the tendency to relapse or remain relapse-free in ANCA-associated vasculitis, and a pair of urinary proteins that are elevated in Kawasaki disease but not other febrile illnesses. Both of these studies used "omics" technologies to generate and then test hypotheses. More conventional hypothesis-based studies have indicated that the following circulating proteins have the potential to improve upon clinically available tests: pentraxin-3 in giant cell arteritis and Takayasu's arteritis; von Willebrand factor antigen in childhood central nervous system vasculitis; eotaxin-3 and other markers related to eosinophils or Th2 immune responses in eosinophilic granulomatosis with polyangiitis (Churg-Strauss syndrome); and MMP-3, TIMP-1, and CXCL13 in ANCA-associated vasculitis. Summary: New markers testable in blood and urine have the potential to assist with diagnosis, staging, assessment of current disease activity, and prognosis. However, the standards for clinical usefulness, in particular the demonstration of either very high sensitivity or very high specificity, have yet to be met for clinically relevant outcomes. PMID:24257367

17. Technological advances in suspended-sediment surrogate monitoring
Science.gov (United States)
Gray, John R.; Gartner, Jeffrey W.
2009-01-01
Surrogate technologies to continuously monitor suspended sediment show promise toward supplanting traditional data collection methods requiring routine collection and analysis of water samples. Commercially available instruments operating on bulk optic (turbidity), laser optic, pressure difference, and acoustic backscatter principles are evaluated based on cost, reliability, robustness, accuracy, sample volume, susceptibility to biological fouling, and suitable range of mass concentration and particle size distribution. In situ turbidimeters are widely used. They provide reliable data where the point measurements can be reliably correlated to the river's mean cross-section concentration value, effects of biological fouling can be minimized, and concentrations remain below the sensor's upper measurement limit. In situ laser diffraction instruments have similar limitations and can cost six times the approximate $5000 purchase price of a turbidimeter. However, laser diffraction instruments provide volumetric-concentration data in 32 size classes.
Pressure differential instruments measure mass density in a water column, thus integrating substantially more streamflow than a point measurement. They are designed for monitoring medium-to-large concentrations, are generally unaffected by biological fouling, and cost about the same as a turbidimeter. However, their performance has been marginal in field applications. Acoustic Doppler profilers use acoustic backscatter to measure suspended sediment concentrations in orders of magnitude more streamflow than do instruments that rely on point measurements. The technology is relatively robust and generally immune to the effects of biological fouling. The cost of a single-frequency device is about double that of a turbidimeter. Multifrequency arrays also provide the potential to resolve concentrations by clay-silt versus sand size fractions. Multifrequency hydroacoustics shows the most promise for revolutionizing collection of continuous …

18. Effectiveness of chitosan on the inactivation of enteric viral surrogates.
Science.gov (United States)
Davis, Robert; Zivanovic, Svetlana; D'Souza, Doris H; Davidson, P Michael
2012-10-01
Chitosan is known to have bactericidal and antifungal activity. Although human noroviruses are the leading cause of non-bacterial gastroenteritis, information on the efficacy of chitosan against foodborne viruses is very limited. The objective of this work was to determine the effectiveness of different-molecular-weight chitosans against the cultivable human norovirus and enteric virus surrogates feline calicivirus FCV-F9, murine norovirus MNV-1, and bacteriophages MS2 and phiX174. Five purified chitosans (53, 222, 307, 421, ~1150 kDa) were dissolved in water, 1% acetic acid, or aqueous HCl at pH 4.3, sterilized by membrane filtration, and mixed with an equal volume of virus to obtain a final concentration of 0.7% chitosan and 5 log10 PFU/ml virus. Virus-chitosan suspensions were incubated for 3 h at 37 °C. Untreated viruses in PBS, in PBS with acetic acid, and in PBS with HCl were tested as controls. Each experiment was run in duplicate and replicated at least twice. Water-soluble chitosan (53 kDa) reduced phiX174, MS2, FCV-F9 and MNV-1 titers by 0.59, 2.44, 3.36, and 0.34 log10 PFU/ml, respectively. Chitosans in acetic acid decreased phiX174 by 1.19-1.29, MS2 by 1.88-5.37, FCV-F9 by 2.27-2.94, and MNV-1 by 0.09-0.28 log10 PFU/ml. Increasing the molecular weight of chitosan corresponded with an increasing antiviral effect on MS2, but did not appear to play a role for the other three tested viral surrogates. Overall, chitosan treatments showed the greatest reduction for FCV-F9 and MS2, followed by phiX174, with no significant effect on MNV-1.

19. Potential Cryptosporidium surrogates and evaluation of compressible oocysts
Energy Technology Data Exchange (ETDEWEB)
Li, S.Y.; Goodrich, J.A.; Owens, J.H. [Environmental Protection Agency, Cincinnati, OH (United States)] [and others]
1995-10-01
Cryptosporidium has been recognized as an important waterborne agent of gastroenteritis and a biological contaminant in drinking water. The widespread presence of Cryptosporidium in surface source water and either untreated or insufficiently treated drinking water has led to Cryptosporidium outbreaks in the United States and worldwide. Among the conventional control practices, filtration and high-temperature distillation appear to be the potentially viable technologies for protection against Cryptosporidium in drinking water.
As employed in many water plants, filtration is likely to be the most practical treatment technology for Cryptosporidium removal in the near future. Consequently, accurate and reliable methods for evaluating Cryptosporidium removal rates in filtration-based systems are necessary to assist States in determining drinking water quality and complying with the upcoming national standard for Cryptosporidium in drinking water. Furthermore, the search for reliable and non-hazardous surrogates for evaluating treatment plant efficiency has intensified because of the potential health risk associated with Cryptosporidium. Additionally, during filtration Cryptosporidium may squeeze and fold through pores of the filtration system that are smaller than the diameter of the organism; a fraction of these Cryptosporidium oocysts may still retain a certain degree of viability. These uncertainties are critical for the evaluation and optimization of filtration-based physical treatment systems. The in-house research studies described below consist of two parts: a potential surrogate study using bag filtration systems at the US EPA Test & Evaluation Facility in Cincinnati, Ohio, and an investigation of Cryptosporidium compressibility and viability.

20. Surrogate-assisted feature extraction for high-throughput phenotyping.
Science.gov (United States)
Yu, Sheng; Chakrabortty, Abhishek; Liao, Katherine P; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi
2017-04-01
Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype's International Classification of Diseases, Ninth Revision and natural language processing counts, acting as noisy surrogates for the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction procedure for phenotyping, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F-score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the number of gold-standard labels needed. SAFE also potentially identifies important features missed by automated feature extraction or by experts.
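A minimal sketch of the silver-standard idea behind entry 20's SAFE method (not the published implementation): build noisy labels from agreement between the ICD-9 and NLP surrogate counts, then keep the features that an L1-penalized model finds predictive. The thresholds, data shapes, and synthetic data are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def safe_style_select(X, icd9_count, nlp_count, top_k=20):
    """Pick features predictive of a 'silver-standard' label built from two
    noisy surrogates (ICD-9 and NLP mention counts), without gold labels."""
    hi = (icd9_count >= 3) & (nlp_count >= 3)   # both surrogates clearly high
    lo = (icd9_count == 0) & (nlp_count == 0)   # both clearly absent
    keep = hi | lo                              # confident silver labels only
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lasso.fit(X[keep], hi[keep].astype(int))
    coef = np.abs(lasso.coef_.ravel())
    return np.argsort(coef)[::-1][:top_k]       # indices of selected features

# Synthetic example: columns of X are counts of candidate medical concepts.
rng = np.random.default_rng(3)
X = rng.poisson(1.0, size=(500, 40)).astype(float)
icd9_count = rng.poisson(2.0, size=500)
nlp_count = rng.poisson(2.0, size=500)
print("selected feature indices:", safe_style_select(X, icd9_count, nlp_count))
```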
Directory of Open Access Journals (Sweden)
Sushma C
2005-01-01
BACKGROUND: Pan masala is a comparatively recent habit in India and is marketed both with and without tobacco. Advertisements of tobacco products have been banned in India since 1 May 2004. The advertisements of plain pan masala that continue in the Indian media have been suspected to be surrogates for tobacco products bearing the same name. The study was carried out to assess whether these advertisements were for the intended product or for tobacco products with the same brand name. MATERIALS AND METHODS: The programming of a popular Hindi-language television news channel was watched for a 24-h period. Programmes on the same channel and its English counterpart were watched on different days to assess whether the advertisements were repeated. The total duration of telecasts for a popular brand of plain pan masala (Pan Parag) was multiplied by the rate charged by the channel to give the cost of advertising this product. The total sale value of the company was multiplied by the proportion of plain pan masala use out of the combined gutka plus pan masala habit, as observed in a different study, to give the annual sale value of the plain pan masala product under reference. RESULTS: The annual sale value of plain Pan Parag was estimated to be Rs. 67.1 million. The annual cost of advertising the same product on two television channels was estimated at Rs. 244.6 million, more than three times the product's entire annual sale value. CONCLUSION: Since the advertising spend far exceeds what sales of the plain product could justify, the advertisements of plain pan masala seen on Indian television are a surrogate for the tobacco products bearing the same name.

2. Learning image based surrogate relevance criterion for atlas selection in segmentation
Science.gov (United States)
Zhao, Tingting; Ruan, Dan
2016-06-01
Picking geometrically relevant atlases from the whole training set is crucial to multi-atlas based image segmentation, especially with extensive data of heterogeneous quality in the Big Data era. Unfortunately, there is very limited understanding of how currently used image similarity criteria reveal geometric relevance, let alone how to optimize them. This paper aims to develop a good image-based surrogate relevance criterion that best reflects the underlying, inaccessible geometric relevance in a learning context. We cast this surrogate learning problem into an optimization framework, encouraging the image-based surrogate to behave consistently with geometric relevance during training. In particular, we desire a criterion that is small for image pairs with similar geometry and large for those with significantly different segmentation geometry. Validation experiments on corpus callosum segmentation demonstrate the improved quality of the learned surrogate compared to benchmark surrogate candidates.

3. Blood Biomarkers of Ischemic Stroke
National Research Council Canada - National Science Library
Jickling, Glen C; Sharp, Frank R
2011-01-01
… Though many candidate blood-based biomarkers for ischemic stroke have been identified, none are currently used in clinical practice. With further well-designed study and careful validation, the development of blood biomarkers to improve the care of patients with ischemic stroke may be achieved.

4. Biomarkers of spontaneous preterm birth
DEFF Research Database (Denmark)
Polettini, Jossimara; Cobo, Teresa; Kacerovsky, Marian
2017-01-01
Despite decades of research on risk indicators of spontaneous preterm birth (PTB), reliable biomarkers are still not available to screen for or diagnose high-risk pregnancies.
Several biomarkers in maternal and fetal compartments have been mechanistically linked to PTB, but none of them are reliable …

5. Which biomarkers reveal neonatal sepsis?
Directory of Open Access Journals (Sweden)
Kun Wang
We address the identification of optimal biomarkers for the rapid diagnosis of neonatal sepsis. We employ both canonical correlation analysis (CCA) and sparse support vector machine (SSVM) classifiers to select the best subset of biomarkers from a large hematological data set collected from infants with suspected sepsis at Yale-New Haven Hospital's Neonatal Intensive Care Unit (NICU). CCA is used to select sets of biomarkers of increasing size that are most highly correlated with infection. The effectiveness of these biomarkers is then validated by constructing a sparse support vector machine diagnostic classifier. We find that the following set of five biomarkers captures the essential diagnostic information (in order of importance): Bands, Platelets, neutrophil CD64, White Blood Cells, and Segs. Further, the diagnostic performance of the optimal set of biomarkers is significantly higher than that of isolated individual biomarkers. These results suggest an enhanced sepsis scoring system for neonatal sepsis that includes these five biomarkers. We demonstrate the robustness of our analysis by comparing CCA with the Forward Selection method and SSVM with LASSO Logistic Regression.
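Entry 5's two-stage pipeline (CCA to rank marker subsets by correlation with infection, a sparse SVM to validate them) can be sketched with scikit-learn as follows. The data here are synthetic stand-ins, and the study's actual preprocessing and scoring are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 12))                 # hematological biomarkers
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# CCA between the biomarker block and the outcome ranks single markers by
# the weight they receive in the first canonical direction.
cca = CCA(n_components=1).fit(X, y.reshape(-1, 1).astype(float))
ranking = np.argsort(np.abs(cca.x_weights_.ravel()))[::-1]
print("markers by CCA weight:", ranking[:5])

# A sparse (L1-penalized) linear SVM validates how well top markers classify.
for k in (2, 5, 12):
    svm = LinearSVC(penalty="l1", dual=False, C=0.5, max_iter=5000)
    acc = cross_val_score(svm, X[:, ranking[:k]], y, cv=5)
    print(f"top {k:2d} markers: CV accuracy = {acc.mean():.3f}")
```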
6. Role of biomarkers in monitoring exposures to chemicals: present position, future prospects.
Science.gov (United States)
Watson, William P; Mutti, Antonio
2004-01-01
Biomarkers are becoming increasingly important in toxicology and human health. Many research groups are carrying out studies to develop biomarkers of exposure to chemicals and apply these for human monitoring. There is considerable interest in the use and application of biomarkers to identify the nature and amounts of chemical exposures in occupational and environmental situations. Major research goals are to develop and validate biomarkers that reflect specific exposures and permit the prediction of the risk of disease in individuals and groups. One important objective is to prevent human cancer. This review presents a commentary and consensus views about the major developments on biomarkers for monitoring human exposure to chemicals, with particular emphasis on monitoring exposures to carcinogens. Significant developments in the areas of new and existing biomarkers, analytical methodologies, validation studies and field trials, together with auditing and quality assessment of data, are discussed. New developments in the relatively young field of toxicogenomics, possibly leading to the identification of individual susceptibility to both cancer and non-cancer endpoints, are also considered. The construction and development of reliable databases that integrate information from genomic and proteomic research programmes should offer a promising future for the application of these technologies in the prediction of risks and prevention of diseases related to chemical exposures. Currently, adducts of chemicals with macromolecules are important and useful biomarkers, especially for certain individual chemicals where there are incidences of occupational exposure. For monitoring exposure to genotoxic compounds, protein adducts, such as those formed with haemoglobin, are considered effective biomarkers for determining individual exposure doses of reactive chemicals. For other organic chemicals, the excreted urinary metabolites can also give a useful and complementary indication of …

7. Alpha-1 antitrypsin and granulocyte colony-stimulating factor as serum biomarkers of disease severity in ulcerative colitis
DEFF Research Database (Denmark)
Soendergaard, Christoffer; Nielsen, Ole Haagen; Seidelin, Jakob Benedict
2015-01-01
BACKGROUND: Initial assessment of patients with ulcerative colitis (UC) is challenging and relies on apparent clinical symptoms and measurements of surrogate markers (e.g., C-reactive protein [CRP] or similar acute phase proteins). As CRP only reliably identifies patients with severe disease, novel biomarkers are currently needed for identification of patients with mild or moderate disease activity. Using a commercially available platform, we aimed at identifying serum biomarkers that are able to grade the disease severity. METHODS: Serum samples from 65 patients with UC with varying disease activity … granulocyte colony-stimulating factor produced a predictive model with an AUC of 0.72 when differentiating mild and moderate UC, and an AUC of 0.96 when differentiating moderate and severe UC, the latter being as reliable as CRP. CONCLUSIONS: Alpha-1 antitrypsin is identified as a potential serum biomarker of mild-to-moderate disease …

8. Toll-Like Receptors and Cytokines as Surrogate Biomarkers for Evaluating Vaginal Immune Response following Microbicide Administration
Directory of Open Access Journals (Sweden)
2008-01-01
Topical microbicides are intended for frequent use by women of reproductive age. Hence, it is essential to evaluate their impact on mucosal immune function in the vagina. In the present study, we evaluated nisin, a naturally occurring antimicrobial peptide (AMP), for its efficacy as an intravaginal microbicide. Its effect on vaginal immune function was determined by localizing Toll-like receptors (TLR-3 and 9) and cytokines (IL-4, 6, 10 and TNF-α) in the rabbit cervicovaginal epithelium following intravaginal administration of a high dose of nisin gel for 14 consecutive days. The results revealed no alteration in the expression of TLRs and cytokines at either the protein or the mRNA level. However, in the SDS gel-treated group, the levels were significantly upregulated, with induction of the NF-κB signalling cascade. Thus, TLRs and cytokines appear to be sensitive indicators for screening the immunotoxic potential of candidate microbicides.

9. A computational methodology for formulating gasoline surrogate fuels with accurate physical and chemical kinetic properties
KAUST Repository
Ahmed, Ahfaz
2015-03-01
Gasoline is the most widely used fuel for light-duty automobile transportation, but its molecular complexity makes it intractable to study the fundamental combustion properties experimentally and computationally. Therefore, surrogate fuels with a simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating surrogates for FACE (fuels for advanced combustion engines) gasolines A and C by combining regression modeling with physical and chemical kinetics simulations. The computational methodology integrates simulation tools executed across different software platforms. Initially, the palette of surrogate species and carbon types for the target fuels was determined from a detailed hydrocarbon analysis (DHA).
A regression algorithm implemented in MATLAB was linked to REFPROP for simulation of distillation curves and calculation of the physical properties of surrogate compositions. The MATLAB code generates a surrogate composition at each iteration, which is then used to automatically generate CHEMKIN input files that are submitted to homogeneous batch reactor simulations for prediction of research octane number (RON). The regression algorithm determines the optimal surrogate composition to match the fuel properties of FACE A and C gasoline, specifically hydrogen/carbon (H/C) ratio, density, distillation characteristics, carbon types, and RON. The optimal surrogate fuel compositions obtained using the present computational approach were compared to the real fuel properties, as well as to surrogate compositions available in the literature. Experiments were conducted in a Cooperative Fuels Research (CFR) engine operating under controlled autoignition (CAI) mode to compare the formulated surrogates against the real fuels. Carbon monoxide measurements indicated that the proposed surrogates …

10. Biomarkers in inflammatory bowel diseases
DEFF Research Database (Denmark)
Bennike, Tue; Birkelund, Svend; Stensballe, Allan
2014-01-01
… with medications with the concomitant risk of adverse events. In addition, identification of disease- and course-specific biomarker profiles can be used to identify biological pathways involved in disease development and treatment. Knowledge of disease mechanisms in general can lead to improved future … development of preventive and treatment strategies. Thus, the clinical use of a panel of biomarkers represents a diagnostic and prognostic tool of potentially great value. The technological development in recent years within proteomic research (determination and quantification of the complete protein content) … has made the discovery of novel biomarkers feasible. Several IBD-associated protein biomarkers are known, but none have been successfully implemented in daily use to distinguish CD and UC patients. The intestinal tissue remains an obvious place to search for novel biomarkers, which blood, urine …

11. Epigenetic biomarkers in liver cancer.
Science.gov (United States)
Banaudha, Krishna K; Verma, Mukesh
2015-01-01
Liver cancer (hepatocellular carcinoma or HCC) is a major cancer worldwide. Research in this field is needed to identify biomarkers that can be used for early detection of the disease as well as new approaches to its treatment. Epigenetic biomarkers provide an opportunity to understand liver cancer etiology and evaluate novel epigenetic inhibitors for treatment. Traditionally, liver cirrhosis, proteomic biomarkers, and the presence of hepatitis viruses have been used for the detection and diagnosis of liver cancer. Promising results from microRNA (miRNA) profiling and hypermethylation of selected genes have raised hopes of identifying new biomarkers. Some of these epigenetic biomarkers may be useful in risk assessment and for screening populations to identify who is likely to develop cancer. Challenges and opportunities in the field are discussed in this chapter.

12. Meeting report: Measuring endocrine-sensitive endpoints within the first years of life
DEFF Research Database (Denmark)
Arbuckle, T.E.; Hauser, R.; Swan, S.H.
2008-01-01
An international workshop titled "Assessing Endocrine-Related Endpoints within the First Years of Life" was held 30 April-1 May 2007, in Ottawa, Ontario, Canada.
Representatives from a number of pregnancy cohort studies in North America and Europe presented options for measuring various endocrine- …

13. 77 FR 49447 - Endpoints for Clinical Trials in Kidney Transplantation; Public Workshop
Science.gov (United States)
2012-08-16
From the Federal Register Online via the Government Publishing Office. DEPARTMENT OF HEALTH AND HUMAN SERVICES, Food and Drug Administration. Endpoints for Clinical Trials in Kidney Transplantation; Public Workshop. AGENCY: Food and Drug Administration, HHS. ACTION: Notice of public workshop. The Food and Drug Administration (FDA) is announcing …

14. Reporting and evaluation of HIV-related clinical endpoints in two multicenter international clinical trials
DEFF Research Database (Denmark)
Lifson, A; Rahme, FS; Belloso, WH
2006-01-01
PURPOSE: The processes for reporting and review of progression-of-HIV-disease clinical endpoints are described for two large phase III international clinical trials. METHOD: SILCAAT and ESPRIT are multicenter randomized HIV trials evaluating the impact of interleukin-2 on disease progression …

15. Subjective endpoints in clinical trials: the case for blinded independent central review
Directory of Open Access Journals (Sweden)
Walovitch R
2013-09-01
Richard Walovitch,1 Bin Yao,2 Patrick Chokron,1 Helen Le,1 Glenn Bubley3 (1WorldCare Clinical, LLC, Boston, MA, USA; 2Amgen, Inc, Thousand Oaks, CA, USA; 3Director of Genitourinary Medical Oncology, Beth Israel Deaconess Medical Center, Boston, MA, USA)
Abstract: Primary efficacy and safety endpoints in clinical trials are often subjective assessments made by site personnel. For international confirmatory trials conducted over broad geographic regions and different clinical practice settings, variability in these subjective assessments can be substantial. Centralized endpoint assessment committees (EACs) offer a mechanism through which to reduce assessment bias and potentially increase assessment precision and accuracy, particularly in open-label trials. An overview of regulatory agencies' rationales for an EAC is reviewed. In addition, the two main types of EAC, the blinded independent central review and the consensus panel, are compared. Selection of endpoints for EAC evaluation and design of the EAC process to maximize its value proposition are also discussed. Keywords: endpoint assessment committee, FDA, central review, BICR, adjudication, consensus panel

16. Systematic adjudication of myocardial infarction end-points in an international clinical trial
NARCIS (Netherlands)
K.W. Mahaffey (Kenneth); R.A. Harrington (Robert Alex); K.M. Akkerhuis (Martijn); N.S. Kleiman (Neal); L.G. Berdan (Lisa); B.S. Crenshaw (Brian); B.E. Tardiff (Barbara); C.B. Granger (Christopher); I. DeJong (Ingrid); M. Bhapkar (Manju); P. Widimsky (Petr); R. Corbalon (Ramón); K.L. Lee (Kerry); J.W. Deckers (Jaap); M.L. Simoons (Maarten); E.J. Topol (Eric); R.M. Califf (Robert)
2001-01-01
Background: Clinical events committees (CEC) are used routinely to adjudicate suspected end-points in cardiovascular trials, but little information has been published about the various processes used. We reviewed results of the CEC process used to identify and adjudicate suspected …
18. End-point construction and systematic titration error in linear titration curves - complexation reactions
NARCIS (Netherlands)
Coenegracht, P.M.J.; Duisenberg, A.J.M.
1975-01-01
The systematic titration error which is introduced by the intersection of tangents to hyperbolic titration curves is discussed. The effects of the apparent (conditional) formation constant, of the concentration of the unknown component and of the ranges used for the end-point construction are considered …

19. Development of a Computer Model for Prediction of PCB Degradation Endpoints
Energy Technology Data Exchange (ETDEWEB)
Just, E.M.; Klasson, T.
1999-12-07
Several researchers have demonstrated the transformation of polychlorinated biphenyls (PCBs) by both aerobic and anaerobic bacteria. This transformation, or conversion, is often dependent on PCB congener structure and dictates the products, or endpoints. Since transformation is linked to microbial activity, bioremediation has been hailed as a possible solution for PCB-contaminated soils and sediments, and several demonstration activities have verified laboratory results. This paper presents results from mathematical modeling of PCB transformation as a means of predicting possible endpoints of bioremediation. Since transformation can be influenced by both the starting composition of the PCBs and microbial activity, this paper systematically evaluates several of the most common transformation patterns. The predicted data are also compared with experimental results; for example, the correlation between laboratory-observed and predicted endpoint data was, in some cases, as good as 0.98 (perfect correlation = 1.0). In addition to predicting chemical endpoints, the possible human effects of the PCBs are discussed through the use of documented dioxin-like toxicity and accumulation in humans before and after transformation.

20. Coulometric Titration of Ethylenediaminetetraacetate (EDTA) with Spectrophotometric Endpoint Detection: An Experiment for the Instrumental Analysis Laboratory
Science.gov (United States)
Williams, Kathryn R.; Young, Vaneica Y.; Killian, Benjamin J.
2011-01-01
Ethylenediaminetetraacetate (EDTA) is commonly used as an anticoagulant in blood-collection procedures. In this experiment for the instrumental analysis laboratory, students determine the quantity of EDTA in commercial collection tubes by coulometric titration with electrolytically generated Cu²⁺. The endpoint is detected …
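The arithmetic behind entry 20's coulometric endpoint is Faraday's law: the charge passed up to the endpoint fixes the moles of Cu²⁺ generated, which equal the moles of EDTA under the usual 1:1 complexation assumption. A worked example, with the current and endpoint time as assumed illustrative values:

```python
# Faraday's-law bookkeeping for a coulometric titration (illustrative numbers).
F = 96485.0    # C/mol, Faraday constant
I = 10.0e-3    # A, constant generating current (assumed)
t = 124.0      # s, time to reach the spectrophotometric endpoint (assumed)
z = 2          # electrons per Cu2+ generated

Q = I * t                  # charge passed, C
n_Cu = Q / (z * F)         # mol of Cu2+ generated at the endpoint
n_EDTA = n_Cu              # 1:1 Cu2+:EDTA complexation
print(f"n(EDTA) = {n_EDTA * 1e6:.2f} umol")  # ~6.43 umol for these numbers
```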
1. Baseline characteristics in the Aliskiren Trial in Type 2 Diabetes Using Cardio-Renal Endpoints (ALTITUDE)
DEFF Research Database (Denmark)
Parving, Hans-Henrik; Brenner, Barry M; McMurray, John J V
2012-01-01
Patients with type 2 diabetes are at enhanced risk for macro- and microvascular complications. Albuminuria and/or reduced kidney function further enhance the vascular risk. We initiated the Aliskiren Trial in Type 2 Diabetes Using Cardio-Renal Endpoints (ALTITUDE). Aliskiren, a novel direct renin …

2. The Regular Free-Endpoint Linear Quadratic Problem with Indefinite Cost
NARCIS (Netherlands)
Trentelman, Hendrikus
1989-01-01
This paper studies an open problem in the context of linear quadratic optimal control, the free-endpoint regular linear quadratic problem with indefinite cost functional. It is shown that the optimal cost for this problem is given by a particular solution of the algebraic Riccati equation. This …

3. Transmission assessment surveys (TAS) to define endpoints for lymphatic filariasis mass drug administration
DEFF Research Database (Denmark)
Chu, Brian K.; Deming, Michael; Biritwum, Nana-Kwadwo
2013-01-01
Lymphatic filariasis (LF) is targeted for global elimination through treatment of entire at-risk populations with repeated annual mass drug administration (MDA). Essential for program success is defining and confirming the appropriate endpoint for MDA when transmission is presumed to have reached …

4. Development of an in vitro genotoxicity screening assay: combining different genotoxic endpoints
NARCIS (Netherlands)
Mahabir, A.G.
2010-01-01
Genotoxic agents are a major threat to the integrity of chromosomes and the viability of cells, especially if the damage is not repaired, because it can lead to chromosome instability, cell cycle arrest, cell dysfunction, induction of apoptosis or carcinogenesis. For genotoxicity, two main endpoints are …

5. A comparative study of classical and biochemical endpoints for phytotoxicity testing of chlorobenzoic acids
Institute of Scientific and Technical Information of China (English)
LI Pei-jun; YIN Pei-jie; ZHOU Qi-xing; SHI Xing-qun; XIONG Xian-zhe
2005-01-01
The phytotoxicity of chlorobenzoic acids (CBAs) was studied, and the suitability and sensitivity of biochemical endpoints were evaluated. Two terrestrial plant species in the same family were exposed to different concentrations of CBAs and tested for germination according to the guideline of the Organization for Economic Cooperation and Development (OECD, 1984). The results showed that the CBA dose-inhibition rate relationship for the classical endpoint was distinctly linear in the range of 10%-50% inhibition of root elongation (p < 0.01), and that variation in CBA dose had a greater influence on the inhibition of germination than on the inhibition of root elongation. The CBA dose-inhibition relationships for the activities of the two antioxidant enzymes superoxide dismutase (SOD) and catalase (CAT) were quadratic, while the CBA dose-inhibition relationship for peroxidase (POD) activity was linear (p < 0.05). Comparing the half-effect concentrations (EC50) of the two kinds of endpoints, POD activity was more sensitive than the classical endpoint, whereas SOD and CAT activities were not sensitive in this experiment.

6. AN EXISTENCE THEOREM OF POSITIVE SOLUTIONS FOR ELASTIC BEAM EQUATION WITH BOTH FIXED END-POINTS
Institute of Scientific and Technical Information of China (English)
2001-01-01
By using degree theory on cones, an existence theorem of positive solutions for a class of fourth-order two-point BVPs is obtained. This class of BVPs usually describes the deformation of an elastic beam with both end-points fixed.
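For reference, the fourth-order two-point boundary value problem with both end-points fixed that entry 6 refers to is usually stated in the clamped-beam form (this is the standard formulation, not copied from the paper):

```latex
u''''(t) = f\bigl(t, u(t)\bigr), \quad 0 < t < 1, \qquad
u(0) = u(1) = u'(0) = u'(1) = 0 .
```

Here u is the beam deflection and the boundary conditions encode that both ends are clamped (fixed position and slope).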
7. A signal processing method for the friction-based endpoint detection system of a CMP process
Science.gov (United States)
Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang
2010-12-01
A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint from the behavior of the Kalman filter innovation sequence during the CMP process. Applying this signal processing method, endpoint detection experiments for a Cu CMP process were carried out. The results show that the method can judge the endpoint of the Cu CMP process.
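A rough sketch of the two processing stages named in entry 7, under generic tuning rather than the authors' own: PyWavelets soft-threshold denoising, followed by the innovation sequence of a scalar random-walk Kalman filter, whose jumps can be monitored to call the endpoint. The wavelet choice, noise model, and variances are assumptions.

```python
import numpy as np
import pywt

def denoise(sig, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale, finest level
    thr = sigma * np.sqrt(2 * np.log(len(sig)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(sig)]

def innovations(z, q=1e-4, r=1e-2):
    """Innovation sequence of a scalar random-walk Kalman filter."""
    xhat, p, out = z[0], 1.0, []
    for zi in z[1:]:
        p += q                 # predict: random-walk state, variance grows
        nu = zi - xhat         # innovation (prediction residual)
        k = p / (p + r)        # Kalman gain
        xhat += k * nu         # update state estimate
        p *= (1 - k)           # update error variance
        out.append(nu)
    return np.array(out)

# Endpoint call: flag when |innovation| jumps above its running baseline.
# nu = innovations(denoise(friction_signal))
```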
8. Establishing mussel behavior as a biomarker in ecotoxicology.
Science.gov (United States)
Hartmann, Jason T; Beggel, Sebastian; Auerswald, Karl; Stoeckle, Bernhard C; Geist, Juergen
2016-01-01
Most freshwater mussel species of the Unionoida are endangered, presenting a conservation issue as they are keystone species providing essential services for aquatic ecosystems. As filter feeders with limited mobility, mussels are highly susceptible to water pollution. Despite their exposure risk, mussels are underrepresented in standard ecotoxicological methods. This study aimed to demonstrate that mussel behavioral response to a chemical stressor is a suitable biomarker for the advancement of ecotoxicology methods that aids mussel conservation. Modern software and Hall-sensor technology enabled mussel filtration behavior to be monitored in real time at very high resolution. With this technology, we present our method using Anodonta anatina and record its response to de-icing salt pollution. The experiment involved an environmentally relevant 'pulse-exposure' design simulating three subsequent inflow events. Three sublethal endpoints were investigated: Filtration Activity, Transition Frequency (number of changes from opened to closed, or vice versa) and Avoidance Behavior. The mussels presented high variation in filtration behavior, behaving asynchronously. At environmentally relevant de-icing salt exposure scenarios, A. anatina behavior patterns were significantly affected. Treated mussels' Filtration Activity decreased during periods of very high and long de-icing salt exposure (p … ) … ecotoxicology studies. Avoidance Behavior proved to be a potentially suitable endpoint for calculating mussel-behavior effect concentrations. We therefore recommend adult mussel behavior as a suitable biomarker for future ecotoxicological research. This method could be applied to other bivalve species and to physical and environmental stressors, such as particulate matter and temperature. Copyright © 2015 Elsevier B.V. All rights reserved.

9. Molecular biomarkers of neurodegeneration.
Science.gov (United States)
Höglund, Kina; Salter, Hugh
2013-11-01
Neuronal dysfunction and degeneration are central events of a number of major diseases with significant unmet need. Neuronal dysfunction may not necessarily be the result of cell death, but may also be due to synaptic damage leading to impaired neuronal cell signaling or long-term potentiation. Once degeneration occurs, it is unclear whether axonal or synaptic loss comes first or whether this precedes neuronal cell death. In this review we summarize the pathophysiology of four major neurodegenerative diseases: Alzheimer's disease, Parkinson's disease, multiple sclerosis and amyotrophic lateral sclerosis (Lou Gehrig's disease). For each of these diseases, we describe how biochemical biomarkers are currently understood in relation to the pathophysiology and in terms of neuronal biology, and we discuss the clinical and diagnostic utility of these potential tools, which are at present limited. We discuss how markers may be used to drive drug development and clinical practice.

10. Towards Improved Biomarker Research
DEFF Research Database (Denmark)
Kjeldahl, Karin
This thesis takes a look at the data analytical challenges associated with the search for biomarkers in large-scale biological data such as transcriptomics, proteomics and metabolomics data. These studies aim to identify genes, proteins or metabolites which can be associated with e.g. a diet … is used both for regression and classification purposes. This method has proven its strong worth in multivariate data analysis throughout an enormous range of applications; a very classic data type is near-infrared (NIR) data, but many similar data types have also been very successful … On that background, the general characteristics of omics data are described and related to the characteristics of classical NIR-type data. This shows that omics data, which are generally much bigger data sets than classical data, are not just simple extensions of NIR data. The sample type, analytical method …

11. Biomarkers for lymphoma
Science.gov (United States)
Zangar, Richard C.; Varnum, Susan M.
2014-09-02
A biomarker, method, test kit, and diagnostic system for detecting the presence of lymphoma in a person are disclosed. The lymphoma may be Hodgkin's lymphoma or non-Hodgkin's lymphoma. The person may be a high-risk subject. In one embodiment, a plasma sample from a person is obtained. The level of at least one protein listed in Table S3 in the plasma sample is measured. The level of at least one protein in the plasma sample is compared with the level in a normal or healthy subject. The lymphoma is diagnosed based upon the level of the at least one protein in the plasma sample in comparison to the normal or healthy level.

12. Inflammatory biomarkers for AMD.
Science.gov (United States)
Stanton, Chloe M; Wright, Alan F
2014-01-01
Age-related macular degeneration (AMD) is the leading cause of blindness worldwide, affecting an estimated 50 million individuals aged over 65 years. Environmental and genetic risk factors implicate chronic inflammation in the etiology of AMD, contributing to the formation of drusen, retinal pigment epithelial cell dysfunction and photoreceptor cell death. Consistent with a role for chronic inflammation in AMD pathogenesis, several inflammatory mediators, including complement components, chemokines and cytokines, are elevated at both the local and systemic levels in AMD patients. These mediators have diverse roles in the alternative complement pathway, including recruitment of inflammatory cells, activation of the inflammasome, promotion of neovascularisation and the resolution of inflammation. The utility of inflammatory biomarkers in assessing individual risk and progression of the disease is controversial. However, understanding the role of these inflammatory mediators in AMD onset, progression and response to treatment may increase our knowledge of disease pathogenesis and provide novel therapeutic options in the future.
13. Surfactant protein D, Club cell protein 16, Pulmonary and activation-regulated chemokine, C-reactive protein, and Fibrinogen biomarker variation in chronic obstructive lung disease
DEFF Research Database (Denmark)
Johansson, Sofie Lock; Vestbo, J.; Sorensen, G. L.
2014-01-01
Chronic obstructive pulmonary disease (COPD) is a multifaceted condition that cannot be fully described by the severity of airway obstruction. The limitations of spirometry and clinical history have prompted researchers to investigate a multitude of surrogate biomarkers of disease for the assessment of patients, prediction of risk, and guidance of treatment. The aim of this review is to provide a comprehensive summary of observations for a selection of recently investigated pulmonary inflammatory biomarkers (Surfactant protein D (SP-D), Club cell protein 16 (CC-16), and Pulmonary and activation-regulated chemokine (PARC/CCL-18)) and systemic inflammatory biomarkers (C-reactive protein (CRP) and fibrinogen) with COPD. The relevance of these biomarkers for COPD is discussed in terms of their biological plausibility, their independent association to disease and hard clinical outcomes...

14. Design of cohort studies in chronic diseases using routinely collected databases when a prescription is used as surrogate outcome
Directory of Open Access Journals (Sweden)
Egger Peter
2011-04-01
Background: There has been little research on the design of studies based on routinely collected data when the clinical endpoint of interest is not recorded, but can be inferred from a prescription. This often happens when exploring the effect of a drug on chronic diseases. Using the LifeLink claims database to study the possible anti-inflammatory effects of statins in rheumatoid arthritis (RA), oral steroids (OS) were treated as a surrogate of inflammatory flare-ups. We compared two cohort study designs, the first using time-to-event outcomes and the second using the quantitative amount of the surrogate. Methods: RA patients were extracted from the LifeLink database. In the first study, patients were split into two sub-cohorts based on whether they were using OS within a specified time window of the RA index date (first record of RA). Using Cox models we evaluated the association between time-varying exposure to statins and (i) initiation of OS therapy in the non-users of OS at the RA index date and (ii) cessation of OS therapy in the users of OS at the RA index date. In the second study, we matched new statin users to non-users on age and sex. Zero-inflated negative binomial models were used to contrast the number of days' prescriptions of OS in the year following the date of statin initiation for the two exposure groups. Results: In the unmatched study, the statin exposure hazard ratio (HR) of initiating OS in the 31451 non-users of OS at the RA index date was 0.96 (95% CI 0.9, 1.1) and the statin exposure HR of cessation of OS therapy in the 6026 users of OS therapy at the RA index date was 0.95 (0.87, 1.05). In the matched cohort of 6288 RA patients the statin exposure rate ratio for duration on OS therapy was 0.88 (0.76, 1.02). There was digit preference for outcomes in multiples of 7 and 30 days. Conclusions: The 'time to event' study design was preferable because it better exploits information on all available patients and provides a degree of robustness toward confounding.
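Both designs in entry 14 map onto standard open-source statistics tooling. The sketch below is a toy version on simulated data with made-up column names: a Cox model for time to first oral-steroid (OS) prescription and a zero-inflated negative binomial (ZINB) model for OS days in the following year. For simplicity it uses a fixed baseline statin indicator rather than the time-varying exposure the authors describe.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(1)
n = 2000
statin = rng.integers(0, 2, n)
age = rng.normal(60, 10, n)

# Design 1: time (days) to OS initiation, administratively censored at 365 d.
t_event = rng.exponential(400.0 / (1 + 0.05 * statin), n)
df = pd.DataFrame({"T": np.minimum(t_event, 365.0),
                   "E": (t_event <= 365.0).astype(int),
                   "statin": statin, "age": age})
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print(cph.hazard_ratios_)        # HR for statin exposure and age

# Design 2: number of OS days in the follow-up year, with excess zeros.
zero = rng.random(n) < 0.6
days = np.where(zero, 0, rng.negative_binomial(2, 0.05, n))
X = sm.add_constant(np.column_stack([statin, age]))
zinb = ZeroInflatedNegativeBinomialP(days, X, exog_infl=sm.add_constant(statin))
print(zinb.fit(method="bfgs", maxiter=500, disp=False).summary())
```

The ZINB part matches the abstract's observation that prescription-day counts pile up at zero, while the Cox part corresponds to the 'time to event' design the authors ultimately preferred.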
15. Assessment of the efficacy of functional food ingredients - introducing the concept "kinetics of biomarkers".
Science.gov (United States)
Verhagen, Hans; Coolen, Stefan; Duchateau, Guus; Hamer, Mark; Kyle, Janet; Rechner, Andreas
2004-07-13
Functional foods are "foods and beverages with claimed health benefits based on scientific evidence". Health claims need to be substantiated scientifically. The future of functional foods will rely heavily on proven efficacy in well-controlled intervention studies with human volunteers. In order to get the maximum output from human trials, improvements are needed with respect to study design and optimization of study protocols. Efficacy at realistic intake levels needs to be established in studies with humans via the use of suitable biomarkers, unless the endpoint can be measured directly. The human body is able to deal with chemical entities irrespective of their origin, and the pharmaceutical terms "absorption, distribution, metabolism and excretion" have their equivalent where biomarkers are concerned. Whereas only "diurnal variation" or "circadian rhythm" is sometimes considered, little attention is paid to the "kinetics of biomarkers". "Kinetics of biomarkers" comprises "formation, distribution, metabolism and excretion". However, this is at present neither an established science nor common practice in nutrition research on functional foods. As a consequence, sampling times and matrices, for example, are chosen on the basis of historical practice and convenience (for volunteers and scientists) but not on the basis of in-depth insight. The concept of kinetics of biomarkers is illustrated by a variety of readily comprehensible examples, such as malaria, cholesterol, polyphenols, glutathione S-transferase alpha, F2-isoprostanes, interleukin-6, and plasma triacylglycerides.
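To make the "kinetics of biomarkers" point concrete, here is a minimal simulation, with invented rate constants, of a biomarker formed from a transient stimulus and eliminated first-order; the sampling time that captures the peak shifts by hours depending on the elimination constant alone.

```python
import numpy as np
from scipy.integrate import solve_ivp

def biomarker(t, c, k_form, k_elim):
    s = np.exp(-((t - 2.0) ** 2))          # stimulus pulse around t = 2 h
    return k_form * s - k_elim * c         # formation minus elimination

t_eval = np.linspace(0, 24, 200)
for k_elim in (0.2, 1.0, 5.0):             # slow / medium / fast elimination
    sol = solve_ivp(biomarker, (0, 24), [0.0], args=(1.0, k_elim), t_eval=t_eval)
    t_peak = t_eval[np.argmax(sol.y[0])]
    print(f"k_elim={k_elim}: peak at {t_peak:.1f} h, peak level {sol.y[0].max():.3f}")
```

Slow-clearing markers peak late and linger; fast-clearing ones require tightly timed sampling, which is exactly the protocol-design issue the authors raise.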
16. [Novel biomarkers for diabetic nephropathy].
Science.gov (United States)
Araki, Shin-ichi
2014-02-01
Diabetic nephropathy is a leading cause of end-stage renal disease worldwide. An early clinical sign of this complication is an increase of urinary albumin excretion, called microalbuminuria, which is not only a predictor of the progression of nephropathy, but also an independent risk factor for cardiovascular disease. Although microalbuminuria is clinically important to assess the prognosis of diabetic patients, it may be insufficient as an early and specific biomarker of diabetic nephropathy because of a large day-to-day variation and the lack of a good correlation of microalbuminuria with renal dysfunction and pathohistological changes. Thus, more sensitive and specific biomarkers are needed to improve the diagnostic capability of identifying patients at high risk. The factors involved in renal tubulo-interstitial damage, the production and degradation of extracellular matrix, microinflammation, etc., are investigated as candidate molecules. Despite numerous efforts so far, the assessment of these biomarkers is still a subject of ongoing investigations. Recently, a variety of omics and quantitative techniques in systems biology are rapidly emerging in the field of biomarker discovery, including proteomics, transcriptomics, and metabolomics, and they have been applied to search for novel putative biomarkers of diabetic nephropathy. Novel biomarkers or their combination with microalbuminuria provide a better diagnostic accuracy than microalbuminuria alone, and may be useful for establishing personalized medicine. Furthermore, the identification of novel biomarkers may provide insight into the mechanisms underlying diabetic nephropathy.

17. Laboratory Testing of Waste Isolation Pilot Plant Surrogate Waste Materials
Science.gov (United States)
Broome, S.; Bronowski, D.; Pfeifle, T.; Herrick, C. G.
2011-12-01
The Waste Isolation Pilot Plant (WIPP) is a U.S. Department of Energy geological repository for the permanent disposal of defense-related transuranic (TRU) waste. The waste is emplaced in rooms excavated in the bedded Salado salt formation at a depth of 655 m below the ground surface. After emplacement of the waste, the repository will be sealed and decommissioned. WIPP Performance Assessment modeling of the underground material response requires a full and accurate understanding of coupled mechanical, hydrological, and geochemical processes and how they evolve with time. This study was part of a broader test program focused on room closure, specifically the compaction behavior of waste and the constitutive relations to model this behavior. The goal of this study was to develop an improved waste constitutive model. The model parameters are developed based on a well-designed set of test data. The constitutive model will then be used to realistically model evolution of the underground and to better understand the impacts on repository performance. The present study results are focused on laboratory testing of surrogate waste materials. The surrogate wastes correspond to a conservative estimate of the degraded containers and TRU waste materials after the 10,000 year regulatory period. Testing consists of hydrostatic, uniaxial, and triaxial tests performed on surrogate waste recipes that were previously developed by Hansen et al. (1997). These recipes can be divided into materials that simulate 50% and 100% degraded waste by weight. The percent degradation indicates the anticipated amount of iron corrosion, as well as the decomposition of cellulosics, plastics, and rubbers. Axial, lateral, and volumetric strain and axial and lateral stress measurements were made. Two unique testing techniques were developed during the course of the experimental program. The first involves the use of dilatometry to measure sample volumetric strain under a hydrostatic condition. Bulk

18. A general framework to learn surrogate relevance criterion for atlas based image segmentation
Science.gov (United States)
Zhao, Tingting; Ruan, Dan
2016-09-01
Multi-atlas based image segmentation sees great opportunities in the big data era but also faces unprecedented challenges in identifying positive contributors from extensive heterogeneous data. To assess data relevance, image similarity criteria based on various image features widely serve as surrogates for the inaccessible geometric agreement criteria. This paper proposes a general framework to learn image based surrogate relevance criteria to better mimic the behaviors of segmentation based oracle geometric relevance. The validity of its general rationale is verified in the specific context of fusion set selection for image segmentation. More specifically, we first present a unified formulation for surrogate relevance criteria and model the neighborhood relationship among atlases based on the oracle relevance knowledge. Surrogates are then trained to be small for geometrically relevant neighbors and large for irrelevant remotes to the given targets. The proposed surrogate learning framework is verified in corpus callosum segmentation. The learned surrogates demonstrate superiority in inferring the underlying oracle value and selecting relevant fusion set, compared to benchmark surrogates.
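A toy rendering of the surrogate-learning idea in entry 18: regress an oracle geometric distance (here 1 - Dice overlap) on cheap image-similarity features, then choose the fusion set by smallest predicted distance. The features, the simulated data and the ridge regressor are stand-ins for illustration, not the paper's actual formulation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_atlases, n_feats = 200, 5
X = rng.random((n_atlases, n_feats))        # e.g. MSD, NCC, MI between images
w_true = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
oracle = X @ w_true + 0.02 * rng.normal(size=n_atlases)   # 1 - Dice (training)

model = Ridge(alpha=1.0).fit(X, oracle)     # learned surrogate relevance

# For a new target image: score every atlas and keep the k most relevant.
X_new = rng.random((n_atlases, n_feats))
k = 10
fusion_set = np.argsort(model.predict(X_new))[:k]   # small score = relevant
print(fusion_set)
```

The key property the paper trains for is preserved here: the learned score should be small for geometrically relevant atlases and large for irrelevant ones, so ranking by the score selects the fusion set.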
19. Surrogate mobility and orientation affect the early neurobehavioral development of infant rhesus macaques (Macaca mulatta).
Science.gov (United States)
Dettmer, Amanda M; Ruggiero, Angela M; Novak, Melinda A; Meyer, Jerrold S; Suomi, Stephen J
2008-05-01
A biological mother's movement appears necessary for optimal development in infant monkeys. However, nursery-reared monkeys are typically provided with inanimate surrogate mothers that move very little. The purpose of this study was to evaluate the effects of a novel, highly mobile surrogate mother on motor development, exploration, and reactions to novelty. Six infant rhesus macaques (Macaca mulatta) were reared on mobile hanging surrogates (MS) and compared to six infants reared on standard stationary rocking surrogates (RS) and to 9-15 infants reared with their biological mothers (MR) for early developmental outcome. We predicted that MS infants would develop more similarly to MR infants than RS infants. In neonatal assessments conducted at Day 30, both MS and MR infants showed more highly developed motor activity than RS infants on measures of grasping (p = .009), coordination (p = .038), spontaneous crawl (p = .009), and balance (p = .003). At 2-3 months of age, both MS and MR infants displayed higher levels of exploration in the home cage than RS infants (p = .016). In a novel situation in which only MS and RS infants were tested, MS infants spent less time near their surrogates in the first five minutes of the test session than RS infants (p = .05), indicating a higher level of comfort. Collectively, these results suggest that when nursery-rearing of infant monkeys is necessary, a mobile hanging surrogate may encourage more normative development of gross motor skills and exploratory behavior and may serve as a useful alternative to stationary or rocking surrogates.
1. Development of Parkinson's disease biomarkers.
Science.gov (United States)
Prakash, Kumar M; Tan, Eng-King
2010-12-01
Parkinson's disease (PD) is the most common neurodegenerative movement disorder, affecting over 6 million people worldwide. It is anticipated that the number of affected individuals may increase significantly in the most populous nations by 2030. During the past 20 years, much progress has been made in identifying and assessing various potential clinical, biochemical, imaging and genetic biomarkers for PD. Despite the wealth of information, development of a validated biomarker for PD is still ongoing. It is hoped that reliable and well-validated biomarkers will provide critical clues to assist in the diagnosis and management of Parkinson's disease patients in the near future.

2. Evaluation of the use of salivary lead levels as a surrogate of blood lead or plasma lead levels in lead exposed subjects
Energy Technology Data Exchange (ETDEWEB)
Barbosa, Fernando [Universidade de Sao Paulo, Departamento de Analises Clinicas, Toxicologicas e Bromatologicas, Faculdade de Ciencias Farmaceuticas de Ribeirao Preto, Ribeirao Preto, SP (Brazil); Correa Rodrigues, Maria H.; Buzalaf, Maria R. [Universidade de Sao Paulo, Departamento de Ciencias Biologicas/Bioquimica, Faculdade de Odontologia de Bauru, Bauru, SP (Brazil); Krug, Francisco J. [Universidade de Sao Paulo, Centro de Energia Nuclear na Agricultura, Piracicaba, SP (Brazil); Gerlach, Raquel F. [Universidade de Sao Paulo, Departamento de Morfologia, Estomatologia e Fisiologia, Faculdade de Odontologia de Ribeirao Preto, Ribeirao Preto, SP (Brazil); Tanus-Santos, Jose E. [Universidade de Sao Paulo, Departamento de Farmacologia, Faculdade de Medicina de Ribeirao Preto, Ribeirao Preto, SP (Brazil)
2006-10-15
We conducted a study to evaluate the use of parotid salivary lead (Pb-saliva) levels as a surrogate of blood lead (Pb-B) or plasma lead (Pb-P) levels to diagnose lead exposure. The relationship between these biomarkers was assessed in a lead-exposed population. Pb-saliva and Pb-P were determined by inductively coupled plasma mass spectrometry, while lead in whole blood was determined by graphite furnace atomic absorption spectrometry. We studied 88 adults (31 men and 57 women) from 18 to 60 years old. Pb-saliva levels varied from 0.05 to 4.4 µg/l, with a mean of 0.85 µg/l. Blood lead levels varied from 32.0 to 428.0 µg/l in men (mean 112.3 µg/l) and from 25.0 to 263.0 µg/l (mean 63.5 µg/l) in women. Corresponding Pb-P levels were 0.02-2.50 µg/l (mean 0.77 µg/l) and 0.03-1.6 µg/l (mean 0.42 µg/l) in men and women, respectively. A weak correlation was found between log Pb-saliva and log Pb-B (r=0.277, P<0.008), and between log Pb-saliva and log Pb-P (r=0.280, P=0.006). The Pb-saliva/Pb-P ratio ranged from 0.20 to 18.0. Age and gender did not affect Pb-saliva levels or the Pb-saliva/Pb-P ratio. Taken together, these results suggest that salivary lead may not be used as a biomarker to diagnose lead exposure nor as a surrogate of plasma lead levels, at least for low to moderately lead-exposed populations.
3. Results from Second Round of Remediated Nitrate Salt Surrogate Formulation and Testing
Energy Technology Data Exchange (ETDEWEB)
Brown, Geoffrey Wayne [Los Alamos National Laboratory]; Leonard, Philip [Los Alamos National Laboratory]; Hartline, Ernest Leon [Los Alamos National Laboratory]; Tian, Hongzhao [Los Alamos National Laboratory]
2016-04-04
High Explosives and Technology (M-7) completed the second round of formulation and testing of Remediated Nitrate Salt (RNS) surrogates on March 17, 2016. This report summarizes the results of the work and also includes additional documentation required under test plan PLAN-TA9-2443 Rev B, "Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing Standard Procedure", released February 16, 2016. All formulation and testing was carried out according to PLAN-TA9-2443 Rev B. Results from the first round of formulation and testing were documented in memorandum M7-16-6042, "Results from First Round of Remediated Nitrate Salt Surrogate Formulation and Testing."

4. Assessing the potential of surrogate EPS to mimic natural biofilm mechanical properties
Science.gov (United States)
Thom, Moritz; Schimmels, Stefan
2017-04-01
Biofilms growing on benthic sediments may increase the resistance towards erosion considerably by the sticky nature of extracellular polymeric substances (EPS). The EPS is a biopolymer which is secreted by the microorganisms inhabiting the biofilm matrix and may be regarded as natural glue. However, laboratory studies on the biostabilization effect mediated by biofilms are often hampered by the unavailability of "environmental" flumes in which light intensities, water temperature and nutrient content can be controlled. To allow investigations on biostabilization in "traditional" flume settings, the use of surrogate materials is studied. Another advantage of using appropriate surrogates is the potential to reduce the experimental time: compared to cultivating natural biofilms, the surrogates can readily be designed to mimic biofilms at different growth stages. Furthermore, the use of surrogates, which are expected to have more homogeneous mechanical properties, could facilitate fundamental studies to improve our knowledge on biostabilization. Even though a number of studies have already utilized EPS surrogates, it is not clear how to mix them to correctly mimic natural EPS mechanical properties. In this study the adhesiveness (a measure of stickiness) on the surface of several EPS surrogates (e.g. Xanthan Gum, sodium alginate) is measured. These surrogates, which are originally used in the food industry as rheology modifiers, are mixed by adding water to a powder at a desired concentration (C). The measured surface adhesion of different surrogates at different concentrations ranged from 0.5 to 6.7 N/m2, which is well in line with values found for laboratory cultured biofilms. We found that the surrogate characteristics differed largely, especially in regard to (a) the response of the adhesiveness to increased concentrations (powder to water) and (b) their rheological characteristics. A seemingly promising surrogate for use in biostabilization studies is Xanthan Gum.
5. Children of surrogate mothers: psychological well-being, family relationships and experiences of surrogacy.
Science.gov (United States)
2014-01-01
What impact does surrogacy have on the surrogates' own children? The children of surrogate mothers do not experience any negative consequences as a result of their mother's decision to be a surrogate, irrespective of whether or not the surrogate uses her own egg. Participants were recruited as part of a study of the long-term effects of surrogacy for surrogates and their family members. Data were collected from 36 children of surrogates at a single time point. Participants whose mother had been a surrogate 5-15 years prior to interview and who were aged over 12 years were eligible to take part. Thirty-six participants (14 male and 22 female) aged 12-25 years were interviewed (response rate = 52%). Questionnaires assessing psychological health and family functioning were administered. Forty-four per cent (15) of participants' mothers had undergone gestational surrogacy, 39% (14) had used their own egg (genetic surrogacy) and 19% (7) had completed both types of surrogacy. Most surrogates' children (86%, 31) had a positive view of their mother's surrogacy. Forty-seven per cent (17) of children were in contact with the surrogacy child and all reported good relationships with him/her. Forty per cent (14) of children referred to the child as a sibling or half-sibling and this did not differ between genetic and gestational surrogacy. Most children (89%, 32) reported a positive view of family life, with all enjoying spending time with their mother. Mean scores on the questionnaire assessments of psychological health and self-esteem were within the normal range and did not differ by surrogacy type. The sample size for this study was relatively small and not all children chose to take part, therefore their views cannot be known. Nevertheless, this is the first study to assess the experiences of surrogacy from the perspective of the surrogates' own children. There may be some bias from the inclusion of siblings from the same family. Findings of this study show that family
6. Efficient Calibration of Computationally Intensive Groundwater Models through Surrogate Modelling with Lower Levels of Fidelity
Science.gov (United States)
Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.
2012-12-01
Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling, which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally

7. One universal common endpoint in mouse models of amyotrophic lateral sclerosis.
Directory of Open Access Journals (Sweden)
Jesse A Solomon
There is no consensus among research laboratories around the world on the criteria that define endpoint in studies involving rodent models of amyotrophic lateral sclerosis (ALS). Data from 4 nutrition intervention studies using 162 G93A mice, a model of ALS, were analyzed to determine if differences exist between the following endpoint criteria: CS 4 (functional paralysis of both hindlimbs), CS 4+ (CS 4 in addition to the earliest age of body weight loss, body condition deterioration or righting reflex), and CS 5 (CS 4 plus righting reflex >20 s). The age (d; mean ± SD) at which mice reached endpoint was recorded as the unit of measurement. Mice reached CS 4 at 123.9±10.3 d, CS 4+ at 126.6±9.8 d and CS 5 at 127.6±9.8 d, all significantly different from each other (P<0.001). There was a significant positive correlation between CS 4 and CS 5 (r = 0.95, P<0.001), CS 4 and CS 4+ (r = 0.96, P<0.001), and CS 4+ and CS 5 (r = 0.98, P<0.001), with the Bland-Altman plot showing an acceptable bias between all endpoints. Logrank tests showed that mice reached CS 4 24% and 34% faster than CS 4+ (P = 0.046) and CS 5 (P = 0.006), respectively. Adopting CS 4 as endpoint would spare a mouse an average of 4 days (P<0.001) from further neuromuscular disability and poor quality of life compared to CS 5. Alternatively, CS 5 provides information regarding proprioception and severe motor neuron death, both of which could be important parameters in establishing the efficacy of specific treatments. Converging ethics and discovery: would adopting CS 4 as endpoint compromise the acquisition of insight about the effects of interventions in animal models of ALS?
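The agreement and timing analyses in the ALS endpoint entry above are easy to reproduce in outline. The sketch below simulates endpoint ages resembling the reported means and applies a Bland-Altman summary plus a logrank comparison; the numbers are illustrative, not the study's data.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)
cs4 = rng.normal(123.9, 10.3, 162)          # age (d) at CS 4
cs5 = cs4 + rng.normal(3.7, 2.0, 162)       # age (d) at CS 5, slightly later

# Bland-Altman summary: bias and 95% limits of agreement between criteria.
diff = cs5 - cs4
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias {bias:.2f} d, LoA [{bias - 1.96*sd:.2f}, {bias + 1.96*sd:.2f}] d")

# Logrank test treating endpoint age as an event time (all events observed).
res = logrank_test(cs4, cs5)
print("logrank p-value:", res.p_value)
```

This is the same logic as the paper's comparison: tight limits of agreement argue the criteria rank animals similarly, while the logrank test quantifies how much earlier CS 4 is reached.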
8. Effectiveness of amphibians as biodiversity surrogates in pond conservation.
Science.gov (United States)
Ilg, Christiane; Oertli, Beat
2017-04-01
Amphibian decline has led to worldwide conservation efforts, including the identification and designation of sites for their protection. These sites could also play an important role in the conservation of other freshwater taxa. In 89 ponds in Switzerland, we assessed the effectiveness of amphibians as a surrogate for 4 taxonomic groups that occur in the same freshwater ecosystems as amphibians: dragonflies, aquatic beetles, aquatic gastropods, and aquatic plants. The ponds were all of high value for amphibian conservation. Cross-taxon correlations were tested for species richness and conservation value, and Mantel tests were used to investigate community congruence. Species richness, conservation value, and community composition of amphibians were weakly congruent with these measures for the other taxonomic groups. Paired comparisons for the 5 groups considered showed that for each metric, amphibians had the lowest degree of congruence. Our results imply that site designation for amphibian conservation will not necessarily provide protection for freshwater biodiversity as a whole. To provide adequate protection for freshwater species, we recommend other taxonomic groups be considered in addition to amphibians in the prioritization and site designation process. © 2016 Society for Conservation Biology.

9. Numerical investigation for erratic behavior of Kriging surrogate model
Energy Technology Data Exchange (ETDEWEB)
Kwon, Hyun Gil; Yi, Seul Gi [KAIST, Daejeon (Korea, Republic of)]; Choi, Seong Im [Virginia Polytechnic Institute and State University, Blacksburg (United States)]
2014-09-15
The Kriging model is one of the popular spatial/temporal interpolation models in the engineering field, since it can reduce the time and resources needed for expensive analyses. But generating a Kriging model is hardly a sinecure, because the internal semi-variogram structure of Kriging often reveals numerically unstable or erratic behaviors. In the present study, issues in the maximum likelihood estimation, which is a vital part of the construction of the Kriging model, are investigated. These issues are divided into two aspects: Issue I concerns the erratic response of the likelihood function itself, and Issue II concerns numerically unstable behaviors in the correlation matrix. For both issues, the specific circumstances which might raise the issue, and the reasons for them, are studied. Some practical ways to cope with them are further suggested. Furthermore, the issue is studied for a practical problem: aerodynamic performance coefficients of a two-dimensional airfoil predicted by CFD analysis. Results show that such erratic behavior of the Kriging surrogate model can be effectively resolved by the proposed solution. In conclusion, it is expected that this paper could be helpful in preventing such erratic and unstable behavior.

10. Protected Gold Nanoparticles with Thioethers and Amines As Surrogate Ligands
Directory of Open Access Journals (Sweden)
M. Rafiq H. Siddiqui
2013-01-01
Dodecyl sulfide, dodecyl amine, and hexylamine were shown to act as surrogate ligands (L) via metastable gold nanoparticles. By collating analytical and spectroscopic data obtained simultaneously, the empirical formula Au24L was assigned. These impurity-free nanoparticles, obtained in near quantitative yields and showing exceptional gold assays (up to 98% Au), were prepared by a modification of the two-phase method. Replacement reactions on the Au24L showed that Au:L ratios may be increased (up to Au55:L, L = (H25C12)2S) or decreased (Au12:L, L = H2NC12H25 and H2NC6H13) as desired. This work, encompassing the role of the analytical techniques used, that is, elemental analysis, variable temperature 1H NMR, FAB mass spectrometry, UV-Vis spectroscopy, thin film X-ray diffraction, and high-resolution electron microscopy (HREM), has implications in the study of size control, purity, stability, and metal assays of gold nanoparticles.

11. Investigation of ethosomes as surrogate carriers for bioactives
Directory of Open Access Journals (Sweden)
Devina Verma
2016-01-01
Background: An ethosomal vesicular system delivering a bioactive phytochemical, chrysin, was developed for transdermal delivery to increase its permeability and penetrability. Materials and Methods: The ethosomal system was optimized by keeping lecithin and ethanol concentration as independent variables, while size and size distribution were taken as dependent variables. The optimized formulation was then subjected to various in vitro characterization parameters. Results: An ethosomal vesicle with an optimum size and polydispersity index of 134 ± 35 nm and 0.153, respectively, and entrapment efficiency of 80.05 ± 2.6% was considered as optimized and subjected to characterization. The scanning electron microscopy and transmission electron microscopy showed spherical entities with uniform surface, whereas in vitro permeation and retention studies showed a sustained mode of drug release and better skin retention as compared to a hydroethanolic solution of the drug. The confocal laser scanning microscopy study reiterated the high penetrability of vesicles into the skin. Histopathological and Fourier transform infrared spectroscopy analysis revealed its mechanism of penetration. Conclusion: The study thus demonstrated the ability of ethosomal vesicles as surrogate carriers for delivery of bioactive agents through the skin for better amelioration of skin inflammation and other diseases.
12. Genotoxic and teratogenic effect of freshwater sediment samples from the Rhine and Elbe River (Germany) in zebrafish embryo using a multi-endpoint testing strategy.
Science.gov (United States)
Garcia-Käufer, M; Gartiser, S; Hafner, C; Schiwy, S; Keiter, S; Gründemann, C; Hollert, H
2015-11-01
The embryotoxic potential of three model sediment samples with a distinct and well-characterized pollutant burden from the main German river basins Rhine and Elbe was investigated. The Fish Embryo Contact Test (FECT) in zebrafish (Danio rerio) was applied and submitted to further development to allow for a comprehensive risk assessment of such complex environmental samples. As particulate pollutants are constitutive constituents of sediments, they underlie episodic source-sink dynamics, becoming available to benthic organisms. As bioavailability of xenobiotics is a crucial factor for ecotoxicological hazard, we focused on the direct particle-exposure pathway, evaluating throughput-capable endpoints and considering toxicokinetics. Fish embryos and larvae were exposed to reconstituted (freeze-dried) sediment samples in a microcosm-scale experimental approach. A range of different developmental embryonic stages were considered to gain knowledge of potential correlations with metabolic competence during early embryogenesis. Morphological, physiological, and molecular endpoints were investigated to elucidate induced adverse effects, placing particular emphasis on genomic instability, assessed by the in vivo comet assay. Flow cytometry was used to investigate the extent of induced cell death, since cytotoxicity can lead to confounding effects. The implementation of relative toxicity indices further provides inter-comparability between samples and related studies. All of the investigated sediments represent a significant ecotoxicological hazard by disrupting embryogenesis in zebrafish. Beside the induction of acute toxicity, morphological and physiological embryotoxic effects could be identified in a concentration-response manner. Increased DNA strand break frequency was detected after sediment contact in a characteristic non-monotonic dose-response behavior due to overlapping cytotoxic effects. The embryonic zebrafish toxicity model along with the in vivo comet
13. Urinary Biomarkers of Brain Diseases
Directory of Open Access Journals (Sweden)
Manxia An
2015-12-01
Biomarkers are the measurable changes associated with a physiological or pathophysiological process. Unlike blood, urine is not subject to homeostatic mechanisms. Therefore, greater fluctuations can occur in urine than in blood, better reflecting changes in the human body. A roadmap for the urine biomarker era has been proposed. Although urine analysis has been attempted for clinical diagnosis, and urine has been monitored during the progression of many diseases, particularly urinary system diseases, whether urine can reflect brain disease status remains uncertain. As some biomarkers of brain diseases can be detected in body fluids such as cerebrospinal fluid and blood, there is a possibility that urine also contains biomarkers of brain diseases. This review summarizes the clues to brain diseases reflected in the urine proteome and metabolome.

14. Improving tuberculosis diagnostics with biomarkers
Directory of Open Access Journals (Sweden)
Shu CC
2015-05-01
Chin-Chung Shu,1,2 Jann-Yuan Wang,2 Li-Na Lee,2,3 Chong-Jen Yu,2 Kwen-Tay Luh3 1Department of Traumatology, 2Department of Internal Medicine, 3Department of Laboratory Medicine, National Taiwan University Hospital, Taipei, Taiwan Abstract: Although many laboratory methods have been developed to expedite the diagnosis of active tuberculosis (TB) and Mycobacterium tuberculosis (Mtb) infection, delays in diagnosis remain a major problem in clinical practice. Biomarkers may contribute favorably or unfavorably to TB diagnosis in clinically suspected TB cases with inconclusive diagnostic findings. A good understanding of the effectiveness and practical limitations of these biomarkers is important to improve diagnosis. This review summarizes currently used biomarkers, mainly as to their validation, and focuses on latent TB infection, active pulmonary TB, and tuberculous pleural effusion. Keywords: tuberculosis, biomarker, diagnosis, latent tuberculosis infection, pleural effusion

15. [Procalcitonin as a biomarker for infections]
NARCIS (Netherlands)
de Jonge, J C; de Lange, D W; Bij de Vaate, E A; van Leeuwen, H; Arends, J E
2016-01-01
- Inappropriate use of antibiotics in patients without bacterial infection contributes significantly to worldwide antibiotic resistance. - The goal of this review is to summarise evidence from randomised trials investigating the value of the biomarker procalcitonin (PCT) in patients with symptoms of

16. Biomarkers of latent TB infection
DEFF Research Database (Denmark)
Ruhwald, Morten; Ravn, Pernille
2009-01-01
For the last 100 years, the tuberculin skin test (TST) has been the only diagnostic tool available for latent TB infection (LTBI) and no biomarker per se is available to diagnose the presence of LTBI. With the introduction of M. tuberculosis-specific IFN-gamma release assays (IGRAs), a new area of in vitro immunodiagnostic tests for LTBI based on biomarker readout has become a reality. In this review, we discuss existing evidence on the clinical usefulness of IGRAs and the indefinite number of potential new biomarkers that can be used to improve diagnosis of latent TB infection. We also present early data suggesting that the monocyte-derived chemokine inducible protein-10 may be useful as a novel biomarker for the immunodiagnosis of latent TB infection....
17. Biomarkers for preclinical Alzheimer's disease.
Science.gov (United States)
Tan, Chen-Chen; Yu, Jin-Tai; Tan, Lan
2014-01-01
Currently, there is a pressing need to shift the focus to accurate detection of the earliest phase of increasingly preclinical Alzheimer's disease (AD). Meanwhile, the growing recognition that the pathophysiological process of AD begins many years prior to clinically obvious symptoms and the concept of a presymptomatic or preclinical stage of AD are becoming more widely accepted. Advances in clinical identification of new measurements will be critical not only in the discovery of sensitive, specific, and reliable biomarkers of preclinical AD but also in the development of tests that will aid in the early detection and differential diagnosis of dementia and in monitoring disease progression. The goal of this review is to provide an overview of biomarkers for preclinical AD, with emphasis on neuroimaging and neurochemical biomarkers. We conclude with a discussion of emergent directions for AD biomarker research.

18. Linking biomarkers to reproductive success of caged fathead minnows in streams with increasing urbanization
Science.gov (United States)
Crago, J.; Corsi, S.R.; Weber, D.; Bannerman, R.; Klaper, R.
2011-01-01
Reproductive and oxidative stress biomarkers have been recommended as tools to assess the health of aquatic organisms. Though validated in the laboratory, there are few studies that tie a change in gene expression to adverse reproductive or population outcomes in the field. This paper looked at 17 streams with varying degrees of urbanization to assess the use of biomarkers associated with reproduction or stress in predicting reproductive success of fathead minnows. In addition, the relationship between biomarkers and water quality measures in streams with varying degrees of urbanization was examined. Liver vitellogenin mRNA was correlated with reproduction within a period of 11 d prior to sampling irrespective of habitat, but its correlation with egg output declined at 12 d and beyond, indicating its usefulness as a short-term biomarker but its limits as a biomarker of total reproductive output. Stress biomarkers such as glutathione S-transferase (GST) may be better correlated with factors affecting reproduction over a longer term. There was a significant correlation between GST mRNA and a variety of anthropogenic pollutants. There was also an inverse correlation between glutathione S-transferase and the amount of the watershed designated as wetland. Egg production over the 21 d was negatively correlated with the amount of urbanization and positively correlated to wetland habitats. This study suggests that the development of multiple biomarkers linking oxidative stress and other non-reproductive endpoints to changes in aquatic habitats will be useful for predicting the health of fish populations and identifying the environmental factors that may need mitigation for sustainable population management. © 2010 Elsevier Ltd.

19. Detection of Bordetella pertussis from Clinical Samples by Culture and End-Point PCR in Malaysian Patients
Directory of Open Access Journals (Sweden)
Tan Xue Ting
2013-01-01
Pertussis or whooping cough is a highly infectious respiratory disease caused by Bordetella pertussis. In vaccinating countries, infants, adolescents, and adults are relevant patient groups. A total of 707 clinical specimens were received from major hospitals in Malaysia in 2011. These specimens were cultured on Regan-Lowe charcoal agar and subjected to end-point PCR, which amplified the repetitive insertion sequence IS481 and the pertussis toxin promoter gene. Of these specimens, 275 were positive: 4 by culture only, 6 by both end-point PCR and culture, and 265 by end-point PCR only. The majority of the positive cases were from patients aged ≤3 months (77.1%). There was no significant association between the type of sample collected and end-point PCR results. Our study showed that the end-point PCR technique was able to pick up more positive cases compared to the culture method.
20. Biomarkers of replicative senescence revisited
DEFF Research Database (Denmark)
Nehlin, Jan
2016-01-01
Biomarkers of replicative senescence can be defined as those ultrastructural and physiological variations as well as molecules whose changes in expression, activity or function correlate with aging, as a result of the gradual exhaustion of replicative potential and a state of permanent cell cycle...... with their chronological age and present health status, help define their current rate of aging and contribute to establish personalized therapy plans to reduce, counteract or even avoid the appearance of aging biomarkers....

1. Biomarkers of satiation and satiety.
Science.gov (United States)
de Graaf, Cees; Blom, Wendy A M; Smeets, Paul A M; Stafleu, Annette; Hendriks, Henk F J
2004-06-01
This review's objective is to give a critical summary of studies that focused on physiologic measures relating to subjectively rated appetite, actual food intake, or both. Biomarkers of satiation and satiety may be used as a tool for assessing the satiating efficiency of foods and for understanding the regulation of food intake and energy balance. We made a distinction between biomarkers of satiation or meal termination and those of meal initiation related to satiety, and between markers in the brain [central nervous system (CNS)] and those related to signals from the periphery to the CNS. Various studies showed that physicochemical measures related to stomach distension and blood concentrations of cholecystokinin and glucagon-like peptide 1 are peripheral biomarkers associated with meal termination. CNS biomarkers related to meal termination identified by functional magnetic resonance imaging and positron emission tomography are indicators of neural activity related to sensory-specific satiety. These measures cannot yet serve as a tool for assessing the satiating effect of foods, because they are not yet feasible. CNS biomarkers related to satiety are not yet specific enough to serve as biomarkers, although they can distinguish between extreme hunger and fullness. Three currently available biomarkers for satiety are decreases in blood glucose in the short term (2-4 d) negative energy balance; and ghrelin concentrations, which have been implicated in both short-term and long-term energy balance. The next challenge in this research area is to identify food ingredients that have an effect on biomarkers of satiation, satiety, or both. These ingredients may help consumers to maintain their energy intake at a level consistent with a healthy body weight.
2. Finite-size effects, pseudocritical quantities and signatures of the chiral critical endpoint of QCD
CERN Document Server
Palhares, L F; Kodama, T
2009-01-01
We investigate finite-size effects on the phase diagram of strong interactions, and discuss their influence (and utility) on experimental signatures in high-energy heavy-ion collisions. We calculate the modification of the pseudocritical transition line and isentropic trajectories, and discuss how this affects proposed signatures of the chiral critical endpoint. We argue that a finite-size scaling analysis may be crucial in the process of data analysis in the Beam Energy Scan program at RHIC and in future experiments at FAIR-GSI. We propose the use of extrapolations, full scaling plots and a chi-squared method as tools for searching for the critical endpoint of QCD and determining its universality class.

3. In-situ end-point detection during ion-beam etching of multilayer dielectric gratings
Institute of Scientific and Technical Information of China (English)
Hua Lin; Lifeng Li; Lijiang Zeng
2005-01-01
An in-situ end-point detection technique for ion-beam etching is presented. A laser beam of the same wavelength and polarization as those in the intended application of the grating is fed into the vacuum chamber, and the beam retro-diffracted by the grating under etching is extracted and detected outside the chamber. This arrangement greatly simplifies the end-point detection. Modeling the grating diffraction with a rigorous diffraction grating computer program, we can satisfactorily simulate the evolution of the diffraction intensity during the etching process and, consequently, we can accurately predict the end-point. Employing the proposed technique, we have reproducibly fabricated multilayer dielectric gratings with diffraction efficiencies of more than 92%.

4. Construction of Endpoint Constrained Cubic Rational Curve with Chord-Length Parameterization
Institute of Scientific and Technical Information of China (English)
LI Pei-pei; ZHANG Xin; ZHANG Ai-wu
2013-01-01
This paper discusses the problem of constructing a curve to satisfy given endpoint constraints and chord-length parameters. Based on the research of Lu, the curve construction method for the entire tangent-angle region (α0, α1) ∈ (-π, π) × (-π, π) is given. Firstly, to ensure that the weights are always positive, three characteristics of the cubic rational Bezier curve are proved; then the segment construction idea for the other tangent angles is presented in view of these three characteristics. The curve constructed with the new method satisfies the endpoint constraints and chord-length parameters, is G1 continuous in every curve segment, and is well shaped.

5. A mixed approach for proving non-inferiority in clinical trials with binary endpoints.
Science.gov (United States)
Rousson, Valentin; Seifert, Burkhardt
2008-04-01
When a new treatment is compared to an established one in a randomized clinical trial, it is standard practice to statistically test for non-inferiority rather than for superiority. When the endpoint is binary, one usually compares two treatments using either an odds ratio or a difference of proportions. In this paper, we propose a mixed approach which uses both concepts. One first defines the non-inferiority margin using an odds ratio and one ultimately proves non-inferiority statistically using a difference of proportions. The mixed approach is shown to be more powerful than the conventional odds-ratio approach when the efficacy of the established treatment is known (with good precision) and high (e.g. with more than 56% success). The gain in power achieved may lead in turn to a substantial reduction in the sample size needed to prove non-inferiority. The mixed approach can be generalized to ordinal endpoints.
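Worked numbers for the mixed approach in entry 5, under stated assumptions: fix the margin on the odds-ratio scale, translate it into a difference of proportions at an assumed control success rate, then size the trial with the usual normal approximation. The control rate, margin, alpha and power below are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

p0 = 0.70          # assumed success rate of the established treatment
psi = 0.60         # non-inferiority margin expressed as an odds ratio
odds1 = psi * p0 / (1 - p0)
p1 = odds1 / (1 + odds1)            # worst acceptable success rate
delta = p0 - p1                     # the same margin on the proportion scale
print(f"OR margin {psi} at p0={p0} -> delta = {delta:.3f}")   # ~0.117

# Per-arm sample size, one-sided alpha = 0.025, power = 0.90, assuming both
# treatments truly have success rate p0 (textbook normal approximation).
z_a, z_b = norm.ppf(0.975), norm.ppf(0.90)
n = (z_a + z_b) ** 2 * 2 * p0 * (1 - p0) / delta ** 2
print(f"n per arm ~ {int(np.ceil(n))}")                        # ~324
```

The point of the conversion is visible in the arithmetic: a fixed odds-ratio margin maps to a larger proportion-scale margin when the control rate is high, which is where the abstract reports the power advantage.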
6. Coulometric trace determination of water by using Karl Fischer reagent and potentiometric end-point detection.
Science.gov (United States)
Cedergren, A
1974-06-01
A new approach to the determination of water via the Karl Fischer reaction is described. Iodine is coulometrically generated, and the end-point, corresponding to a slight excess of iodine, is detected potentiometrically with a non-polarized platinum electrode. Samples of 1-500 µl containing 0.05-200 µg of water were analysed with a standard deviation of 0.015 µg in the range 0.05-20 µg of H2O. A specially constructed electrolysis cell was used in combination with an LKB 16300 Coulometric Analyzer, and the time for a complete analysis was 1-4 min, depending on sample size. The reagent composition has been optimized in order to enhance the rate of the main reaction and to minimize the extent of side-reactions. Decreasing the temperature reduced the extent of side-reactions. The displacement of end-point potential on dilution was studied and a correction is discussed.

7. Development of drugs for celiac disease: review of endpoints for Phase 2 and 3 trials
Science.gov (United States)
Gottlieb, Klaus; Dawson, Jill; Hussain, Fez; Murray, Joseph A.
2015-01-01
Celiac disease is a lifelong disorder for which there is currently only one known, effective treatment: a gluten-free diet. New treatment approaches have recently emerged; several drugs are in Phase 2 trials and results appear promising; however, discussion around regulatory endpoints is in its infancy. We will briefly discuss the drugs that are under development and then shift our attention to potential trial endpoints, such as patient-reported outcomes, histology, serology, gene expression analysis and other tests. We will outline the differing requirements for proof-of-concept Phase 2 trials and Phase 3 registration trials, with a particular emphasis on current thinking in regulatory agencies. We conclude our paper with recommendations and a glossary of regulatory terms, to enable readers who are less familiar with regulatory language to take maximum advantage of this review. PMID:25725041

8. Motor Power Signal Analysis for End-Point Detection of Chemical Mechanical Planarization
Directory of Open Access Journals (Sweden)
Hongkai Li
2017-06-01
In integrated circuit (IC) manufacturing, in-situ end-point detection (EPD) is an important issue in the chemical mechanical planarization (CMP) process. In this paper, we chose the motor power signal of the polishing platen as the monitoring object. We then used the moving average method, which is appropriate for in-situ calculation and easy to code for software development, to smooth the signal curve, and studied the signal variation during the actual CMP process. The results demonstrated that the motor power signal contained the end-point feature of the metal layer removal, and that the processed signal curve facilitated feature extraction and was relatively steady before and after the layer transition stage. In addition, the motor power signal variation of the polishing head was explored and further analysis of time delay was performed.
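The smoothing step in entry 8 is a plain moving average. A minimal sketch, with an arbitrary window length, an arbitrary slope threshold, and a hypothetical input file:

```python
import numpy as np

def moving_average(x, w=51):
    """Centered moving average, the in-situ-friendly smoother from entry 8."""
    return np.convolve(x, np.ones(w) / w, mode="same")

power = np.loadtxt("motor_power.txt")       # hypothetical platen motor power
smooth = moving_average(power)
slope = np.gradient(smooth)

# Flag the layer transition where the smoothed slope first exceeds a band
# calibrated on a known-stable stretch of the trace.
thr = 4.0 * np.abs(slope[100:1000]).std()
endpoint_idx = 1000 + np.argmax(np.abs(slope[1000:]) > thr)
print("end-point sample index:", endpoint_idx)
```

The appeal the authors cite is exactly what the sketch shows: the smoother is a few lines of code, cheap enough to run online, and it steadies the curve on either side of the transition so a simple slope rule suffices.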
9. Analysis of biomarker data: a practical guide
CERN Document Server
Looney, Stephen W
2015-01-01
A "how to" guide for applying statistical methods to biomarker data analysis. Presenting a solid foundation for the statistical methods that are used to analyze biomarker data, Analysis of Biomarker Data: A Practical Guide features preferred techniques for biomarker validation. The authors provide descriptions of select elementary statistical methods that are traditionally used to analyze biomarker data, with a focus on the proper application of each method, including necessary assumptions, software recommendations, and proper interpretation of computer output. In addition, the book discusses

10. A multiple endpoint approach to predict the hepatotoxicity of pharmaceuticals in vitro
OpenAIRE
Truisi, Germaine Loredana
2014-01-01
A new approach was evaluated to predict the hepatotoxic potential of pharmaceuticals. For this purpose, primary rat and human hepatocytes cultured in an optimised sandwich configuration were used, thus allowing long-term, repeat dosing of drugs. The strategy was based on the evaluation of multiple endpoints, including cytotoxicity, biokinetic profiling, transcriptomics and proteomics. Pharmaceuticals with known toxicities and pharmacokinetic properties were used as model compounds.

11. Development of Pain Endpoint Models for Use in Prostate Cancer Clinical Trials and Drug Approval
Science.gov (United States)
2016-10-01
...substantially impair functioning and quality of life. Regulatory standards for the design of symptom endpoints have evolved substantially over the...1d. Submit protocol to departmental review committees at UNC (Month 14) Completed – NOV 2012 1e. Obtain IRB approval at UNC (Month 19) Note...Cabozantinib demonstrated clinically meaningful pain palliation, reduced or eliminated patients' narcotic use, and improved patient functioning, thus

12. Muscle Synergies Heavily Influence the Neural Control of Arm Endpoint Stiffness and Energy Consumption.
Science.gov (United States)
Inouye, Joshua M; Valero-Cuevas, Francisco J
2016-02-01
Much debate has arisen from research on muscle synergies with respect to both limb impedance control and energy consumption. Studies of limb impedance control in the context of reaching movements and postural tasks have produced divergent findings, and this study explores whether the use of synergies by the central nervous system (CNS) can resolve these findings and also provide insights on mechanisms of energy consumption. In this study, we phrase these debates at the conceptual level of interactions between neural degrees of freedom and task constraints. This allows us to examine the ability of experimentally-observed synergies--correlated muscle activations--to control both energy consumption and the stiffness component of limb endpoint impedance. In our nominal 6-muscle planar arm model, muscle synergies and the desired size, shape, and orientation of endpoint stiffness ellipses are expressed as linear constraints that define the set of feasible muscle activation patterns. Quadratic programming allows us to predict whether and how energy consumption can be minimized throughout the workspace of the limb given those linear constraints. We show that the presence of synergies drastically decreases the ability of the CNS to vary the properties of the endpoint stiffness and can even preclude the ability to minimize energy. Furthermore, the capacity to minimize energy consumption--when available--can be greatly affected by arm posture. Our computational approach helps reconcile divergent findings and conclusions about task-specific regulation of endpoint stiffness and energy consumption in the context of synergies. But more generally, these results provide further evidence that the benefits and disadvantages of muscle synergies go hand-in-hand with the structure of feasible muscle activation patterns afforded by the mechanics of the limb and task constraints. These insights will help design experiments to elucidate the interplay between synergies and the mechanisms
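The quadratic-programming formulation in entry 12 can be miniaturized. The sketch below minimizes a quadratic effort cost for a planar 6-muscle arm that must produce a target endpoint force, with and without a 2-synergy constraint a = W c; the moment-arm map A and synergy matrix W are random stand-ins, and only a force constraint (not the paper's stiffness-ellipse constraints) is imposed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 6))            # activations -> planar endpoint force
W = np.abs(rng.normal(size=(6, 2)))    # two fixed, nonnegative synergies
c_demo = np.array([0.6, 0.4])
f_target = A @ (W @ c_demo)            # a force reachable under the synergies

effort = lambda a: float(a @ a)        # quadratic effort (energy) cost

# Free case: 6 independent activations in [0, 1].
res_free = minimize(effort, np.full(6, 0.5), bounds=[(0, 1)] * 6,
                    constraints={"type": "eq",
                                 "fun": lambda a: A @ a - f_target})

# Synergy case: activations restricted to a = W c with c >= 0.
res_syn = minimize(lambda c: effort(W @ c), np.full(2, 0.5),
                   bounds=[(0, None)] * 2,
                   constraints={"type": "eq",
                                "fun": lambda c: A @ (W @ c) - f_target})

print("minimum effort, free       :", round(res_free.fun, 4))
print("minimum effort, 2 synergies:", round(res_syn.fun, 4))
```

The synergy-constrained optimum can never beat the free optimum, and for many target forces no feasible c exists at all, which mirrors the paper's point that synergies can restrict or even preclude energy minimization.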
13. Scientific and Societal Considerations in Selecting Assessment Endpoints for Environmental Decision Making
Directory of Open Access Journals (Sweden)
Elizabeth M. Strange
2002-01-01
It is sometimes argued that, from an ecological point of view, population-, community-, and ecosystem-level endpoints are more relevant than individual-level endpoints for assessing the risks posed by human activities to the sustainability of natural resources. Yet society values amenities provided by natural resources that are not necessarily evaluated or protected by assessment tools that focus on higher levels of biological organization. For example, human-caused stressors can adversely affect recreational opportunities that are valued by society even in the absence of detectable population-level reductions in biota. If protective measures are not initiated until effects at higher levels of biological organization are apparent, natural resources that are ecologically important or highly valued by the public may not be adequately protected. Thus, environmental decision makers should consider both scientific and societal factors in selecting endpoints for ecological risk assessments. At the same time, it is important to clearly distinguish the role of scientists, which is to evaluate ecological effects, from the role of policy makers, which is to determine how to address the uncertainty in scientific assessment in making environmental decisions and to judge what effects are adverse based on societal values and policy goals.

14. Scientific and societal considerations in selecting assessment endpoints for environmental decision making.
Science.gov (United States)
Strange, Elizabeth M; Lipton, Joshua; Beltman, Douglas; Snyder, Blaine D
2002-03-08
It is sometimes argued that, from an ecological point of view, population-, community-, and ecosystem-level endpoints are more relevant than individual-level endpoints for assessing the risks posed by human activities to the sustainability of natural resources. Yet society values amenities provided by natural resources that are not necessarily evaluated or protected by assessment tools that focus on higher levels of biological organization. For example, human-caused stressors can adversely affect recreational opportunities that are valued by society even in the absence of detectable population-level reductions in biota. If protective measures are not initiated until effects at higher levels of biological organization are apparent, natural resources that are ecologically important or highly valued by the public may not be adequately protected. Thus, environmental decision makers should consider both scientific and societal factors in selecting endpoints for ecological risk assessments. At the same time, it is important to clearly distinguish the role of scientists, which is to evaluate ecological effects, from the role of policy makers, which is to determine how to address the uncertainty in scientific assessment in making environmental decisions and to judge what effects are adverse based on societal values and policy goals.
At the same time, it is important to clearly distinguish the role of scientists, which is to evaluate ecological effects, from the role of policy makers, which is to determine how to address the uncertainty in scientific assessment in making environmental decisions and to judge what effects are adverse based on societal values and policy goals. 15. Reduction of animal suffering in rabies vaccine potency testing by introduction of humane endpoints. Science.gov (United States) Takayama-Ito, Mutsuyo; Lim, Chang-Kweng; Nakamichi, Kazuo; Kakiuchi, Satsuki; Horiya, Madoka; Posadas-Herrera, Guillermo; Kurane, Ichiro; Saijo, Masayuki 2017-03-01 Potency controls of inactivated rabies vaccines for human use are confirmed by the National Institutes of Health challenge test in which lethal infection with severe neurological symptoms should be observed in approximately half of the mice inoculated with the rabies virus. Weight loss, decreased body temperature, and the presence of rabies-associated neurological signs have been proposed as humane endpoints. The potential for reduction of animal suffering by introducing humane endpoints in the potency test for inactivated rabies vaccine for human use was investigated. The clinical signs were scored and body weight was monitored. The average times to death following inoculation were 10.49 and 10.99 days post-inoculation (dpi) by the potency and challenge control tests, respectively, whereas the average times to showing Score-2 signs (paralysis, trembling, and coma) were 6.26 and 6.55 dpi, respectively. Body weight loss of more than 15% appeared at 5.82 and 6.42 dpi. The data provided here support the introduction of obvious neuronal signs combined with a body weight loss of ≥15% as a humane endpoint to reduce the time of animal suffering by approximately 4 days. Copyright © 2017 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved. 16. End-point impedance measurements across dominant and nondominant hands and robotic assistance with directional damping. Science.gov (United States) Erden, Mustafa Suphi; Billard, Aude 2015-06-01 The goal of this paper is to perform end-point impedance measurements across dominant and nondominant hands while doing airbrush painting and to use the results for developing a robotic assistance scheme. We study airbrush painting because it resembles in many ways manual welding, a standard industrial task. The experiments are performed with the 7 degrees of freedom KUKA lightweight robot arm. The robot is controlled in admittance using a force sensor attached at the end-point, so as to act as a free-mass and be passively guided by the human. For impedance measurements, a set of nine subjects perform 12 repetitions of airbrush painting, drawing a straight-line on a cartoon horizontally placed on a table, while passively moving the airbrush mounted on the robot's end-point. We measure hand impedance during the painting task by generating sudden and brief external forces with the robot. The results show that on average the dominant hand displays larger impedance than the nondominant in the directions perpendicular to the painting line. We find the most significant difference in the damping values in these directions. Based on this observation, we develop a "directional damping" scheme for robotic assistance and conduct a pilot study with 12 subjects to contrast airbrush painting with and without robotic assistance. 
Results show significant improvement in precision with both dominant and nondominant hands when using robotic assistance.

17. Toxicity assessment through multiple endpoint bioassays in soils posing environmental risk according to regulatory screening values. Science.gov (United States) Rodriguez-Ruiz, A; Asensio, V; Zaldibar, B; Soto, M; Marigómez, I 2014-01-01 Toxicity profiles of two soils (a brownfield in Legazpi and an abandoned iron mine in Zugaztieta; Basque Country) contaminated with several metals (As, Zn, Pb and Cu in Legazpi; Zn, Pb, Cd and Cu in Zugaztieta) and petroleum hydrocarbons (in Legazpi) were determined using a multi-endpoint bioassay approach. Investigated soils exceeded screening values (SVs) of regulatory policies in force (Basque Country; Europe). Acute and chronic toxicity bioassays were conducted with a selected set of test species (Vibrio fischeri, Dictyostelium discoideum, Lactuca sativa, Raphanus sativus and Eisenia fetida) in combination with chemical analysis of soils and elutriates, as well as with bioaccumulation studies in earthworms. The sensitivity of the test species and the toxicity endpoints varied depending on the soil. It was concluded that whilst Zugaztieta soil showed very little or no toxicity, Legazpi soil was toxic according to almost all the toxicity tests (solid-phase Microtox, D. discoideum inhibition of fruiting body formation and developmental cycle solid-phase assays, lettuce seed germination and root elongation test, earthworm acute toxicity and reproduction tests, D. discoideum cell viability and replication elutriate assays). Thus, although both soils had similar SVs, their ecotoxicological risk, and therefore the need for intervention, was different for each soil, as unveiled by toxicity profiling based on multiple endpoint bioassays. Such a toxicity profiling approach is suitable for scenario-targeted soil risk assessment in those cases where applicable national/regional soil legislation based on SVs demands further toxicity assessment.

18. Autoregressive transitional ordinal model to test for treatment effect in neurological trials with complex endpoints Directory of Open Access Journals (Sweden) 2016-11-01 Full Text Available Abstract Background A number of potential therapeutic approaches for neurological disorders have failed to provide convincing evidence of efficacy, prompting pharmaceutical and health companies to discontinue their involvement in drug development. Limitations in the statistical analysis of complex endpoints have very likely had a negative impact on the translational process. Methods We propose a transitional ordinal model with an autoregressive component to overcome previous limitations in the analysis of Upper Extremity Motor Scores, a relevant endpoint in the field of Spinal Cord Injury. Statistical power and clinical interpretation of estimated treatment effects of the proposed model were compared to routinely employed approaches in a large simulation study of two-arm randomized clinical trials. A revisitation of a key historical trial provides further comparison between the different analysis approaches. Results The proposed model outperformed all other approaches in virtually all simulation settings, achieving on average 14% higher statistical power than the respective second-best performing approach (range: −1% to +34%). Only the transitional model allows treatment effect estimates to be interpreted as conditional odds ratios, providing clear interpretation and visualization.
Conclusion The proposed model takes into account the complex ordinal nature of the endpoint under investigation and explicitly accounts for relevant prognostic factors such as lesion level and baseline information. Superior statistical power, combined with clear clinical interpretation of estimated treatment effects and widespread availability in commercial software, are strong arguments for clinicians and trial scientists to adopt, and further extend, the proposed approach.

19. Midwest Surrogate Species and Prairie Reconstruction Funding Final Report, FY 2016 Data.gov (United States) US Fish and Wildlife Service, Department of the Interior — Final report on funding received from the Natural Resources Program Center to support surrogate species planning and implementation and the Prairie Reconstruction...

20. Surrogate Models for Online Monitoring and Process Troubleshooting of NBR Emulsion Copolymerization Directory of Open Access Journals (Sweden) 2016-03-01 Full Text Available Chemical processes with complex reaction mechanisms generally lead to dynamic models which, while beneficial for predicting and capturing the detailed process behavior, are not readily amenable for direct use in online applications related to process operation, optimisation, control, and troubleshooting. Surrogate models can help overcome this problem. In this research article, the first part focuses on obtaining surrogate models for emulsion copolymerization of nitrile butadiene rubber (NBR), which is usually produced in a train of continuous stirred tank reactors. The predictions and/or profiles for several performance characteristics such as conversion, number of polymer particles, copolymer composition, and weight-average molecular weight, obtained using surrogate models, are compared with those obtained using the detailed mechanistic model. In the second part of this article, optimal flow profiles based on dynamic optimisation using the surrogate models are obtained for the production of NBR emulsions with the objective of minimising the off-specification product generated during grade transitions.

1. An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model Energy Technology Data Exchange (ETDEWEB) Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)]; Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)] 2016-06-15 Reliability-based design optimization (RBDO) requires a high computational cost owing to its reliability analysis. A surrogate model is introduced to reduce the computational cost in RBDO. The accuracy of the reliability depends on the accuracy of the surrogate model of constraint boundaries in surrogate-model-based RBDO. In earlier research, constraint boundary sampling (CBS) was proposed to accurately approximate the boundaries of constraints by locating sample points on them. However, because CBS uses sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified by mathematical examples. 2.
Endospore surface properties of commonly used Bacillus anthracis surrogates vary in aqueous solution Science.gov (United States) The hydrophobic character and electrophoretic mobility of microorganisms are vital aspects of understanding their interactions with the environment. These properties are fundamental in fate-and-transport, physiological, and virulence studies, and thus integral in surrogate select...

3. Trimethylsilylethynyl ketones as surrogates for ethynyl ketones in the double Michael reaction. Science.gov (United States) Holeman, Derrick S; Rasne, Ravindra M; Grossman, Robert B 2002-05-01 Trimethylsilylethynyl ketones can be desilylated in the presence of a tethered carbon diacid and induced to undergo a double Michael reaction in situ. The trimethylsilylethynyl ketones can serve as surrogates of ethynyl ketones that are difficult to prepare or isolate.

4. Clinical trial design principles and endpoint definitions for transcatheter mitral valve repair and replacement: part 2: endpoint definitions: A consensus document from the Mitral Valve Academic Research Consortium. Science.gov (United States) Stone, Gregg W; Adams, David H; Abraham, William T; Kappetein, Arie Pieter; Généreux, Philippe; Vranckx, Pascal; Mehran, Roxana; Kuck, Karl-Heinz; Leon, Martin B; Piazza, Nicolo; Head, Stuart J; Filippatos, Gerasimos; Vahanian, Alec S 2015-08-01 Mitral regurgitation (MR) is one of the most prevalent valve disorders and has numerous aetiologies, including primary (organic) MR, due to underlying degenerative/structural mitral valve (MV) pathology, and secondary (functional) MR, which is principally caused by global or regional left ventricular remodelling and/or severe left atrial dilation. Diagnosis and optimal management of MR requires integration of valve disease and heart failure specialists, MV cardiac surgeons, interventional cardiologists with expertise in structural heart disease, and imaging experts. The introduction of transcatheter MV therapies has highlighted the need for a consensus approach to pragmatic clinical trial design and uniform endpoint definitions to evaluate outcomes in patients with MR. The Mitral Valve Academic Research Consortium is a collaboration between leading academic research organizations and physician-scientists specializing in MV disease from the United States and Europe. Three in-person meetings were held in Virginia and New York during which 44 heart failure, valve, and imaging experts, MV surgeons and interventional cardiologists, clinical trial specialists and statisticians, and representatives from the U.S. Food and Drug Administration considered all aspects of MV pathophysiology, prognosis, and therapies, culminating in a 2-part document describing consensus recommendations for clinical trial design (Part 1) and endpoint definitions (Part 2) to guide evaluation of transcatheter and surgical therapies for MR. The adoption of these recommendations will afford robustness and consistency in the comparative effectiveness evaluation of new devices and approaches to treat MR. These principles may be useful for regulatory assessment of new transcatheter MV devices, as well as for monitoring local and regional outcomes to guide quality improvement initiatives.

5. Idaho National Laboratory Test Area North: Application of Endpoints to Guide Adaptive Remediation at a Complex Site: INL Test Area North: Application of Endpoints Energy Technology Data Exchange (ETDEWEB) Lee, M.
Hope [PNNL Soil and Groundwater Program]; Truex, Mike [PNNL]; Freshley, Mark [PNNL]; Wellman, Dawn [PNNL] 2016-09-01 Complex sites are defined as those with difficult subsurface access, deep and/or thick zones of contamination, large areal extent, subsurface heterogeneities that limit the effectiveness of remediation, or where long-term remedies are needed to address contamination (e.g., because of long-term sources or large extent). The Test Area North at the Idaho National Laboratory, developed for nuclear fuel operations and heavy metal manufacturing, is used as a case study. Liquid wastes and sludge from experimental facilities were disposed of in an injection well, which contaminated the subsurface aquifer located deep within fractured basalt. The wastes included organic, inorganic, and low-level radioactive constituents, with the focus of this case study on trichloroethylene. The site is used as an example of a systems-based framework that provides a structured approach to regulatory processes established for remediation under existing regulations. The framework is intended to facilitate remedy decisions and implementation at complex sites where restoration may be uncertain, require long timeframes, or involve use of adaptive management approaches. The framework facilitates site, regulator, and stakeholder interactions during the remedial planning and implementation process by using a conceptual model description as a technical foundation for decisions, identifying endpoints, which are interim remediation targets or intermediate decision points on the path to an ultimate end, and maintaining protectiveness during the remediation process. At the Test Area North, using a structured approach to implementing concepts in the endpoint framework, a three-component remedy is largely functioning as intended and is projected to meet remedial action objectives by 2095 as required. The remedy approach is being adjusted as new data become available. The framework provides a structured process for evaluating and adjusting the remediation approach, allowing site owners, regulators, and

6. Significance of Including a Surrogate Arousal for Sleep Apnea-Hypopnea Syndrome Diagnosis by Respiratory Polygraphy Science.gov (United States) Masa, Juan F.; Corral, Jaime; Gomez de Terreros, Javier; Duran-Cantolla, Joaquin; Cabello, Marta; Hernández-Blasco, Luis; Monasterio, Carmen; Alonso, Alberto; Chiner, Eusebi; Aizpuru, Felipe; Zamorano, Jose; Cano, Ricardo; Montserrat, Jose M.; Garcia-Ledesma, Estefania; Pereira, Ricardo; Cancelo, Laura; Martinez, Angeles; Sacristan, Lirios; Salord, Neus; Carrera, Miguel; Sancho-Chust, José N.; Embid, Cristina 2013-01-01 Rationale: Respiratory polygraphy is an accepted alternative to polysomnography (PSG) for sleep apnea/hypopnea syndrome (SAHS) diagnosis, although it underestimates the apnea-hypopnea index (AHI) because respiratory polygraphy cannot identify arousals. Objectives: We performed a multicentric, randomized, blinded crossover study to determine the agreement between home respiratory polygraphy (HRP) and PSG, and between simultaneous respiratory polygraphy (respiratory polygraphy with PSG) (SimultRP) and PSG by means of 2 AHI scoring protocols with or without hyperventilation following flow reduction considered as a surrogate arousal. Methods: We included suspected SAHS patients from 8 hospitals. They were assigned to home and hospital protocols at random.
We determined the agreement between respiratory polygraphy AHI and PSG AHI scorings using Bland and Altman plots, and diagnostic agreement using receiver operating characteristic (ROC) curves. The agreement in therapeutic decisions (continuous positive airway pressure treatment or not) between HRP and PSG scorings was assessed with likelihood ratios and post-test probability calculations. Results: Of 366 randomized patients, 342 completed the protocol. AHI from HRP scorings (with and without surrogate arousal) had similar agreement with PSG. AHI from SimultRP with surrogate arousal scoring had better agreement with PSG than AHI from SimultRP without surrogate arousal. HRP with surrogate arousal scoring had slightly worse ROC curves than HRP without surrogate arousal, and the opposite was true for SimultRP scorings. HRP with surrogate arousal showed slightly better agreement with PSG in therapeutic decisions than HRP without surrogate arousal. Conclusion: Incorporating a surrogate arousal measure into HRP did not substantially increase its agreement with PSG when compared with the usual procedure (HRP without surrogate arousal). Citation: Masa JF; Corral J; Gomez de Terreros J; Duran-Cantolla J; Cabello M; Hern

7. Using multiscale spatial models to assess potential surrogate habitat for an imperiled reptile. Directory of Open Access Journals (Sweden) Jennifer M Fill Full Text Available In evaluating conservation and management options for species, practitioners might consider surrogate habitats at multiple scales when estimating available habitat or modeling species' potential distributions based on suitable habitats, especially when native environments are rare. Species' dependence on surrogates likely increases as optimal habitat is degraded and lost due to anthropogenic landscape change, and thus surrogate habitats may be vital for an imperiled species' survival in highly modified landscapes. We used spatial habitat models to examine a potential surrogate habitat for an imperiled ambush predator (eastern diamondback rattlesnake, Crotalus adamanteus; EDB) at two scales. The EDB is an apex predator indigenous to imperiled longleaf pine ecosystems (Pinus palustris) of the southeastern United States. Loss of native open-canopy pine savannas and woodlands has been suggested as the principal cause of the species' extensive decline. We examined EDB habitat selection in the Coastal Plain tidewater region to evaluate the role of marsh as a potential surrogate habitat and to further quantify the species' habitat requirements at two scales: home range (HR) and within the home range (WHR). We studied EDBs using radiotelemetry and employed an information-theoretic approach and logistic regression to model habitat selection as use vs. availability. We failed to detect a positive association with marsh as a surrogate habitat at the HR scale; rather, EDBs exhibited significantly negative associations with all landscape patches except pine savanna. Within-home-range selection was characterized by a negative association with forest and a positive association with ground cover, which suggests that EDBs may use surrogate habitats of similar structure, including marsh, within their home ranges. While our HR analysis did not support tidal marsh as a surrogate habitat, marsh may still provide resources for EDBs at smaller scales.

8. Surrogate end points in women's health research: science, protoscience, and pseudoscience.
Science.gov (United States) Grimes, David A; Schulz, Kenneth F; Raymond, Elizabeth G 2010-04-01 A surrogate end point (e.g., a laboratory test or image) serves as a proxy for a clinical end point of importance (e.g., fracture, thrombosis, or death). Adoption and use of surrogate end points lacking validation, especially in cardiovascular medicine, have caused thousands of patients' deaths, a serious violation of the ethical principle of beneficence. Copyright 2010 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

9. Chromosome translocations measured by fluorescence in-situ hybridization: A promising biomarker Energy Technology Data Exchange (ETDEWEB) Lucas, J.N.; Straume, T. 1995-10-01 A biomarker for exposure and risk assessment would be most useful if it employs an endpoint that is highly quantitative, is stable with time, and is relevant to human risk. Recent advances in chromosome staining using fluorescence in situ hybridization (FISH) facilitate fast and reliable measurement of reciprocal translocations, a kind of DNA damage linked to both prior exposure and risk. In contrast to other biomarkers available, the frequency of reciprocal translocations in individuals exposed to whole-body radiation is stable with time post exposure, has a rather small inter-individual variability, and can be measured accurately at low levels. Here, the authors discuss results from their studies demonstrating that chromosome painting can be used to reconstruct radiation dose for workers exposed within the dose limits, for individuals exposed a long time ago, and even for those who have been diagnosed with leukemia but not yet undergone therapy.

10. Multi-biomarker Profiling and Recurrent Hospitalizations in Heart Failure Directory of Open Access Journals (Sweden) Antoni Bayes-Genis 2016-10-01 Full Text Available Background: Despite advances in pharmacologic therapy and devices, patients with heart failure (HF) continue to have significant rehospitalization rates, and risk prediction remains challenging. We sought to explore the value of a multi-biomarker panel (including NT-proBNP, hs-TnT, and ST2) on top of clinical assessment for long-term prediction of recurrent hospitalizations in HF. Methods and Results: NT-proBNP, hs-TnT, and ST2 levels were measured in 891 consecutive ambulatory HF patients. The independent association between the multi-biomarker panel and recurrent hospitalizations was assessed through a multivariable negative binomial regression and expressed as incidence rate ratios. McFadden pseudo-R2 and goodness-of-fit measures were also used. The total number of unplanned hospitalizations (all-cause, cardiovascular (CV)-, and HF-related) was selected as the primary endpoint. At a mean follow-up of 4.2±2.1 years, 1623 all-cause hospitalizations in 498 patients (55.9%), 710 CV-related hospitalizations in 331 patients (37.2%), and 444 HF-related hospitalizations in 214 patients (24.1%) were registered. The crude incidence of all-cause, CV-, and HF-related recurrent hospitalizations was significantly higher for patients with the multi-biomarker panel above the cut-point (hs-TnT > 14 ng/L, NT-proBNP > 1000 ng/L, and ST2 > 35 ng/mL; all P < 0.001).
For all-cause, CV-, and HF-related recurrent hospitalizations, the McFadden R2, Akaike information criterion, and Bayesian information criterion supported the superiority of incorporating the multi-biomarker panel into a clinical predictive model. Conclusions: A multi-biomarker approach that incorporates NT-proBNP, hs-TnT, and ST2 better identifies HF patients at risk for recurrent hospitalizations. Elucidation of new biophysiological targets for recurrent hospitalizations may identify patient profiles for focused intervention.

11. Comparison of burrowing and stimuli-evoked pain behaviors as end-points in rat models of inflammatory pain and peripheral neuropathic pain Directory of Open Access Journals (Sweden) Arjun Muralidharan 2016-05-01 Full Text Available Establishment and validation of ethologically relevant, non-evoked behavioral end-points as surrogate measures of spontaneous pain in rodent pain models has been proposed as a means to improve preclinical-to-clinical research translation in the pain field. Here, we compared the utility of burrowing behavior with hypersensitivity to applied mechanical stimuli for pain assessment in rat models of chronic inflammatory and peripheral neuropathic pain. Briefly, groups of male Sprague-Dawley rats were habituated to the burrowing environment and trained over a 5-day period. Rats that burrowed ≤ 450 g of gravel on any two days of the individual training phase were excluded from the study. The remaining rats received either a unilateral intraplantar injection of Freund's complete adjuvant (FCA) or saline, or underwent unilateral chronic constriction injury (CCI) of the sciatic nerve or sham surgery. Baseline burrowing behavior and evoked pain behaviors were assessed prior to model induction, and twice weekly until study completion on day 14. For FCA- and CCI-rats, but not the corresponding groups of sham-rats, evoked mechanical hypersensitivity developed in a temporal manner in the ipsilateral hindpaws. Although burrowing behavior also decreased in a temporal manner for both FCA- and CCI-rats, there was considerable inter-animal variability. By contrast, mechanical hyperalgesia and mechanical allodynia in the ipsilateral hindpaws of FCA- and CCI-rats, respectively, exhibited minimal inter-animal variability. Our data collectively show that burrowing behavior is altered in rodent models of chronic inflammatory pain and peripheral neuropathic pain. However, large group sizes are needed to ensure studies are adequately powered, due to considerable inter-animal variability.

12. Desorption of a methamphetamine surrogate from wallboard under remediation conditions Science.gov (United States) Poppendieck, Dustin; Morrison, Glenn; Corsi, Richard 2015-04-01 Thousands of homes in the United States are found to be contaminated with methamphetamine each year. Buildings used to produce illicit methamphetamine are typically remediated by removing soft furnishings and stained materials, cleaning, and sometimes encapsulating surfaces using paint. Methamphetamine that has penetrated into paint films, wood and other permanent materials can be slowly released back into the building air over time, exposing future occupants and re-contaminating furnishings.
The objective of this study was to determine the efficacy of two wallboard remediation techniques for homes contaminated with methamphetamine: 1) enhancing desorption by elevating temperature and relative humidity while ventilating the interior space, and 2) painting over affected wallboard to seal the methamphetamine in place. The emission of a methamphetamine surrogate, N-isopropylbenzylamine (NIBA), from pre-dosed wallboard chambers over 20 days at 32 °C and two values of relative humidity was studied. Emission rates from wallboard after 15 days at 32 °C ranged from 35 to 1400 μg h⁻¹ m⁻². Less than 22% of the NIBA was removed from the chambers over three weeks. Results indicate that elevating temperatures during remediation and latex painting of impacted wallboard will not significantly reduce freebase methamphetamine emissions from wallboard. Raising the relative humidity from 27% to 49% increased the emission rates by a factor of 1.4. A steady-state model of a typical home, using the emission rates from this study and typical residential building parameters and conditions, shows that adult inhalation reference doses for methamphetamine will be reached when approximately 1 g of methamphetamine is present in the wallboard of a house.

13. Survival of norovirus surrogate on various food-contact surfaces. Science.gov (United States) Kim, An-Na; Park, Shin Young; Bae, San-Cheong; Oh, Mi-Hwa; Ha, Sang-Do 2014-09-01 Norovirus (NoV) is an environmental threat to humans, which spreads easily from one infected person to another, causing foodborne and waterborne diseases. Therefore, precautions against NoV infection are important in the preparation of food. The aim of this study was to investigate the survival of murine norovirus (MNV), as a NoV surrogate, on six different food-contact surfaces: ceramic, wood, rubber, glass, stainless steel, and plastic. We inoculated 10^5 PFU of MNV onto the six different surface coupons, which were then kept at room temperature for 28 days. On the food-contact surfaces, the greatest reduction in MNV was 2.28 log10 PFU/coupon, observed on stainless steel, while the lowest MNV reduction was 1.29 log10 PFU/coupon, observed on wood. The rank order of MNV reduction, from highest to lowest, was stainless steel, plastic, rubber, glass, ceramic, and wood. The values of d_R (time required to reduce the virus by 90%) on survival plots of MNV determined by a modified Weibull model were 277.60 h (R^2 = 0.99) on ceramic, 492.59 h (R^2 = 0.98) on wood, 173.56 h (R^2 = 0.98) on rubber, 97.18 h (R^2 = 0.94) on glass, 91.76 h (R^2 = 0.97) on stainless steel, and 137.74 h (R^2 = 0.97) on plastic. The infectivity of MNV on all food-contact surfaces remained after 28 days. These results show that MNV persists in an infective state on various food-contact surfaces for long periods. This study may provide valuable information for the control of NoV on various food-contact surfaces, in order to prevent foodborne disease.

14. Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil. Directory of Open Access Journals (Sweden) Brian France Full Text Available Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling).
Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation. PMID:26714315

17. Interactions between Human Norovirus Surrogates and Acanthamoeba spp. Science.gov (United States) Hsueh, Tun-Yun; Gibson, Kristen E 2015-06-15 Human noroviruses (HuNoVs) are the most common cause of food-borne disease outbreaks, as well as virus-related waterborne disease outbreaks, in the United States. Here, we hypothesize that common free-living amoebae (FLA), which are ubiquitous in the environment, known to interact with pathogens, and frequently isolated from water and fresh produce, could potentially act as reservoirs of HuNoV and facilitate the environmental transmission of HuNoVs. To investigate FLA as reservoirs for HuNoV, the interactions between two Acanthamoeba species, A. castellanii and A. polyphaga, as well as two HuNoV surrogates, murine norovirus type 1 (MNV-1) and feline calicivirus (FCV), were evaluated. The results showed that after 1 h of amoeba-virus incubation at 25°C, 490 and 337 PFU of MNV-1/ml were recovered from A. castellanii and A. polyphaga, respectively, while only few or no FCVs were detected.
In addition, prolonged interaction of MNV-1 with amoebae was investigated for a period of 8 days, and MNV-1 was demonstrated to remain stable at around 200 PFU/ml from day 2 to day 8 after virus inoculation in A. castellanii. Moreover, after a complete amoeba life cycle (i.e., encystment and excystment), infectious viruses could still be detected. To determine the location of virus associated with amoebae, immunofluorescence experiments were performed and showed MNV-1 transitioning from the amoeba surface to inside the amoeba over a 24-h period. These results are significant to the understanding of how HuNoVs may interact with other microorganisms in the environment in order to aid in its persistence and survival, as well as potential transmission in water and to vulnerable food products such as fresh produce. 18. Design, development, and analysis of a surrogate for pulmonary injury prediction. Science.gov (United States) Danelson, Kerry A; Gayzik, F Scott; Stern, Amber Rath; Hoth, J Jason; Stitzel, Joel D 2011-10-01 Current anthropomorphic test devices (ATDs) measure chest acceleration and deflection to assess risk of injury to the thorax. This study presents a lung surrogate prototype designed to expand the injury assessment capabilities of ATDs to include a risk measure for pulmonary contusion (PC). The surrogate augments these existing measures by providing pressure data specific to the lung and its lobes. The prototype was created from a rendering of a 50th percentile male lung inflated to normal inspiration, obtained from clinical CT data. Surrogate size, lobe volume, and airway cross sections were selected to match the morphology of the lung. Elastomeric urethane was molded via rapid prototyping to create a functional prototype. Pressure sensors in each of the five terminal airways independently monitored pressure traces in the lobes during impacts to the surrogate. Software was created to analyze the surrogate impact pressure data, determine the lobe with the greatest pressure rise for a particular impact, and estimate the initial speed of surface deformation. Calibration testing indicates an approximately linear relationship between peak lobe pressure and surface impact speed. No type I or II errors were demonstrated during lobe detection testing. During repeatability testing, the standard deviation was between 2 and 4% of the mean peak pressure. Ongoing research will focus on correlating surrogate data, pressure pulses, or surface deformation, to risk functions for PC. 19. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points Science.gov (United States) Peng, Haijun; Wang, Wei 2016-10-01 An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system has been proposed in this paper. A new structure for a multi-objective transfer trajectory optimization model that divides the transfer trajectory into several segments and gives the dominations for invariant manifolds and low-thrust control in different segments has been established. To reduce the computational cost of multi-objective transfer trajectory optimization, a mixed sampling strategy-based adaptive surrogate model has been proposed. 
Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with the results obtained using direct multi-objective optimization methods, and the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.

20. Modeling of Heating and Evaporation of FACE I Gasoline Fuel and its Surrogates KAUST Repository Elwardani, Ahmed Elsaid 2016-04-05 The US Department of Energy has formulated different gasoline fuels called "Fuels for Advanced Combustion Engines (FACE)" to standardize their compositions. FACE I is a low-octane-number gasoline fuel with a research octane number (RON) of approximately 70. The detailed hydrocarbon analysis (DHA) of FACE I shows that it contains 33 components. This large number of components cannot be handled in fuel spray simulation, where thousands of droplets are directly injected into the combustion chamber. These droplets are heated, broken up, collided, and evaporated simultaneously. Heating and evaporation of a single FACE I fuel droplet was investigated. The heating and evaporation model accounts for the effects of finite thermal conductivity, finite liquid diffusivity and recirculation inside the droplet, referred to as the effective thermal conductivity/effective diffusivity (ETC/ED) model. The temporal variations of the liquid mass fractions of the droplet components were used to characterize the evaporation process. Components with similar evaporation characteristics were merged together. A representative component was initially chosen based on the highest initial mass fraction. Three 6-component surrogates (Surrogates 1-3) that match the evaporation characteristics of FACE I have been formulated without keeping the same mass fractions of different hydrocarbon types. Another two surrogates (Surrogates 4 and 5) were considered keeping the same hydrocarbon-type concentrations. A distillation-based surrogate that matches the measured distillation profile was proposed. The calculated molar mass, hydrogen-to-carbon (H/C) ratio and RON of Surrogate 4 and of the distillation-based surrogate are close to those of FACE I.
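Several of the records above revolve around the same idea: replace an expensive simulation with a cheap statistical surrogate and use the surrogate's predictive uncertainty to decide where to sample next (the kriging-based CBS/ECBS method in record 1, the adaptive surrogate optimization in record 19). The following is an illustrative sketch only, not the method of any cited paper: it fits a Gaussian-process (kriging) surrogate with scikit-learn to an invented stand-in function and adds samples where the model is uncertain near the constraint boundary g(x) = 0; the sampling score is a simplified invention for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_constraint(x):
    """Stand-in for a costly simulation; g(x) = 0 is the constraint boundary."""
    return np.sin(3 * x) + 0.5 * x - 0.2

rng = np.random.default_rng(seed=0)
X = rng.uniform(-2, 2, size=(6, 1))   # small initial design
y = expensive_constraint(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
grid = np.linspace(-2, 2, 401).reshape(-1, 1)

for _ in range(8):  # adaptive sampling loop
    gp.fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    # Simplified boundary-focused criterion: favor points that are both
    # uncertain (large std) and predicted to lie near g = 0 (small |mean|).
    score = std * np.exp(-np.abs(mean))
    x_new = grid[[np.argmax(score)]]
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_constraint(x_new).ravel())

gp.fit(X, y)
rmse = np.sqrt(np.mean((gp.predict(grid) - expensive_constraint(grid).ravel()) ** 2))
print(f"{len(X)} samples, surrogate RMSE on grid: {rmse:.4f}")
```

The design choice mirrored here is the one the abstracts describe: each new evaluation is spent where it most improves the surrogate near the region that matters for the decision, rather than spread uniformly over the domain.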
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47978684306144714, "perplexity": 6774.884869533525}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814105.6/warc/CC-MAIN-20180222120939-20180222140939-00366.warc.gz"}
http://mathhelpforum.com/calculus/72541-can-you-solve-print.html
# Can you solve this !!

• Feb 8th 2009, 02:33 PM
Banned for attempted hacking

Can you solve this !!

Prove that $\frac{x^{100}}{e^{x^{70}}} + \frac{e^{x^{70}}}{x^{100}} > 2$ for every positive x.

Prove that $\cos 20^\circ \ne \frac{a}{b}$ where a and b are integers.

Nice questions. I did them; I just wanted to see other solutions. This should be easy for a lot of you; I looked around the forum and there are a lot of beautiful mathematicians around.

• Feb 8th 2009, 03:34 PM
Jester

Quote:

Originally Posted by ╔(σ_σ)╝
Prove that $\frac{x^{100}}{e^{x^{70}}} + \frac{e^{x^{70}}}{x^{100}} > 2$ for every positive x. Prove that $\cos 20^\circ \ne \frac{a}{b}$ where a and b are integers. Nice questions. I did them; I just wanted to see other solutions. This should be easy for a lot of you; I looked around the forum and there are a lot of beautiful mathematicians around.

For the first one, let $u = \frac{x^{100}}{e^{x^{70}}}$, then $\frac{x^{100}}{e^{x^{70}}} + \frac{e^{x^{70}}}{x^{100}} = u + \frac{1}{u}$, so you want to show $u + \frac{1}{u} > 2$, which is to show that $u^2 - 2u + 1 > 0$, or $(u-1)^2 > 0$, which is true provided that $u \ne 1$. Now u is bounded above (using some calculus), so $u \le u_{max} < 1$, establishing the inequality.

• Feb 8th 2009, 03:43 PM
Banned for attempted hacking

Beautiful. That's what you need to do. You have a trained eye, or you must have seen this before.

• Feb 8th 2009, 03:47 PM
Jester

Quote:

Originally Posted by ╔(σ_σ)╝
Beautiful. That's what you need to do. You have a trained eye, or you must have seen this before.

I've been doing math for a while (Rofl)

• Feb 8th 2009, 03:56 PM
Banned for attempted hacking

Quote:

Originally Posted by danny arrigo
I've been doing math for a while (Rofl)

Cool. Now about the second one (Rofl). Experience won't help.

• Feb 8th 2009, 05:06 PM
Banned for attempted hacking

Amazing, no one can solve the second one! (Thinking)

• Feb 8th 2009, 05:33 PM
ThePerfectHacker

Quote:

Originally Posted by ╔(σ_σ)╝
Prove that $\cos 20^\circ \ne \frac{a}{b}$ where a and b are integers.

This is an idea, I did not even try doing this. By the triple angle identity we have a cubic polynomial equaling $\cos 60^\circ = \frac{\sqrt{3}}{2}$. Now if we square both sides of the polynomial to make everything rational, we just need to argue the polynomial has no rational solutions.

• Feb 8th 2009, 05:37 PM
Banned for attempted hacking

Quote:

Originally Posted by ThePerfectHacker
This is an idea, I did not even try doing this. By the triple angle identity we have a cubic polynomial equaling $\cos 60^\circ = \frac{\sqrt{3}}{2}$. Now if we square both sides of the polynomial to make everything rational, we just need to argue the polynomial has no rational solutions.

$\cos 60^\circ \ne \frac{\sqrt{3}}{2}$; $\cos 60^\circ = \frac{1}{2}$. Finish it.

• Feb 8th 2009, 05:47 PM
ThePerfectHacker

Let $x=\cos 20^\circ$, then $4x^3 - 3x = \tfrac{\sqrt{3}}{2}$. If $x$ was rational then $4x^3 - 3x$ would be rational, but it is not. Thus, $x$ must be irrational.

• Feb 8th 2009, 06:15 PM
Banned for attempted hacking

Quote:

Originally Posted by ThePerfectHacker
Let $x=\cos 20^\circ$, then $4x^3 - 3x = \tfrac{\sqrt{3}}{2}$. If $x$ was rational then $4x^3 - 3x$ would be rational, but it is not. Thus, $x$ must be irrational.

Why do you keep saying this? $\cos 60^\circ = \frac{\sqrt{3}}{2}$ is wrong; $\cos 60^\circ = \frac{1}{2}$.

• Feb 8th 2009, 06:19 PM
ThePerfectHacker

Quote:

Originally Posted by ╔(σ_σ)╝
Why do you keep saying this? $\cos 60^\circ = \frac{\sqrt{3}}{2}$ is wrong; $\cos 60^\circ = \frac{1}{2}$.

Sorry! That was a trivial error. But the idea still works because: $4x^3 - 3x = \tfrac{1}{2} \implies 8x^3 - 6x - 1 = 0$.
Apply the rational root theorem and conclude that there are no rational solutions.
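Two details are left implicit in the thread; the following worked steps are a sketch added to this writeup, not part of the original exchange.

For the first problem, with $u(x) = x^{100}e^{-x^{70}}$ for $x > 0$, we have $\ln u = 100\ln x - x^{70}$, so $(\ln u)' = \frac{100}{x} - 70x^{69}$, which vanishes exactly when $x^{70} = \frac{10}{7}$. At that point $\ln u_{max} = \frac{10}{7}\left(\ln\tfrac{10}{7} - 1\right) \approx -0.92$, so $u_{max} \approx 0.40 < 1$, confirming Jester's claim that $u \ne 1$ for all positive $x$.

For the second, any rational root $\tfrac{p}{q}$ of $8x^3 - 6x - 1$ in lowest terms must have $p \mid 1$ and $q \mid 8$, so the only candidates are $\pm 1, \pm\tfrac{1}{2}, \pm\tfrac{1}{4}, \pm\tfrac{1}{8}$. Substituting these in order gives $1, -3, -3, 1, -\tfrac{19}{8}, \tfrac{3}{8}, -\tfrac{111}{64}, -\tfrac{17}{64}$; none is zero, so the cubic has no rational roots, and $\cos 20^\circ$, which satisfies it, is irrational.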
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.87129807472229, "perplexity": 2021.6223165861372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823260.52/warc/CC-MAIN-20171019084246-20171019104246-00251.warc.gz"}
http://physics.stackexchange.com/questions/69384/methods-to-stabilize-and-maintain-extremely-low-humidity-in-a-lab-environment
Methods to Stabilize and Maintain Extremely Low Humidity in a Lab Environment

My atomic physics lab is in a building that experiences huge swings in humidity levels during the year due to the monsoon season. Our building provides temperature, but not humidity, control. Using just the building temperature control results in the following lab climates:

10 months out of the year, the room is at T $\approx 24.4\,^\circ$C, Relative Humidity $< 10\%$

2 months out of the year, the lab is at T $\approx 24.4\,^\circ$C, Relative Humidity $\approx 50\%$

This seasonal variation necessitates significant recalibration twice per year, at the beginning and end of the monsoon season. The sensitive components are mainly opto-mechanical. The lab currently has a dehumidifier that is spec'd at 45 pints per day during the wet season. This specification indicates how much water the unit will remove from the air in a given day when the air is saturated with water (100% relative humidity). The problem with such a specification is that 100% relative humidity is a very different environment from 40% or 50% humidity. On a wet day, this unit reduces the lab relative humidity by about 10 percentage points, from 55% to 45%. This is still far from the lab's climate for most of the year. It is a trade-off, though, because it also raises the lab temperature by about 1 degree C, which necessitates other recalibration. I am investigating options to further reduce the humidity. The lab is approximately 5 meters × 10 meters × 3 meters in size. Most of the experiment is on a very full optics table that is 1.5 meters × 4 meters. There are lots of cables and water tubing that require access to the table, making climate isolation of the table difficult (although not impossible). A few options under consideration are the following:

1: Introduce an additional higher-capacity dehumidifier

Pros:
• Fast and easy implementation

Cons:
• It is unknown how efficiently a dehumidifier will function when the relative humidity is only 45%.
• Manufacturers do not specify how well the unit will work at low humidity levels, only at 80%+.

2: Fill sensitive areas with positive-pressure nitrogen

Pros:
• Excellent climate control
• Minimal impact on room temperature

Cons:
• Requires significant reconfiguration of the experimental setup.
• Requires refilling the nitrogen tank frequently, a recurring cost.

3: Isolate the experiment from the lab climate using large plastic enclosures and recirculate air in this enclosure

Pros:
• Excellent environment isolation

Cons:
• Requires significant reconfiguration of the laboratory and would likely restrict access to areas of the experiment.
• It could also likely result in a temperature increase of the experiment area.

Introducing an additional room dehumidifier would be the easiest option by far. So my question is: does anyone know how efficiently dehumidifiers work in dry environments? E.g., if I were to purchase an additional dehumidifier, could I achieve a humidity level of less than 30%, or does the humidity level asymptote at some point due to a limit on the efficiency of dehumidifiers? I realize an alternative would be to humidify the lab 10 months out of the year. However, having low humidity is extremely convenient for rapidly water-cooling components. During our wet season, our water-cooling results in considerable condensation on our components.

- Chalk up to the list of nitrogen's cons: danger to life and limb in case of a leak. – Deer Hunter Jun 27 '13 at 20:41

Joe - check out Chapter 23 of ASHRAE's HVAC Systems and Equipment Handbook (2008).
– Deer Hunter Jun 27 '13 at 20:46

I found an interesting external link: Desiccant Dehumidification vs. Mechanical Refrigeration ( bry-air.com/… ). In summary, their recommendation is that desiccant-based dehumidifiers are necessary if you need relative humidity levels between 1% and 45%. – Joe Jun 27 '13 at 22:20
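A rough numerical check of the asker's point about specifications (added here, not part of the original thread): converting relative humidity to absolute water content shows how little standing vapor the 50% → 10% RH step actually represents, so what limits a dehumidifier at low RH is the infiltration rate of moist outside air, not the room's standing moisture. The Magnus-formula constants below are the standard ones; the room volume is taken from the question.

```python
import math

LAB_VOLUME_M3 = 5 * 10 * 3  # room size quoted in the question

def saturation_vapor_pressure_hpa(t_c):
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def absolute_humidity_g_per_m3(t_c, rh_percent):
    """Water vapor density in g/m^3: AH = 216.7 * e[hPa] / T[K]."""
    e_hpa = (rh_percent / 100.0) * saturation_vapor_pressure_hpa(t_c)
    return 216.7 * e_hpa / (t_c + 273.15)

for rh in (10, 30, 45, 50, 100):
    ah = absolute_humidity_g_per_m3(24.4, rh)
    kg_in_lab = ah * LAB_VOLUME_M3 / 1000.0
    print(f"RH {rh:3d}% at 24.4 C: {ah:5.1f} g/m^3, {kg_in_lab:.2f} kg in the lab")
```

At 24.4 °C this comes to roughly 1.7 kg of vapor in the room at 50% RH versus about 0.3 kg at 10% RH, i.e. a one-time removal of only about 3 pints. The steady-state RH is therefore set by how fast moisture leaks in versus the unit's extraction rate at that low RH, which is exactly the figure the manufacturers do not quote.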
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.55952388048172, "perplexity": 2237.0227982307774}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021389272/warc/CC-MAIN-20140305120949-00027-ip-10-183-142-35.ec2.internal.warc.gz"}
https://brilliant.org/problems/locked-deep-down/
Locked deep down

Do you like carrots? Given this $$100 \times 38$$ maze with two exits, find out the number of steps (left, right, up and down), including the step out of the maze, from the worst point inside, i.e., the point from which it takes the longest to get out. The maze is presented in an obvious format. Absence of | or - represents absence of walls. Walls on the boundaries which are absent are exits. To clarify, the title image is a joke. It has got nothing to do with the maze the problem links to.

Example

The worst point of the following $$3 \times 5$$ maze is the lower left corner. It takes 9 steps to get out from that point.

```
1 2 3 4 5 6 7
+-+-+-+-+-+
| |
+-+ +-+ + +
| | | |
+ +-+-+ + +
| | |
+-+ +-+-+-+
```
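Although no solution is posted with the problem, the intended computation is a standard multi-source breadth-first search from the exits. The sketch below assumes the maze has already been parsed into a set of open wall crossings; the parser for the `+-|` format is omitted and all the names are mine.

```python
from collections import deque

def worst_point_steps(height, width, open_pairs, exit_cells):
    """Steps (including the step out) from the worst cell in the maze.

    open_pairs: set of frozenset({cell_a, cell_b}) with no wall between them
    exit_cells: boundary cells whose outer wall is missing
    """
    dist = {cell: 1 for cell in exit_cells}  # 1 = the step out of the maze
    queue = deque(exit_cells)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (r + dr, c + dc)
            if (0 <= nbr[0] < height and 0 <= nbr[1] < width
                    and nbr not in dist
                    and frozenset({(r, c), nbr}) in open_pairs):
                dist[nbr] = dist[(r, c)] + 1
                queue.append(nbr)
    return max(dist.values())
```

BFS visits each cell once, so the whole computation is linear in the 100 × 38 maze size; with a correct parse of the example maze the maximum should come out as the quoted 9 steps, attained at the lower-left cell.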
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49243101477622986, "perplexity": 946.2593304342565}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820487.5/warc/CC-MAIN-20171016233304-20171017013304-00118.warc.gz"}
https://stacks.math.columbia.edu/tag/0128
Lemma 12.16.14. Let $\mathcal{A}$ be an abelian category. Let $A \to B \to C$ be a complex of filtered objects of $\mathcal{A}$. Assume $\alpha : A \to B$ and $\beta : B \to C$ are strict morphisms of filtered objects. Then $\text{gr}(\mathop{\mathrm{Ker}}(\beta )/\mathop{\mathrm{Im}}(\alpha )) = \mathop{\mathrm{Ker}}(\text{gr}(\beta ))/\mathop{\mathrm{Im}}(\text{gr}(\alpha ))$.

Proof. This follows formally from Lemma 12.16.12 and the fact that $\mathop{\mathrm{Coim}}(\alpha ) \cong \mathop{\mathrm{Im}}(\alpha )$ and $\mathop{\mathrm{Coim}}(\beta ) \cong \mathop{\mathrm{Im}}(\beta )$ by Lemma 12.16.4. $\square$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9974792003631592, "perplexity": 498.17949818517843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529813.23/warc/CC-MAIN-20190420120902-20190420142902-00398.warc.gz"}
http://listserv.tau.ac.il/cgi-bin/wa?A2=ind0911&L=ivritex&T=0&P=877
Date: Sun, 22 Nov 2009 19:00:25 +1100 Reply-To: Hebrew TeX list <[log in to unmask]> Sender: Hebrew TeX list <[log in to unmask]> From: Vafa Khalighi <[log in to unmask]> Subject: Re: Correct brackets in \eqref In-Reply-To: <[log in to unmask]> Content-Type: multipart/alternative;

> 1. What do \makeatletter and \makeatother mean?

\makeatletter tells TeX to treat @ as a letter (category 11), and \makeatother tells TeX to treat @ as an other character (category 12), which is the default category for @ in LaTeX. See chapter 7 of the TeXbook for more details. We usually use @ for internal macros.

> 2. What does \textup{\tagform@ ... mean?

\textup is the same thing as \upshape, and \tagform@ is just a horizontal box; inside the box you have got \normalfont and italic corrections and other things. See lines 966--967 of amsmath.sty for more details.

-- Vafa
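To make the first answer concrete, here is the usual shape of such a customization (a sketch based on the amsmath internals cited above, not code from the thread):

```latex
% Sketch: bracketed \eqref tags, e.g. [1.2] instead of (1.2).
% Mirrors amsmath's own definition of \tagform@; requires \usepackage{amsmath}.
% \makeatletter is needed because \tagform@ contains the @ character.
\makeatletter
\renewcommand{\tagform@}[1]{\maketag@@@{[\ignorespaces#1\unskip\@@italiccorr]}}
\makeatother
```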
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9977462291717529, "perplexity": 16562.296015777563}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119661285.56/warc/CC-MAIN-20141024030101-00050-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.kuniga.me/blog/2020/10/11/deutsch-jozsa-algorithm.html
The Deutsch-Jozsa Algorithm

11 Oct 2020

David Elieser Deutsch is a British scientist at the University of Oxford, being a pioneer of the field of quantum computation by formulating a description for a quantum Turing machine. Richard Jozsa is an Australian mathematician at the University of Cambridge and is a co-inventor of quantum teleportation. Together they proposed the Deutsch-Jozsa Algorithm which, although not useful in practice, provides an example where a quantum algorithm can outperform a classic one. In this post we'll describe the algorithm and the basic theory of quantum computation behind it.

Quantum Mechanics Abstracted

In this post we'll work with abstractions on top of quantum mechanics concepts, namely qubits and quantum gates. A lot of properties we'll leverage, such as superposition and teleportation, arise from the theory of quantum mechanics, but we'll not delve into their explanation and will rather take them as facts to keep things simpler. We'll also try to avoid making real-world interpretations of the theoretical results, since the theory is known to be counter-intuitive, sometimes paradoxical and overall not agreed upon [1].

The Qubit

We'll start by defining the quantum analog to a classical bit, which is named a qubit or simply qbit. A common statement about a qubit is that it can be both 0 and 1 at the same time, but as we mentioned in the previous section, we'll avoid making interpretations such as these and will focus on the mathematical interpretation of a qubit instead. The Dirac notation defines the pair $\langle \cdot \mid$ and $\mid \cdot \rangle$, called respectively bra and ket (possibly a word play on bracket). A qubit is often represented by the ket symbol: $\ket{\psi}$.

The State of a Qubit

We can think of a qubit as a pair of complex numbers subject to some constraint. More formally, the set of values of a qubit is a vector space represented by the complex linear combination of an orthonormal basis. The orthonormal basis used is often $\ket{0}, \ket{1}$, whose elements are also called computational basis states. In other words, any qubit can be represented as: $\ket{\psi} = \alpha \ket{0} + \beta \ket{1}$ where $\alpha$ and $\beta$ are complex numbers called the amplitudes, and $\abs{\alpha}^2 + \abs{\beta}^2 = 1$. It's worth recalling that the magnitude of a complex number $c = a + bi$ is $\abs{c} = \sqrt{a^2 + b^2}$, so both $\abs{\alpha}$ and $\abs{\beta}$ are non-negative numbers. We could have opted to use matrix notation and represent our qubit as: $\begin{bmatrix} \psi_{1} \\ \psi_{2} \end{bmatrix} = \alpha \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \beta \begin{bmatrix} 0 \\ 1 \end{bmatrix}$

Multiple Qubits

A state with 2 qubits can be written as: $\ket{\psi} = \alpha_{00} \ket{00} + \alpha_{01} \ket{01} + \alpha_{10} \ket{10} + \alpha_{11} \ket{11}$ Note that the size of the basis is $2^n$ if $n$ is the number of qubits. We might also see notation that factors common terms, so for example the above can be rewritten as: $\ket{\psi} = \ket{0} (\alpha_{00} \ket{0} + \alpha_{01} \ket{1}) + \ket{1} (\alpha_{10} \ket{0} + \alpha_{11} \ket{1})$ We can also use multiple variables to represent a multi-qubit state, so for example a 2-qubit state can be denoted by $\ket{x, y}$ where $\ket{x}$ and $\ket{y}$ are single qubit variables. Finally, we can represent repeated qubits using the operator $\otimes$, called the tensor product.
For example, the 4-qubit state $\ket{0000}$ can be represented as $\ket{0}^{\otimes 4}$.

Measuring the State of a Qubit

If we measure a single qubit state like $\ket{\psi} = \alpha \ket{0} + \beta \ket{1}$, the measurement will return $\ket{0}$ with probability $\abs{\alpha}^2$ and $\ket{1}$ with probability $\abs{\beta}^2$ (recall that $\abs{\alpha}^2 + \abs{\beta}^2 = 1$, so this is a valid probability distribution). One important thing to note is that this process is irreversible: once the measurement is made, the qubit will assume the measured state. We can also measure a subset of the qubits of a multi-qubit state. Suppose we have a 2-qubit state: $\ket{\psi} = \alpha_{00} \ket{00} + \alpha_{01} \ket{01} + \alpha_{10} \ket{10} + \alpha_{11} \ket{11}$ and we measure the first qubit. It will return $\ket{0}$ with probability $\abs{\alpha_{00}}^2 + \abs{\alpha_{01}}^2$ and $\ket{1}$ with probability $\abs{\alpha_{10}}^2 + \abs{\alpha_{11}}^2$. Then the first qubit will assume the measured value. Say we measured $\ket{0}$; then the new state is $\ket{\psi} = \frac{\alpha_{00} \ket{00} + \alpha_{01} \ket{01}}{\sqrt{\abs{\alpha_{00}}^2 + \abs{\alpha_{01}}^2}}$ where the denominator is a normalizing factor so the amplitudes form a valid probability distribution.

Transforming a Qubit: Quantum Gates

In the same way classical gates can be used to transform a bit or bits, we have the analogous quantum gates. The most basic classical gate is the NOT gate, which transforms 0 into 1 and vice-versa. The analogous quantum gate flips $\alpha$ and $\beta$ of a state, which is a more general form of a NOT gate, since it also turns $\ket{0}$ into $\ket{1}$ (when $\alpha = 1$, $\beta = 0$) and $\ket{1}$ into $\ket{0}$ (when $\alpha = 0$, $\beta = 1$). In matrix form, we want to find a transformation from the column vector $[\alpha \, \beta]^T$ into $[\beta \, \alpha]^T$. We can do so using a 2 x 2 matrix: $\begin{bmatrix} \beta \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix}$ More generally, any quantum gate on $n$ qubits can be represented by a $2^n \times 2^n$ matrix called a unitary matrix. A unitary matrix $U$ is such that $U^\dagger U = I$, where $U^\dagger$ is the adjoint of $U$, which is the result of transposing $U$ and taking the conjugate (i.e. negating the imaginary part) of its complex entries. The Hadamard gate appears in many constructs and can be defined by: $H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ Thus, applied to $\alpha \ket{0} + \beta \ket{1}$ it yields $\alpha \frac{\ket{0} + \ket{1}}{\sqrt{2}} + \beta \frac{\ket{0} - \ket{1}}{\sqrt{2}}$ The Hadamard gate can be drawn like a classic gate.

The CNOT Gate

The CNOT gate, also known as controlled-NOT, takes two qubits, called control and target. It can be represented by a 4 x 4 matrix: $U_{CN} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$ So for a state given by: $\ket{\psi} = \alpha_{00} \ket{00} + \alpha_{01} \ket{01} + \alpha_{10} \ket{10} + \alpha_{11} \ket{11}$ we'll end up with $\ket{\psi} = \alpha_{00} \ket{00} + \alpha_{01} \ket{01} + \alpha_{11} \ket{10} + \alpha_{10} \ket{11}$ where the amplitudes of the third and fourth terms got swapped. Let's consider some special cases. If the first qubit is $\ket{0}$ (that is, $\alpha_{10} = \alpha_{11} = 0$), then the initial state is $\ket{\psi} = \alpha_{00} \ket{00} + \alpha_{01} \ket{01}$ and applying the gate preserves the state.
If the first qubit is $\ket{1}$ (that is, $\alpha_{00} = \alpha_{01} = 0$), then the initial state is $\ket{\psi} = \alpha_{10} \ket{10} + \alpha_{11} \ket{11}$ and the resulting state is as if the NOT gate had been applied to the second qubit: $\ket{\psi}' = \alpha_{11} \ket{10} + \alpha_{10} \ket{11}$ In other words, the first qubit controls whether the second qubit will be NOTed, hence the name control. In a classical world, this could be achieved by the XOR operation between the first and second bits, denoted by the symbol $\oplus$, so the same notation is used to represent the result of the second qubit. Summarizing, if we're given qubits $\ket{x, y}$, this gate returns $\ket{x, x \oplus y}$.

Quantum Circuits

A quantum circuit is simply a composition of one or more quantum gates, analogous to a classical circuit.

Quantum Parallelism

Suppose we have a gate, which we'll call $U_f$, that transforms a 2-qubit state $\ket{x,y}$ into $\ket{x, y \oplus f(x)}$, where $f(x)$ is any function that transforms a qubit into $\ket{0}$ or $\ket{1}$ (i.e. a computational basis state) and $\oplus$ is the XOR operator as defined in the CNOT gate. We'll treat it as a black box, but it can be shown to be a valid quantum gate (i.e. it has a corresponding unitary matrix transformation). Say $\ket{x} = \frac{\ket{0} + \ket{1}}{\sqrt{2}}$ and $\ket{y} = \ket{0}$, where the state $\ket{x}$ can be obtained by applying the Hadamard gate to $\ket{0}$. Then the resulting state will be $\frac{\ket{0, f(0)} + \ket{1, f(1)}}{\sqrt{2}}$ This is interesting because it contains the evaluation of $f(x)$ for both values $\ket{0}$ and $\ket{1}$. We can generalize $\ket{x}$ to have $n$ qubits and apply the Hadamard gate to each of them, which can be denoted as $H^{\otimes n}$. For $n = 2$, if we apply $H^{\otimes 2}$ to $\ket{00}$ we get: $\bigg( \frac{\ket{0} + \ket{1}}{\sqrt{2}} \bigg) \bigg( \frac{\ket{0} + \ket{1}}{\sqrt{2}} \bigg) = \frac{\ket{00} + \ket{01} + \ket{10} + \ket{11}}{2}$ In general it's possible to show that if we apply $H^{\otimes n}$ to $\ket{0}^{\otimes n}$ we get: $H^{\otimes n}(\ket{0}^{\otimes n}) = \frac{1}{\sqrt{2^n}} \sum_x \ket{x}$ where $x$ ranges over all binary numbers with $n$ bits. Going back to our original gate $U_f$, if we set $\ket{x} = H^{\otimes n}(\ket{0}^{\otimes n})$, we'll get the state $\frac{1}{\sqrt{2^n}} \sum_x \ket{x} \ket{f(x)}$ The insight is that we now have a state encoding $2^n$ values of $f(x)$, and we achieved that using only $O(n)$ gates. Unfortunately, as we saw in Measuring the State of a Qubit, there's no way to extract all these values from a quantum state: once we perform a measurement, only one of the values of $x$ will be returned. We'll see next how to "entangle" these values such that any measurement will result in a value resulting from computing $f(x)$ for all values of $x$.

The Deutsch Algorithm

The Deutsch Algorithm is a more complex circuit using the $U_f$ gate from the previous section, which involves applying the Hadamard gate to both input qubits and then to the first qubit of the output. Let's follow the state at each step of the circuit: $\ket{\psi_0} = \ket{01}$ Applying the Hadamard gate to each of the qubits: $\ket{\psi_1} = \bigg[ \frac{\ket{0} + \ket{1}}{\sqrt{2}} \bigg] \bigg[ \frac{\ket{0} - \ket{1}}{\sqrt{2}} \bigg]$ Let's write $\ket{x} = \frac{\ket{0} + \ket{1}}{\sqrt{2}}$ and $\ket{y} = \frac{\ket{0} - \ket{1}}{\sqrt{2}}$. Now, what is the result of applying $U_f$ over $\ket{\psi_1}$? First let's assume the first qubit is either $\ket{0}$ or $\ket{1}$ (i.e.
a computational basis state). Now suppose $f(x) = \ket{0}$. Then $f(x) \oplus y = y$ and $U_f$ will yield the same state $\ket{x} \ket{y}$. If $f(x) = \ket{1}$, then $f(x) \oplus y$ is $y$ with its terms flipped, which in this particular case is $-y$, so in general we have: $U_f(\ket{x} \ket{y}) = (-1)^{f(x)} \ket{x} \ket{y}$ However, $\ket{x}$ is not in a computational basis state, but we can use the linearity principle when applying a function over a quantum state, that is: $f(\ket{x}) = f(\alpha \ket{0} + \beta \ket{1}) = \alpha f(\ket{0}) + \beta f(\ket{1})$ Since $\ket{x} = \frac{\ket{0} + \ket{1}}{\sqrt{2}}$, the output of $U_f(\ket{x} \ket{y})$ is $U_f(\ket{x} \ket{y}) = \frac{(-1)^{f(\ket{0})} \ket{0} + (-1)^{f(\ket{1})} \ket{1}}{\sqrt{2}} \ket{y}$ We can group the results in two cases: one where $f(\ket{0}) = f(\ket{1})$, in which case $(-1)^{f(\ket{0})}$ and $(-1)^{f(\ket{1})}$ have the same sign, say $z = \pm 1$: $U_f(\ket{x} \ket{y}) = \frac{z \ket{0} + z \ket{1}}{\sqrt{2}} \ket{y} = \pm \frac{\ket{0} + \ket{1}}{\sqrt{2}} \ket{y} = \pm \ket{x} \ket{y}$ another where $f(\ket{0}) \neq f(\ket{1})$, when $(-1)^{f(\ket{0})}$ and $(-1)^{f(\ket{1})}$ have opposite signs, so say $(-1)^{f(\ket{0})} = z$ and $(-1)^{f(\ket{1})} = -z$: $U_f(\ket{x} \ket{y}) = \frac{z \ket{0} - z \ket{1}}{\sqrt{2}} \ket{y} = \pm \frac{\ket{0} - \ket{1}}{\sqrt{2}} \ket{y} = \pm \ket{\bar x} \ket{y}$ where we define $\ket{\bar{x}} = \frac{\ket{0} - \ket{1}}{\sqrt{2}}$, that is, it's $\ket{x}$ with the sign of $\ket{1}$ negated. Summarizing, $\ket{\psi_2} = \begin{cases} \pm \ket{x}\ket{y} & \text{if } f(0) = f(1) \\ \pm \ket{\bar x}\ket{y}, & \text{if } f(0) \neq f(1) \end{cases}$ To compute $\ket{\psi_3}$ we just need to apply the Hadamard gate on the first qubit. We can show that $H(\ket{x}) = \ket{0}$ and $H(\ket{\bar x}) = \ket{1}$, so: $\ket{\psi_3} = \begin{cases} \pm \ket{0}\ket{y} & \text{if } f(0) = f(1) \\ \pm \ket{1}\ket{y}, & \text{if } f(0) \neq f(1) \end{cases}$ This can be further compacted by noting $\ket{f(0) \oplus f(1)} = \ket{0}$ if $f(0) = f(1)$ and $\ket{f(0) \oplus f(1)} = \ket{1}$ otherwise: $\ket{\psi_3} = \pm \ket{f(0) \oplus f(1)}\ket{y}$ This also makes it clearer that if we measure the first qubit, regardless of the result, it has to have computed both $f(0)$ and $f(1)$. We don't have access to the individual values of the function evaluations, but we can access the result of a computation that evaluated the function at $2$ inputs in one operation. This gain will be more obvious next, where we generalize this to $n$ qubits.

The Deutsch-Jozsa Algorithm

The Deutsch-Jozsa Algorithm is a generalization of the Deutsch Algorithm to $n$ qubits. The circuit is almost exactly the same; the difference is that instead of one qubit for the state on top, we generalize to $n$ qubits. Let's analyze the state at each step of the circuit: $\ket{\psi_0} = \ket{0}^{\otimes n} \ket{1}$ Applying a Hadamard gate to $\ket{0}^{\otimes n}$ yields $\frac{1}{\sqrt{2^n}} \sum_x \ket{x}$ as we saw in Quantum Parallelism, so $\ket{\psi_1}$ is: $\ket{\psi_1} = \frac{1}{\sqrt{2^n}} \sum_x \ket{x} \ket{y}$ where $\ket{y} = \frac{\ket{0} - \ket{1}}{\sqrt{2}}$ as in the Deutsch Algorithm. Let's call $\ket{X} = \frac{1}{\sqrt{2^n}} \sum_x \ket{x}$. As we saw in the previous section, if we assume $x$ is in some computational basis state, then $U_f(\ket{x} \ket{y}) = (-1)^{f(x)} \ket{x} \ket{y}$ This remains true for any number of qubits because both $f(\ket{x})$ and $\ket{y}$ are still one qubit.
Leveraging the linearity of terms in $\ket{X}$ we have $\ket{\psi_2} = U_f(\ket{X} \ket{y}) = \frac{1}{\sqrt{2^n}} \sum_x U_f(\ket{x} \ket{y}) = \frac{1}{\sqrt{2^n}} \sum_x (-1)^{f(x)} \ket{x} \ket{y}$ Let's define $\ket{X_2} = \frac{1}{\sqrt{2^n}} \sum_x (-1)^{f(x)} \ket{x}$ We now need to apply the Hadamard gate to $\ket{X_2}$. We know how to compute the Hadamard gate for $\ket{0}^{\otimes n}$, but let's see how to do this for an arbitrary computational basis state. To get an intuition, we can try to find a pattern for 1 qubit: $H(\ket{0}) = \frac{\ket{0} + \ket{1}}{\sqrt{2}}$ $H(\ket{1}) = \frac{\ket{0} - \ket{1}}{\sqrt{2}}$ They look very similar except for the sign on $\ket{1}$; if we could parametrize that sign based on the input and on which term of the output we're at, this could be succinctly represented as a summation. It turns out this is possible: let $z$ be a computational basis state from the output (that is, $\ket{0}$ or $\ket{1}$). If we define the term $(-1)^{xz} \ket{z}$, we can show that $H(\ket{x}) = \sum_{z} \frac{(-1)^{xz} \ket{z}}{\sqrt{2}}$ It's possible to generalize this to $n$ qubits: $H^{\otimes n}(\ket{x}) = \sum_{z} \frac{(-1)^{x \cdot z} \ket{z}}{\sqrt{2^n}}$ where $x \cdot z$ is the inner product modulo 2. Again, this assumes $\ket{x}$ is in a computational basis state. Our $\ket{X_2}$ is not, but its terms are, so we can simply use the linearity principle: $H^{\otimes n}(\ket{X_2}) = \frac{1}{\sqrt{2^n}} \sum_x (-1)^{f(x)} H^{\otimes n} (\ket{x}) = \frac{1}{2^n} \sum_x (-1)^{f(x)} \sum_z (-1)^{x \cdot z} \ket{z}$ We can exchange the summation over $x$ with that over $z$ for a cleaner form: $H^{\otimes n}(\ket{X_2}) = \frac{\sum_z \sum_x (-1)^{x \cdot z + f(x)} \ket{z}}{2^n}$ Using this, we can finally compute the final state of the circuit: $\ket{\psi_3} = \frac{\sum_z \sum_x (-1)^{x \cdot z + f(x)} \ket{z}}{2^n} \ket{y}$ If we measure the state of the first $n$ qubits, we'll obtain the term for a given $z$, say $\frac{\sum_x (-1)^{x \cdot z + f(x)} \ket{z}}{\sqrt{2^n}}$, and that contains a computation involving all the $2^n$ computational basis states, and we did so with only $O(n)$ operations! What can we do with this? We'll next present a contrived problem which can be solved using this result.

Deutsch's Problem

Deutsch's problem can be described as follows: let $f(x)$ be a function that takes an $n$-bit number and returns true or false. It can be either a constant function, one that returns true (or false, but not both) for all its inputs, or a balanced function, which returns true for exactly half of its inputs. To be super clear, an example of a constant function is one that returns true for every input; an example of a balanced function is one that returns the parity of its input. It's easy to tell which type of function the examples above are, but in general we'd need to evaluate the function for at least the majority of all possible inputs, that is $2^n/2 + 1$ of them, which makes this classification process an exponential one on a classical computer. For a quantum computer, we can assume we used $U_f$ and have the state: $\ket{\psi_3} = \frac{\sum_z \sum_x (-1)^{x \cdot z + f(x)} \ket{z}}{2^n} \ket{y}$ Let $\ket{X_3}$ be the non-$y$ part of this state: $\ket{X_3} = \frac{\sum_z \sum_x (-1)^{x \cdot z + f(x)} \ket{z}}{2^n}$ Suppose we performed a measurement on the qubits and got the state $z = \ket{0}^{\otimes n}$.
The corresponding amplitude of this state is $\alpha_{0^{\otimes n}} = \frac{\sum_x (-1)^{x \cdot z + f(x)}}{2^n} = \frac{\sum_x (-1)^{f(x)}}{2^n}$ The last step comes from $x \cdot \vec{0} = 0$, and the probability of getting that state is $\abs{\alpha_{0^{\otimes n}}}^2$. Now suppose $f(x)$ is constant and always returns $k$. Then $\alpha_{0^{\otimes n}} = (-1)^k \frac{\sum_x 1}{2^n} = (-1)^k \frac{2^n}{2^n} = \pm 1$ This means that if $f(x)$ is constant, then $\abs{\alpha_{0^{\otimes n}}}^2 = 1$, and since the sum of the squares of the amplitudes must be 1, the other amplitudes are 0, and we'll obtain the state $z = \ket{0}^{\otimes n}$ with 100% probability. Now suppose $f(x)$ is balanced. Then half of the terms in $\sum_x (-1)^{f(x)}$ will be positive ($f(x) = 0$) and half negative ($f(x) = 1$), so $\alpha_{0^{\otimes n}} = 0$ and the probability of obtaining $z = \ket{0}^{\otimes n}$ is 0. This gives a simple proxy to determine whether $f(x)$ is constant or balanced: if we measure $\ket{X_3}$ and get $z = \ket{0}^{\otimes n}$, the function is constant; otherwise it's balanced. And we determined this in $O(n)$ operations as opposed to the $O(2^n)$ of a classical computer.

Conclusion

In this post we covered the Deutsch-Jozsa Algorithm, which on one hand provides an example in which a quantum computation outperforms a classical one, but on the other hand is simple enough that it requires only the basics of quantum computing to be understood, which allowed for a self-contained post. I started learning quantum computing via Michael Nielsen's Youtube videos [3]. Though the series was never finished, he also wrote a book with Isaac Chuang, Quantum Computation and Quantum Information, which seems very thorough and well praised, so I'm going to use that for my education. Here are some difficulties I encountered so far. One is getting used to the notation: I think I get the idea of the ket operator, but it's still not natural to remember when to wrap variables inside it. I also found it important to be aware of when the state of a qubit has to be in a computational basis state or can be in a more general state. The book seems to switch between these two types of state implicitly at times and is hard to follow at first.

References

• [1] Beyond Weird - Phillip Ball
• [2] Quantum Computation and Quantum Information - Nielsen, M. and Chuang, I.
• [3] Quantum computing for the determined - Nielsen, M.
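To make the circuit concrete, here is a small linear-algebra simulation of the algorithm (my sketch in numpy, not from the post): it builds $H^{\otimes n}$ via Kronecker products, applies $U_f$ as a permutation matrix, and checks whether the first $n$ qubits would be observed as $\ket{0}^{\otimes n}$.

```python
import numpy as np

def hadamard_n(n):
    """n-fold tensor (Kronecker) product of the 2x2 Hadamard gate."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, H)
    return out

def deutsch_jozsa_is_constant(f, n):
    """Simulate the circuit on n input qubits plus one ancilla.

    f maps an integer in [0, 2^n) to 0 or 1 and is promised to be
    either constant or balanced. Basis index convention: (x << 1) | y,
    with x the first n qubits and y the ancilla.
    """
    state = np.zeros(2 ** (n + 1))
    state[1] = 1.0                              # |0...0>|1>
    state = hadamard_n(n + 1) @ state           # Hadamard on every qubit
    # U_f: |x, y> -> |x, y XOR f(x)>, as a permutation matrix
    Uf = np.zeros((2 ** (n + 1), 2 ** (n + 1)))
    for x in range(2 ** n):
        for y in (0, 1):
            Uf[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1.0
    state = Uf @ state
    state = np.kron(hadamard_n(n), np.eye(2)) @ state  # H on first n qubits
    # Probability that the first n qubits measure as |0...0>
    p_zero = state[0] ** 2 + state[1] ** 2
    return np.isclose(p_zero, 1.0)

print(deutsch_jozsa_is_constant(lambda x: 1, 3))                      # True: constant
print(deutsch_jozsa_is_constant(lambda x: bin(x).count("1") % 2, 3))  # False: balanced
```

Of course, the simulation pays the exponential cost the quantum circuit avoids; it is only meant to verify the amplitude bookkeeping above.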
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971950888633728, "perplexity": 263.25750397457927}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141732835.81/warc/CC-MAIN-20201203220448-20201204010448-00253.warc.gz"}
http://aas.org/archives/BAAS/v36n3/head2004/275.htm
8th HEAD Meeting, 8-11 September, 2004
Session 8 Pulsars and Magnetars
Poster, Wednesday, September 8, 2004, 9:00am-10:00pm

## [8.17] XMM-Newton Observation of the High Magnetic Field Radio Pulsar PSR B0154+61

M. E. Gonzalez, V. M. Kaspi (McGill University), A. G. Lyne (University of Manchester), M. J. Pivovaroff (Lawrence Livermore National Laboratory)

We present results from a deep X-ray observation of the radio pulsar B0154+61 performed with the XMM-Newton satellite. The pulsar has a characteristic age of 197 kyr, a rotation period of 2.3 seconds and an inferred dipole surface magnetic field strength of $2.1\times10^{13}$ G, some of the highest values in the radio pulsar population. Our analysis shows that no X-ray emission is detected from the position of B0154+61 with XMM-Newton. Using a blackbody model, the derived upper limits on the pulsar's temperature and luminosity are $<73$ eV and $<1.4\times10^{32}$ ergs s$^{-1}$, respectively (assuming a distance of 1.7 kpc and a column density $N_H < 3\times10^{21}$ cm$^{-2}$). When compared to the values predicted by neutron star cooling models, the above limits are found to favor those requiring rapid cooling, especially when corrections for the presence of a light-element atmosphere and relatively high magnetic field on the neutron star are made. However, the uncertainties in distance, column density and atmospheric composition prevent a definite conclusion. In addition, the limits on the temperature and luminosity of B0154+61 are found to be much lower than those exhibited by the "anomalous X-ray pulsars" (AXPs), although their spin characteristics are comparable, thus leaving unanswered the question of a radio pulsar/AXP connection.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.941314160823822, "perplexity": 4119.796776518191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928350.51/warc/CC-MAIN-20150521113208-00248-ip-10-180-206-219.ec2.internal.warc.gz"}
https://bibbase.org/network/publication/simmons-wu-knight-lopez-assessingtheinfluenceoffieldandgisbasedinquiryonstudentattitudeandconceptualknowledgeinanundergraduateecologylab-2008
Assessing the Influence of Field- and GIS-based Inquiry on Student Attitude and Conceptual Knowledge in an Undergraduate Ecology Lab. Simmons, M. E., Wu, X. B., Knight, S. L., & Lopez, R. R. CBE Life Sci Educ, 7(3):338–345, September, 2008. Combining field experience with use of information technology has the potential to create a problem-based learning environment that engages learners in authentic scientific inquiry. This study, conducted over a 2-yr period, determined differences in attitudes and conceptual knowledge between students in a field lab and students with combined field and geographic information systems (GIS) experience. All students used radio-telemetry equipment to locate fox squirrels, while one group of students was provided an additional data set in a GIS to visualize and quantify squirrel locations. Pre/postsurveys and tests revealed that attitudes improved in year 1 for both groups of students, but differences were minimal between groups. Attitudes generally declined in year 2 due to a change in the authenticity of the field experience; however, attitudes for students that used GIS declined less than those with field experience only. Conceptual knowledge also increased for both groups in both years. The field-based nature of this lab likely had a greater influence on student attitude and conceptual knowledge than did the use of GIS. Although significant differences were limited, GIS did not negatively impact student attitude or conceptual knowledge but potentially provided other benefits to learners. @article{simmons_assessing_2008, title = {Assessing the {Influence} of {Field}- and {GIS}-based {Inquiry} on {Student} {Attitude} and {Conceptual} {Knowledge} in an {Undergraduate} {Ecology} {Lab}}, volume = {7}, url = {http://www.lifescied.org/cgi/content/abstract/7/3/338}, doi = {10.1187/cbe.07-07-0050}, abstract = {Combining field experience with use of information technology has the potential to create a problem-based learning environment that engages learners in authentic scientific inquiry. This study, conducted over a 2-yr period, determined differences in attitudes and conceptual knowledge between students in a field lab and students with combined field and geographic information systems (GIS) experience. All students used radio-telemetry equipment to locate fox squirrels, while one group of students was provided an additional data set in a GIS to visualize and quantify squirrel locations. Pre/postsurveys and tests revealed that attitudes improved in year 1 for both groups of students, but differences were minimal between groups. Attitudes generally declined in year 2 due to a change in the authenticity of the field experience; however, attitudes for students that used GIS declined less than those with field experience only. Conceptual knowledge also increased for both groups in both years. The field-based nature of this lab likely had a greater influence on student attitude and conceptual knowledge than did the use of GIS. Although significant differences were limited, GIS did not negatively impact student attitude or conceptual knowledge but potentially provided other benefits to learners.}, number = {3}, urldate = {2009-09-21TZ}, journal = {CBE Life Sci Educ}, author = {Simmons, M. E. and Wu, X. B. and Knight, S. L. and Lopez, R. R.}, month = sep, year = {2008}, pages = {338--345} }
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3931444585323334, "perplexity": 5778.8216933485155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016373.86/warc/CC-MAIN-20220528093113-20220528123113-00629.warc.gz"}
https://www.physicsforums.com/threads/balinese-instrument-called-an-angklung.170123/
# Balinese instrument called an angklung 1. May 13, 2007 ### atthegates hi in class we're doing a project where we have to explain the physics behind an instrument that we either build or already have. i am going to be bringing in a balinese instrument called an angklung, which is an instrument similar to chimes except it is made out of bamboo. i was wondering whether someone can explain to me just briefly, in a few sentences, the physics behind this instrument. again, id really appreciate some help on this from someone who understands the physics behind musical instruments. thanks 2. May 13, 2007 ### Danger I must admit that I clicked on this thread expecting to find a question about airspeed indicators or oil pressure gauges. It sounds as if this is more complicated than normal windchimes. Those seem to just rely upon the resonance frequency of the material (usually glass or brass). In your case, that would be a major factor as well, but bamboo won't be nearly as consistent from one piece to another. Add to that the effect that 'segment weals' (my term; those ridges on the outside) have upon the rigidity and thus the frequency of the tubes. The internal and external diameters of the tubes will vary just as much as the length. I am not the person that you want answering this, though. I know almost nothing of musical instruments, and even less of math. 3. May 13, 2007 ### Ki Man me too It would be against guidelines if we just gave you everything without you doing any looking into of your own. If you just googled 'physics of chimes' you would have found this
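For a starting point on the physics raised in the thread: a chime tube or bamboo key is commonly idealized as a free-free beam, and under that standard Euler-Bernoulli model (an idealization, not something stated in the thread) the natural frequencies scale as

$f_n = \frac{\beta_n^2}{2\pi L^2}\sqrt{\frac{EI}{\rho A}}, \qquad \beta_1 \approx 4.730,\ \beta_2 \approx 7.853,\ \ldots$

where $L$ is the tube length, $E$ the Young's modulus, $I$ the area moment of inertia of the cross-section, $\rho$ the density and $A$ the cross-sectional area. This is why halving a tube's length roughly quadruples its pitch, and why the variable wall thickness and density of bamboo (which enter through $I$, $A$ and $\rho$) make consistent tuning harder than for glass or brass, as noted in the second post.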
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8640481233596802, "perplexity": 1044.6003747277596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00496-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/a-short-one-on-symmetric-matrices.175060/
# A short one on symmetric matrices

1. Jun 25, 2007

### Päällikkö

This isn't really homework, but close enough. I suppose this is quite simple, but my head's all tangled up for today. Anyways, given the real symmetric matrix L^T L = U D U^T, find L. I suppose L = ± D^(1/2) U^T, and it's clear this choice of L satisfies the given equation. But can it be proven that the above L actually follows from the given equation? i.e. L^T L = U D U^T = (D^(1/2) U^T)^T (D^(1/2) U^T) <=> L = ± D^(1/2) U^T? Am I making any sense?

2. Jun 25, 2007

### Dick

So you are asking if L is unique? No. Take L and U to be different unitary matrices and D=1. Then T(L).L=1=U.D.T(U) (T=transpose). But it certainly isn't necessarily true that L=+/-T(U).
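A quick numerical check of Dick's point (a sketch, not from the thread): L is determined by L^T L only up to an orthogonal factor on the left, since (QL)^T(QL) = L^T Q^T Q L = L^T L for any orthogonal Q.

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal Q
L2 = Q @ L                                        # different matrix, same Gram matrix
print(np.allclose(L.T @ L, L2.T @ L2))            # True
print(np.allclose(L2, L) or np.allclose(L2, -L))  # False: L2 != +/- L
```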
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9052454829216003, "perplexity": 3501.2850611232175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719564.4/warc/CC-MAIN-20161020183839-00357-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.ias.ac.in/describe/article/pram/044/02/0121-0131
• Conversion of a chaotic attractor into a strange nonchaotic attractor in an one dimensional map and BVP oscillator

• # Fulltext

https://www.ias.ac.in/article/fulltext/pram/044/02/0121-0131

• # Keywords

One dimensional map; Bonhoeffer-van der Pol oscillator; controlling of chaos; strange nonchaotic attractor

• # Abstract

In this paper we investigate numerically the possibility of conversion of a chaotic attractor into a nonchaotic but strange attractor in both a discrete system (a one dimensional map) and in a continuous dynamical system, the Bonhoeffer-van der Pol oscillator. In these systems we show suppression of a chaotic property, namely the sensitive dependence on initial states, by adding an appropriate i) chaotic signal or ii) Gaussian white noise. The controlled orbit is found to be strange but nonchaotic, with largest Lyapunov exponent negative and noninteger correlation dimension. Return map and power spectrum are also used to characterize the strange nonchaotic attractor.

• # Author Affiliations

1. Department of Physics, Manonmaniam Sundaranar University, Tirunelveli - 627 002, India
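The largest-Lyapunov-exponent diagnostic used in the abstract is easy to reproduce for a generic one-dimensional map. The sketch below (an illustration of the diagnostic only, not the paper's system or control scheme) estimates the exponent of the logistic map as the orbit average of log|f'(x)|; a negative value signals a nonchaotic orbit.

```python
import numpy as np

def logistic_lyapunov(r, x0=0.1, n_transient=1000, n_iter=100_000):
    """Largest Lyapunov exponent of x -> r x (1 - x), estimated as the
    orbit average of log|f'(x)| = log|r (1 - 2x)|."""
    x = x0
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += np.log(abs(r * (1 - 2 * x)))
    return total / n_iter

print(logistic_lyapunov(4.0))   # ~ log 2 > 0: chaotic
print(logistic_lyapunov(3.2))   # < 0: periodic, hence nonchaotic
```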
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8059312701225281, "perplexity": 3520.5791961184004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00730.warc.gz"}
http://libros.duhnnae.com/2017/jun8/149836438731-Nested-quasicrystalline-discretisations-of-the-line.php
# Nested quasicrystalline discretisations of the line

1 APC - UMR 7164 - AstroParticule et Cosmologie

Abstract: One-dimensional cut-and-project point sets obtained from the square lattice in the plane are considered from a unifying point of view and in the perspective of aperiodic wavelet constructions. We successively examine their geometrical aspects, combinatorial properties from the point of view of the theory of languages, and self-similarity with algebraic scaling factor $\theta$. We explain the relation of the cut-and-project sets to non-standard numeration systems based on $\theta$. We finally examine the substitutivity, a weakened version of substitution invariance, which provides us with an algorithm for symbolic generation of cut-and-project sequences.

Keywords: Multiresolution; wavelet; Pisot number; cut-and-project set; quasicrystal; self-similarity; substitution; combinatorics on words

Author: J.-P. Gazeau, Z. Masakova, E. Pelantova

Source: https://hal.archives-ouvertes.fr/
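For intuition about the construction the abstract studies, here is a minimal sketch (my illustration, not the authors' code) of a one-dimensional cut-and-project set from the square lattice: keep the lattice points whose projection onto an "internal" direction falls in a window, and read off their projections onto a "physical" line of slope 1/phi. Under these choices the result is the Fibonacci chain, with two tile lengths in ratio phi.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
norm = np.sqrt(1 + phi**2)
par = np.array([phi, 1]) / norm    # "physical" (parallel) direction
perp = np.array([-1, phi]) / norm  # "internal" (perpendicular) direction
window = (1 + phi) / norm          # acceptance window in internal space

# Select lattice points of Z^2 inside the strip and project them.
pts = sorted(
    float(np.dot([m, n], par))
    for m in range(-40, 41)
    for n in range(-40, 41)
    if 0 <= np.dot([m, n], perp) < window
)
pts = [t for t in pts if abs(t) < 20]          # stay clear of edge effects
print(np.unique(np.round(np.diff(pts), 6)))    # two tile lengths, ratio ~ phi
```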
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5665356516838074, "perplexity": 4310.85204663666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948569405.78/warc/CC-MAIN-20171215114446-20171215140446-00103.warc.gz"}
https://en.wikipedia.org/wiki/Poynting_vector
# Poynting vector

Dipole radiation of a dipole vertically in the page showing electric field strength (colour) and Poynting vector (arrows) in the plane of the page.

In physics, the Poynting vector represents the directional energy flux density (the rate of energy transfer per unit area) of an electromagnetic field. The SI unit of the Poynting vector is the watt per square metre (W/m²). It is named after its inventor John Henry Poynting. Oliver Heaviside[1] and Nikolay Umov[2]:147 independently co-invented the Poynting vector.

## Definition

In Poynting's original paper and in many textbooks, it is usually denoted by S or N, and defined as[3][4] $\mathbf{S} = \mathbf{E} \times \mathbf{H},$ where bold letters represent vectors, E is the electric field and H is the magnetic field. This form is often called the Abraham form.[5][6] Occasionally an alternative definition in terms of electric field E and magnetic flux density B is used. It is also possible to combine the electric displacement field D with the magnetic flux density B to get the Minkowski form of the Poynting vector, or use D and H to construct another.[6] The choice has been controversial: Pfeifer et al.[7] summarize and to a certain extent resolve the century-long dispute between proponents of the Abraham and Minkowski forms.

The Poynting vector represents the particular case of an energy flux vector for electromagnetic energy. However, any type of energy has its direction of movement in space, as well as its density, so energy flux vectors can be defined for other types of energy as well, e.g., for mechanical energy. The Umov–Poynting vector[8] discovered by Nikolay Umov in 1874 describes energy flux in liquid and elastic media in a completely generalized view.

## Interpretation

A DC circuit consisting of a battery (V) and resistor (R), showing the direction of the Poynting vector (S, blue) in the space surrounding it, along with the fields it is derived from; the electric field (E, red) and the magnetic field (H, green). In the region around the battery the Poynting vector is directed outward, indicating power flowing out of the battery into the fields; in the region around the resistor the vector is directed inward, indicating field power flowing into the resistor. Across any plane P between the battery and resistor, the Poynting flux is in the direction of the resistor.

The Poynting vector appears in Poynting's theorem (see that article for the derivation of the theorem and vector), an energy-conservation law: $\frac{\partial u}{\partial t} = -\mathbf{\nabla} \cdot \mathbf{S} - \mathbf{J_\mathrm{f}} \cdot \mathbf{E},$ where Jf is the current density of free charges and u is the electromagnetic energy density for linear, nondispersive materials, given by $u = \frac{1}{2}\! \left(\mathbf{E} \cdot \mathbf{D} + \mathbf{B} \cdot \mathbf{H}\right)\! ,$ where

• E is the electric field;
• D is the electric displacement field;
• B is the magnetic flux density;
• H is the magnetic field.[9]:258–260

The first term on the right-hand side represents the electromagnetic energy flow into a small volume, while the second term subtracts the work done by the field on free electrical currents, which thereby exits from electromagnetic energy as dissipation, heat, etc. In this definition, bound electrical currents are not included in this term, and instead contribute to S and u.
For linear, nondispersive and isotropic (for simplicity) materials, the constitutive relations can be written as $\mathbf{D} = \varepsilon \mathbf{E},\quad \mathbf{H} = \frac{1}{\mu}\mathbf{B},$ where ε is the permittivity and μ the permeability of the material. Here ε and μ are scalar, real-valued constants independent of position, direction, and frequency. In principle, this limits Poynting's theorem in this form to fields in vacuum and nondispersive linear materials. A generalization to dispersive materials is possible under certain circumstances at the cost of additional terms.[9]:262–264

## Invariance to adding a curl of a field

Since the Poynting vector only occurs in Poynting's theorem as a divergence ∇ ⋅ S, the Poynting vector S is arbitrary to the extent that one can add a curl of a field F to S:[9]:258–260 $\mathbf{S}' = \mathbf{S} + \nabla \times \mathbf F \Rightarrow \nabla \cdot \mathbf{S}' = \nabla \cdot \mathbf{S},$ since the divergence of the curl term is zero: ∇ ⋅ (∇ × F) = 0 for an arbitrary field F (see vector calculus identities). It is often thought that using a different vector than the classical Poynting vector will lead to inconsistencies in a relativistic description of electromagnetic fields where energy and momentum should be defined locally in terms of the stress–energy tensor.[9]:258–260 However such a transformation is consistent with quantum electrodynamics where photon particles have no defined trajectories but only a probability of being emitted or absorbed.[10]:139–141

## Formulation in terms of microscopic fields

In some cases, it may be more appropriate to define the Poynting vector as $\mathbf{S} = \frac{1}{\mu_0} \mathbf{E} \times \mathbf{B},$ where μ0 is the vacuum permeability and B is the magnetic flux density. It can be derived directly from Maxwell's equations in terms of total charge and current and the Lorentz force law only. The corresponding form of Poynting's theorem is $\frac{\partial u}{\partial t} = - \nabla \cdot \mathbf{S} -\mathbf{J} \cdot \mathbf{E},$ where J is the total current density and the energy density u is given by $u = \frac{1}{2}\! \left(\varepsilon_0 \mathbf{E}^2 + \frac{1}{\mu_0} \mathbf{B}^2\right)\! ,$ where ε0 is the vacuum permittivity. The two alternative definitions of the Poynting vector are equivalent in vacuum or in non-magnetic materials, where B = μ0H. In all other cases, they differ in that S = 1/μ0 E × B, and the corresponding u are purely radiative, since the dissipation term, J ⋅ E, covers the total current, while the definition in terms of H has contributions from bound currents which are then lacking in the dissipation term.[11] Since only the microscopic fields E and B occur in the derivation of S = 1/μ0 E × B, assumptions about any material present can be completely avoided, and the Poynting vector as well as the theorem in this definition are universally valid, in vacuum as in all kinds of material. This is especially true for the electromagnetic energy density, in contrast to the case above.[11]

## Time-averaged Poynting vector

For time-periodic sinusoidal electromagnetic fields, the average power flow per unit time is often more useful, and can be found by using the analytic representation of the electric and magnetic fields as follows (the subscript "a" denotes an analytic signal, the underbar with the subscript "m" a complex amplitude, and the superscript " * " a complex conjugate): \begin{align}\mathbf{S} &= \mathbf{E} \times \mathbf{H}\\ &= \operatorname{Re}\! \left(\mathbf{E_\mathrm{a}}\right) \times \operatorname{Re}\!\left(\mathbf{H_\mathrm{a}} \right)\\ &= \operatorname{Re}\!
\left(\underline{\mathbf{E_m}} e^{j\omega t}\right) \times \operatorname{Re}\!\left(\underline{\mathbf{H_m}} e^{j\omega t}\right)\\ &= \frac{1}{2}\! \left(\underline{\mathbf{E_m}} e^{j\omega t} + \underline{\mathbf{E_m^*}} e^{-j\omega t}\right) \times \frac{1}{2}\! \left(\underline{\mathbf{H_m}} e^{j\omega t} + \underline{\mathbf{H_m^*}} e^{-j\omega t}\right)\\ &= \frac{1}{4}\! \left(\underline{\mathbf{E_m}} \times \underline{\mathbf{H_m^*}} + \underline{\mathbf{E_m^*}} \times \underline{\mathbf{H_m}} + \underline{\mathbf{E_m}} \times \underline{\mathbf{H_m}} e^{2j\omega t} + \underline{\mathbf{E_m^*}} \times \underline{\mathbf{H_m^*}} e^{-2j\omega t}\right)\\ &= \frac{1}{4}\! \left[\underline{\mathbf{E_m}} \times \underline{\mathbf{H_m^*}} + \left(\underline{\mathbf{E_m}} \times \underline{\mathbf{H_m^*}}\right)^* + \underline{\mathbf{E_m}} \times \underline{\mathbf{H_m}} e^{2j\omega t} + \left(\underline{\mathbf{E_m}} \times \underline{\mathbf{H_m}} e^{2j\omega t}\right)^*\right]\\ &= \frac{1}{2} \operatorname{Re}\! \left(\underline{\mathbf{E_m}} \times \underline{\mathbf{H_m^*}}\right) + \frac{1}{2}\operatorname{Re}\! \left(\underline{\mathbf{E_m}} \times \underline{\mathbf{H_m}} e^{2j\omega t}\right)\! . \end{align} The average over time is given by $\langle\mathbf{S}\rangle = \frac{1}{T} \int_0^T \mathbf{S}(t)\, dt = \frac{1}{T} \int_0^T\! \left[\frac{1}{2} \operatorname{Re}\! \left(\underline{\mathbf{E_m}} \times \underline{\mathbf{H_m^*}}\right) + \frac{1}{2} \operatorname{Re}\! \left(\underline{\mathbf{E_m}} \times \underline{\mathbf{H_m}} e^{2j\omega t}\right)\right]dt.$ The second term is a sinusoidal curve $\operatorname{Re}\! \left(e^{2j\omega t}\right) = \cos(2\omega t)$ and its average is zero, giving $\langle \mathbf{S}\rangle = \frac{1}{2} \operatorname{Re}\! \left(\underline{\mathbf{E_m}} \times \underline{\mathbf{H_m^*}}\right) = \frac{1}{2} \operatorname{Re}\! \left(\underline{\mathbf{E_m}} e^{j\omega t} \times \underline{\mathbf{H_m^*}} e^{-j\omega t}\right) = \frac{1}{2} \operatorname{Re}\! \left(\mathbf{E_\mathrm{a}} \times \mathbf{H_\mathrm{a}^*}\right)\! .$ ## Examples and applications ### Coaxial cable Poynting vector in a coaxial cable, shown in red. For example, the Poynting vector within the dielectric insulator of a coaxial cable is nearly parallel to the wire axis (assuming no fields outside the cable and a wavelength longer than the diameter of the cable, including DC). Electrical energy delivered to the load is flowing entirely through the dielectric between the conductors. Very little energy flows in the conductors themselves, since the electric field strength is nearly zero. The energy flowing in the conductors flows radially into the conductors and accounts for energy lost to resistive heating of the conductor. No energy flows outside the cable, either, since there the magnetic fields of inner and outer conductors cancel to zero. ### Resistive dissipation If a conductor has significant resistance, then, near the surface of that conductor, the Poynting vector would be tilted toward and impinge upon the conductor. Once the Poynting vector enters the conductor, it is bent to a direction that is almost perpendicular to the surface.[12] This is a consequence of Snell's law and the very slow speed of light inside a conductor. See Hayt page 402[13] for the definition and computation of the speed of light in a conductor. 
Inside the conductor, the Poynting vector represents energy flow from the electromagnetic field into the wire, producing resistive Joule heating in the wire. For a derivation that starts with Snell's law see Reitz page 454.[14]

### Plane waves

In a propagating sinusoidal linearly polarized electromagnetic plane wave of a fixed frequency, the Poynting vector always points in the direction of propagation while oscillating in magnitude. The time-averaged magnitude of the Poynting vector is $\langle S\rangle = \frac{1}{2 \mu_0 \mathrm{c}}E_\mathrm{m}^2 = \frac{\varepsilon_0 \mathrm{c}}{2} E_\mathrm{m}^2$ where Em is the amplitude of the electric field and c is the speed of light in free space. This time-averaged value is called irradiance and denoted Ee in radiometry, or is called intensity and denoted I in other fields.

#### Derivation

In an electromagnetic plane wave, E and B are always perpendicular to each other and to the direction of propagation. Moreover, their amplitudes are related according to $B_\mathrm{m} = \frac{1}{\mathrm{c}}E_\mathrm{m}$ and their time and position dependences are $E(\mathbf{r}, t) = E_\mathrm{m} \cos(\omega t - \mathbf{k} \cdot \mathbf{r})$ $B(\mathbf{r}, t) = B_\mathrm{m} \cos(\omega t - \mathbf{k} \cdot \mathbf{r})$ where ω is the angular frequency of the wave and k is the wave vector. The time- and position-dependent magnitude of the Poynting vector is then $S(\mathbf{r}, t) = \frac{1}{\mu_0}E_\mathrm{m}B_\mathrm{m} \cos^2(\omega t - \mathbf{k} \cdot \mathbf{r}) = \frac{1}{\mu_0 c}E_\mathrm{m}^2 \cos^2(\omega t - \mathbf{k} \cdot \mathbf{r}) = \varepsilon_0 \mathrm{c}E_\mathrm{m}^2 \cos^2(\omega t - \mathbf{k} \cdot \mathbf{r}).$ In the last step, we used the equality ε0μ0 = 1/c². Since the time- or space-average of cos²(ωt − k ⋅ r) is 1/2 (the average value of cos² is one half, unlike the average of cos, which is zero), it follows that $\langle S\rangle = \frac{1}{2\mu_0 \mathrm{c}}E_\mathrm{m}^2 = \frac{\varepsilon_0 \mathrm{c}}{2}E_\mathrm{m}^2.$

It will be appreciated that quantitatively the Poynting vector is evaluated only from a prior knowledge of the distribution of electric and magnetic fields, which are calculated by applying boundary conditions to a particular set of physical circumstances, for example a dipole antenna. Therefore the E and H field distributions form the primary object of any analysis, while the Poynting vector remains an interesting by-product. The density of the linear momentum of the electromagnetic field is S/c² where S is the magnitude of the Poynting vector and c is the speed of light in free space. The radiation pressure exerted by an electromagnetic wave on the surface of a target is given by $P_\mathrm{rad} = \frac{\langle S\rangle}{\mathrm{c}}.$

### Static fields

Poynting vector in a static field, where E is the electric field, H the magnetic field, and S the Poynting vector. The consideration of the Poynting vector in static fields shows the relativistic nature of the Maxwell equations and allows a better understanding of the magnetic component of the Lorentz force, q(v × B). To illustrate, the accompanying picture is considered, which describes the Poynting vector in a cylindrical capacitor, which is located in an H field (pointing into the page) generated by a permanent magnet. Although there are only static electric and magnetic fields, the calculation of the Poynting vector produces a clockwise circular flow of electromagnetic energy, with no beginning or end.
While the circulating energy flow may seem nonsensical or paradoxical, it is necessary to maintain conservation of momentum. Momentum density is proportional to energy flow density, so the circulating flow of energy contains an angular momentum.[15] This is the cause of the magnetic component of the Lorentz force which occurs when the capacitor is discharged. During discharge, the angular momentum contained in the energy flow is depleted as it is transferred to the charges of the discharge current crossing the magnetic field.

## Notes

1. Julius Adams Stratton (1941). "Chap. II Stress and Energy". Electromagnetic Theory (First ed.). New York: McGraw-Hill. p. 132. "first derived by Poynting in 1884 and again in the same year by Heaviside."
2. Janusz Turowski; Marek Turowski (6 February 2014). Engineering Electrodynamics: Electric Machine, Transformer, and Power Equipment Design. CRC Press. ISBN 978-1-4665-8932-2.
3. Electromagnetism (2nd Edition), I.S. Grant, W.R. Phillips, Manchester Physics, John Wiley & Sons, 2008, ISBN 978-0471927129.
4. Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007, ISBN 81-7758-293-3.
5. Poynting, J. H. (1884). "On the Transfer of Energy in the Electromagnetic Field". Philosophical Transactions of the Royal Society of London 175: 343–361. doi:10.1098/rstl.1884.0016.
6. Kinsler, P.; Favaro, A.; McCall, M.W. (2009). "Four Poynting theorems". Eur. J. Phys. 30 (5): 983. arXiv:0908.1721. Bibcode:2009EJPh...30..983K. doi:10.1088/0143-0807/30/5/007.
7. Pfeifer, R.N.C.; Nieminen, T.A.; Heckenberg, N. R.; Rubinsztein-Dunlop, H. (2007). "Momentum of an electromagnetic wave in dielectric media". Rev. Mod. Phys. 79 (4): 1197. Bibcode:2007RvMP...79.1197P. doi:10.1103/RevModPhys.79.1197.
8. Umov, N. A. (1874). "Ein Theorem über die Wechselwirkungen in Endlichen Entfernungen". Zeitschrift für Mathematik und Physik XIX: 97.
9. John David Jackson (1998). Classical Electrodynamics (Third ed.). New York: Wiley. ISBN 0-471-30932-X.
10. Hecht, Eugene (2002). Optics (4th ed.). United States of America: Addison Wesley. ISBN 0-8053-8566-5.
11. Richter, F.; Florian, M.; Henneberger, K. (2008). "Poynting's theorem and energy conservation in the propagation of light in bounded media". Europhys. Lett. 81 (6): 67005. arXiv:0710.0515. Bibcode:2008EL.....8167005R. doi:10.1209/0295-5075/81/67005.
12. Harrington (1981, p. 61).
13. Hayt (1993, p. 402).
14. Reitz (1993, p. 454).
15. Feynman Lectures on Physics, Volume 2, Chapter 17, Section 4, and the end of Chapter 27, Section 6.

## References

• Harrington, Roger F. (1961). Time-Harmonic Electromagnetic Fields. McGraw-Hill.
• Hayt, William (1981). Engineering Electromagnetics (4th ed.). McGraw-Hill. ISBN 0-07-027395-2.
• Reitz, John R.; Milford, Frederick J.; Christy, Robert W. (1993). Foundations of Electromagnetic Theory (4th ed.). Addison-Wesley. ISBN 0-201-52624-7.
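As a quick numerical illustration of the time-averaged form ⟨S⟩ = ½ Re(Em × Hm*) above (a sketch with made-up field values, not from the article):

```python
import numpy as np

mu0 = 4e-7 * np.pi          # vacuum permeability (H/m)
c = 299_792_458.0           # speed of light (m/s)

# Linearly polarized plane wave in vacuum: E along x, H along y, B = E/c, H = B/mu0.
E_m = np.array([100.0, 0.0, 0.0])               # V/m (arbitrary amplitude)
H_m = np.array([0.0, 100.0 / (mu0 * c), 0.0])   # A/m

S_avg = 0.5 * np.real(np.cross(E_m, np.conj(H_m)))
print(S_avg)                         # ~ [0, 0, 13.3] W/m², along the propagation direction z
print(100.0**2 / (2 * mu0 * c))      # same magnitude, from <S> = E_m^2 / (2 mu0 c)
```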
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919655919075012, "perplexity": 1135.876941356957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068749.35/warc/CC-MAIN-20150827025428-00044-ip-10-171-96-226.ec2.internal.warc.gz"}
https://ccrma.stanford.edu/~jos/sasp/Optimal_Chebyshev_FIR_Filters.html
### Optimal Chebyshev FIR Filters

As we've seen above, the defining characteristic of FIR filters optimal in the Chebyshev sense is that they minimize the maximum frequency-response error-magnitude over the frequency axis. In other terms, an optimal Chebyshev FIR filter is optimal in the minimax sense: the filter coefficients are chosen to minimize the worst-case error (maximum weighted error-magnitude ripple) over all frequencies. This also means it is optimal in the $L^\infty$ sense because, as noted above, the $L^\infty$ norm of a weighted frequency-response error is the maximum magnitude over all frequencies:

$\| W \cdot (H - D) \|_\infty = \max_{\omega} \bigl| W(\omega)\,[H(\omega) - D(\omega)] \bigr| \qquad (5.32)$

Thus, we can say that an optimal Chebyshev filter minimizes the $L^\infty$ norm of the (possibly weighted) frequency-response error. The $L^\infty$ norm is also called the uniform norm.

While the optimal Chebyshev FIR filter is unique, in principle, there is no guarantee that any particular numerical algorithm can find it. The optimal Chebyshev FIR filter can often be found effectively using the Remez multiple exchange algorithm (typically called the Parks-McClellan algorithm when applied to FIR filter design) [176,224,66]. This was illustrated in §4.6.4 above. The Parks-McClellan/Remez algorithm also appears to be the most efficient known method for designing optimal Chebyshev FIR filters (as compared with, say, linear programming methods using Matlab's linprog as in §3.13). This algorithm is available in Matlab's Signal Processing Toolbox as firpm() (remez() in (Linux) Octave). There is also a version of the Remez exchange algorithm for complex FIR filters; see §4.10.7 below for a few details.

The Remez multiple exchange algorithm has its limitations, however. In particular, convergence of the FIR filter coefficients is unlikely for FIR filters longer than a few hundred taps or so.

Optimal Chebyshev FIR filters are normally designed to be linear phase [263] so that the desired frequency response can be taken to be real (i.e., first a zero-phase FIR filter is designed). The design of linear-phase FIR filters in the frequency domain can therefore be characterized as real polynomial approximation on the unit circle [229,258].

In optimal Chebyshev filter designs, the error exhibits an equiripple characteristic: if the desired response is $D(\omega)$ and the ripple magnitude is $\delta$, then the frequency response of the optimal FIR filter (in the unweighted case, i.e., $W(\omega) = 1$ for all $\omega$) will oscillate between $D(\omega) + \delta$ and $D(\omega) - \delta$ as $\omega$ increases. The powerful alternation theorem characterizes optimal Chebyshev solutions in terms of the alternating error peaks: essentially, if one finds sufficiently many such alternations for the given FIR filter order, then one has found the unique optimal Chebyshev solution [224]. Another remarkable result is that the Remez multiple exchange converges monotonically to the unique optimal Chebyshev solution (in the absence of numerical round-off errors).

Fine online introductions to the theory and practice of Chebyshev-optimal FIR filter design are given in [32,283]. The window method (§4.5) and the Remez-exchange method together span many practical FIR filter design needs, from "quick and dirty" to essentially ideal FIR filters (in terms of conventional specifications).
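For readers who want to try the Parks-McClellan flow described above, here is a minimal Python sketch using scipy.signal.remez (SciPy's implementation of the Remez exchange, the same algorithm as Matlab's firpm()); the filter length, band edges, and weights below are illustrative assumptions, not specifications from this section.

```python
import numpy as np
from scipy import signal

# Illustrative spec (not from the text): length-61 lowpass filter,
# passband [0, 0.2], stopband [0.25, 0.5] at fs = 1, with the
# stopband error weighted 10x more heavily than the passband.
numtaps = 61
bands = [0.0, 0.20, 0.25, 0.50]   # band edges (fs = 1)
desired = [1.0, 0.0]              # desired gain per band
weight = [1.0, 10.0]              # relative error weights per band

h = signal.remez(numtaps, bands, desired, weight=weight, fs=1.0)

# Inspect the equiripple behaviour predicted by the alternation
# theorem: the weighted error is equalized across the bands.
w, H = signal.freqz(h, worN=8192, fs=1.0)
pass_ripple = np.max(np.abs(np.abs(H[w <= 0.20]) - 1.0))
stop_ripple = np.max(np.abs(H[w >= 0.25]))
print(f"passband ripple: {pass_ripple:.5f}")
print(f"stopband ripple: {stop_ripple:.5f}")
```

With the 10:1 weighting, the printed stopband ripple comes out roughly ten times smaller than the passband ripple, which is exactly the trade the weighted minimax criterion makes.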
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.890880823135376, "perplexity": 2070.556039623415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295494.5/warc/CC-MAIN-20160823195815-00060-ip-10-153-172-175.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/145775-improper-integrals-2-a.html
# Math Help - Improper Integrals 2 1. ## Improper Integrals 2 evaluate the integral or state that it diverges: integral from 0 to 1 of ((x+1)/sqrt((x^2)+2x))dx 2. Partial fractions should do the job: $\int_0^1 \frac{x+1}{x^2+2x}dx = \int_0^1 \frac{x+1}{x(x+2)}dx = \int_0^1 \left[\frac{A}{x} + \frac{B}{x+2}\right]dx$ $A(x+2) + Bx = x+1$ set x = -2 to get B; set x = 0 to get A. 3. the problem has been misread, the integrand is actually $\frac{x+1}{\sqrt{x^2+2x}}.$ 4. Originally Posted by smartartbug evaluate the integral or state that it diverges: integral from 0 to 1 of ((x+1)/sqrt((x^2)+2x))dx $\int_{0}^1\frac{(x+1)}{\sqrt{x^2+2x}}dx$ let $u=x^2+2x$ so $du=2(x+1)dx$ $\int_{0}^3\frac{(x+1)}{2(x+1)\sqrt{u}}du$ $\frac{1}{2}\int_{0}^3u^{-\frac{1}{2}}du$ 5. just to mention that the integral converges by limit comparison test with $\int_0^1\frac{dx}{\sqrt x}.$
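Completing the last step above: $\frac{1}{2}\int_{0}^{3} u^{-1/2}\,du = \Bigl[\sqrt{u}\Bigr]_0^3 = \sqrt{3}$, so the integral converges to $\sqrt{3} \approx 1.7320508$. A quick numerical cross-check in Python (an addition for illustration, not part of the original thread):

```python
import numpy as np
from scipy import integrate

# Integrand from the original post.
f = lambda x: (x + 1.0) / np.sqrt(x**2 + 2.0 * x)

# quad handles the integrable 1/sqrt(x)-type singularity at x = 0.
val, abserr = integrate.quad(f, 0.0, 1.0)
print(val, np.sqrt(3.0))   # both print ~1.7320508
```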
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9943272471427917, "perplexity": 909.6471231511729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398449160.83/warc/CC-MAIN-20151124205409-00216-ip-10-71-132-137.ec2.internal.warc.gz"}
https://dessindenfants.wordpress.com/
## Monodromy Groups and Compositions of Belyi Maps

A few weeks ago, I considered how the monodromy groups of Shabat polynomials change under composition by considering several examples. I would like to explain a general phenomenon by considering the composition of Belyi maps on the sphere.

Say that we have two Belyi maps, namely $\phi, \, \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C)$ such that the composition $\Phi = \beta \circ \phi$ is also a Belyi map. (For example, a sufficient condition here is that $\beta \bigl( \{ 0, \, 1, \, \infty \} \bigr) \subseteq \{ 0, \, 1, \, \infty \}$.) I am interested in computing the monodromy group of the composition $\Phi$. To this end, I will show the following.

Proposition. Say that $\text{Mon}(\beta) \subseteq S_N$ and $\text{Mon}(\phi) \subseteq S_M$ are the monodromy groups of $\beta$ and $\phi$, respectively, as subgroups of the symmetric groups $S_N$ and $S_M$, respectively. Then $\text{Mon}(\Phi) \subseteq S_M \wr S_N$ is a subgroup of the wreath product $S_M \wr S_N := {S_M}^N \rtimes S_N$ of the symmetric groups.

## System of Equations for Computing Shabat Polynomials

A few weeks ago, Dong Quan Ngoc Nguyen (University of Notre Dame) came to visit here at Purdue. We spoke a little about the computer package Bertini (created by Daniel Bates, Jonathan Hauenstein, Andrew Sommese and Charles Wampler) and whether the homotopy continuation method can be used to compute Belyi maps and Shabat polynomials. I've been working on setting up a system of polynomial equations whose solutions give the coefficients of the Belyi maps, so it really comes down to actually finding the solutions to these equations. The hope is that a polynomial homotopy continuation method will be much more efficient than, say, using Groebner bases to find all solutions!

Let me try and set up how this would work by working through some explicit examples.

## Monodromy Groups and Compositions of Shabat Polynomials

Last week, the great Naiomi Cameron visited me for a few days to discuss some new directions about Shabat polynomials. I've been horrible about posting on this blog, so now that I've been motivated to work on Shabat polynomials again, I figured it's time for me to write!

As considered in the 1994 paper by Georgii Borisovich Shabat and Alexander Zvonkin entitled Plane trees and algebraic numbers, the rational function

$\displaystyle \beta(z) = - \dfrac {4}{531441} \, (z - 1) \, z^3 \, \bigl( 2 \, z^2 + 3 \, z + 9 \bigr)^3 \, \bigl( 8 \, z^4 + 28 \, z^3 + 126 \, z^2 + 189 \, z + 378 \bigr)$

is a Shabat polynomial which happens to be the composition $\beta = \phi \circ \Phi$ of two other Shabat polynomials. The first has monodromy group $G_\phi = Z_2$ as a cyclic group, while the second has monodromy group $G_\Phi = A_7$ as an alternating group. The monodromy group of the composition has order $|G_\beta| = 12 \, 700 \, 800 = |G_\phi| \cdot |G_\Phi|^2$. Do we have $G_\beta = Z_2 \ltimes \bigl( A_7 \times A_7 \bigr)$ as the wreath product of $G_\Phi$ by $G_\phi$?
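The order bookkeeping behind this question is easy to check numerically. Here is a small Python sketch (an illustration added here; the wreath-product order formula follows from the definition $S_M \wr S_N := {S_M}^N \rtimes S_N$ in the proposition above, and the value 12,700,800 is from the text):

```python
from math import factorial

def wreath_order(M, N):
    # |S_M wr S_N| = |S_M|^N * |S_N|, from the semidirect product
    # S_M^N x| S_N in the proposition.
    return factorial(M)**N * factorial(N)

# Example from the post: G_phi = Z_2 (order 2), G_Phi = A_7 (order 7!/2).
A7 = factorial(7) // 2      # 2520
print(2 * A7**2)            # 12700800, matching |G_beta| in the text

# That value is exactly |Z_2| * |A_7 x A_7|, i.e. the order of the
# wreath product A_7 wr Z_2 = Z_2 acting on A_7 x A_7 by swapping factors.
print(wreath_order(2, 3))   # sanity check on the formula: 2^3 * 3! = 48
```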
## "Jarden's Property and Hurwitz Curves" by Robert A. Kucharczyk

Robert A. Kucharczyk has a new paper on the ArXiv entitled "Jarden's Property and Hurwitz Curves".

## "Enumeration of Grothendieck's Dessins and KP Hierarchy" by Peter Zograf

Peter Zograf has a new paper on the ArXiv entitled "Enumeration of Grothendieck's Dessins and KP Hierarchy".

## "Generalized Onsager Algebras and Grothendieck's Dessins d'Enfants" by Chernousov, Gille, and Pianzola

Vladimir Chernousov, Philippe Gille and Arturo Pianzola have a new paper on the ArXiv entitled "Generalized Onsager Algebras and Grothendieck's Dessins d'Enfants".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 20, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7861045598983765, "perplexity": 784.5761685833298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512395.23/warc/CC-MAIN-20181019103957-20181019125457-00294.warc.gz"}
https://www.physicsforums.com/threads/trig-physics.111822/
# Homework Help: Trig Physics

1. Feb 22, 2006
### rculley1970
I am having problems with starting a certain problem. A train traveling at 297 km/h requires 1.45 km to come to an emergency stop. Find the braking acceleration, assuming constant acceleration.
Now I am not given the acceleration or time, so this one is stumping me. I have tried several formulas including:
delta X = 1/2(a)(t)^2 + Vo(t)
(v)^2 = (Vo)^2 + 2(a)(delta X)
v = Vo + a(t)
I cannot figure out how to get time or acceleration to solve for the other. The acceleration isn't due to gravity so it isn't (-9.8 m/s^2), so I am at a loss for what equation to use. Should I solve for time first? If so, what is the equation I am missing? As far as I know, I am given Vo (297), Vfinal (0), delta Y (-297), delta X (1.45), and I have already tried converting km/h to m/s, which the answer is supposed to be in. CONFUSED!!!!!

2. Feb 22, 2006
### nrqed
You wrote the equation that you need!! $v_f^2 = v_i^2 + 2 a_x \Delta x$!! That's all you need!

3. Feb 22, 2006
### rculley1970
I tried that already but will try again. I am using Vf^2 = 0 since that is the final velocity, 297 as the initial velocity, and 1.45 as delta X. I am coming up with -30417 but will keep messing with it to figure it out. I know it CAN'T be this hard. Should I be finding the time for it to stop?

4. Feb 22, 2006
### jollygood
all u need is (v)^2 = (Vo)^2 + 2(a)(delta X)
the point here is to think what happens when u hit brakes. u slow down, which is a deceleration or a negative acceleration. (opposite is speeding up, which is positive acceleration). usually in a problem like this both these situations are referred to by the term acceleration and leave you to decide.
gravity does not come in. you're moving horizontally. gravity acts only on objects travelling in the vertical direction.
so from starting velocity V0 = 297 km/h to final velocity V = 0 (i.e. to a stop): 0 = V0^2 + 2 a (delta X). u know delta X. plug in and solve for 'a', the acceleration. the final answer will be negative, proving that you are actually decelerating.

5. Feb 22, 2006
### rculley1970
OK, I have the equation: 0^2 = (82.5)^2 + 2(a)1450, changed km/h to m/s and km to m. I am coming up with -2.35 m/s acceleration. If I am wrong let me know. I am busy checking it right now.

6. Feb 22, 2006
### nrqed
right...except for the units....

7. Feb 22, 2006
### rculley1970
I know it is probably an easy problem after seeing how it is done, but I have been fighting this problem by myself for 4 days and just can't seem to figure out why I can't get the acceleration without the time. The example in the book shows: A plane brakes at (Ax) 10mi/h, after Vo of 160mi/h. this gives acceleration, initial velocity, final velocity and time can be figured out.

8. Feb 22, 2006
### rculley1970
Do you mean changing km/h to m/s?

9. Feb 22, 2006
### rculley1970
Do you mean changing km/h to m/s?

10. Feb 22, 2006
### nrqed
No. You got the right answer, but it is in $m/s^2$, not in m/s.

11. Feb 22, 2006
### rculley1970
I am taking a break for the night on it. Email if you can explain the hint a little bit more. Will be at work at it again in the morning. I know it isn't that hard of a problem but I am making it hard. I just need to figure out what I am doing wrong with the conversion for it. Thank you for your help.

12. Feb 22, 2006
### nrqed
? But you are done!! You did find the acceleration!

13. Feb 22, 2006
### nrqed
you got it right. You converted the distance and speed correctly.
I was just pointing out that you gave your final answer with the wrong units (but it's the correct numerical value).

14. Feb 22, 2006
### rculley1970
you mean -2.35 m/s^2. lol, i thought you meant I had the conversion wrong. Thank you for your help and I am going to redo the problem again in the morning just to verify. Sorry if I didn't get it soon enough, but I guess I didn't carry the units across like I should have. I need to work on that. Once again, thank you for everyone's help. By the way, this homework was already supposed to be submitted but I didn't get it done in time so I got a 0 on it. I am just trying to understand it because it may be on the exam coming up.

15. Feb 22, 2006
### nrqed
Sorry if I made you worry! And you have the right attitude: it's very important to understand this very well in preparation for the tests. Good luck!
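For anyone replaying this thread, the whole computation fits in a few lines of Python (a recap sketch, not part of the original exchange):

```python
# Recap of the thread's computation with v_f^2 = v_0^2 + 2*a*dx.
v0 = 297.0 / 3.6    # 297 km/h -> 82.5 m/s
dx = 1.45e3         # 1.45 km  -> 1450 m

a = -v0**2 / (2.0 * dx)
print(f"a = {a:.3f} m/s^2")   # -2.347 m/s^2 (i.e. about -2.35 m/s^2)

# The stopping time never had to be found, but it falls out of
# v = v0 + a*t with v = 0:
t = -v0 / a
print(f"t = {t:.1f} s")       # ~35.2 s
```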
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.935718834400177, "perplexity": 667.0461245120183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944851.23/warc/CC-MAIN-20180421012725-20180421032725-00320.warc.gz"}
https://www.physicsforums.com/threads/spin-measurements-of-an-electron.909268/
# Spin measurements of an electron

1. Mar 27, 2017
### BOAS
1. The problem statement, all variables and given/known data
Consider an electron described initially by $\psi = \frac{1}{\sqrt{10}} \begin{pmatrix} 1\\ 3 \end{pmatrix}$. A measurement of the spin component along a certain axis, described by an operator $\hat{A}$, has the eigenvalues $\pm \frac{\hbar}{2}$ as possible outcomes (as with any axis), and the corresponding eigenstates of $\hat{A}$ are $\psi_1 = \frac{1}{\sqrt{10}} \begin{pmatrix} 1\\ 3 \end{pmatrix}$, $\psi_2 = \frac{1}{\sqrt{10}} \begin{pmatrix} 3\\ -1 \end{pmatrix}$.
(a) Explain without calculation why a measurement of A returns the result $\frac{\hbar}{2}$ with certainty.
(b) If the spin-z component were now to be measured, what would be the probabilities of getting $\frac{\hbar}{2}$ and $- \frac{\hbar}{2}$, respectively?
(c) If the spin-z component is now indeed measured, and subsequently A again, show that the probability of getting $\frac{\hbar}{2}$ in the second measurement is 41/50.

2. Relevant equations

3. The attempt at a solution
For part (a), the initial state is the same as the eigenspinor with that corresponding eigenvalue, so when multiplying the state with the adjoint of the eigenspinor, of course we will get 1.
(b) For the spin-z measurement, $\psi_{z +} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\psi_{z-} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, and the corresponding probabilities are found by multiplying the state with the adjoint of the eigenspinor and then squaring. $P_{z+} = 1/10$ and $P_{z-} = 9/10$.
(c) I'm not sure how to tackle this; how do the measurements of spin-z affect the measurements of A? All I can think of is that in order to get the same measurement of A, the z component of the spins must have remained unchanged. The sum of two measurements of unchanged spin-z: (1/10 * 1/10 + 9/10 * 9/10 = 41/50)
Last edited: Mar 27, 2017

2. Mar 27, 2017
### Orodruin
Staff Emeritus
What states will the electron be in after measuring the z-component, and with what probabilities? What are the probabilities of measuring hbar/2 in the A-direction for those states?

3. Mar 27, 2017
### BOAS
The electron will either be in the spin up or spin down states, with probabilities 1/10 and 9/10 respectively.
The probability of measuring hbar/2 in the A direction for the spin up state is 1/10.
The probability of measuring hbar/2 in the A direction for the spin down state is 9/10.
Therefore, the overall probability is the sum of the two, i.e. 41/50.
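A quick numerical check of parts (b) and (c), using the spinors given in the problem statement (a sketch added for illustration, not part of the original thread):

```python
import numpy as np

psi   = np.array([1.0, 3.0]) / np.sqrt(10.0)   # initial state (= psi_1)
psi_1 = np.array([1.0, 3.0]) / np.sqrt(10.0)   # A-eigenstate for +hbar/2
z_up  = np.array([1.0, 0.0])                   # S_z eigenstate, +hbar/2
z_dn  = np.array([0.0, 1.0])                   # S_z eigenstate, -hbar/2

def prob(phi, chi):
    # Born rule: |<phi|chi>|^2
    return abs(np.dot(phi.conj(), chi))**2

# (b) probabilities of the two spin-z outcomes
p_up, p_dn = prob(z_up, psi), prob(z_dn, psi)
print(p_up, p_dn)             # 0.1 and 0.9

# (c) the z-measurement collapses the state to z_up or z_dn; sum
# over both intermediate outcomes for the second A measurement:
p_A = p_up * prob(psi_1, z_up) + p_dn * prob(psi_1, z_dn)
print(p_A)                    # 0.82 = 41/50
```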
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9799091815948486, "perplexity": 683.9332899262158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815544.79/warc/CC-MAIN-20180224092906-20180224112906-00790.warc.gz"}
http://judithcurry.com/2014/06/16/what-is-the-measure-of-scientific-success/
# What is the measure of scientific ‘success’?

by Judith Curry

Science has been extraordinarily successful at taking the measure of the world, but paradoxically the world finds it extraordinarily difficult to take the measure of science — or any type of scholarship for that matter. – Stephen Curry

The problem

The Higher Education Funding Council for England are reviewing the idea of using metrics (or citation counts) in research assessment. At occamstypewriter, Stephen Curry writes:

The REF has convulsed the whole university sector — driving the transfer market in star researchers who might score extra performance points and the hiring of additional administrative staff to manage the process — because the judgements it delivers will have a huge effect on funding allocations by HEFCE for at least the next 5 years.

This issue of metrics has a stark realization at King’s College London, where they are firing 120 scientists. The main criterion for the firings appears to be the amount of grant funding.

Using metrics to assess academic researchers is hardly something new. In my experience with university promotion and tenure, the number of publications, the number of citations (and H-index), and the research funding dollars all receive heavy consideration. It is my impression that the more prestigious institutions pay less attention to such metrics, and rely more on peer review (both internal and external). In my experiences on the AMS Awards Committee and the AGU Fellows Selection Committee, the number of publications and H-index are considered prominently.

What are the responses of scientists to this? Well, most just play the game in a way that ensures they maintain job security. There are a few interesting perspectives on all this that have emerged in recent weeks.

The most thought-provoking essay is from The Disorder of Things, excerpts:

Whilst metrics may capture some partial dimensions of research ‘impact’, they cannot be used as any kind of proxy for measuring research ‘quality’. We suggest that it is imperative to disaggregate ‘research quality’ from ‘research impact’ – not only do they not belong together logically, but running them together itself creates fundamental problems which change the purposes of academic research.

Why do academics cite each others’ work? This is a core question to answer if we want to know what citation count metrics actually tell us, and what they can be used for. Possible answers to this question include:
• It exists in the field or sub-field we are writing about
• It is already well-known/notorious in our field or sub-field so is a useful reader shorthand
• It came up in the journal we are trying to publish in, so we can link our work to it
• It says something we agree with/that was correct
• It says something we disagree with/that was incorrect
• It says something outrageous or provocative
• It offered a specifically useful case or insight

[Citations] cannot properly differentiate between ‘positive’ impact and ‘negative’ impact within a field or sub-discipline – i.e. work that ‘advances’ a debate, or work that makes it more simplistic and polarised. Indeed, the overall pressure it creates is simply to get cited at all costs. This might well lead to work becoming more provocative and outrageous for the sake of citation, rather than making more disciplined and rigorous contributions to knowledge.

On ‘originality’ – work may be cited because it is original, but it may also be cited because it is a more famous academic making the same point.
Textbooks and edited collections are widely cited because they are accessible – not because they are original. Moreover, highly original work may not be cited at all because it has been published in a lower-profile venue, or because it radically differs from the intellectual trajectories of its sub-field. There is absolutely no logical or necessary connection between originality and being cited.

Using citation counts will systematically under-count the ‘significance’ of work directed at more specialised sub-fields or technical debates, or that adopts more dissident positions.

[If] we understand ‘significance’ as ‘the development of the intellectual agenda of the field’, then citation counts are not an appropriate proxy.

To the extent that more ‘rigorous’ pieces may be more theoretically and methodologically sophisticated – and thus less accessible to ‘lay’ academic and non-academic audiences – there are reasons to believe that the rigour of a piece might well be inversely related to its citation count.

An article in Times Higher Education reports:

Academics’ desire to be judged on the basis of their publication in high-impact journals indicates their lack of faith in peer review panels’ ability to distinguish genuine scientific excellence, a report suggests.

Specifically with regards to using research funding as a metric: Philip Moriarty has a post How Universities Incentivise Academics to Short-Change the Public. Excerpts:

What’s particularly galling, however, is that the annual grant income metric is not normalised to any measure of productivity or quality. So it says nothing about value for money. Time and time again we’re told by the Coalition that in these times of economic austerity, the public sector will have to “do more with less”. That we must maximise efficiency. And yet academics are driven by university management to maximise the amount of funding they can secure from the public pot. Cost effectiveness doesn’t enter the equation. Literally.

Consider this. A lecturer recently appointed to a UK physics department, Dr. Frugal, secures a modest grant from the Engineering and Physical Sciences Research Council for, say, £200k. She works hard for three years with a sole PhD student and publishes two outstanding papers that revolutionise her field. Her colleague down the corridor, Prof. Cash, secures a grant for £4M and publishes two solid, but rather less outstanding, papers. Who is the more cost-effective? Which research project represents better value for money for the taxpayer? …and which academic will be under greater pressure from management to secure more research income from the public purse?

And finally, a letter to the editor of PNAS entitled Systemic addiction to research funding.

Trending

Daniel McCabe has an essay on The Slow Science Movement. Excerpts:

Today’s research environment pushes for the quick fix, but successful science needs time to think. There is a growing school of thought emerging out of Europe that urges university-based scientists to take careful stock of their lives – and to try to slow things down in their work. According to the proponents of the budding “slow science” movement, the increasingly frenetic pace of academic life is threatening the quality of the science that researchers produce.
As harried scientists struggle to churn out enough papers to impress funding agencies, and as they spend more and more of their time filling out forms and chasing after increasingly elusive grant money, they aren’t spending nearly enough time mulling over the big scientific questions that remain to be solved in their fields. Among those who have sounded the alarm is University of Nice anthropologist Joël Candau. “Fast science, like fast food, favours quantity over quality,” he wrote in an appeal he sent off to several colleagues in 2010. “Because the appraisers and other experts are always in a hurry too, our CVs are often solely evaluated by their length: how many publications, how many presentations, how many projects?”

From Dylan’s Desk: Watch this multi-billion-dollar industry evaporate overnight. Excerpts:

Imagine an industry where a few companies make billions of dollars by exerting strict control over valuable information — while paying the people who produce that information nothing at all. That’s the state of academic, scientific publishing today. And it’s about to be blown wide open by much more open, Internet-based publishers. Indeed, Academia.edu, PLOS, and Arxiv.org are doing something remarkable: They’re mounting a full-frontal assault on a multi-billion-dollar industry and replacing it with something that makes much, much less money. They’re far more efficient and fairer, and they vastly increase the openness and availability of research information. I believe this will be nothing but good for the human race in the long run. But I’m sure the executives of Elsevier, Springer, and others are weeping into their lattes as they watch this industry evaporate. Maybe they can get together with newspaper executives to commiserate.

Dorothy Bishop has a post Blogging as post publication peer review: reasonable or unfair? Excerpts:

Finally, a comment on whether it is fair to comment on a research article in a blog, rather than going through the usual procedure of submitting an article to a journal and having it peer-reviewed prior to publication. The authors’ reactions: “The items you are presenting do not represent the proper way to engage in a scientific discourse”. I could not disagree more. [W]hat has come to be known as ‘post-publication peer review’ via the blogosphere can allow for new research to be rapidly discussed and debated in a way that would be quite impossible via traditional journal publishing. In addition, it brings the debate to the attention of a much wider readership. I don’t enjoy criticising colleagues, but I feel that it is entirely proper for me to put my opinion out in the public domain, so that this broader readership can hear a different perspective from those put out in the press releases. And the value of blogging is that it does allow for immediate reaction, both positive and negative.

From occamstypewriter on altmetrics:

One thing that has changed of course is the rise of alternative metrics — or altmetrics — which are typically based on the interest generated by publications on various forms of social media, including Twitter, blogs and reference management sites such as Mendeley. They have the advantage of focusing minds at the level of the individual article, which avoids the well known problems of judging research quality on the basis of journal-level metrics such as the impact factor. Social media may be useful for capturing the buzz around particular papers and thus something of their reach beyond the research community.
There is potential value in being able to measure and exploit these signals, not least to help researchers discover papers that they might not otherwise come across — to provide more efficient filters as the authors of the altmetrics manifesto would have it. But it would be quite a leap from where we are now to feed these alternative measures of interest or usage into the process of research evaluation. Part of the difficulty lies in the fact that most of the value of the research literature is still extracted within the confines of the research community. That may be slowly changing with the rise of open access, which is undoubtedly a positive move that needs to be closely monitored, but at the same time — and it hurts me to say it — we should not get over-excited by tweets and blogs.

JC reflections

Research universities in the 21st century are in a transition period, as the fundamental value proposition of the research university is being questioned in the face of funding pressures. It’s time to start re-imagining the 21st century research university. More on this topic will be forthcoming.

My main reflection on metrics is that you get what you count. If you count numbers, then numbers are what you will get. If you want originality, significance, robustness, then counting citations, dollars, and numbers of publications won’t help. If you want impact beyond the ivory tower, such as research that stimulates or supports industry or informs policy making, then counting won’t help either.

In looking back at my own history of funding, publication productivity and citations, here is what I see. My time at the University of Colorado (mid-1990s to 2002) stands out as the period where I brought in large research budgets ($1M+ per year) and cranked out a large number of papers, only a few of which I regard as important. I was definitely in ‘no time to think’ mode, spending my time writing grant proposals and editing graduate student manuscripts.

With regards to citations, my papers with the largest number of citations are the 2005 hurricane paper and a review article on Arctic clouds. My papers that I truly regard to be scientifically significant have relatively few citations, although the citations on these fundamental papers keep trickling in.

My own rather extended postdoc period (4 years) allowed me lots of time to think; I despair for the current generation of young scientists who are under enormous pressure to crank out the publications and to start bringing in research funds so they can be competitive for a faculty position. I suspect that the dynamics of all this will change, largely fueled by the internet.

So does anyone wonder why academic climate researchers crank out lots of papers, try to get them published in Nature, Science, or PNAS, and don’t worry too much whether their paper will stand the test of time? Scientists are following their reward structure – from their employers and from professional societies that dish out awards.

### 193 responses to “What is the measure of scientific ‘success’?”

1. Your name is used to describe something like watts and kelvin.
• AndrewZ I propose the “curry”, a unit for measuring the temperature of a debate. The climate change debate generally tends to be in the megacurry range, or about 1 vindaloo.

2. Thank you, Professor Curry, for your efforts. I am convinced, however, that integrity cannot be restored to science unless we first accept that Aston gave valid reasons to fear Earth might be accidentally converted into a star in August 1945.
If we forgive those who deceived us for the past sixty-nine years (2014 – 1945 = 69 yrs), then government science may again be a tool to benefit society as a whole rather than an instrument of our political leaders.

3. “What is the measure of scientific ‘success’?” Easy,
1) Getting other scientists to consider, discuss, rebut or build on your work.
2) Succeeding in predicting phenomena from general principles derived from basic physical laws.
3) Getting supporters of the paradigm du jour to admit you have successfully predicted phenomena from general principles derived from basic physical laws.
• TJA No, it involves the ability to get editors fired from journals, the ability to get your smears of other scientists into the press, and the aggregation of a large number of blindly loyal followers who don’t understand a thing you do, but defend it relentlessly. Comparison of theory to data has nothing to do with it.
• darrylb Unfortunately, the scientific community, in the majority of disciplines, has gradually evolved, to some degree, from what Tallbloke has written to that of TJA. It is due in part to the nature of politics and funding. It is a manifestation of what Ike warned about. Talking with a team of medical doctors in different areas of expertise in a casual setting today, a general conclusion upon which they all seemed to agree is that the general finding of a group will be that of the loudest person in the room. :(
• TJA: You must have read ‘Against Method’ by Paul Feyerabend. ;-) Kicking, biting, scratching – Anything goes. Only very occasionally is an anti-paradigm theory strong enough to overcome the inertia of vested institutional interest. I believe I’m getting closer. My co-researcher and I have been progressing our solar-planetary theory steadily in the background. Along with the latest paper from McCracken et al and Rick Salvador’s model we have a compelling explanation for the C14 and Be10 record which the hockey jockeys don’t come within 9900 years of matching.

4. Scientific ‘success’ lies in discovering, not inventing, nature’s rules.

5. rls Dr Curry: If you had an unlimited budget to do climate research, with no constraints, how would that money be spent? Would the research money mostly go toward hiring people, or mostly for plant and equipment?
• Good question. I would make sure the observations are funded (satellite, monitoring, and important process experiments). I would then spend funding on basic research related to climate dynamics (none of the impacts stuff or endless analysis of model output), including solar physics. We need to entrain physicists, mathematicians, engineers, chemists and computer scientists for an infusion of new approaches to understanding the climate system.
• rls Thank you Dr Curry. Maybe an independently funded project is needed, with the goals you just outlined. Need to find a few billionaires interested in more knowledge and a better world.
• GaryM I hereby nominate Dr. Curry for climate czar. If we have to have socialized science, we might as well have someone lead it who has a clue about how it should be done.
• darrylb Add Statisticians to the list!
• Kneel If you feel it’s possible, would you explain why you feel this sort of thing isn’t currently being funded? As in: internal vs external, science vs politics etc. Thanks.
• Mike Hock “We need to entrain physicists, mathematicians, engineers, chemists and computer scientists for an infusion of new scientific approaches to understanding the climate system.” Fixed that for ya!
• Philbert Observations are the cornerstones of science, yet the funding and expertise to operate a credible climate data network are constantly under budget and personnel stress. How many papers would be published without data? Many of these networks were designed many decades ago, based on an agency’s mission, not climate change detection. You really need to understand the data before you use it. Without painstaking review of the station metadata and data values, you run great risks with your analysis. RLS asked a very good question; Dr. Curry’s answer is just the first step in a direction that some of the great scientists have followed to build our civilization.
• David Young Judith, This is exactly right.
• R Johnson-Taylor I’d also suggest that something be done about data quality, data definition and semantics.
• timg56 I’m interested in seeing how people find fault with this recipe. I’m sure some will.
• Getting a robust set of observations with genuine global coverage is the most crucial step, and one that still needs to be taken. We should still be at the stage of building and calibrating the wind tunnel (so to speak), but too many people give in to the temptation of trying to use it before it is ready.
• David L. Hagen rls See Bjorn Lomborg and the Copenhagen Consensus for similar perspectives on how best to spend the budget for climate and energy.
• David L. Hagen One measure of success would be to cut university administration by 90%. See: The Clever Stunt Four Professors Just Pulled to Expose the Outrageous Pay Gap in Academia. 4 Profs to replace 1 president.
• Consensus Climate Science is Dead in the Water. A new, diverse team, with no Consensus Constraints, will come to understand Climate Natural Variability.

6. David Springer You grade student papers as a measure of their success, right? So what you need is to get your papers graded by those who know more than you do about the subject. Of course that means engineers must start grading the papers produced by scientists. I’ve been doing that since I retired from Dell 15 years ago as service to mankind. ;-)
• John R T Was the “… service to mankind” your retirement? Is this a joke, the joke-intro? Which of the four is an actual sentence? — John Moore
• Mike Hunt You have to sleep sometime, Curry.
• timg56 I had my dad check my homework and papers through college. When some of my calculus classmates questioned that, I explained he was a Chemical Engineer and understood the subject not only better than me, but better than the TA. Must say though it was tough when he marked problems I got wrong. He always used a pen, which required me recopying the entire work, even if there was a single error.

7. Dr. Ioannidis [“Why Most Published Research Findings Are False,” in PLOS Medicine] found that the more popular an idea becomes and the more researchers the idea attracts, the worse the resulting science will be. When you compare the assumptions used by Ioannidis to what we see in climate science, the reliability of global warming research can be expected to be far worse, and so it is. The bias of Western AGW researchers isn’t a tendency, it’s a given, so climate researchers will come up with wrong findings all of the time, not just most of the time. And, among all possible motivations, climatists are actually being paid out of the limitless purse of the government and academia’s promise of lifetime tenure to make evidence and models dance to any tune they wish to play; accordingly, the climatists will always succeed in “proving wrong theories right,” whatever it takes.

8. DocMartyn Patents count as well. It took me more than 25 years to build up the knowledge to get to the first stage of drug design; I don’t think the younger generation will be allowed to incubate for this long. Another tip is that after the new Dean has laid out his strategic vision for the institute and asks “Any questions?”, do not ask “Are you insane?”, as although everyone in the room wanted to know the answer, this is not a good career move.

9. A fan of *MORE* discourse Three prescient articles and lectures:
• Alberts et al: Rescuing US Biomedical Research from its Systemic Flaws
• Resnick: Systemic Addiction to Research Funding
All agree the system is breaking down, all agree that changes are coming, none can foresee what these changes can be/should be/will be. Bottom Line “Deans and Chairs can count, but they can’t read!” [decorative LaTeX emoticon]
• Mike Hunt Bottom Line “Deans and Chairs can count, but they can’t read!” [decorative LaTeX emoticon] ??????????????????????

10. David Wojick I have been working on these issues for several years and the calls for revolution are both aimless and pointless (because they are aimless). The Internet has already changed science, but not by making the fundamental structures go away. The next big change will be when the US Public Access program finally hits, but that too will not change the fundamentals. Happily, science is safe for the foreseeable future. For those who preach revolution, bear in mind that we are talking about the activities of millions of people around the world, who publish a million articles a year. Fads are just that, no matter how loud.
• Steven Mosher You have been working on this for years. What’s the measure of your success?
• David Wojick First I was invited to blog for the Society for Scholarly Publishing. http://scholarlykitchen.sspnet.org/author/dwojick/ Then I started a subscription newsletter. http://insidepublicaccess.com/issues.html Along the way I have been funded by several groups. Making a living is a first-order measure of success.
• Steven Mosher so total failure in terms of measurable changes
• timg56 Why so harsh, Mosher?
• Steven Mosher harsh? Water is wet is not harsh. david claims to have worked on this problem. there is no measurable success. That says nothing about david or the quality of his work. many problems are worked on with no measurable success. Gosh, decades of trying to reduce uncertainty on sensitivity. is noting that harsh? nope. water is wet.
• David Wojick Not sure what you mean by failure, Mosher. No one is measuring the changes, so a lack of measured (not measurable) changes is no failure on my part. Not measuring something does not mean it is not changing. No one is measuring the growth of the trees in the forest, yet they grow anyway.
My current institution (Georgia Tech) was of the top down variety; University of Colorado was more bottom up. 14. GaryM The place of the university is not to generate research funds, any more than it is to field a top flight football or basketball team. The place of a university is, or should be, to teach. To the extent that research can be conducted to facilitate that process it should be welcomed. But the whole “scientific research” industry is a product of government taking ever more control of what passes for education. The US federal government took over the student loan industry, thus driving up the cost of an education. I worked my way through undergrad at a fairly decent university. There is no way my son or daughter could do the same. The progressives who run the government have also found that “science” can be used to give them great propaganda to help them maintain and increase their power. From climate, to health, to the environment, to technology, the government decides who gets funded, and therefore who gets hired, retained, tenured…. Give them5 more years, and the government is going to start dictating university level curricula, just like they are truing to do at the elementary and secondary level already. When you feed at the government trough, the government sets the menu. • rls The book “Coming Apart” shows that often the influential people in academia, government, and media are close; in mind, and in social context -Crony Grantism. 15. There is an entire body of research out there on how to fix science. A fundamental first is that what is needed is enough baseline funding for every scientist to do some science without having to compete for a grant. Who else has a job where you need to get outside grants to do what your salary pays you to do? Imagine if Walmart told every newly hired cashier, “Now go out and get a grant for the cash register you will need.” Or imagine a newly hired pilot for American Airlines who is now told, “Great you can start flying as soon as you get a grant to buy yourself an airplane”. Yet that is how scientists are supposed to do it. But how to pay for universal baseline funding for everyone who is on a salary? It is so easy! Example: “Cost of the NSERC Science Grant Peer Review System Exceeds the Cost of Giving Every Qualified Researcher a Baseline Grant.” Accountability in Research: Policies and Quality Assurance 26 Feb Volume 16, Issue 1, 2009. Dismantle the bloated administrative oversight and use it for baseline funding and departmental level support for all the researchers. My Alma Mater recently created a “Dean of Professionalism” whose job includes creating and overseeing a dress code for scientists. I kid you not! Let’s start by taking the salary for such deans and putting it into baseline funding. It’s not like the solutions aren’t out there, well researched, cited, and published. Science as a system simply ignores the truth. And I certainly don’t expect my little rant to change anything. • if you want to FIX science, get rid of Consensus. Real Science is always Skeptical. 16. I don’t think science is in decline, much, but I do think the institutions representing (or claiming to represent) science have gone through a bad patch that has lasted a couple of decades. Maybe the internet will fix the publishing problems. I would rather see professional networking fix the scientist issue. Which it could. Most importantly, I am very hopeful that MOOCs can fix the problems with universities. 
It should give academics another metric that they can use instead of cite counts and publications. Sadly, it will then bring in ‘star power’ as a metric to be used against the. “How many hits does your lecture have?” 17. When I started as a scientist every department provided their researchers with a telephone, postage, electricity, furniture, and a shared administratorve staff who did typing and such. The department had two full time technicians and some heavy duty equipment (including a transmission electron microscope) that was shared by all. When I left the newly hired professor was shown an empty wood framed space in a new building. Their first assignment was to get a grant to put in drywall, doors, hook up the lights and the plumbing and buy furniture, and they had to pay 15% of each grant into the university as “overhead” to cover the cost of janitorial service. 18. Raving “What is the measure of scientific ‘success’?” Science has become industry. Follow the money and consult an economist 19. When I started science a typical grant application was two pages and the department head signed off to show you actually worked in a university. When I left the letter of intent took 15 pages, 12% were invited to apply for the grant. The grant is itself was nearly 100 pages and took 6-8 weeks to write if you were lucky enough to get through the first hurdle and it had to be reviewed by staff from two different offices of deans of research before being finally sent off. • Raving Are there professional grant proposal writing services? Are there paid lobbyists for use with (scientific) funding agencies? • Of course there are! After I was shown the door one of the deans where I was exited invited me to come back and join his group of professional grant writers. They charge a lot. • “… join his group of professional grant writers. They charge a lot.” Now there is a great cottage industry. 20. Think of the cost of what gets called “climate science” and the trillions that turn on its theories. Now, every theory about climate that I’ve ever heard involves the deep hydrosphere, and in a big way. So the deep hydrosphere has a traffic problem? You wish. Mawson went to the Antarctic to find out stuff. (That’s the place where the melty bits are also the geologically active bits – but we’re only supposed to talk about the melty, ’cause it’s on top and easy to see without going anywhere.) What happens in Antarctic exploration now? A bungling zealot with a manic laugh and flair for selling biochar goes there to affirm a dogma – and can’t get past the ice. Peer review that, suckers. Nope, curiosity and empiricism are on the fade. Better watch out. 21. Why do we do what we do? Why do some of us risk lives defending the nation? Why do we work behind security barriers which preclude publication? Good questions, but many do. Those of us scientists who have been through a war and lost good friends, have easy answers to these questions, but count ourselves luck to have survived.. Obviously there are more important things in life than publication. We do try to separate our professional lives from our private lives with various degrees of success. But to be a good scientist, you have to do it 24/7, publication is just a privilege you might occasionally enjoy. This is a competitive world in which rewards come rarely to most of us. Sorry I can’t provide better answers to the above. 22. jim2 Maybe universities could cooperatively review papers and evaluate them using better scientific criteria than cites? 
It’s a tough problem due to the momentum that develops in a field (see climate science) and makes it difficult for mavericks who might swim against the current but in fact be right. But cites don’t mean much, AFAICT. • Yes, let’s appoint a dean of cooperativety reviewing! Can’t be any worse than a dean of dress code. • DaveW I agree with FTTW. Universities already pre-review grant applications to make sure they are on-target for priority areas and suppress anyone with a new idea. A better idea would be to let academics come up with the ideas that need to be explored and send the administrators to the unemployed. 23. At least in the social sciences there is an imbalance in the intensity of incentives to write and the incentives to read. The attention economy constraint that we all face applies with a vengeance to academia. People are desperate for rationales to not read things, even in their own fields, and trying to get promotion, publication, and granting decisions based on thoughtful assessments of articles is almost impossible. Prof. Curry reflects this reality with her observation that her most important papers are the least read and cited. One can flip this on its head and note that if you track down citations of your own work, the overwhelming majority are incidental and have almost no engagement with the thrust of what you were doing. My assessment is that we need to rebalance the incentive structure to reward more people to continuously synthesize the state of research in a given field, and perhaps also translate results for those in other fields. There can be a strong symbiosis between the prolific publishers and the omnivorous synthesizers–one hallway conversation between the two can catalyze new insights or eliminate huge amounts of duplicative effort. • Thank you for this: My assessment is that we need to rebalance the incentive structure to reward more people to continuously synthesize the state of research in a given field, and perhaps also translate results for those in other fields. There can be a strong symbiosis between the prolific publishers and the omnivorous synthesizers–one hallway conversation between the two can catalyze new insights or eliminate huge amounts of duplicative effort. The topic of a forthcoming post: The Art of Integration • Peter Lang I hope the “Art of Integration” post will be relevant to and focused on solving the real world problem – i.e. delivering the science that is relevant for policy analysis. • AK [… O]ne hallway conversation between the two can catalyze new insights or eliminate huge amounts of duplicative effort. How about a discussion in the comments section of a blog? Ignoring all the surrounding noise? • Skiphil an interesting comparison/contrast with one prominent case in the humanities (I’m not sure how common this is, but I am aware of a number of people in Philosophy who attain tenure based upon only a few published articles, where some departments emphasize quality over quantity): the eminent American philosopher John Rawls received tenure at Cornell and then (soon after) MIT at a time when he had published ‘only’ 3 journal articles and 3 book reviews: http://adrianblau.files.wordpress.com/2013/06/rawls-publications.pdf One may argue that this is merely an extraordinary case for soneone judged early on as bearing great future promise, but it may also be a suggestion for many fields that quality should matter much more than quantity. 
Rawls did not publish his seminal “A Theory of Justice” until he was 50 years old, although some of the articles preceding it were regarded as highly influential. Still, publishing ‘only’ 1-2 articles per year (fewer in the early years), his university career would have been derailed early on according to number-crunching counts of papers. Instead he came to be regarded as one of the most eminent philosophers of the 20th century (regardless of whether one agrees with many of his arguments or not). Interesting case, I think….

• Can we give big bonus points to the guys that highlight Corrigenda? I always thought a Corrigenda should be worth about 25 negative brownie points, but I hardly see anyone referencing them.

• Actually, measuring success in chemistry at pharmaceutical companies is a huge problem. Medicinal chemists formerly at Astra-Zeneca described a dysfunctional incentive system that favored forwarding large numbers of barely tweaked “back-up” compounds (derived from high-throughput screening) that usually turned out to fail whenever the lead compound failed (so providing little actual “back-up”). Since almost all drug candidates fail, distinguishing good and bad performance in the drug-development system is almost as hard as judging the contributions of pure researchers. Good work and bad work both usually lead to failure.

• Mike Hunt
This kind of bs about how to measure success only exists in soft sciences. Can you imagine chemists arguing about how to measure success?

• Mike Hock
+1

• michael hart
+1 Three Mikes in a row!

• michael hart
You can often see them punching the air in a departmental mass-spectrometry lab. I was gobsmacked by a guy who could recognise the tin isotope patterns at 10 yards.

• Mike Unninglyhiddencomicname
3rd Mike – try saying the names of the first two out loud…

• Anathema to the Scientific Method, Popper / Einstein,
One goddam counter example knocks out yr theory as a contender.
H/t Marlin Brando.

• Piffle, bts. If twenty of my serfs agree I am a fine master, I am surely justified in beating the one who says I’m not. I see it not as punishing an insolent lout but rather as rewarding the loyalty of my twenty consensus serfs.

• There’s somethin’ wrong with that argument, mosomoso, but serfs are uncertain as ter what.

• Steven Mosher
“Can you imagine chemists arguing about how to measure success”
Yes.

• David Springer
Imagination should be your middle name, Mosher. Is it?

• David Young
Steve, your observation about synthesizing existing results is a very good one. There is tremendous value in a clear and careful exposition of well-known results. It is also an excellent teaching vehicle.

• Steve, as an economic policy adviser covering a broad range rather than a narrow speciality, and having knocked about the world a bit and done a lot of things other than economics, I found that I had an ability to connect disparate fields and information and synthesize it, which to me was very valuable and meant that I provided much better advice. It’s hard to see connections if you’re not exposed to things outside your narrow field. So I support your suggestion.

24. My personal opinion is that science as practiced in universities is about to implode under its own administrative, money-centred bloat. What is being described at Kings is going on all over North America as the money pot gets smaller and smaller and the people at the top of the pyramid get bigger and bigger salaries but are left to “eat their young” just to stay alive.
I predict real science will become the stuff of blogs like this until the implosion is over.

• Sparrow
>My personal opinion is that science as practiced in universities is about to implode under its own administrative, money-centred bloat.
You mean like college athletic departments? Have you compared head coach pay packages to what they pay in science departments?
>I predict real science will become the stuff of blogs like this until the implosion is over.
Just remember you are turning over stewardship of science to a virtual domain, the internet. It’s 99.9% digital; there is no guarantee any of this stuff would survive a Carrington event. Might want to have a backup plan.
Jack Smith

• I would suggest you look at Science 2.0 as a possible model on which to build the idea of blog-based publishing. The comments mean a lot of garbage to wade through, but the good stuff does shine out eventually. As for a Carrington event, that can be prepared for and data can be protected.

25. ‘What you count is what you get.’
‘Well, we serfs always understood that doing science involved measurement, but this is ridiculous or monstrous or er, maybe grotesque?

26. naq
Supportable facts about this world we all live in. I hope.

27. -1
Obviously you have to define what science is first, before you can attempt to measure it.
Andrew

• science is already defined. with a measure, maybe we can figure out what definition they are using.

• naq
You know it cousin.

28. “My main reflection on metrics is that you get what you count. If you count numbers, then numbers are what you will get.”
Excellent point, as that was what I was thinking in reading the article. I guess that means I fully endorse Dr. Curry’s reflection.

29. Walt Allensworth
In the US it is now all about getting funding and publishing what the administration in power wants.

• rls
There needs to be a non-public climate research project, with the goals outlined by Dr Curry in her comment to me near the top of this post. We only need a few wealthy individuals to kick this off.

30. stevefitzpatrick
As far as I can tell, paywall journals contribute less than nothing to the process of disseminating research information. They provide negative ‘value added’, and so ought not to exist. Like newspapers, journals have long since outlived their usefulness. They will eventually disappear, as they should, but it will be a slow and costly death…. both financially and socially. Governments could put them out of their suffering instantly by requiring that all published papers based on public funding be publicly available without cost. I expect it will (and should!) eventually happen, but I think not any time soon. Like the rapid crumbling of the ‘iron curtain’, it will be an end that seems too long arriving, but it will happen very fast when it starts.

• A fan of *MORE* discourse
stevefitzpatrick foresees “Governments could put [paywall journals] out of their suffering instantly by requiring that all published papers based on public funding be publicly available without cost.”
In US biomedical research, this requirement has been in place for the past five years, stevefitzpatrick!
NIH Public Access Policy (Omnibus Appropriations Act, 2009): “All investigators funded by the NIH submit or have submitted for them to the National Library of Medicine’s PubMed Central an electronic version of their final, peer-reviewed manuscripts upon acceptance for publication, to be made publicly available no later than 12 months after the official date of publication.”
It works, too.
PUBMED provides instant access to all US-funded biomedical research articles since 2009. Nowadays mathematicians and physical scientists are joining the open-access movement. Economists, not so much. Good on `yah, NIH!

• ianl8888
An experiment in open publishing is currently occurring on the Jo Nova website. It is being published in instalments and is really very interesting, both in content and design. It is an interesting experiment, well worth following, and breaks the “Journal” mode of peer review and paywalling into itty-bitty pieces. Naturally this leaves the door open for nutty comments as well as constructive ones, but it is a very interesting venture.
NOTE: open publication of the new hypothesis is not completed yet; I make no comment on the validity of the hypothesis at this point. For myself, I am content to follow the process through and then ask critical questions.

31. Hi Judy,
“It is my impression that the more prestigious institutions pay less attention to such metrics, and rely more on peer review.”
Each university has its own culture. At Caltech the most important factor in a tenure vote is the reference letters. In addition, we read two or three of the candidate’s papers before we vote. The candidate’s funding level is excluded from the discussion, and when I was the engineering dean I never discussed expectations for funding with the new professors. In engineering, citations have major problems as a quality index because what is most important in the long run is the application in commercial products, military systems, or scientific instruments.
Dave

• Hi Dave, this confirms my impression. The engineering culture at different universities is interesting; some do seem to emphasize publications and citations rather than applications.

• Harold
It depends on where in the pipeline the engineering research is. Some research is one step removed from commercial implementation, and some is several. The former should be measured by commercial potential, and the latter by other criteria. More and more basic research, for example in nanotechnology and materials, is several steps removed from commercialization. The pot of gold is still there with these, but advances will enable more applied research, which in turn will enable commercial possibilities. In the end, all engineering research is there to enable commercialization of something, but the distance between some of this basic research and something that makes money may be so great that nobody is going to fund it for the commercial potential. If a patent isn’t in the cards in 10 years or less, it’s not going to get the kind of funding that something that might be in production in 5 would. This isn’t really all that new; it’s just that the private sector isn’t funding this basic research like it once did. Killing Bell Labs, for example, killed a lot of privately funded basic research. This is the dilemma Eisenhower was talking about.

32. The education system isn’t helping when a computer science professional gets ensconced as dean of the science and math department and suggests simulating all the labs. “Computers can do anything”. Even in my field of elementary particle physics some theoreticians are of the opinion that we don’t need any more measurements.
Charles
33. David Young
The recent editorial in the Economist is a good summary of some of the problems with the academic reward structure. I’ve seen a lot of grant proposals, and the successful grant getters at the best universities are very well paid by any standard. A lot of academics have their own companies, and some of them have been quite lucrative. Now there is a lot of sweat equity in these companies, but they also benefit from the essentially free labor of graduate students, who are very poorly paid and usually pretty bright and hard working. The main problem I see with the grant system is just that there are “fashions” in research, and usually grantors are really trying to pursue something fashionable that they can then sell to the real source of the money, the government. I have actually found that fundamental research usually suffers in this setting and “colorful” or “impactful” research prospers even if its real substance is nil. I’ve heard this complaint a lot in engineering from some really top people, where the last decade’s fad was “design”, which in their view is really about interfaces, visualization, etc., and not about fundamental understanding or improved methods. Some very top-notch people have moved based on these kinds of considerations, even late in their careers. There are still some holdout places that do real hard analysis research. Another thing I have found in reviewing proposals is that there is a strong tendency to oversell the research and to put the very best face on the positive impacts. That’s a function of the extremely competitive environment. A lot of the literature from some of the “top” academics, while not without merit, is not replicable by others, at least in my experience. The Economist is excellent on the reasons for this too.

34. Peter Lang
curryja @ June 16, 2014 at 6:51 pm,
“Good question. I would make sure the observations are funded (satellite, monitoring, and important process experiments). I would then spend funding on basic research related to climate dynamics (none of the impacts stuff or endless analysis of model output) including solar physics. We need to entrain physicists, mathematicians, engineers, chemists and computer scientists for an infusion of new approaches to understanding the climate system.”
I disagree with our hostess on this. If the ultimate aim is to take appropriate ‘action’, or not, then the question is poorly framed and JC’s answer is not addressing what is important and relevant, IMHO. If we interpret “action” to mean implement policy to mitigate climate damages, then we need to focus on the objectives of that action. We need to define the objectives and then analyse the policy options. The policy selection and implementation needs to be managed like a project – by the discipline of project management http://www.pmi.org/PMBOK-Guide-and-Standards.aspx. That is, define the required project outcomes / results we want as a first step. To achieve policy success we have to understand the many constraints we must overcome for the policy to succeed in delivering the outcomes. Science has a contribution to make, but it is just one among many. First, it needs to provide the relevant information that allows the policy analysts to estimate the costs and benefits and probability of success of the different policy approaches. I suggest the policy analysts need just two inputs from climate scientists:
1. The PDFs of future climate by region (including PDFs for abrupt climate change, PDFs for the duration until it begins, the duration of the abrupt change, and PDFs for the magnitude and direction of the change)
2. The PDFs of the damage function for the possible future climates, per region.
The latter is the area that seems to be least understood.

• GaryM
Peter Lang, I may be wrong, but I suspect what Dr. Curry is calling for is a prerequisite for the type of research you call for. Climate scientists don’t understand the climate enough yet to give that kind of advice. So I think her call for focusing what money is spent on observations and basic research is spot on. Climate science is in its infancy. I believe there is just not enough known to use what is currently being produced for serious long-term policy analysis.

• Peter Lang
GaryM, I agree with your comment. But how can you do relevant and appropriate research if we don’t first define the policy objectives we need to address? We’ll just spend another 20 years researching this sort of nonsense: http://whatreallyhappened.com/WRHARTICLES/globalwarming2.html

• GaryM
YOU know that climate science is in its infancy. I know that climate science is in its infancy. The trouble is that the climate scientists don’t seem to know it’s in its infancy.
tonyb

• A healthy infant is naturally full of curiosity, likes to wander about and poke its head and fingers into things. Sad to say, our infant climate science won’t leave its room since we bought it a bloody computer.

• rls
When CERN was built, what were the “policy” objectives?

• GaryM
tonyb, Ain’t that the truth!
Peter Lang,
“But, how can you do relevant and appropriate research if we don’t first define the policy objectives we need to address?”
Well, we know ACO2 is a potential problem. That is all we need to know to make an effort to see if we can actually gather enough data to have more than a predetermined WAG as to temps, or better yet total climate heat content. It also justifies research into the nature and interactions of the various forcings and feedbacks, as well as whether in fact the climate can be modeled, and if so, how. Because we sure don’t know how to yet. Hell, pure human curiosity justifies those, as suggested by mosomoso and rls above. Just not on the idiotic scale we have been funding the institutionalized confirmation-bias process that calls itself “climate science.” But I think it has been, and will continue to be, an enormous waste of (our) money to keep funding incomplete and demonstrably wrong (for purposes of predicting heat rise) GCMs. Not to mention funding for bizarre statistical modeling of global temperature from rings from dead trees, muck from the bottom of the ocean, and other sundry detritus to tell us what to charge for petroleum over the next 30 years. And then there’s deciding which industries to shutter and which massive boondoggles to fund. All that should wait until we have a clue of how the climate actually reacts long term to increased CO2, if we ever do.

• Peter Lang
GaryM,
“Well, we know ACO2 is a potential problem. That is all we need to know to make an effort to see if we can actually gather enough data to have more than a predetermined WAG as to temps, or better yet total climate heat content.”
But we’ve been doing this stuff for several decades and not providing the information needed for policy analysis.
We’ve been wasting most of it on completely irrelevant research which is being justified by cooking up some argument that it is relevant to climate change (see the link in my previous comment). You are advocating for basic research and I am advocating for applied research. I agree some basic research is justified, but the total funding for basic research should be divided up without any regard for the political ideologies that are prevalent at the time. Regarding applied research – i.e. for climate change – that should be directed to addressing the perceived issues and risks. Therefore, the objectives and desired outcomes of the research need to be clearly stated as a basis for awarding funding.
“It also justifies research into the nature and interactions of the various forcings and feedbacks, as well as whether in fact the climate can be modeled, and if so, how. Because we sure don’t know how to yet.”
I reckon the balance of funding for climate science (which is applied research) is wrong. I think we need to put a much higher proportion of the total research effort into understanding impacts. Because we sure don’t know much about impacts yet. We sure know cooling is bad – in fact very bad – but we don’t know if warming is bad or good. We sure don’t know, do we?

• rls
As a certified and experienced project manager I can say that one of the first steps in forming a project is to evaluate the maturity of the science. As an example, consider a project with the initial objective of mitigating climate disasters. If it is determined that the science is not adequately mature, a decision could be made to introduce additional preliminary objectives that will mature the science as outlined by Dr Curry.

• Peter Lang
rls,
Congratulations that you are a certified and experienced project manager too. I expect you’d agree that the first step in project definition and initiation is to agree the project requirements/capabilities/outcomes/objectives/results. You’d also agree that these must be defined in measurable terms. The acceptance criteria and acceptance authority for the highest level of deliverables must be agreed. Once this has been done we can proceed to start planning the project. We cannot begin until these are agreed. What you have written is secondary to defining the project Scope (and deliverables). Policy analysis, design and implementation can and should be conducted as a formal project or program (see definitions below) with multiple phases and components. Therefore, IMO, it is a waste to continue throwing money at poorly directed research, as the developed world has been doing with climate research for the past two or three decades. The research needs to be directed by what is needed for policy analysis. IMO, the two main bits of information we need are, as I said in my first comment, PDFs on what the climate is likely to do and PDFs on the consequences of climate change. It is the latter (consequences of climate change) that has had much less work and of which we have little understanding (other than ideologically driven scaremongering). We know cooling is really bad. But we don’t know much about the consequences of warming. Is warming likely to be good or bad? We do know that warmth and warming have been excellent for life in the distant past and also in the past 200 years or so.
PMBOK definitions:
“Project: a temporary endeavour undertaken to create a unique product, service or result.”
“Program: A group of related projects managed in a coordinated way. Programs usually include an element of ongoing work.”
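[As a toy illustration of how the two inputs Peter Lang asks for would combine in a policy analysis – a sketch with entirely made-up numbers, not anything claimed in the thread. The distribution, the quadratic damage curve, and every figure below are illustrative assumptions only; given a PDF for future warming and a damage function, the expected damage is a one-line expectation:]

    (* Toy expected-damage calculation: an assumed PDF for warming by 2100
       and an assumed damage function; all numbers are purely illustrative. *)
    warmingDist = NormalDistribution[2.0, 0.8];  (* Delta-T in degrees C, assumed *)
    damage[dT_] := 0.002 dT^2;                   (* fraction of GDP lost, assumed quadratic *)
    NExpectation[damage[dT], dT \[Distributed] warmingDist]
    (* about 0.0093, i.e. roughly 0.9% of GDP under these made-up inputs *)

[The point of the sketch is that the policy-relevant uncertainty lives in both inputs: without credible PDFs for both the climate and the damages, the computed expectation is arbitrary.]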
• Peter Lang says: ‘It is a waste to continue throwing money at poorly directed research as the developed world has been doing with climate research for the past two or three decades.’ In the real world, problem-directed innovators know this; heck, even serfs know this! Guvuhmint-financed institutions, squandering other people’s money, are seemingly unaware of – this.

• rls
I agree with you, but where we might differ regards the phase during which science maturity is evaluated. It is extremely important and can impact cost and schedule or even cause the project to be aborted. Successful project managers are cautious and would prefer aborting a project over having it continue using immature science. It is not unusual for projects to be aborted for this reason. From my experience, large projects are conceived in Washington and the first step you describe is done there. Then the project office takes over and begins, very early, to evaluate the maturity of the needed science/technology.

35. A measure of success is whether a scientist is employed as a scientist.

• The problem with that assessment is that it merely supports the excuse that anyone who isn’t doing well within the system as it stands now is doing poorly simply because they aren’t good enough. That is a whip used to beat down and justify a lot of abuse in the system. That is the whip used to drive the endless postdoc cycle, for example. And there is no evidence for your assertion as far as I can tell.

• My evidence is anecdotal: my own experience. I struggled for 10 years at post-doc level with a working husband and two young children. I did my job to fulfill the contract under which I was funded. I did not have time to think about advancing my career. I never thought of it as a career. I just liked doing calculations and thinking about relationships between the data. But no one gets paid for doing that. There has to be a goal or a product. The one time I had the chance, most universities were looking for modelers (I am a satellite data person). The only satellite data jobs were for data managers (building databases). I do research at home, unpaid and very slowly. Stephen McIntyre is my inspiration. I call myself a scientist. But without the institution behind the name, I am just kidding myself, right?

• Nope, you are not kidding yourself. For another inspiration, look to Nic Lewis (if you are unfamiliar with him, search his name on this blog). Also, search for ‘guest post’ on this blog, and you will see others who are doing research that is independent of an institution. I aspire to join your ranks (as soon as I can afford to).

• “and you will see others who are doing research that is independent of an institution. I aspire to join your ranks (as soon as I can afford to).”
It helps to have a day job. Hence the slow part, at least for me…..

• Mi Cro
My husband is a scientist with a teaching job but also does research in his spare time. My youngest son has now graduated high school and I am able to focus more on my work. Being independent also means running your own computer and database. With a Linux OS, the costs are very low but the time investment is high.

36. Spartacus
Every university, college, department has its own culture. In my department, teaching is valued far more than research. “Number crunchers”, “go-getters”, and “money grubbers” are scoffed at. The only people who get their name etched in stone on the campus quad are those who receive teaching awards. Those who get funded are just supposed to be happy they got some extra cash.
What that means is that the students are taught the consensus. They do not measure anything because others have done the measurements for them. They never experience the thrill of discovery, because everything has been discovered elsewhere. As long as the teacher makes them feel happy, they are happy, and the teaching awards follow. Beware of easy generalizations – or am I just shouting in the wind?

• Rob
Thank you for eloquently articulating the “other side”, as I have heard the arguments that the focus on research is forgetting what the “real” role of universities is – teaching. There is no perfect mix, but there should always be a mix!

• The culture idea is correct. In my university, teaching was considered a punishment for those not good enough to get a lot of grants and thereby get exempted from teaching. The big grant grabbers were required to give one or two lectures a year, and wow did they complain about the imposition. The ones who could not get grants were doing three, four, even five full courses each year. Another university in our city placed the highest value on teaching. Very few professors were classical grant grabbers. Culture really is important.

37. steinarmidtskogen
If success is hard to measure, I think the trend now is to model it instead.

• j ferguson
+1

• Skiphil
and if you can’t model it with genuine rigor, maybe just pretend: http://chronicle.com/blogs/percolator/the-magic-ratio-that-wasnt/33279
some strong parallels with Mannian statistics and the fervent resistance to consideration of McIntyre’s criticisms (I realize that proxy reconstructions may not fit some definitions of “model”, but a conceptual parallel is there nonetheless):
“Both Sokal and Brown say they are surprised that no one, before now, had taken a more skeptical look at such a revolutionary ratio. “The main claim made by Fredrickson and Losada is so implausible on its face that some red flags ought to have been raised,” Sokal writes in an e-mail. “At this point I can’t resist drawing the analogy with the reaction of the editors of Social Text to a certain strange manuscript that appeared on their desks in the fall of 1995.””

• Rob Starkey
If your predictions do not seem to come to pass, merely claim that simulations show that your predictions will prove to be true over timescales of many decades and that potential disasters will occur if your suggestions are not followed.

38. The measure of scientific success depends on who is holding the yardstick. At the bench level, for me it was impact. Did my work have an impact on my field? Ultimately, did my work change / shape / transform the field I worked in? Money was nice, but I was motivated by impact – still am, in fact. When I headed a research institute, my measures changed. Now money was much more important – after all, I was now responsible for 70 professionals and about 120 students. Impact was still important, however – good work means a customer who comes back. But since my institute spanned much more than my field, I now had to look at proxies for impact. For example, did people come to us for work in our fields? Or, how competitive were we? – at the end of my tenure, we were winning about 40-45% of our research proposals. The University had somewhat different perspectives. They weren’t really concerned about the science – they were concerned about the scientific enterprise, i.e., they watched the money. And if there was some political benefit to be gained as well, that was another “Attaboy.” As many have said, you get what you measure.
The key is knowing what you’re going to do with the measurement. And that brings us to the brain controlling the hand that holds the yardstick.

39. Mikky
Coming from a branch of physics where there are 2 tribes, experimentalists and theorists, I think a similar split should be considered for climate science. One tribe would be the observationists, people who measure things or take previous measurements and derive some information about the current and/or past climate system. There may be the odd spat about proxies, but by and large peace and harmony would prevail. The other tribe would be the theorists, who try to explain things. Only theorists indulge in warfare worthy of politicians. Observationists can be judged on their previous and potential ability to add to our knowledge of what the climate system is and/or was. It should be relatively easy to judge this group from their publications and lecture notes. Theorists generally think highly of themselves (less so of others), most being God’s gift to the subject. I would get the observationists to rank the theorists, as this would help to keep them “grounded”, and may help to prevent false consensuses.

• rls
The experimentalists also need funding for people, particle accelerators, satellites, etc. Where does that funding come from? Hasn’t the source, in the past, been something other than grants? It appears that this is a case of initiating a project, the first phase of project management, with which I am familiar; been there, done that.

40. frequena
If the concept of using citation counts (as metrics in research assessment) had been generally applied some seven decades ago, then I guess Trofim Lysenko would most likely have carried off all the awards in genetics of his day.

• David Wojick
Have you actually checked his stats? A lot of the ideas that were popular 70 years ago have turned out to be wrong. So what? We still have to make real-time funding and promotion decisions. Or do you think scientists should not be paid? That would solve the decision problem.

• Anyone who is working as a scientist should at least have enough funding to have a couple of students and be doing some basic research without having to get a grant. As it stands now in Canada, a lot of scientists have a salary to do science but can’t do science because, for whatever reason, they have no grants. Meanwhile other “successful” scientists are not doing science. They have three dozen underlings doing the science while the paid scientist’s days are filled with the endless writing of grants.

• Joshua
As much as I disagree with you w/r/t some aspects of the climate wars, David – I do appreciate that you do sometimes make comments like this one that challenge “skeptics” to apply due skeptical scrutiny to their logic.

• frequena
Perhaps my point was not clear enough: practically every scientific article in genetics published in those days within the Soviet sphere of influence contained a Lysenko citation, as it might not have been seen as fit for publishing otherwise and, moreover, since it would have been dangerous for the author not to include one. So Lysenko was the most cited “scientist” of his period. Lysenko rose to dominance at a 1948 conference in Russia where he denounced Mendelian thought as “reactionary and decadent” and declared such thinkers to be “enemies of the Soviet people”. His methods were not condemned by the Soviet scientific community until 1965, more than a decade after Stalin’s death.
• David Wojick
Your point was clear, but I do not see how it applies to the issue at hand. As I asked, so what? We are not in the old Soviet Union.

• David Wojick
Yes Joshua, it is interesting that people who do not want to see the economy restructured in the name of climate change want to see science restructured for no good reason at all.

• Since “the economy” is a spontaneous order influenced by government policy, people who don’t like too much interference in the economy criticize government intervention. “Science” is likewise a spontaneous order, one that is even more influenced by government policy through control of funding. Of course people who think that the government policy is screwed up are going to want to change it. (That group includes plenty of insiders, such as Shirley Tilghman, biologist and former president of Princeton, but also people who are even more skeptical of the role of government in the system.)

• Skiphil
frequena, re: Lysenko and 1948
just a note on dates: Lysenko was already rising to dominance by the late 1920s/early 1930s: http://en.wikipedia.org/wiki/Trofim_Lysenko
(I know that Wikipedia cannot always be trusted, but I think the chronology in this article is accurate)

41. Rob
Have been through this a couple of times in the US tech transfer sector, and the best quote I have heard came from someone at the AUTM meeting a few years ago: “People treasure what you measure.” Expect whatever metric you use to increase out of all proportion to any (and every) other measure of success.

42. Truth for its Own Sake, 101: Knowledge for its own sake provides not one whit of utility to society!

• David Wojick
Except society has to decide which knowledge to pay to get. The US Federal basic research budget is about $60 billion per year. Congress decides how it will be spent, by specifying the programs that get funded and how much each gets. So this is not about knowledge for its own sake.

• We need to divide this country up or go back to states’ rights so that ideologues cannot bankrupt the rest of us when they decide to spend the public’s money to land on the moon using nothing but wind power as the energy source.

• rls
From my experience, congress determines budgets on the larger scale, but it is government bureaucrats that determine exactly where the funds go.

• David Wojick
Yes, RLS, they are called Program Officers. But they are answerable to Congress because their program can be reduced or even zeroed out. In some cases they are watched very closely. The point is that this is part of the complex decision system of democracy, so society is making the funding decisions. It is not knowledge for its own sake. There is always a compromise between scientific value and social value.

• rls
I was a program officer. My office was far from Washington. We had to, by law, stay within the overall budget and use the right color of money. But there was never any second-guessing from congress as to specific projects.

43. Steven Mosher
Taking the measure of science. First one needs a suitable categorization of the human behavior and what it aims at before one can take its measure. The term “science” is far too vague a description. One can start with ANY suitable taxonomy, but one should start with a taxonomy. Over time that taxonomy can and will change, because taxonomy itself is a tool for controlling behavior. And in the end, while science might aim at “freedom of inquiry”, this is mostly a platitude.
Taking the measure of science then boils down to picking a taxonomy (any will do, within reason) and deciding metrics and methods of behavioral control or channeling. There are three basic types:
Applied research
Use-inspired research
Pure research
The metrics for applied research are brutal and easy to calculate. Did your science behavior result in a product? How much did it make?
The metrics for use-inspired research are squishier. A) Did you actually advance understanding? B) How significant is the social problem you are working on? Squishy.
The metrics for pure research? Really squishy: basically, how far did you advance understanding. If it has any use, you get major bonus points.
Note the relationship between the value of free inquiry and squishiness. What’s missing is a metric for “how far” one advances understanding.

• You cannot leave out the cost… unless you’re investing your own time and money. Money is stored labor and therefore a finite resource. Wasting money on projects that provide no value to society is not productive and not the way to maximize the net present wealth of a society. We don’t, for example, need to spend public money to fund vuvuzela classes in public schools.

• timg56
Wagathon,
When considering science research it is possible to focus too much on cost. It’s one reason several universities, RPI being the first I believe, developed Science & Technology based MBA programs. R&D is the lifeblood of technology companies, yet Finance and Accounting types see it as a drain on the bottom line. I’ll argue the same applies to education. Insisting on tight cost controls is likely to have the effect of placing shackles and blinders on educational research.

• Like the industrial-military complex years ago, it is now the government-education complex that is in need of serious downsizing.

• Steven Mosher
For applied research, cost is implicit in the profit question. Dunce. For the other two, yes, cost would have to be part of the equation; however, perfect cost data is hard to come by. How do we measure knowledge? Counting papers and cites is a poor proxy.

• rls
How did CERN come about? Wasn’t it about pure research and hugely expensive? I suspect it required leaders with great influence.

• David Wojick
Mosher, it is stupid to call someone a dunce in this context. Especially when you are wrong, as the US budget for applied research is several billion dollars a year, with no profit involved. Most of it is for weapon systems. I think counting papers and citations is a very good proxy for the advancement of knowledge. Discoveries that no one knows about or uses are worthless. Knowledge is a social system.

• David Wojick
RLS, atom smashers are a major item in all big countries’ research budgets, thanks largely to nuclear weapons, which proved that nuclear science is important. In the US this is the job of the Energy Department’s Basic Energy Sciences program, which has a budget of over one billion dollars a year. They funded the Atlas instrument on the Large Hadron Collider, which instrument cost over half a billion dollars.

• David Wojick
Correction: the US budget for applied research is several hundred billion dollars a year, not a mere several billion (dwarfing basic research). Typing too fast and reading too slow.

• Steven Mosher
“How did CERN come about? Wasn’t it about pure research and hugely expensive? I suspect it required leaders with great influence.”
Immaterial to the question. The question is how to measure. Looking at what happened in the past may or may not tell you how to measure.
• Steven Mosher
David, now I understand why you made no progress.
“Mosher, it is stupid to call someone a dunce in this context. Especially when you are wrong, as the US budget for applied research is several billion dollars a year, with no profit involved. Most of it is for weapon systems.”
Wrong. Applied research in defense has huge profits. You seem to misunderstand. If the goal of applied research is to produce things we can use: A) did your research get used in a product, and how much profit did it make? WHICH MEANS YOU LOOK AT COST, DUNCE.
########################################
“I think counting papers and citations is a very good proxy for the advancement of knowledge. Discoveries that no one knows about or uses are worthless. Knowledge is a social system.”
More duncery. A) If you want to argue that tree rings are a good proxy for temperature, you do so by comparing the two. So, to argue that the number of papers is a good proxy for advancement in pure research, you have to have a measure of two things: the papers and the advancement. The obvious counterexample is the seminal paper that… B) Discoveries that no one knows about don’t exist. C) Discoveries that have no use are PRECISELY the kinds of pure research that are hard to measure. Not worthless. You might not be able to monetize them, but the question is how you measure knowledge for its own sake. You can of course define that out of existence, but that’s just changing the question. No wonder you made no progress.

• > I think counting papers and citations is a very good proxy for the advancement of knowledge.
Another way to count:
NORTH CAROLINA: The state will soon hold a lottery for spots in its new voucher program. More than twice as many low-income kids applied as can get a spot.
ALABAMA: A judge reversed a previous decision forcing a new school choice program to stay on hold until a union lawsuit against it plays out. Now kids can receive K–12 scholarships as the suit goes forward.
DELAWARE: Lawmakers will consider a bill to give families education savings accounts that can pay for myriad education expenses, including but not limited to tuition. The bill gives more money to poorer families.
MILTON FRIEDMAN: The Nobel Prize laureate would say school choice solves the problem of coercive overtesting, two researchers conclude.
FLORIDA: The state’s school choice program offered a young man an opportunity to dramatically turn his life around.
ILLINOIS: Federal investigators say a Chicago-area charter school chain defrauded investors and made crony contracts.
http://news.heartland.org/newspaper-article/2014/06/11/more-benefits-wisconsins-collective-bargaining-curbs
Does that count as +6 for School Choice research?

• rls
Mosher: I asked about CERN because I thought it was for the squishy type of science, yet it apparently got funded by several governments and at large expense. How was the squishiness overcome? Perhaps a consortium of very influential physicists?

• rls
David: The Defense research dollars go mostly to defense contractors, which make the profits.

• timg56
“for applied research cost is implicit in the profit question”
Not necessarily.

• The usual liberal ad hom reply belies ignorance: everything is profitable when money is free – e.g., as Irving could teach anyone on the Left if they actually had an open mind, you can pay back with fuel savings the ‘cost’ of flattening every railroad track in the US so long as there is no interest on the loan.
The Left with its war on reason is costing the country in ways that can never be made whole again.

• David Wojick
Mosher, I have no idea what you think a taxonomy will do for you as far as measuring scientific success, impact or progress is concerned. I do not think you understand the issue, nor the extensive work that is being done on it, and has been done over the last 60 years. Perhaps you should (gasp) read some of the papers, maybe even mine and my team’s. Then you could (choke) cite it.

• David, I would certainly welcome a guest post on this topic, or a link to one of your previous blog posts that you think would be suitable here.

44. And the hits keep coming, regardless of funding source. http://contextearth.com/2014/06/17/the-qbom/
Keep on with your pity party.

45. A fan of *MORE* discourse
History provides plenty of examples of distinguished researcher/academicians who found ways to prosper outside the default treadmill of “teach undergraduates and write proposals”:
• Charles Ives / composer and insurance executive
• Nathaniel Bowditch / mathematician and actuary
• David E. Shaw / structural biologist and trader
• Claude Shannon / engineer and investor
• Michael Spivak / mathematician and actuary
• Donald Knuth / computer scientist and author
• Benjamin Franklin / scientist and statesman
• Craig Venter / scientist and entrepreneur
• Si Ramo / engineer and industrialist
• James Harris Simons / mathematician and hedge fund manager
• Jane Goodall / primatologist and writer
As Arnold Schoenberg wrote of Charles Ives: “There is a great Man living in this Country — a composer. He has solved the problem how to preserve one’s self-esteem and to learn. He responds to negligence by contempt. He is not forced to accept praise or blame. His name is Ives.”
Not to mention, Isaac Newton pursued a career path that (literally) coined money!
Conclusion: It is well for students (in particular) to keep in mind that there is no “one size and one style suits all” of academic achievement.

• Seth Carlo Chandler: astronomer and actuary
Thomas Bayes: statistician, philosopher, and Presbyterian minister
Nathan Myhrvold: CTO of Microsoft, photographer, chef, and climate scientist

46. “My main reflection on metrics is that you get what you count.” I’ve seen this before in sales, and we recently saw it at the VA: low wait times to get a bonus, even if you have to keep a secret list.

47. Hi Judy,
“The engineering culture at different universities is interesting, some do seem to emphasize publications and citations rather than applications.”
Believe me, if they had the application successes, they would emphasize them. However, Harold’s observation that some of the applications are a long time in the making is a fair one.
Dave

• Dave, here is something that surprised me at GT. I recently led a multidisciplinary proposal to NSF (didn’t get funded) that was targeted at increasing the resilience of the electric grid in the face of weather disasters. The particular NSF program required extensive engagement with decision makers from govt and industry (in our case, regional power providers).
I figured the electrical and industrial engineers on our team (modeling the electric grid) would be the ones with good contacts, but this was most definitely not the case (only one of the industrial engineers had contact with a regional energy provider). It turned out the people with the contacts were myself, an energy policy person, and a project manager for an Institute.

• JeffN
Sorry, but that proposal sounds like one of those things that just drives the lay person nuts. Granted, I don’t know the details, but from your abstract: First, it’s weather that you’re hardening the grid for – “weather disasters” happen and need to be accounted for in the grid. Second, requiring “extensive engagement with decision makers from govt and industry” isn’t a science pursuit; it’s policy that the NSF can help with by doing its job and focusing on science and engineering questions. Those questions are: “what breaks, why, how to fix it, how to avoid breaking it, or how to speed up repair time and cost.” With that information, decision makers decide. And science needs to accept the fact that a mandate might not be the answer. It sounds pedantic, but I believe this is most of the problem with the debate in climate. We’re stuck in an endless (26 years now) debate over how wrong the temperature projections are because one “team” wants to be able to say it’s bad enough to enact goofy policies. Get the science funding back on science questions – yes, continue to try to improve weather forecasts, but more importantly figure out how to make nukes, windmills, solar panels, and biofuels work, and be honest about when they don’t. I don’t even think there’s (much) of a problem, but I’m more than happy to see science do science.

• If you lived in Florida, or were impacted by Hurricane Sandy, you would get this. How can we minimize power outages exceeding 2 days from such storms? This has nothing to do with climate change.

• “I figured the electrical and industrial engineers on our team (modeling the electric grid) would be the ones with good contacts,…”
Nice play on words there. The malfunction was caused by rust on the contacts.

• Rob Starkey
It is unclear what benefit a power company would get from this “multidisciplined” approach beyond what information they already have available. Power companies (and those that distribute) have a wealth of historical data on why power outages have occurred. It seems pretty simple to do root cause analysis and a hardening of the system to prevent a recurrence. I’d guess (with no reasonable information to support the conclusion) that most power outages are caused by poor maintenance and not following established safety procedures.

• JeffN
I live on the coast and was impacted by Sandy and Isabel. The science questions were: with X storm surge and wind, what breaks, why, and how do we harden it and/or get it fixed faster. The policy question is which of those things are cost-effective – note that this means in some areas you pay more for hardened power, in some areas you have to expect to be on your own for a while, and in some areas you aren’t allowed to build a house. My area of the beach lost power very briefly because it was cost-effective to bury lines and harden above-ground equipment. Friends were without power for days, but they always are during hurricanes because it was not cost-effective to bury lines there. Hurricanes are part of life here. Look back at your snow issue in Georgia – a science-sponsored extensive engagement with decision makers re “weather disasters” would be pointless.
Research on cheap, fast modifications to existing DPW trucks and de-icing materials to mitigate snow would not be pointless. Ultimately, science can’t tell you whether it makes more sense to buy those things for the rare snow or to tell the population to deal with it and spend the money on something else. No amount of time spent equating “snow” with “disaster” will change that.

• Harold
On a related note, one other electrical grid reliability issue is cybersecurity. I’m moderately involved in the issue regarding the electric grid, water systems, and other utilities, and am finding that there are certain parties who don’t want to talk about it, because it rains on their grand plans for “cloud” telemetry. You start arguing about this for a while, and you soon realize that some very big (Microsoft, IBM, etc.) toes are being stepped upon by raising these issues. They would rather sell “cloud” sizzle than security steak. Usually when something like this doesn’t make sense, there’s somebody not too far away with a lot of money on the line. As far as the grid is concerned, all of this reliability and security stuff goes against what they would rather do with the capital. Like all engineers, they want toys; the more expensive, the more fun.

• timg56
Did you reach out to alumni? I know more than one GT alum in the electrical utility industry, including one of my brothers.

• timg56
Rob,
RE: “Power companies (and those that distribute) have a wealth of historical data on why power outages have occurred.”
One might think so, but that is not the case. Very few power companies conduct root cause and failure analysis following storm / weather induced outages. Their primary objective is to get the system back up. That is not conducive to collecting evidence on the cause of failure. Poll a utility company on what they consider the leading cause of pole failures (which almost always lead to power outages) and they are likely to say storm damage. For a presentation last year I looked into our own data on pole replacement and was surprised to find the common consensus was wrong. Car-hit poles and environmental degradation turned out to be the two leading causes of pole replacements due to “failure”. By far the leading cause of early replacement is public improvement projects.

• Rob Starkey
timg56,
Thanks for the education about the information maintained by power companies/power distribution companies. I am surprised that they do not have better information on failure analysis, since it would seem to be a key component of long-term costs and customer service. That data would seem more valuable to their decision makers than a study by a group of academics (no offense to Judy).
• timg56
Rob,
I suspect the reason they don’t is that in the long run it accounts for a small portion of cost. A wood pole has an expected life of 50 years (this can vary by type of wood – we use Doug Fir; utilities in the NW at one time used western red cedar. I’ve seen poles that are approaching 100 years in service). The average life is closer to 20 years, meaning there is a very good chance you will end up replacing (or removing, with undergrounding of facilities) the pole before it fails. Another factor is that utilities often carry insurance against storm damage, or they have the ability to go to the commission and obtain a rate increase to cover extraordinary storm restoration costs. Finally there is the matter of priorities when poles fail during a storm. Getting the lights back on outweighs all other considerations. Try dealing with folks who are into their 2nd or even 3rd week without power. They ain’t baking you cookies. Most utilities do have inspect-and-treat programs now, as early failure due to environmental conditions (rot, insects, woodpeckers) can be a significant cost.

• Skiphil
I’d intended to put this comparison here, raising an issue of judging “future promise” vs. existing (minimal) quantity and influence at the time of a tenure decision…. it’s only one case, from outside the sciences/engineering, but I think it suggests that there should be university pathways for people displaying exceptional conceptual/intellectual promise, even if the quantity of publications is not (yet) there: http://judithcurry.com/2014/06/16/what-is-the-measure-of-scientific-success/#comment-598734

48. John Power
This issue of metrics has a stark realization at Kings College London, where they are firing 120 scientists. The main criterion for the firing appears to be the amount of grant funding. Measuring the value of scientists with a financial metric – that sounds logical. (NOT!) What has become of the academic community when it resorts to such an irrational practice that turns science into an aspect of economics and sociology in this way? Evidently it can no longer have any idea of what the essential purpose of science is – the increase of human knowledge of reality. Surely the only rational criterion for evaluating the works of scientists is the extent to which they increase our knowledge of reality. Although knowledge is a subjective mental quality that cannot be measured directly, it can be measured indirectly by behavioral metrics based on information theory. It is the task of psychologists, not sociologists or administrators, to create, develop and refine such metrics for application and use throughout all the sciences. Claude Shannon invented information theory in 1948, and some 66 years later the academic community still has not applied it to the objective evaluation of works of science and scientists. This is shameful, surely.

49. A similar problem arises in private industry. For example, many large oil companies realize oil exploration is extremely risky. Many wells are “dry”. This encourages the exploration community to recommend wells and emphasize getting them approved rather than making sure the geology makes sense. There are perverse rewards for style and the number of wells recommended, as well as for the quality of the slides and sales ability. A similar problem arises in new business development, where some “developers” come up with business failures but are rewarded merely for selling the idea.
I also saw that in the way research funding was allocated within a corporation. Some “pet ideas” held by supervisors were funded, and anything unusual with a different slant was given a quick burial. I still have a couple of ideas I couldn’t get funded for some lab work and fine-scale models of lab results. Cheap stuff, but I was looking at things upside down. And if it happened to me, it must happen much, much more; after all, I’m just a dabbler.

50. Noway
I would have to say not failing.

51. Poptech
Or maybe not.

52. It becomes a vicious cycle when you work on a progressive project. Unfortunately, the bureaucracy slows down the research process. As a result, important findings are held up for years. It certainly starts to seem counterproductive.

53. jim2
This guy should sue. From the article:
American University statistician tells The Fix: Belief in climate catastrophe ‘simply not logical’
“If one had asked statistician Caleb Rossiter a decade ago about global warming, he says he would have given the same answer that President Barack Obama offered at a recent commencement address. ‘He castigated people who don’t believe in climate catastrophe as some sort of major fools,’ Rossiter says of the president’s speech, adding he would have agreed with the president – back then. But Rossiter would give a different answer today. ‘I am simply someone who became convinced that the claims of certainty about the cause of the warming and the effect of the warming were tremendously and irresponsibly overblown,’ he said in an exclusive interview Tuesday with The College Fix. ‘I am not someone who says there wasn’t warming and it doesn’t have an effect, I just cannot figure out why so many people believe that it is a catastrophic threat to our society and to Africa.’ For this belief – based on a decade’s worth of statistical research and analysis on climate change data – Rossiter was recently terminated as an associate fellow at the Institute for Policy Studies, a progressive Washington D.C. think tank.”
http://www.thecollegefix.com/post/18034/
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25578436255455017, "perplexity": 2406.7952524484467}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736672923.0/warc/CC-MAIN-20151001215752-00118-ip-10-137-6-227.ec2.internal.warc.gz"}
https://mathematica.stackexchange.com/questions/211381/image-based-reinforcement-learning-with-neural-networks
# Image-based reinforcement learning with neural networks

After seeing that "OpenAIGym" is not exactly supported on Windows and playing around with https://www.wolfram.com/language/12/neural-network-framework/train-an-agent-in-a-reinforcement-learning-environment.html, I decided to create a new gym as part of my reinforcement learning studies! I was progressing smoothly with the first environment Pixels-v1:

• DeviceFramework API
• human interaction

then I got stuck with the actual reinforcement learning using neural networks o__O - training produces results that just drift off screen rather than aim for the center. Is that perhaps because I picked images as the input domain instead of a tiny vector of numbers summarizing the state? Please see below for details and help me out! I have prepared a notebook to make you see the issue right away.

# Notebook

1. evaluate all initialization cells
2. scroll to the bottom of the notebook

### Context

The first environment (Pixels-v1) is going to be that of a few happy pixels surviving against various simple hazards.

• "ObservedState": a 40*40 Image
• "ActionSpace": {Left, Right, Up, Down}
• "Step": I am not sure how to define the reward here for the neural network to converge; the intention for the simplest case: Reward == 1 if stepped closer than ever before to the center, otherwise 0; also Ended == True if the active pixel hit the edges.

## Questions

I would expect my policy network to use the environment image directly.

```
policyInput = NetEncoder[{"Image", {40, 40}, "Grayscale"}];
policyOutput = NetDecoder[actionClass];
policy = NetInitialize@NetChain[
   {4, SoftmaxLayer[]},
   "Input" -> policyInput,
   "Output" -> policyOutput
   ];
policy // netSize
```

1. Should I change the "ObservedState" from being a binary 40*40 image with the active pixel being a single pixel to something more noticeable to the neural net?
2. Should I change the policy network definition (snippet above), and how?
3. Should I change the loss function definition, and how?
4. Should I change the step function definition (snippet below), and how, especially with regard to the "Reward"? (where \[Rho][[i]] is the active pixel.)

I would strongly appreciate it if someone could provide detailed guidance on what I am missing at this point. I would love to make Mathematica more accessible for beginners of reinforcement learning with neural networks.

• Please do not use the version-xx tags without a good reason (i.e. an issue that could in principle apply to other versions but unexpectedly it does not). – Szabolcs Dec 18 '19 at 10:13
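For reference, the reward rule described under "Step" can be restated concretely; here is a small Python sketch of that rule (my restatement only - the notebook itself is Wolfram Language, and every name below is hypothetical):

```python
# Hypothetical restatement of the "Step" reward rule described above
def step_outcome(pos, best_dist, grid=40, center=(20, 20)):
    """Reward 1 only when the active pixel gets closer to the center than ever before."""
    x, y = pos
    ended = x in (0, grid - 1) or y in (0, grid - 1)    # active pixel hit an edge
    dist = abs(x - center[0]) + abs(y - center[1])      # one distance choice (Manhattan)
    reward = 1 if dist < best_dist else 0
    return reward, ended, min(dist, best_dist)          # carry the best distance forward

print(step_outcome((5, 20), best_dist=17))  # (1, False, 15)
```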
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2379244714975357, "perplexity": 2441.414153445791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988837.67/warc/CC-MAIN-20210508031423-20210508061423-00043.warc.gz"}
http://mathhelpforum.com/pre-calculus/104581-average-rate-change.html
# Math Help - average rate of change

1. ## average rate of change

Let f(x) = 5x^2 + 2x − 3 and let x_0 = 1. The average rate of change of f between x = 1 and x = 1.18 equals ?

2. Originally Posted by samtheman17
Let f(x) = 5x^2 + 2x − 3 and let x_0 = 1. The average rate of change of f between x = 1 and x = 1.18 equals ?

$f(x) = 5x^2 + 2x - 3$. You have

$f(1) = 5(1)^2 + 2(1) - 3 = 5 + 2 - 3 = 4$

Also

$f(1.18) = 5(1.18)^2 + 2(1.18) - 3 = 6.962 + 2.36 - 3 = 6.322$.

The average rate of change will be given by

$\frac{f(1.18) - f(1)}{1.18 - 1} = \frac{6.322 - 4}{0.18} = \frac{2.322}{0.18} = 12.9$.

3. ok, thank you so much! that really helped (p.s. the answer is 12.9, as 1.18 − 1 is 0.18, not 1)
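A one-line numeric check of the thread's answer (my addition, not part of the original posts):

```python
# Average rate of change of f(x) = 5x^2 + 2x - 3 between x = 1 and x = 1.18
f = lambda x: 5 * x**2 + 2 * x - 3
print((f(1.18) - f(1)) / (1.18 - 1))  # 12.9 (up to floating-point rounding)
```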
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9983357787132263, "perplexity": 4629.533927356359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999638008/warc/CC-MAIN-20140305060718-00094-ip-10-183-142-35.ec2.internal.warc.gz"}
http://blog.metalight.net/2012/12/macroscopic-model-of-traffic.html
## Saturday, 1 December 2012

### Macroscopic Model of Traffic

I work in the Intelligent Transport Systems (ITS) industry, at Transmax, and completed a PhD in applied mathematics at QUT. So it should come as no surprise that I want to combine and explore the two areas. To combine my two interests, I have investigated macroscopic models of traffic flow. This post presents a simple model of traffic flow. It will form the baseline for further work and modifications. I learned of these equations from this paper by Paul Ross, published in 1988.

## Model Equations

The first equation comes from the definitions of flow, density and speed: $$q = \rho v,$$ where $$q$$ = traffic flow rate (veh/s) past the point; $$\rho$$ = vehicular density (veh/m); and $$v$$ = vehicle speed (m/s). The above equation states that the number of vehicles that will pass by a point is equal to the density of traffic at that point times the speed of vehicles at that point. That is, flow rate increases with density and speed.

The second (partial differential) equation was given by Lighthill and Whitham in 1955. It describes the conservation of vehicles: $$\frac{\partial \rho}{\partial t} + \frac{\partial q}{\partial x} = S(x,t),$$ where $$\partial$$ indicates partial differentiation; $$t$$ = time (s); $$x$$ = distance (m) along the road; and $$S$$ = vehicles entering (+ve) or leaving (-ve) the road (veh/m/s).

The third equation describes vehicle speed. The simplest relationship would have to be a linear relationship between speed and density, as reported by Greenshields in 1934. This speed equation gives the free flow speed at zero density, and zero speed at the jam density: $$v = v_f ( 1 - \rho/\rho_j ),$$ where $$v_f$$ = free flow speed (m/s); and $$\rho_j$$ = jam density (veh/m), the average number of vehicles per metre in stationary traffic. Let's say that the free flow speed is 100 km/h. That means $$v_f = 27.78$$ m/s. Also, let's assume that the average vehicle length is 5 metres, and that in a jam they are spaced 2 metres apart. That means $$\rho_j = 1/7$$ veh/m. The equation for speed has an important effect on the traffic in the simulation, and has received a lot of attention in the literature. Various forms have been proposed, and we will look at some of them in the future.

## Boundary Conditions

A model with a partial differential equation is not complete without boundary conditions. There are a few options for us, but we must specify two boundary conditions - one for a point in space, and one for a point in time. First, some notation to help us specify the boundary conditions. We will refer to flow, density and speed as functions of two variables by writing $$q(x,t)$$, $$\rho(x,t)$$ and $$v(x,t)$$. Before we actually specify the boundary conditions, note that we have one partial differential equation and two algebraic equations. With the two algebraic equations we can translate density to/from speed, and calculate flow from density or speed (but not the reverse - we'll see why below). The time condition is usually chosen to be for time zero. We might choose that initially: $$\rho(x,0) = 0$$ for all $$x$$. In this model, this is equivalent to specifying $$v(x,0) = v_f$$. For this model, the space condition can be specified in terms of density or speed, as we can use the equations to determine the unspecified measures. Let's choose: $$\rho(0,t) = \rho_j/4.$$ I don't think it's very important, but let's say that this condition overrides the time condition at $$x = t = 0$$.
In the future we may change the space condition to be a function of time, for example: $$\rho(0,t) = \frac{1+\sin(2\pi t/T)}{2}\,\frac{\rho_j}{4},$$ where $$T$$ is the period of the variation.

## What's Next?

In another post we will analyse the relationship between speed, density and flow. After that we'll discretise the model equations so we can solve them numerically. And after that we may investigate the effect of different speed equations on model behaviour.

P.S. Thanks to MathJax and My technical memo for the script to render equations.
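The discretisation is promised for a later post; in the meantime, here is one minimal way it could be done in Python (my own sketch under the post's assumptions, not the author's code): an explicit Lax-Friedrichs step for the source-free case S = 0, using the Greenshields parameters and boundary conditions chosen above.

```python
import numpy as np

v_f = 27.78          # free flow speed (m/s), i.e. 100 km/h
rho_j = 1.0 / 7.0    # jam density (veh/m)

def flow(rho):
    # q = rho * v, with the Greenshields speed v = v_f * (1 - rho / rho_j)
    return rho * v_f * (1.0 - rho / rho_j)

dx, dt = 10.0, 0.1                  # cell size (m), time step (s); v_f * dt / dx < 1 (CFL)
x = np.arange(0.0, 2000.0, dx)      # a 2 km stretch of road
rho = np.zeros_like(x)              # time condition: rho(x, 0) = 0

for _ in range(6000):               # simulate 600 s
    q = flow(rho)
    # Lax-Friedrichs update of the conservation law d(rho)/dt + d(q)/dx = 0
    rho[1:-1] = 0.5 * (rho[2:] + rho[:-2]) - dt / (2.0 * dx) * (q[2:] - q[:-2])
    rho[0] = rho_j / 4.0            # space condition: rho(0, t) = rho_j / 4
    rho[-1] = rho[-2]               # simple zero-gradient outflow

print(rho.max())                    # the rho_j / 4 inflow has spread down the road
```

Lax-Friedrichs is diffusive but robust; a Godunov or upwind scheme would keep the wave fronts sharper.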
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9495169520378113, "perplexity": 632.3674650758358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256571.66/warc/CC-MAIN-20190521202736-20190521224736-00176.warc.gz"}
https://www.lessonplanet.com/teachers/a-jan-brett-coloring-alphabet-the-letter-gg-k-1st
# A Jan Brett Coloring Alphabet - The Letter Gg

In this printing worksheet, students examine and learn both the upper and lower case D'Nealian letter Gg. Students also observe and color a detailed picture of a gingerbread man. Students do not form any letters on this page.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9489777088165283, "perplexity": 6578.4674283865525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425766.58/warc/CC-MAIN-20170726042247-20170726062247-00415.warc.gz"}
https://opus.bibliothek.uni-wuerzburg.de/frontdoor/index/index/searchtype/authorsearch/author/%22Einfeldt%2C+S.%22/rows/100/start/1/sortfield/author/sortorder/desc/subjectfq/Physik/docId/4390
## Molecular beam epitaxial growth and characterization of (100) HgSe on GaAs

Please always quote using this URN: urn:nbn:de:bvb:20-opus-50947

• In this paper, we present results on the first MBE growth of HgSe. The influence of the GaAs substrate temperature as well as the Hg and Se fluxes on the growth and the electrical properties has been investigated. It has been found that the growth rate is very low at substrate temperatures above 120°C. At 120°C and at lower temperatures, the growth rate is appreciably higher. The sticking coefficient of Se seems to depend inversely on the Hg/Se flux ratio. Epitaxial growth could be maintained at 70°C with Hg/Se flux ratios between 100 and 150, and at 160°C between 280 and 450. The electron mobilities of these HgSe epilayers at room temperature decrease from a maximum value of 8.2 × 10^3 cm²/(V·s) with increasing electron concentration. The concentration was found to be between 6 × 10^17 and 1.6 × 10^19 cm⁻³ at room temperature. Rocking curves from X-ray diffraction measurements of the better epilayers have a full width at half maximum of 550 arc sec.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9397181272506714, "perplexity": 1709.2183317995753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147054.34/warc/CC-MAIN-20200228043124-20200228073124-00355.warc.gz"}
https://fr.scribd.com/document/473207048/Math-2-Practice-Test-4B-pdf
Vous êtes sur la page 1sur 24 # PRACTICE TEST 4 281 PRACTICE TEST 4 Treat this practice test as the actual test and complete it in one 60-minute sit- Once you have completed the practice test: 2. Review the Answers and Solutions. 3. Fill in the “Diagnose Your Strengths and Weaknesses” sheet and deter- mine areas that require further preparation. PRACTICE TEST 4 283 PRACTICE TEST 4 MATH LEVEL 2 Tear out this answer sheet and use it to complete the practice test. Determine the BEST answer for each question. Then, fill in the appropriate oval using a No. 2 pencil. 1. A B C D E 21. A B C D E 41. A B C D E 2. A B C D E 22. A B C D E 42. A B C D E 3. A B C D E 23. A B C D E 43. A B C D E 4. A B C D E 24. A B C D E 44. A B C D E 5. A B C D E 25. A B C D E 45. A B C D E 6. A B C D E 26. A B C D E 46. A B C D E 7. A B C D E 27. A B C D E 47. A B C D E 8. A B C D E 28. A B C D E 48. A B C D E 9. A B C D E 29. A B C D E 49. A B C D E 10. A B C D E 30. A B C D E 50. A B C D E 11. A B C D E 31. A B C D E 12. A B C D E 32. A B C D E 13. A B C D E 33. A B C D E 14. A B C D E 34. A B C D E 15. A B C D E 35. A B C D E 16. A B C D E 36. A B C D E 17. A B C D E 37. A B C D E 18. A B C D E 38. A B C D E 19. A B C D E 39. A B C D E 20. A B C D E 40. A B C D E PRACTICE TEST 4 285 PRACTICE TEST 4 Time: 60 minutes Directions: Select the BEST answer for each of the 50 multiple-choice questions. If the exact solution is not one of the five choices, select the answer that is the best approximation. Then, fill in the appropriate oval on the answer sheet. Notes: ## 1. A calculator will be needed to answer some of the questions on the test. Scientific, programmable, and graphing calculators are permitted. It is up to you to determine when and when not to use your calculator. 2. Angles on the Level 2 test are measured in degrees and radians. You need to decide whether your calculator should be set to degree mode or radian mode for a particular question. 3. Figures are drawn as accurately as possible and are intended to help solve some of the test problems. If a figure is not drawn to scale, this will be stat- ed in the problem. All figures lie in a plane unless the problem indicates otherwise. 4. Unless otherwise stated, the domain of a function f is assumed to be the set of real numbers x for which the value of the function, f (x), is a real number. 5. Reference information that may be useful in answering some of the test questions can be found below. Reference Information 1 2 Right circular cone with radius r and height h: Volume = πr h 3 ## Right circular cone with circumference of base c 1 and slant height ᐉ: Lateral Area = cᐉ 2 4 3 Sphere with radius r: Volume = πr 3 Surface Area = 4πr2 1 Pyramid with base area B and height h: Volume = Bh 3 286 PART III / EIGHT PRACTICE TESTS ## PRACTICE TEST 4 QUESTIONS 1. If x5 = 68, then x = USE THIS SPACE AS SCRATCH PAPER (A) 2.17 (B) 3.06 (C) 9.6 (D) 17.58 (E) 1,296 (A) 3(x2 + 3x) (B) 9(x + 1) (C) 9x2 + 9x (D) 3x2 + 9x (E) 9x2 + 6x ## 3. If (x, y) is a point on the graph of a function, then which of the following must be a point on the graph of the inverse of the function? (A) (y, x) (B) (−x, −y) (C) (−y, −x) (D) (x, −y) (E) (−x, y) ## 4. What real values of a and b satisfy the equation a + b + 9i = 6 + (2a − b)i? (A) a = 5, b = −1 (B) a = 5, b = 1 (C) a = 6, b = 0 (D) a = 4, b = 2 (E) a = 1, b = 5 ## 5. Standing 20 feet away from a flagpole, the angle of elevation of the top of the pole is 42°. Assuming the flagpole is perpendicular to the ground, what is its height? 
(A) 18 ft (B) 22 ft (C) 13 ft (D) 15 ft (E) 16 ft ## GO ON TO THE NEXT PAGE PRACTICE TEST 4 287 ## 8 π USE THIS SPACE AS SCRATCH PAPER 6. If sin θ = and < θ < π, then tan θ = 17 2 15 (A) − 17 8 (B) − 15 8 (C) − 17 8 (D) 15 8 (E) 17 7. Which of the following is an equation of the line with an x-intercept of 4 and a y-intercept of −3? 3 (A) x − y = −3 4 3 (B) x + y = −3 4 4 (C) x−y=3 3 4 (D) x + y = −3 3 3 (E) x−y=3 4 ## 8. The graph of which of the following functions is symmetric with respect to the origin? (A) f (x) = ex (B) g(x) = (x − 3)2 (C) h(x) = (x + 1)3 (D) G(x) = 2 sin x (E) F(x) = x3 − 1 9. If − 4 + 2 − x = x , then x = (A) −7 or −2 (B) 7 or 2 (C) −7 only (D) −2 only (E) −2 or 7 (A) 4.33 (B) 2.28 (C) 2 (D) 1.33 (E) 81 ## GO ON TO THE NEXT PAGE 288 PART III / EIGHT PRACTICE TESTS 11. If f (x) = x2 + 3 for −1 ≤ x ≤ 3, then what is the range USE THIS SPACE AS SCRATCH PAPER of f ? (A) y ≥ 0 (B) y ≥ 3 (C) −1 ≤ y ≤ 3 (D) 4 ≤ y ≤ 12 (E) 3 ≤ y ≤ 12 ## 12. What is the length of the edge of a cube having the same total surface area as a rectangular prism meas- uring 3 cm by 4 cm by 8 cm? (A) 22.7 cm (B) 4.8 cm (C) 136 cm (D) 5.8 cm (E) 11.7 cm (A) −1 (B) 4 (C) 1 (D) −4 (E) −5 ## 14. A teacher has a test bank of 12 questions. If she wishes to create a test using 8 of the questions, how many different combinations of 8 questions are possible? (A) 96 (B) 495 (C) 2,950 (D) 11,800 (E) 1.996 × 107 ## 15. The number of tails showing when a pair of coins was tossed ten times was {0, 1, 2, 2, 1, 1, 0, 2, 0, 1}. What is the mean of the data? (A) 0 (B) 0.5 (C) 1 (D) 1.5 (E) 2 16. If a point has polar coordinates (2, π), then what are its rectangular coordinates? (A) (0, −2) (B) (2, 0) (C) (−2, −2) (D) (−2, 0) (E) (0, 2) ## GO ON TO THE NEXT PAGE PRACTICE TEST 4 289 17. Figure 1 shows one cycle of the graph of y = USE THIS SPACE AS SCRATCH PAPER 3 sin x + 1 for 0 ≤ x < 2π. What are the coordinates y of the point where the minimum value of the func- tion occurs on this interval? 4 ⎛ 3π ⎞ (B) ⎝ , − 3⎠ 2 2 ## (C) (π, −2) ⎛ 5π ⎞ (D) ⎝ , − 2⎠ 4 x 5 ⎛ 3π ⎞ (E) ⎝ , − 2⎠ 2 –2 Figure 1 18. Valerie’s average score on the first three math tests of the term is 89%. If she earns an 81% on the fourth test, what will her new average be? (A) 87% (B) 85% (C) 86.8% (D) 88% (E) 85.5% ## 19. If a circle has a radius of 6 cm, then what is the length of the arc intercepted by a central angle of 210°? (A) 6 (B) 2 (C) 7π 15π (D) 2 (E) 8π 1 20. What is the domain of f ( x ) = ? 16 − x 2 (A) x ≠ ±4 (B) x<4 (C) x > −4 (D) −4 < x < 4 (E) x < −4 or x > 4 ## 21. If f (x) = 2x3 − 1, then f −1( f(5)) = (A) −1 (B) 5 3 (C) 3 (D) 1 (E) 249 GO ON TO THE NEXT PAGE 290 PART III / EIGHT PRACTICE TESTS ## USE THIS SPACE AS SCRATCH PAPER 22. If x = 4 cos θ and y = 4 sin θ, then x 2 + y2 = (A) 1 (B) 4 (C) 16 (D) 4 sin θ cos θ (E) 4(cos θ + sin θ) ## 23. Which of the following quadratic equations has roots 8 + i and 8 − i? (A) x2 − 16x + 65 = 0 (B) x2 + 16x − 65 = 0 (C) x2 − 16x + 63 = 0 (D) x2 + 16x − 63 = 0 (E) x2 + 16x + 65 = 0 ## 24. If A is a point on the unit circle in Figure 2, then what y are the coordinates of A? (A) (sin 30°, cos 30°) ⎛ 2 2⎞ (B) ⎜ ⎝ 2 , 2 ⎟⎠ A (x,y) ⎛1 3⎞ (C) ⎜ , ⎟ ⎝2 2⎠ 30° x ⎛ 1 1⎞ (D) ⎝ , ⎠ 2 2 ⎛ 3 1⎞ (E) ⎜⎝ , ⎟ 2 2⎠ 25. What are the real zeroes of f (x) = −x4 − 6x3 − 9x2? (A) {3} Figure 2 (B) {−3, 3} (C) {0, −3} (D) {0, −3, 3} (E) {0} (cos A cot B) 26. In ΔABC in Figure 3, = A csc A a2b (A) c3 b2 c b (B) c2 (C) 1 a2 B (D) a C c2 a3 Figure 3 (E) bc 2 ## GO ON TO THE NEXT PAGE PRACTICE TEST 4 291 27. 
Which single transformation can replace rotating a USE THIS SPACE AS SCRATCH PAPER polygon 30° clockwise, followed by 110° counter- clockwise, followed by 15° clockwise all about the same center of rotation? (A) 155° clockwise (B) 95° counterclockwise (C) 275° clockwise (D) 65° counterclockwise (E) 80 counterclockwise ## 28. The solution set of 5x + 2y > 0 lies in which quadrants? (A) I only (B) I and II (C) I, II, and IV (D) II, III, and IV (E) I, II, III, and IV 5 29. If tan θ = , then cos θ = 12 12 (A) 13 12 (B) ± 13 5 (C) 13 5 (D) ± 13 12 (E) 11 ## 30. The cube in Figure 4 has edges of length 4 cm. If A point B is the midpoint of the edge, what is the perimeter of Δ ABC? (A) 8.94 (B) 11.31 B (C) 12.94 (D) 14.60 4 (E) 15.87 Figure 4 n! 31. If = (n − 1)! then n = 3 (A) 1 (B) 2 (C) 3 (D) 4 (E) 5 ## GO ON TO THE NEXT PAGE 292 PART III / EIGHT PRACTICE TESTS 32. The sides of a triangle are 5, 6, and 7 cm. What is the USE THIS SPACE AS SCRATCH PAPER measure of the angle opposite the 5 cm side? (A) 44.4° (B) 53.8° (C) 57.1° (D) 78.5° (E) 90.0° ## 33. Given the statement “If ∠ABC is a right angle, then it measures 90°,” an indirect proof of the statement could begin with which of the following assumptions? (A) ∠ABC does not measure 90°. (B) ∠ABC measures 90°. (C) ∠ABC is a right angle. (D) ∠ABC is an obtuse angle. (E) ∠ABC measures 30°. (A) −37.3 (B) −0.9 (C) 0.1 (D) 1.9 (E) 35.3 (A) 8 (B) 17 (C) 36 (D) 61 (E) 80 ## 36. A varies inversely as the square of B. What is the effect on B if A is multiplied by 9? (A) It is multiplied by 3. (B) It is multiplied by 9. (C) It is divided by 3. (D) It is divided by 9. 1 (E) It is multiplied by . 81 ## 37. If x(x + 5)(x − 2) > 0, then which of the following is the solution set? (A) −5 < x < 2 (B) x < −5 or 0 < x < 2 (C) x > 2 (D) −5 < x < 0 or x > 2 (E) x < −5 or x > 2 ## GO ON TO THE NEXT PAGE PRACTICE TEST 4 293 38. If log4 (x2 − 5) = 3, then which of the following could USE THIS SPACE AS SCRATCH PAPER equal x? (A) 7.7 (B) 8.3 (C) 4.6 (D) 8 (E) 69 ## 39. If \$5,000 is invested at a rate of 5.8% compound- ed daily, how much will the investment be worth in 3 years? (A) 5,117 (B) 5,597 (C) 5,921 (D) 5,943 (E) 5,950 ## 40. ΔMNO in Figure 5 is an equilateral triangle. What is y the slope of segment MN? M 3 (A) − 3 (B) −1 (C) − 3 (D) −2 x 3 O N (6,0) (E) 3 41. A new computer does a calculations in b hours, and an old computer does c calculations in d minutes. If the two computers work together, how many calcu- lations do they perform in m minutes? ⎛ ac ⎞ (A) m ⎝ 60 bd ⎠ Figure 5 ⎛a c⎞ (B) 60m ⎝ + ⎠ b d ⎛ a c⎞ 60 (C) m ⎝ + ⎠ b d ⎛a c⎞ (D) m ⎝ + ⎠ b d ⎛ a c⎞ (E) m ⎝ + 60 b d ⎠ ## 42. If the 15th term of an arithmetic sequence is 120, and the 30th term is 270, then what is the first term of the sequence? (A) −30 (B) −20 (C) 10 (D) 20 (E) 30 GO ON TO THE NEXT PAGE 294 PART III / EIGHT PRACTICE TESTS 43. If −1 is a zero of the function f (x) = 2x3 + 3x2 − USE THIS SPACE AS SCRATCH PAPER 20x − 21, then what are the other zeroes? (A) 1 and 3 (B) −3 and 3 7 (C) − and 3 2 (D) −3 and 1 7 (E) − and 1 and 3 2 x 2 + 7x + 6 44. If f ( x ) = , what value does the function x 2 − 2x − 3 approach as x approaches −1? 7 (A) − 2 5 (B) − 4 (C) −1 (D) −2 1 (E) − 2 2x2 − 12x + 8? (A) x − 1 (B) x + 2 (C) x − 2i (D) x2 + 4 (E) x − 2 8 46. ∑ (−1) 3k = k k=0 (A) −108 (B) −12 (C) 84 (D) 12 (E) 108 ## 47. A line has parametric equations x = 6t − 2, and y = −8 + 4t. Given t is the parameter, what is the slope of the line? 
3 (A) 28 2 (B) 3 3 (C) 2 28 (D) 3 4 (E) 3 GO ON TO THE NEXT PAGE PRACTICE TEST 4 295 (A) 5 2 (B) 2 2 (C) 2 5 (D) 3 (E) 6 ## 49. In how many ways can 10 people be divided into two groups if one group has 6 people and the other has 4? (A) 60 (B) 120 (C) 210 (D) 720 (E) 5,040 ## 50. In how many ways can the letters of the word TEACH be arranged using all of the letters? (A) 15 (B) 30 (C) 60 (D) 120 (E) 720 S T O P IF YOU FINISH BEFORE TIME IS CALLED, YOU MAY CHECK YOUR WORK ON THIS TEST ONLY. DO NOT TURN TO ANY OTHER TEST IN THIS BOOK. PRACTICE TEST 4 297 1. D 11. E 21. B 31. C 41. E ## 10. A 20. D 30. E 40. C 50. D 1. D Take the fifth root of both sides of the equa- 4. B Because a + b + 9i = 6 + (2a − b)i, a + b = 6 and tion to solve for x. 2a − b = 9. Set up a system and use the linear combi- nation method to solve for a and b. x5 = 68. a+b = 6 x5 = 1, 679, 616. + 2a − b = 9 x = (1, 679, 616 ) 5 . 1 3 a + 0 b = 15. x ≈ 17.58. a = 5. 2. C Given f (x) = x2 + 3x, 5 + b = 6, so b = 1. f (3 x) = x2 + 3 x, 5. A Let h = the height of the flagpole. f (3 x) = (3 x)2 + 3(3 x), h f (3 x) = 9 x2 + 9 x. tan 42º = . 20 3. A The graph of the inverse of a function is the h = 20(tan 42º ) ≈ 18 feet. graph of the function reflected over the line y = x. If (x, y) is a point on f, then (y, x), the reflection of the point over the line y = x, is on the graph of f −1. 298 PART III / EIGHT PRACTICE TESTS ## 6. B Think of sine either in terms of the opposite leg When x = −2, − 4 + [2 − (−2)] = −2. − 4 + 2 = −2. and hypotenuse of a right triangle or in terms of the π When x = −7, − 4 + [2 − (−7)] = −7. − 4 + 3 ≠ −7. point (x, y) and r of a unit circle. Because < θ < π, 2 ⎛ sin θ ⎞ x = −7 is not a solution of the original equation, so θ lies in quadrant II, and the tangent ⎝ must be cos θ ⎠ x = −2 is the only answer. negative. 10. A 8 y sin θ = = 17 r log 3 x + 2 log 3 x = 4. Because r = x2 + y2 ⬊ 3 log 3 x = 4. 4 log 3 x = . 17 = x2 + 82 . 3 x = 15. 4 3 3 = x. y 8 x = 4.33. tan θ = − =− . x 15 11. E The graph of f (x) = x2 + 3 is a parabola with ver- 7. E The line passes through the points (4, 0) and tex (0, 3) and concave up. Because the domain is spec- −3 − 0 3 ified, the curve has a beginning and an ending point. (0, −3). The slope of the line is m = = . 0−4 4 y Because the y-intercept is given, you can easily write 12 the equation in slope-intercept form. 10 y = mx + b. 8 3 6 y= x − 3. 4 4 3 x − y = 3. 2 4 x –12 –10 –8 –6 –4 –2 2 4 6 8 10 12 ## means it is symmetric with respect to the origin. –4 Graph G(x) = 2 sin x on your calculator to determine –6 that it does, in fact, have origin symmetry. –8 ## 9. D Isolate the radical expression and square both –10 sides of the equation to solve for x. –12 −4 + 2 − x = x. When x = −1, y = 4, and when x = 3, y = 12. The range 2 − x = x + 4. is the set of all possible y values, so don’t forget to 2 − x = x2 + 8 x + 16. include the vertex whose y value is less than 4. The range is 3 ≤ y ≤ 12. 0 = x2 + 9 x + 14. 12. B The surface area of the prism is: 0 = ( x + 2)( x + 7). SA = 2(3)(4) + 2(3)(8) + 2(4)(8) = 136 cm2 . x = −7 or − 2. Squaring both sides of the equation may introduce The surface area of the cube is given by the formula extraneous roots, however, so check the two solutions SA = 6e2, where e = the length of an edge of the cube. in the original equation. 136 = 6 e2 . 22.67 = e2 . e = 4.8 cm. PRACTICE TEST 4 299 13. B The maximum value of the function is the 19. C Use the formula s = rθ, where s = the arc length y-coordinate of the parabola’s vertex. 
For the function and r = the radius of the circle. Convert 210° to radian f (x) = 4 − (x + 1)2, the vertex is (−1, 4). (You can check measure first. this by graphing the parabola on your graphing calcu- lator.) The maximum value is, therefore, 4. ⎛ π ⎞ ⎛ 7π ⎞ 180 ⎠ ⎝ 6 ⎠ An alternate way of solving for the maximum is to find b (−2) Now, solve for the arc length: the y-value when x = − . In this case, x= − = −1, 2a 2(−1) ⎛ 7π ⎞ so y = 4 − (−1 + 1)2 = 4. s = 6 ⎝ ⎠ = 7π cm. 6 ## 14. B 20. D The denominator cannot equal zero and the 12 C8 = . 8!(12 − 8)! 16 − x2 > 0. 9 × 10 × 11 × 12 = . − x2 > −16. 1× 2 × 3 × 4 x2 > 16. = 495. −4 < x < 4 15. C The mean is the sum of the data divided by the number of terms. 21. B This problem can be done quickly and with lit- (0 + 1 + 2 + 2 + 1 + 1 + 0 + 2 + 0 + 1) tle work if you recall that the composition of a function = 10 and its inverse function, f−1( f (x)) and f ( f−1(x)), equal x. 10 f −1 ( f (5)) = 5. = 1. 10 22. B 16. D Because the polar coordinates are (2, π), (r, θ) = (2, π). ( x2 + y2 ) = (4 cos θ)2 + (4 sin θ)2 x = r cos θ = 2 cos π = 2(−1) = −2. = 16(cos2 θ + sin 2 θ) y = r sin θ = 2 sin π = 2(0) = 0. Recognize that you can use one of the Pythagorean Iden- The rectangular coordinates are (−2, 0). tities, cos2 θ + sin2 θ = 1, to simplify the expression. 17. E The graph of y = 3 sin x + 1 is the graph of y = sin x shifted up 1 unit with an amplitude of 3. The 16(cos2 θ + sin 2 θ) = 16(1) = 4. minimum value occurs at the point where x = . The 23. A The sum of the roots is: 8 + i + 8 − i = 16. 2 The product of the roots is: (8 + i)(8 − i) = 64 − i2 = 65. y-coordinate at that point is −2. The quadratic equation is, therefore, given by the 18. A Let s = the sum of the scores of Valerie’s first equation: three tests. a[x2 − (sum of the roots) x + (product of the roots)] = 0. s = 89. a( x2 − 16 x + 65) = 0. 3 s = 267. Setting a equal to 1 results in one possible answer: 267 + 81 x2 − 16 x + 65 = 0. Valerie’s new average is = 87%. 4 300 PART III / EIGHT PRACTICE TESTS ## 24. E Because the circle is a unit circle, the coordi- 29. B nates of A are (cos 30°, sin 30°). This can be simplified 5 y ⎛ 3 1⎞ tan θ = = . to ⎜⎝ , ⎟. 12 x 2 2⎠ x cos θ = where r = ( x2 + y2 ). If you don’t know what the cosine and sine of 30° r equal, let (x, y) be the coordinates of A, and draw a First solve for r to get: right triangle with legs of length x and y. The triangle is a 30°-60°-90° triangle, so use the ratios of the sides r = ( x2 + y2 ) = (122 + 52 ) = 13. of this special right triangle to determine that the ⎛ 3 1⎞ 12 coordinates of point A are ⎜⎝ , ⎟. Therefore, cos θ = ± . 2 2⎠ 13 ## 25. C 30. E Because B is the midpoint of the edge of the cube, use the Pythagorean Theorem to determine the − x4 − 6 x3 − 9 x2 = 0. measure of AB and BC. − x2 ( x2 + 6 x + 9) = 0. AB = BC = 4 2 + 22 = 20 = 2 5. − x ( x + 3) = 0. 2 2 ## The remaining side, AC, is the hypotenuse of a right x = 0 and x = −3. triangle with legs of lengths 4 and 4 2. 26. D Using right triangle trigonometry to determine values for the three trigonometric functions. AC = 42 + (4 2 )2 = 48 = 4 3 cos A = = . The perimeter of hypotenuse c Δ ABC = 2 5 + 2 5 + 4 3 = 15.87 cm. cot B = = . opposite b 31. C 1 hypotenuse c csc A = = = . n! sin A opposite a = ( n − 1)! 3 b ⎛ a⎞ n! cos A cot B ⎝ ⎠ = 3. = c b . ( n − 1)! csc A c a n = 3. a 32. A The Law of Cosines states: c2 = a2 + b2 − 2ab a2 = c = 2. cos ∠C. c c a 52 = 62 + 72 − 2(6)(7) cos ∠C, where C is the angle oppo- site the 5-cm side. 27. 
D Rotating a polygon 30° clockwise, followed by 110° counterclockwise, followed by 15° clockwise all 25 = 85 − 84 cos ∠ C. about the same center of rotation is equivalent to rotat- ing it 30 + (−110) + (15) = −65°. 65° counterclockwise is −60 = −84 cos ∠ C. the correct answer. ⎛ 60 ⎞ cos −1 ⎝ ⎠ = 44.4° . 5 84 28. C Solve the inequality for y to get y > − x. Then, 2 5 graph the linear equation y = − x . The solution to 2 the inequality is the shaded area above the line, and that region falls in quadrants I, II, and IV. PRACTICE TEST 4 301 33. A “∠ABC does not measure 90°” is the negation 40. C Because ΔMNO is equilateral, you can break it of the conclusion of the given statement. It is the cor- into two 30°–60°–90° right triangles. The x-coordinate rect assumption to use to begin an indirect proof. of point M is the midpoint of ON, which is 3. The y-coordinate of point M can be determined by using 34. E Take the log of both sides of the equation to the ratios of the sides of a 30°–60°–90° triangle. The side solve for k. opposite the 30° angle is 3, so the side opposite the log(8 k + 2 ) = log(9 k ). 60° angle is 3 3 . Point M, therefore, has coordinates ( k + 2)log 8 = k log 9. (3, 3 3 ). ⎛ log 9 ⎞ 3 k + 2 = k⎜ . ⎝ log 8 ⎟⎠ The slope of MN is −3 3 = − 3. k + 2 = 1.0557 k. 41. E The new computer does a calculations in k = 35.3. a b hours, so it does calculations in one minute. Add 60 b 35. B the individual rates together and multiply their sum by m minutes. 13 a = 13 ( a ) = ( 13 )(4.718) ⎛ a c⎞ m⎝ + . = 17. 60 b d ⎠ 36. C A and B are inversely proportional. When A is 42. B Because the 15th term of an arithmetic sequence is 120 and the 30th term is 270, the common ratio multiplied by 9, B is divided by 9. Answer C is the 270 − 120 150 correct answer choice. between consecutive terms is = = 10. 30 − 15 15 37. D The critical points of the inequality x(x + 5) an = a1 + ( n − 1) d. (x − 2) > 0 are x = 0, −5, and 2. Evaluate the 4 intervals created by these points by determining if the inequality 120 = a1 + (15 − 1)10. is satisfied on each interval. −5 < x < 0 or x > 2 is the −20 = a1. 43. C If x = −1 is a zero of the function, then x + 1 is a 38. B If log4 (x2 − 5) = 3, then 43 = x2 − 5. factor of the polynomial. Use either long division or syn- 64 = x2 − 5. thetic division to determine that (2x3 + 3x2 − 20x − 21) ÷ (x + 1) = 2x2 + x − 21. 69 = x2 . 2 x2 + x − 21 = 0. x = 8.3. (2 x + 7)( x − 3) = 0. nt ⎛ r⎞ 39. E A = P ⎝1 + ⎠ , where n is the number of times 7 n x=− or x = 3. the investment is compounded per year. 2 365(3) 44. B Factor the numerator and denominator. Then, ⎛ 0.058 ⎞ simplify the expression and evaluate it when x = −1. A = 5, 000 ⎝1 + . 365 ⎠ x2 + 7 x + 6 ( x + 6)( x + 1) A = 5, 000(1.0001589)1095. f ( x) = = x2 − 2 x − 3 ( x − 3)( x + 1) A ≈ 5, 950. ( x + 6) = . ( x − 3) ( x + 6) 5 When x = −1, =− . ( x − 3) 4 302 PART III / EIGHT PRACTICE TESTS 45. E One way to solve this problem is to verify that 48. A Recall that the absolute value of a complex if x − a is a factor of the polynomial, then a is a zero. number is given by: a + bi = ( a2 + b2 ). f (1) = (1)5 + (1)3 + 2(1)2 − 12(1) + 8 = 0. 7+i = (72 + 12 ) = 50 = 5 2 . f (−2) = (−2) + (−2) + 2(−2) − 12(−2) + 8 = 0. 5 3 2 49. C Choosing 6 people out of the 12 results in the f (2i) = (2i)5 + (2i)3 + 2(2i)2 − 12(2i) + 8 = 0. following: f (−2i) = (−2i)5 + (−2i)3 + 2(−2 2i)2 − 12(−2i) + 8 = 0. 10! 10! 10 C6 = = 6!(10 − 6)! 6! 4! f (2) = (2)5 + (2)3 + 2(2)2 − 12(2) + 8 = 32. 
10C6 = (7 × 8 × 9 × 10)/(1 × 2 × 3 × 4) = 210. Note that once the 6 members are chosen, the remaining 4 people are automatically placed in the second group. Note also, to determine if x^2 + 4 is a factor of the polynomial, check that x − 2i is a factor, because (x − 2i)(x + 2i) = x^2 + 4; f(2) results in a remainder of 32, so x − 2 is not a factor of the polynomial.

46. D Substitute k = 0, 1, 2, . . ., 8 into the summation to get: 0 − 3 + 6 − 9 + 12 − 15 + 18 − 21 + 24 = 12.

47. B Because x = 6t − 2, t = (x + 2)/6. Substitute this value into the second equation to get: y = −8 + 4((x + 2)/6), so 3y = −24 + 2x + 4, and y = (2/3)x − 20/3. The slope of the resulting line is 2/3.

50. D Find the number of permutations of five letters taken five at a time: 5! = 5 × 4 × 3 × 2 × 1 = 120.

## DIAGNOSE YOUR STRENGTHS AND WEAKNESSES

Check the number of each question answered correctly and "X" the number of each question answered incorrectly.

[Topic table only partially recovered: rows of 9, 2, 6, 7, 15 and 2 questions, with category labels including Geometry and Statistics, and Probability]

Numbers and Operations (9 questions): 4 14 33 42 44 46 48 49 50. Total Number Correct: ___________________________

Raw score: ___________________________ − 1/4 (_____________________________) = ________________

Compare your raw score with the approximate SAT Subject Test score below:

## SAT Subject Test Raw Score / Approximate Score
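The worked solutions survive extraction only roughly; as a sanity check, two of them re-verify numerically (my addition, not part of the book):

```python
# Re-checking two worked answers above (my addition, not from the book)

# Q1: x^5 = 6^8, so x = (6^8)^(1/5)
print(round((6 ** 8) ** (1 / 5), 2))                  # 17.58 -> answer D

# Q39: $5,000 at 5.8%, compounded daily for 3 years
print(round(5000 * (1 + 0.058 / 365) ** (365 * 3)))   # 5950 -> answer E
```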
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9069194793701172, "perplexity": 2081.4125274617245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988923.22/warc/CC-MAIN-20210508181551-20210508211551-00549.warc.gz"}
http://mathhelpforum.com/trigonometry/31502-very-confusing-homework.html
# Math Help - very confusing homework

1. ## very confusing homework

What is the approximate area of a segment of a circle with a radius of 12 meters if the length of the chord is 20 meters? Round your answer to the nearest whole number. I started but I could not solve for theta, given as: 20 = 24 sin(theta/2)

2. $20 = 24\sin\left(\frac {\theta}{2}\right)$ $\sin\left(\frac {\theta}{2}\right) = \frac {5}{6}$ Where did you get that formula from?

3. Notice that I have divided the isosceles triangle in half, creating two right angle triangles. We know that the hypotenuse is 12 and that one side is 10, so all we have to do is find the angle we need and multiply it by 2. So we have: $\sin {\theta} = \frac {10}{12}$ $\sin {\theta} = \frac {5}{6}$ Use the arcsin function to solve for $\theta$, double it to get the central angle, and then plug that into this formula: $\frac {1}{2}r^2\left(\frac {\pi}{180}\theta - \sin {\theta}\right)$ if you are using degrees, or $\frac {1}{2}r^2\left(\theta - \sin {\theta}\right)$ if you are using radians.
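Following the steps in the thread numerically (my check, not one of the original replies):

```python
import math

r, chord = 12.0, 20.0
theta = 2 * math.asin((chord / 2) / r)          # central angle, in radians
area = 0.5 * r**2 * (theta - math.sin(theta))   # circular segment area
print(area)                                     # ~75.5, i.e. 76 square meters rounded
```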
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9493371844291687, "perplexity": 234.60277244425473}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988860.88/warc/CC-MAIN-20150728002308-00189-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.omnimaga.org/profile/?area=showposts;u=32113
Show Posts This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to. Messages - ImZealot Pages: [1] 2 1 Miscellaneous / Re: What is your avatar? « on: November 26, 2014, 12:47:37 pm » it speaks for itself (made by me ) 2 Miscellaneous / Re: Post your game library (digital or not) « on: November 24, 2014, 04:11:35 pm » Found something: http://www.collectorz.com/game/game_library.php EDIT: It's paid >-< 3 Miscellaneous / Re: Post your game library (digital or not) « on: November 24, 2014, 03:26:30 pm » Is there a good way to store and manage a physical game database? I'd like to not pull hair when I'm moving to buy some games around. I'd say Microsoft Excel/whatever the equivalent is in LibreOffice will do the job, but I guess specialized software for this might exist. That's a great idea, if someone finds any specific software for it say something! 4 Miscellaneous / Re: Post your game library (digital or not) « on: November 22, 2014, 10:29:12 pm » I need to post my library at one point, but I have like an hundred of game cartridges and CD-based ones. I had a picture of my collection a while ago but I traded in some stuff afterward and I planned to do so again in the near future, so I might go with a text-based list. I don't have much stuff on Steam, though, and two things listed there are not even games anyway. Yeah, I gotta do that to my game library too. So many GB and GBA cartridges... And old (DOS) PC CD's... My calculator game library: http://clrdraw.weebly.com/ My favorite Xbox games: Portal 2, Forza Motorsport 4, and Birds of Steel. I have a few others but I don't play them that much. Awesome calc games there! Thanks man ^^ Here's my Steam library as of 6/22/14 My most played games are EU IV, RTW and RTW II And I thought I had a lot of games on Steam Ô~ó That's an awesome list, gotta have an awesome rig too! I can't play half of my games cause my PC sucks... 5 Introduce Yourself! / Re: Hi, I'm André! :D « on: November 22, 2014, 10:24:48 pm » http://www.losethegame.com/ Oh god. Who made this?! It is... EVIL! And awesome at the same time! Well, I've been winning for almost 18 years. Just lost it. You don't win. You just don't lose Right, sorry. I just lost ;-; 6 Miscellaneous / Re: Post your desktop « on: November 22, 2014, 10:23:17 pm » I see that you seem to be a FNAF fan with the background and both games/demos Yeah! Hardcore fan actually! I played the 1st one until Night 6, can't get past it though... I'm at Night 6 on the sequel right now, too damn hard to cope with all the animatronics I never got the guts to play the demo for the first one at my friends place I can't get it because my main platforms are calcs and a mac. Ohh... That sucks man :c 7 Super Smash Bros. Open / Re: [Axe] Super Smash Bros. Open « on: November 21, 2014, 10:55:05 pm » UPDATE • Now with the modified version of ClrDraw's titlescreen • Facelifting for the character selection menu • Now displays the "name" of the character when the map zooms out so that you always know where you are • Rectangles are drawn more often. I still found one occurence of a missing rectangle but all other ones were gone so that's an improvement in my opinion. And if you want to know how much time I spent on this, check the source code out, I didn't remove all my attempts at correcting that problem (kept them in comments in case I break everything when trying something else) edit notice the apparition of the new "SSBODATA" appvar. 
For now it only contains the titlescreen but it will probably contain more. You probably want to transfer it if you don't want to have some garbled mess displayed when launching the game. Nice update man! Looking forward to see the game evolve more and more! Still didn't figure out how to play linked, but I will, and I'll get someone to play with me! 8 Miscellaneous / Post your game library (digital or not) « on: November 21, 2014, 10:50:49 pm » Hey guys, so I saw Zera's Topic "Post your desktop" kind-of forum game. I want you guys to share your game library, either it being your digital collection or physical. Proof is not needed but its nice to see it! Let's share each other's games, find common interests, and suggest new games to each other! My Steam game library: My Origin game library: (just got that "Crusader" game, Origin was giving it "On the house"... Yeah.) My 'physical' game library: Oh boy, if I were to count and write down all my PS2, PS3, Wii and PC games I'd be here for a while. I might take pictures of them and post it here, MIGHT. 9 Introduce Yourself! / Re: Hi, I'm André! :D « on: November 21, 2014, 10:34:06 pm » http://www.losethegame.com/ Oh god. Who made this?! It is... EVIL! And awesome at the same time! Well, I've been winning for almost 18 years. Just lost it. 10 Miscellaneous / Re: Post your desktop « on: November 21, 2014, 10:29:58 pm » Yeah I got Trackmania Built to Race for the Wii. I didn't open the game package yet, though, because I got it for $10 then discovered it's an ultra rare Wii game. It's like$30 used over here now . Maybe the next Earthbound or Hagane? So it might end up being resold later if I need money. I guess it is rare, never ever heard of it I see that you seem to be a FNAF fan with the background and both games/demos Yeah! Hardcore fan actually! I played the 1st one until Night 6, can't get past it though... I'm at Night 6 on the sequel right now, too damn hard to cope with all the animatronics Aaaayyyy! SRB2! YEAH! Brings back so many freaking memories... <3 11 Humour and Jokes / Re: Important life algorithms « on: November 20, 2014, 05:33:34 pm » That moment while you really wish you got these jokes, and you kinda do, but you don't cause you can't code, and you wish you knew simple code like this. 12 Miscellaneous / Re: Post your desktop « on: November 20, 2014, 05:29:10 pm » virus.txt I was just showing my friend a sort of simple prank virus, already tricked many people and made 4 computers at my school getting taken out of the 'study room' to be formatted.  woot EDIT: Also "virus.bat" Trackmania is fun ^.^ Haven't played it in a while actually! But yeah, it is! 13 Introduce Yourself! / Re: Hi, I'm André! :D « on: November 20, 2014, 05:26:49 pm » Hi Zealot, welcome to Omni! People rarely know where it is from, glad to see someone that knows the game. I'm really bad at the game, though, since I play causally (in SC2 I only play unranked). I'm not getting that one The game? As in the game of life? ;-; Thanks for the welcome 14 Miscellaneous / Re: Post your desktop « on: November 20, 2014, 10:30:55 am » My laptop's desktop: Will post my PC's desktop when I get home 15 Introduce Yourself! / Re: Hi, I'm André! :D « on: November 20, 2014, 10:21:05 am » Do you just cannon-rush him or 6-pool every game? When I introduce someone to Starcraft 2 I have the bad habit to 6-pool in the first game Nah, and we play the 1st one, I get it would be 4-pool :b But I cannon rushed him a couple of times Pages: [1] 2
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18713419139385223, "perplexity": 4360.898649082782}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655883439.15/warc/CC-MAIN-20200703215640-20200704005640-00325.warc.gz"}
https://scholars.ncu.edu.tw/zh/publications/in-situ-and-remotely-sensed-observations-of-biomass-burning-aeros
# In-situ and remotely-sensed observations of biomass burning aerosols at Doi Ang Khang, Thailand during 7-SEAS/BASELInE 2015

Andrew M. Sayer, N. Christina Hsu, Ta Chih Hsiao, Peter Pantina, Ferret Kuo, Chang Feng Ou-Yang, Brent N. Holben, Serm Janjai, Somporn Chantara, Shen Hsiang Wang, Adrian M. Loftus, Neng Huei Lin, Si Chee Tsay

18 citations (Scopus)

## Abstract

The spring 2015 deployment of a suite of instrumentation at Doi Ang Khang (DAK) in northwestern Thailand enabled the characterization of air masses containing smoke aerosols from burning predominantly in Myanmar. Aerosol Robotic Network (AERONET) Sun photometer data were used to validate Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 ‘Deep Blue’ aerosol optical depth (AOD) retrievals; MODIS Terra and Aqua provided results of similar quality, with correlation coefficients of 0.93–0.94 and similar agreement within expected uncertainties to global-average performance. Scattering and absorption measurements were used to compare surface and total column aerosol single scatter albedo (SSA); while the two were well-correlated, and showed consistent positive relationships with moisture (increasing SSA through the season as surface relative humidity and total columnar water vapor increased), in-situ surface-level SSA was nevertheless significantly lower by 0.12–0.17. This could be related to vertical heterogeneity and/or instrumental issues. DAK is at ~1,500 m above sea level in heterogeneous terrain, and the resulting strong diurnal variability in planetary boundary layer depth above the site leads to high temporal variability in both surface and column measurements, and acts as a controlling factor to the ratio between surface particulate matter (PM) levels and column AOD. In contrast, while some hygroscopic effects were observed relating to aerosol particle size and Ångström exponent, relative humidity variations appear to be less important for the PM:AOD ratio here.

Original language: English
Pages: 2786–2801 (16 pages)
Journal: Aerosol and Air Quality Research, Volume 16, Issue 11
https://doi.org/10.4209/aaqr.2015.08.0500
Published: November 2016
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9221005439758301, "perplexity": 28623.21196651141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517485.8/warc/CC-MAIN-20220517130706-20220517160706-00250.warc.gz"}
https://docs.eyesopen.com/toolkits/cpp/molproptk/OEMolPropFunctions/OEGetLongestUnbranchedHeavyAtomsChain.html
# OEGetLongestUnbranchedHeavyAtomsChain¶ unsigned int OEGetLongestUnbranchedHeavyAtomsChain(const OEChem::OEMolBase &mol) Returns the size of the longest chain of heavy atoms in a molecule. This is defined to be the maximum number of connected, unbranched, and non-ring heavy atoms. An unbranched atom is a chain atom with maximum two connections to other heavy chain atoms. A set of unbranched atoms which are connected together form a chain. A molecule may contain multiple chains which are isolated from each other by non-chain atoms (e.g. ring or branched atoms).
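A short usage sketch (not from this page; it assumes the OpenEye Python bindings expose the same function, under openeye.oemolprop, with this one-argument signature):

```python
from openeye import oechem, oemolprop

mol = oechem.OEGraphMol()
oechem.OESmilesToMol(mol, "CCCCCC")  # n-hexane: a single unbranched 6-atom chain
print(oemolprop.OEGetLongestUnbranchedHeavyAtomsChain(mol))  # expected: 6
```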
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16510668396949768, "perplexity": 1099.8527689970865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541318556.99/warc/CC-MAIN-20191216065654-20191216093654-00318.warc.gz"}
http://cauchy-riemann.blogspot.com/2002/06/
# ...Prove Their Worth... "Problems worthy of attack prove their worth by hitting back." - Piet Hein A kind of running diary and rambling pieces on my struggles with assorted books, classes, and other things, as they happen. You must be pretty bored to be reading this... ## Sunday, June 30, 2002 Hmm. Blogspot seems to be dead. edit: Yup, it's still dead. I promised myself, when I started this page, that I wouldn't put political or social commentary on it. That was because I didn't want it to be like the stereotypical "war-blog", devoted to mindless political masturbation and link propagation. I still hold to that principle, but I'm going to violate it right now, with the intention of it being a one-time kind of thing. I do it because I want to record a prediction. Polling shows something like 9 out of 10 Americans want "under god" in the US pledge of allegiance. The US Congress was already fairly apoplectic over the 9th Circuit Court ruling before the poll data came in. This will lend support to their pandering. The political pressure to overturn the ruling, on the full 9th Circuit and on the Supreme Court when it goes there (as it very likely will, I think), will be immense. If the courts judge upon the constitutional merits of the case (that is, ruling the pledge unconstitutional), the United States will pass a constitutional amendment castrating the first amendment. Or such is my prediction. The whole point of this lengthy posting was to record, on the WWW, said prediction. I hope I'm wrong. ## Saturday, June 29, 2002 The last couple of days, I've been doing quite a bit of thinking about hairy balls. No! Wait! Don't leave! It's not what it sounds like! I've been trying to get a picture of what 'bundles' are like. My best effort so far goes like this. Manifolds are too abstract to deal with intuitively for me. So let's just use a simple accessible manifold, just a sphere in 3D (aka a ball), as a stand-in for all manifolds. Now, to get a tangent bundle corresponding to the manifold, we have to take the set of all the tangent spaces to all the points on the manifold, more or less. That's too abstract. But the tangent spaces are vector spaces, and vectors are easy to visualize - they're just arrows! More, tangent bundles are apparently just important instances of 'fiber bundles'. That is, bundles are made up of abstract things called fibers, which end up corresponding to the individual tangent vector spaces to each point on the manifold. So. A nice picture for all this is that our balls sprout hair*. All kinds of hair - neat, frizzled, dreaded, etc. The hair as a whole is now a picture corresponding to a 'tangent bundle' to the manifold, with the hairs being the 'fibers' making up the bundle. Sensible, right? So, we've got tangent bundles pictured as hairy balls now. Why the hell did we want to do that? Well, I don't really know, because I'm not yet sure how one uses tangent bundles. But, I've read that if you want to deal with differentiational thingies with manifolds, you end up dealing with bundles. That covers stuff like 'velocities' and also far more bizarre things. That actually fits nicely with our image of hairy balls. Say you take such a hairy ball, and give it a good shake. Run about the room waving it in the air. If the hairy balls are sufficiently hairy, you can even outfit a cheerleading squad with them, and have them do some cheers. Now, instead of watching the cheerleaders, try watching the hair on the balls. I know this is really hard, but just try it. What's the hair doing? 
Well, it's swishing all over the place - and if the balls are sufficiently hairy, you can only see the hair - not the balls. And the swishing is not random - it depends on just what the balls are doing - how they're accelerating, or whether they are deforming under there, or whatever. So by looking at the hair of the balls, we can get lots of good information about the balls. What's more, the hair makes life a lot easier - it's a lot easier to notice a hairy ball accelerate and jitter about than a naked one. If we imagine ourselves in a huge empty room, with our viewpoint moving with the ball, it's going to be hard to see how the ball is moving if it's bald. But if it's hairy, we're in like Flynn - we just watch the hair. So that's why tangent bundles are useful. I think.

Hey! Don't look at me like that. I didn't write all that just because it was an excuse to talk about hairy balls and cheerleaders in the same sentence. No siree Bob! And it's not my fault your mind is in the gutter! Kids these days...

* - As seen on TV! Yours for just $19.95 plus shipping and handling, but only if you call now! "I'm not just the company president - I'm a customer!"
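The picture is cheap to fake on a computer, too. Here's a throwaway numpy sketch of 'combing' a hairy ball - entirely my own toy, not anything out of Spivak. Each 'hair' is the projection of a constant vector field onto the tangent plane at a point, i.e. one vector sitting in one fiber of the tangent bundle:

```python
import numpy as np

# Comb a hairy ball: at sample points p on the unit sphere, grow a "hair"
# by projecting a constant vector field (pointing along z) onto the
# tangent plane at p. Each hair is a vector in the tangent space at p,
# i.e. a point of one fiber of the tangent bundle.
rng = np.random.default_rng(0)
p = rng.normal(size=(6, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)   # six points on the ball

z = np.array([0.0, 0.0, 1.0])
hairs = z - (p @ z)[:, None] * p                # strip off the normal part

print(np.abs(np.sum(hairs * p, axis=1)).max())  # ~1e-16: every hair is tangent

north = np.array([0.0, 0.0, 1.0])
print(np.linalg.norm(z - (north @ z) * north))  # 0.0: this comb dies at the pole
```

(That last zero is the famous hairy ball theorem peeking through: comb the whole ball smoothly and the hair has to vanish somewhere.)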
## Thursday, June 27, 2002

Subtle is the Lord. It's the title of a book I've got to read someday, and it's also the first thing (after a suitably earthy expletive) that came to my little atheist mind today when I learned that the right-hand-rule of electromagnetism is a crock. According to sources I trust, if one does electromagnetism properly, using differential forms and other relatively modern mathematical objects, the right-hand rule isn't needed. The right-hand-rule has always bothered me - it seemed capriciously random, and while I knew you could get an equivalent picture using a left-hand-rule, the very idea that one had to pick one bothered me. I have got to learn about this stuff.

But, I can't pick a book to use. Right now, it's between Frankel's Geometry of Physics, Baylis's Electrodynamics: A Modern Approach (uses Clifford algebra to do stuff), and John Baez's Knots, Gauge Fields, and Quantum Gravity. The book I really want to use is Baez's, but I'm scared, because the title is very intimidating, I've never actually seen it, and I'm afraid I'd immediately drown in it. But then again, I've heard the prerequisites aren't too heavy. Damn. Hmm. Heh, I might have to pester Prof. Baez in a newsgroup for the answer - it was one of his posts that clued me into all this in the first place...

## Tuesday, June 25, 2002

I've caught the flu, and I feel like a used kleenex. What's more, it's likely an imported flu (from Russia, most likely). Joy.

In more pleasant news, I was reading Spivak while on The Throne, and I think I'm actually not completely off my rocker in my initial impression of bundles. They are, apparently, 'just' a fancy way of saying you've got a tangent vector space to a manifold. The 'tangent bundle' turns out to be somewhat fancier still - it's something like a set of equivalence classes of bundles on the manifold, or something. I haven't digested that part yet. Hmm... I think I'll write up my digested version of tangent bundles once I work through them, for posterity and humor value.

Strichartz's book is neat. So is Needham's book (except I'd use a stronger descriptor than neat -- perhaps comic-book-guy-style "Best. Math. Book. Ever."?). I'm done with Needham's fantastic chapter on winding numbers and topology. The short-range plan is now to finish Strichartz's chapter on differentiation, and Spivak's chapter on tangent bundles. Then, I'm going to get my exercise on, and try to crank through some problem sets in all three books before continuing in any of them.

Exercises are irreplaceable in firming up one's grasp of material, and at least the ones in Needham are actually fun. Sadly, exercises are also hard work, and I'm a lazy bum. Motivation to do the 'homework' is one of the things that is especially challenging in autodidacticism. On the other hand, in my specific case, there is a silver lining. See, I get to pick what problems I do, when I do them, and so on. Not some teacher with masochistic tendencies and an undying devotion to engineering plug-and-chug busywork. I pick the good stuff (and yes, I can see this being a problem, as it's questionable whether I can tell what the good stuff is). And I don't get graded. And I'm not competing with anyone for a curve or anything like that. Which all makes the idea of self-assigned work easier to swallow and stick with, provided the work is interesting, and I can feel myself making progress. I've a nasty character flaw in getting easily frustrated.

## Monday, June 24, 2002

I'm switching to a strict regimen of Strichartz and Needham for the next few days. This was brought on by the fact that I can feel my understanding of Spivak slipping away from me faster than a radical libertarian's screed on government slides off a slippery slope*. Basically, I'm just not grasping tangent bundles in any meaningful way. I can't just read the proofs anymore, and I just stare at them glassy-eyed. So I think I need to do some exercises, and I also need a break from Spivak.

I'm beginning to like Strichartz now that he's talking about things I've seen before, such as continuity of functions and differentiation. I can see him carefully motivating the discussion, stressing common themes in proofs and definitions, and so on. It all leads me to suspect I've been too harsh on his book, and that if I had seen things about the construction of the reals and topology and so on before I'd read his book, I would have enjoyed those sections as well.

I ate far more pasta than is advisable last night, so I didn't go jogging.

* - Ooh, baby, just look at that lovely alliteration! Bow down before my display of literary pretentiousness!

## Saturday, June 22, 2002

I saw Mr. Thumpy tonight. Or at least, so suggests some circumstantial evidence. Whatever I saw was greyish, was about the size of a rabbit, and made the same kind of noise while running that Mr. Thumpy's made when I've been able to positively identify him. Squirrels aren't generally out that late, and it didn't seem like a cat. So, most probably it was a rabbit, and that means Mr. Thumpy.

Unfortunately, as I was observing Mr. Thumpy, I was almost hit by a couple of thirteen-year-old fucktards on skateboards. This at 11 pm, in almost complete darkness, on a footpath. The mind boggles.

In math news, I'm currently reading Strichartz. I'm beginning to think my impression of his Way of Analysis is wrong. More on that later.

Heh. Reed College is unique among the school webpages I've hit in that their 'look at our perfect pretty students' pictures are, well, not. This suggests that either a) Reed only has unattractive people or b) Reed is actually being honest and more-or-less randomly picking average representatives of the student body. I kinda like Reed, actually.
I considered applying there back in high school, but decided against it because a) I didn't think I'd get in and b) I'm probably a 'liberal', but I'm not that liberal, so their reputation bothered me a bit and c) I don't smoke weed, or, as it happens, anything else and d) I didn't want to pay another 'application fee'. Given that I was unlikely to get into Reed after high school, it's effectively impossible now, as a transfer, with the addition of my scintillating college transcript. Sigh.

I've been doing some surfing around college websites. As I expected, I outright don't qualify for transfer at most of the universities I'd consider attending, and all the ones where I 'technically' may qualify more or less say 'don't bother' given certain stats (such as mine). And I can't say I blame them - I'd do the same if I was sitting in an admissions office. Expected, but still disappointing. Podunk New Age Science University of Jebus*, here I come!

* - School motto: "We award more four-year degrees in BS to pet rabbits than any other school in Ohio!"

Score. I can finally run about a mile in about seven minutes*. This is admittedly pathetic compared to most people my age, but in high school, I 'ran' a mile in fourteen minutes. Now granted, part of that was an extended 'fuck all y'all' to the very idea of a mandatory gym class, but a bigger part of it was that I was simply a slow and stupid dork. Now I'm a faster dork (whose intrinsic intelligence is probably unchanged, though I'm a little wiser thanks to 'life experience', natch). Which is cause for celebration, I think, and I just ate a few grapes and a peach to commemorate the occasion.

In other news. Hmm. Not much else in other news. My first impression of vector bundles: "Hey, neat, but isn't this just a really anal-retentive way of saying 'Houston, we have a tangent vector space'?" I'm sure once I find out more about them that impression will be cause for amusement, but so it goes.

* - The error bars on that are undetermined, because I don't own a stopwatch, and I haven't measured my running path with an odometer. But I think the figure is 'about' right, to within something like a minute and something like a fifth of a mile, pulling some numbers out of a hat.

## Thursday, June 20, 2002

The promised details on Theorem 9, Ch2:

First, some terminology. In differential geometry the objects studied are manifolds. A manifold is just a space that looks like R^n if you look at any little part of it. R^n is just normal flat space, as R is just the real line, and the n stands for its dimension (n = 3 for the space we live in*). In general, a manifold can look like R^n's of differing n's in different areas, but we can ignore that for the most part. There's lots of theorems one can play with that use only that much information, but to get to the 'differential' part of differential geometry, we need some more structure.

For instance, we might want to assign some coordinates to our manifold, M, or at least to a small part of it. To do that, we look at some subset U of M. Then we define some function, x, which, given a point in U, will spit out a point in R^n. Just to be fancy, we call (x, U) a chart on M (a collection of charts covering all of M is an atlas). We can show that this definition makes sense, and meshes with our normal conception of coordinates as just a grid. Armed with this idea, and a few others I'm not going to go into here, we can start doing calculus on manifolds. We can define functions between manifolds, and take their derivatives. This is actually a very neat process.
Let's call our function f, and have it take points in M^n to points in another manifold, A^m. To actually do this, we involve some unspecified charts on both manifolds, say (x, U) on M, and (y, V) on A. Now, remember, x takes M -> R^n. So x^-1 takes R^n -> M. (Same goes for y, of course.) The thing is that it's really pretty simple to define functions that do things to R^n - we've all been doing it since middle school (except we didn't call it R^n back then ...) So, let's stick with what we know, and try to make f work in R^n (or between R^n and R^m - whatever), but also in the process do what we need it to do, which is work with manifolds.

Here's how: It's just y*f*x^-1, with * meaning composition of functions, y, f, and x^-1 being functions (duh), and the whole thing is to be read right to left, as one normally (!) does with composition of functions. So, given a point q in M, we want to get a point f(q) in A. Ok. Start with q's coordinates, x[q] (notice, we're in R^n!). Feed that to x^-1, getting the point q back on the manifold. Now, feed that to f, getting f[q] over in A. And now we feed that to y, getting y[f[q]]. Notice that y is what takes us to coordinates in R^m, which is where we wanted to land! So the composite y*f*x^-1 as a whole eats R^n and shits out R^m - it's a good ol' function, the kind we know how to handle.

Whew. So, look at what we did: we 'hid' the manifolds behind the charts, so that f, written out in coordinates, looks like it's munching on simple tasty things like R^n instead of something bizarre, like manifolds. So while we're lying unscrupulous bastards, we got what we wanted: f really is a function between manifolds. It just so happens it needs to wear blinders to do it, otherwise it would run away in terror.

If you followed that, I hate you and envy you, because it took me several days to get that far.

Now, since we can define these functions, and they're continuous, blah blah blah, we can take their derivatives, and we can talk about 'changing coordinates', and crap like that. We can even define big-ass Jacobians, which are matrices that tend to pop up when you squint carefully at the idea of changing from one set of coordinates to another. And we can play with that matrix, figuring out its 'rank' and other linear-algebraish things. (Note: its rank is going to vary from place to place!)

So, err, that was the introduction. The theorem (Th. 9, Ch2, Spivak's DG) says: Say we've got a function f that takes things from one manifold, M^n, to another, A^m. Say further that it has rank k at the point p in M^n. Then, given p = (a_1, a_2, ..., a_n) in suitable coordinates:

y*f*x^-1[a_1, a_2, ..., a_n] = (a_1, a_2, ..., a_k, phi_(k+1), phi_(k+2), ..., phi_(m))

(the phi's are just numbers you get from f in a certain way, as it turns out, and I've left out the second part of the theorem, which looks the same except for having a bunch of zeros instead of the phi's.)

Now, what that is saying are some things about what actually happens if you stuff one manifold into another: depending on f and p, some of the parts of your points are going to remain fixed and other parts are going to get chewed on. Different points are going to contain different amounts of dietary undigested fiber**, if you will. Hopefully, that makes some amount of sense.

I didn't understand this at all until late last night, when, after having a few drinks at a birthday party (not mine), I was sitting upon the Throne of Power. There, pretty much all of the above hit me.
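Here's the whole y*f*x^-1 dance in throwaway Python, on a manifold small enough to fit in a terminal: the unit circle, with the angle as a chart. The map f (squaring as a complex number, i.e. doubling a point's angle) is defined purely on points of the manifold; the composite is the blinders-wearing version. My own toy, not Spivak's:

```python
import numpy as np

# Chart x: a point p on the unit circle S^1 -> its angle in (-pi, pi].
def x(p):
    return np.arctan2(p[1], p[0])

# x^-1: an angle -> the corresponding point on the circle.
def x_inv(theta):
    return np.array([np.cos(theta), np.sin(theta)])

# f: S^1 -> S^1, squaring as a complex number. It eats and returns
# honest points of the manifold; no coordinates in sight.
def f(p):
    c = complex(p[0], p[1]) ** 2
    return np.array([c.real, c.imag])

# y*f*x^-1, using the same chart (y = x) on the target: R -> R.
def f_in_coordinates(theta):
    return x(f(x_inv(theta)))

print(f_in_coordinates(0.3))   # 0.6: in the chart, f is just theta -> 2*theta
```

In coordinates, f collapses to theta -> 2*theta, which is exactly the lying-unscrupulous-bastards trick above.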
I was stuck on this damn theorem for three days straight, though as I understand now, that was because I didn't correctly grasp some theorems before. Now, I intentionally glossed over quite a few things in the above exposition, there are parts (close to all of them) where I have a suspicion I don't really know what I'm talking about, and there are parts where I'm probably saying something terribly naive and stupid or both. That is the curse and the blessing of studying alone: there's no one to yell "Hey, you bloody stupid arse-scratcher, that's wrong and dumb!"

* - Well, 4, if you want to be a laxative butt-monkey about it.

** - I don't think manifolds are part of the food pyramid, but they should be!

## Wednesday, June 19, 2002

Praise Jebus. I finally have a grasp on Theorem 9 (Chapter 2). That bitch had me tied in knots and has been slapping me around for the last two days, but I think I may finally have it spanked for good*. At the least, I've some leverage. More details forthcoming.

* - The above is not intended to serve as an endorsement of, or even commentary on, BDSM or anything like it. Get your mind out of the gutter!

## Monday, June 17, 2002

I saw Mr. Thumpy today! He was jumping about in the grass in someone's back yard. It was dark, but it was definitely Mr. Thumpy: he was gray, had large ears, and he moved like a rabbit would, in spurts, actually making dull 'thumps' with what I assume to be his rear legs during the initial phases of acceleration.

I also saw the-cat-with-bells-on (I have got to think of a catchier, simpler name for the chap - I see him almost every bloody night, after all). He was inspecting the undercarriage of a parked car (or lying in wait, you never know with cats), and came out to greet me (or to startle me, for you never know...) as I passed. This was fairly close to the close encounter of the first kind with Mr. Thumpy, so I have to hope the-cat-with-bells-on and Mr. Thumpy get along well. They're about the same size, so I'm not sure who is harassing whom, if their relations aren't cordial.

Nothing new to report on the math front. Still reading about winding numbers and topology. Hurray for Brouwer's Fixed Point Theorem!

## Sunday, June 16, 2002

I got chased by a cat last night. I was jogging, and a cat (the one with the cow-bell on his neck) jumped out from under a car, probably mistaking me for a mouse or something. He then jogged after me for a short while, but quickly lost interest and went to investigate a fascinating patch of grass.

In other news, I've read the first chapter of Spivak, and am in the second chapter. The first chapter was not very difficult. The second chapter is harder. I think I need to do the exercises in Ch. 1 to get more comfortable with the material. I'm also continuing my readings in Needham. Still on the chapter about topology and winding numbers. Fun.

## Friday, June 14, 2002

This is amusing. Day before yesterday, when I was reading Strichartz, I found out more than I ever wanted to know about sets. Don't get me wrong, I think I can see the intrinsic interest of the topic, but it just didn't grab me. Anyways, I learned about various obscure things such as boundedness, compactness, and other sundry bits and pieces. So, this evening I'm reading Spivak's Differential Geometry (as I said, I just received my copy today), and lo and behold, there's something like two pages, starting on page 4, of what Spivak merrily terms a "hassle with point-set topology." Starring, specifically, compact sets and boundedness and so on.
It's used to talk about some useful properties of manifolds, which Chapter 1 of Volume I is about. Spivak went and used a few terms I didn't know, but a check of MathWorld cleared it up for me.

I'm still very impressed by Spivak's books. So far at least, the exposition is fun and informative, which is the most important thing. Also, as I've said, the physical quality of the books is very impressive. They have excellent bindings, pretty -- and, more importantly -- easy-to-read typesetting, they're printed on very smooth, slightly glossy (but not to the point of reflecting anything, really) paper and have a better-than-average 'scent'. Also, the cover art is quite striking, inspired by Coleridge's Rime Of The Ancient Mariner, and painted (well, in the original, that is, not on the actual covers) by Spivak himself. I've looked at the older editions the nearest university library has, and by god, what a difference. The old editions were basically photocopies of a typed manuscript, or so they looked. Really, I just can't get over the initial physical impression of the books I got - I was paying thirty bucks a pop, more or less, so I expected some cheapies. Well, they don't look cheap at all. And in case anyone reading this wonders, I've no affiliation with Michael Spivak, Publish or Perish, or anything else. I write what I see. And what I see here, I like.

The first two volumes of Michael Spivak's A Comprehensive Introduction To Differential Geometry arrived today. They are absolutely beautiful. Everything from the pretty artistic covers (by Spivak himself), to the sweet typesetting, to the satisfyingly heavy, glossy smooth paper, to the 'new book' scent, to the exposition (at least in the beginning - I've only read a tiny bit so far). And cheap! Approx. $30 for a hardback textbook (and about $50 for two) is an awesome price! And oh yes, there's an entry for 'pig, yellow - pg 434' in the index. What else could one possibly want? Obviously, I'm a bibliophile. More later. Right now, I want to try to go see Bourne Identity.

## Thursday, June 13, 2002

So I've jumped back to Visual Complex Analysis. Having spent the last few days with Strichartz, the difference is striking. Needham actually gives intuition in spades, motivates his arguments, is fun to read, and all-around blows Strichartz clean out of the water on everything except rigour. And to be honest, I don't give a crap about rigour for rigour's sake. I know, in a vague intellectual way, that rigour is necessary and even useful, but I don't have any feel for it. I definitely need to find a companion book for Strichartz that will have motivational stuff in it. Hopefully, the book by Abbott I mentioned earlier will fit the bill - I'll try to find it in the local book super-store this weekend.

I've temporarily skipped Needham's chapter on non-Euclidean geometry, and I'm reading the chapter after that, on topology and winding numbers. I did this reluctantly, as I've long dreamed of learning about crazy geometries, but I also want to learn about complex integration, for which the topology and winding number stuff is a prerequisite, while the non-Euclidean geometry chapter isn't. However, I can't resist intellectual candy, and so I've made a compromise: I'll read the non-Euclidean chapter during bathroom breaks, as I do some of my best thinking there, as explained in detail in past posts, and I'll read the winding number stuff in my out-of-bathroom time. Fun!

Talking about fun, the winding numbers chapter is spectacularly entertaining.
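And the naive 'just count' approach is at least easy to hand to a computer: chop the curve into little steps, add up the little turns in angle, divide by 2*pi. A throwaway numpy version (my own hack, nothing to do with Needham's slick method):

```python
import numpy as np

def winding_number(path, point):
    """Count how many times a closed path loops around a point.

    Sums the small angle changes of (path - point) between consecutive
    samples; the total is 2*pi times the winding number.
    """
    rel = path - point
    dtheta = np.angle(rel[1:] / rel[:-1])   # turn per step, in (-pi, pi]
    return int(round(dtheta.sum() / (2 * np.pi)))

t = np.linspace(0.0, 2 * np.pi, 2001)
squiggle = np.exp(3j * t) + 0.5 * np.exp(-7j * t)   # a deliberately messy loop
print(winding_number(squiggle, 0j))                 # 3: the big loop wins
```

(The sampling just has to be fine enough that no single step turns by more than half a turn.)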
I think I've said it before, but I'll say it again: "Tristan Needham's Visual Complex Analysis is the Best Math Textbook Ever." If you have even the slightest interest in mathematics, and especially geometry, and have studied basic calculus, get this book. You're very unlikely to regret it.

I'm testing mozblog! And it seems to work. Nice.

## Wednesday, June 12, 2002

Yay! I'm done with Chapter 3 of Strichartz, "The Topology of the Real Line". This calls for tea and cake and/or candy, I think. Also, I'm seriously considering getting another book on analysis, namely Abbott's Understanding Analysis, as a supplement to Strichartz. Basically, Strichartz is admirably in-depth, but I just don't feel that I'm getting as much intuition from him as I'd like. Perhaps it's a foolish complaint - after all, the whole reason for being of analysis seems to be avoiding intuitive arguments in favor of anal-retentive formal ones. But Abbott claims to try to give intuition and rigour, and a few reviews of his book I've read agree, so I may end up buying his book.

In other news, Blogger's web interface looks even more terrible now. Whether that's Blogger's doing or some changes in Mozilla 1.1a, I don't know. Also, the house still has a certain scent to it. I wonder if French perfume manufacturers would be interested in bottling some eau de pizza brûlée, for a suitable fee, of course?

## Tuesday, June 11, 2002

I'm back from tonight's jog. The air outside is warm, with a soup-like consistency. The air inside the house still reeks pungently of flaming pizza. The high points of the jog were seeing a cat, and getting yapped at by a couple of pocket-pooches out walking their human.

I've made some respectable* progress in my real analysis text today - I've knocked over the chapter on the construction of the real number system, and I'm most of the way through a chapter on the topology of the real line. (Mini-rant: I'm developing a vicious hate for inequalities. If I ever meet one in real life, I fear that I may beat it to death with an aquarium tank, if one is handy, or with a rubber chicken, if it isn't.)

* - For an occasionally retarded college-dropout, that is.

Whoa. We just had a fire in the house. Luckily, it was localized to the inside of the microwave. My brother decided to nuke some frozen pizza. He says he set the thing to three minutes, but I guess he hit zero one too many times, as the thing caught fire. As an added surprise, it turns out that none of the smoke detectors in the house work. We only found out about it because I asked him what he's cooking that's making the stink. We went downstairs, and lo and behold, the kitchen is full of thick, grey, stinky smoke. Eepers.

As I said, the damage is mainly to our noses, the pizza, and the general scent of the house. We're working on the smell issues by opening all kinds of doors and windows and deploying The Fan. The pizza is a write-off, I'm afraid.

## Monday, June 10, 2002

Sweet merciful Jesus, I'm a retard sometimes. I now completely and totally grok the triangle inequality. I say I'm a retard because it's so bloody simple it makes my head spin. Hooray for sleeping through math in middle and high school!

My fingers are tired. I've tried ten (10) problems in Needham over the last ten or so hours (spread over yesterday evening and today's evening). I haven't solved a single one. I did cover a few dozen pages with algebra and a few pictures and more algebra.
A tangentially relevant passage from Macbeth comes to mind:

To-morrow, and to-morrow, and to-morrow,
Creeps in this petty pace from day to day
To the last syllable of recorded time,
And all our yesterdays have lighted fools
The way to dusty death. Out, out, brief candle!
Life's but a walking shadow, a poor player
That struts and frets his hour upon the stage
And then is heard no more: it is a tale
Told by an idiot, full of sound and fury,
Signifying nothing.

I think I'll try to do something else now. Hmm. My choices, ignoring bummery:

• Jump to the next chapter, on complex integration, and come back to the exercises later
• Jump books to Strichartz, prove the fucking triangle inequality once and for all, and soldier on
• Continue trying these same ten exercises
• Beat head on wall.

Now, the latter two choices are effectively the same, and tempting as the last choice is, it doesn't seem very productive. Sigh. Choices...

## Sunday, June 09, 2002

I'm going to solve a problem in Needham today (well, if I don't get stuck due to my stupidity) that claims to show where the Schwarzian derivative comes from. Yay! Should be interesting. I also want to finally finish the chapter on the real numbers in Strichartz's Way of Analysis today, but I doubt I have the cojones to get it done tonight.

So, I went to the park this afternoon for a jog. I didn't jog very far. Now, it wasn't because it was unexpectedly humid, or because I'm a weak-legged, out-of-shape wuss (though both are, to some degree, true). Actually, it was because of the fat people. There was a totally surreal concentration of truly spheroidal persons in my favorite park today. It was as if there was some kind of oversize person convention (which isn't actually as outlandish as it sounds - this park is often the site of various parties and gatherings). Now, it's not as if I have some kind of debilitating physical reaction to the sight of profoundly rotund individuals jiggling around a park that stops me from running. What actually infuriated me enough to cut my run short was that these same oversize persons decided to go for a walk, en masse, along the forest path which winds around the lake. Along with a large number of highly energetic little children. And this was a problem because somehow the vast majority of them had no concept of manners whatsoever. They blocked the narrow trail with their girths (and I mean that fucking literally!), and did not even attempt to allow me to pass. They merely trundled along like rhinos that know they always have the right of way, languorously blinking their eyes at me, forcing me to go off-road, into god knows what types of poison ivy, every fucking minute, on average. Of course, the ones that brought their kids along were even worse, because the kids formed a kind of highly energetic stupidity-and-no-manners cloud around their elders, making even attempts to get past them by going off into the bushes with the rabid squirrels very difficult. After fifteen minutes of this shit, I turned around, went back to my car, and drove home. Grr.

Really, I can sympathise about hormone problems, insatiable appetites for Big Macs, and the desire to replenish the Earth with your spawn, but for the love of Cthulhu, is it so hard to get some manners and at least make an attempt to allow faster-moving foot-traffic to pass on narrow paths? That's the rant for tonight. Sincere apologies to anyone it offends.

I can't find my pants. (The ones I thought I'd be wearing today, that is.) News at 11.
## Saturday, June 08, 2002

underthumb reports on an interesting (though, from my amateurish perspective, fatally flawed) experiment in psychology. People are presented with a box with a known ratio of black/white marbles (.5), and a box with an unknown ratio of marbles. Experiments then show that people tend to pick the known-ratio box over the unknown box. There is then the claim that this is stupid and irrational.

The argument goes like this: if, say, the 'win condition' is getting a black ball, people tend to bet on the known-ratio box. You've thereby made a bet that the ratio for the unknown box is worse (that is, more white than black) than in the known box. Put the ball back. Then make people pick again, this time making a white ball the win condition. People still pick the known box, 'betting that the ratio in the unknown box is worse', in the opposite direction. So you've just made two 'opposing' bets about the same damn box. Irrational? Stupid?

Um, hell no. I'm going to try to make the argument that it is perfectly rational, and smart, to go with the known rather than the unknown, and to claim otherwise is ... unwise.

First, the flaw in the experiment. It's good, in setting up the experiment, to keep it simple, and strip away the irrelevant. It's possible to strip away essential variables, however, and the marble problem is an example. Let's try a slightly more complicated setup. Say you're given some win condition, and two choices of 'path' to that condition. One path is described to you prior to an attempt to get to the win condition, the other is undescribed. Say further that the win condition and paths are complex enough to allow on-the-fly and pre-op choices in moving toward the win condition (I suspect that's one of the big things that makes the 'evolutionary' approach that underthumb refers to work). Concrete example: two sets of road networks, with the objective of getting a certain distance in a certain time. In this case, it's completely bloody obvious that it's a good idea to pick the road network that you know about. You can plan a route ahead of time with it, and make informed on-the-fly choices. With the 'unknown' road network, you can't. You might get to your destination faster (it might be a much straighter path, for instance), but betting that way is stupid - it might just as easily be worse, and you've got no planning or information benefits. The marble experiment strips away the utility of the given information, thereby making it a stupid test of the 'better a known than an unknown' phenomenon -- after all, in Real Life, information does tend to be useful.

But forget that. Even with the problem as stated, it's smart to go for the known. When you make that first choice of known box to get a marble from, you aren't making any bets as to the other box. All you're stating is that you don't know the ratios for the unknown. Could be worse for you, could be the same, or could be better. You don't know, and it's a given that you can't know. So, because you want to win, and you don't want to take the chance that the unknown has no 'win balls' at all or something, you go for the known. In the second experiment, nothing has changed. You still don't know anything about the unknown. So the very same reasoning applies to the second draw as to the first one. It's not unlike coin tosses - the outcome of one flip has no effect on the probabilities of subsequent flips. Basic fact of statistics.

I'm a rank amateur. It's possible I Just Don't Get It, or I'm missing something important.
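For what it's worth, a quick Monte Carlo backs up the 'you aren't betting on the other box' point. The one modeling assumption - and it's entirely mine, since the experiment tells you nothing - is that the unknown box's black-ratio is itself uniformly random:

```python
import random

N = 200_000

def win_rate(box, win_color):
    """Estimate the chance that one draw from `box` shows `win_color`."""
    wins = 0
    for _ in range(N):
        if box == "known":
            p_black = 0.5                 # the stated 50/50 box
        else:
            p_black = random.random()     # unknown ratio: my uniform assumption
        ball = "black" if random.random() < p_black else "white"
        wins += (ball == win_color)
    return wins / N

for box in ("known", "unknown"):
    for color in ("black", "white"):
        print(f"{box:7s} box, want {color}: {win_rate(box, color):.3f}")
# all four numbers hover around 0.500
```

Under that assumption, both boxes win half the time whatever the win color, so picking the known box twice in a row costs you exactly nothing.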
But as it stands, it seems to me that any psychologist that claims people are wrong in this marble experiment hasn't thought about it very carefully. Counter-arguments welcomed.

Holy Fucking Shit. That was awesome. I went to the Volvo "Fire and Ice Driving Experience" this morning. They set up three race-courses in a huge parking lot at FedEx Field near Baltimore. You go there, sign up, get a name badge and an armband signifying that you signed a release form, and are of age and have a license. Then you listen to a short speech about various safety features of Volvos in a big tent with dead sexy plasma displays illustrating key points. You are then invited to come outside.

The first thing my group did was the 'hot lap'. They've got four Volvos sitting out on the asphalt, with professional race drivers (retired NASCAR, etc.) in them. You're given helmets. You climb into the passenger seat. And then they take you for a ride. The tires literally smoke (and how! and the smell!) most of the way through the course, as the drivers skid and slalom and flat-out literally put the assorted pedals to the metal all over the course. They use the handbrake to improve braking and traction at key points in the course. It's better than a roller-coaster, and you come out of the car with decidedly shaky knees. Or at least I did.

But it doesn't stop there. There are inter-session demos of things like safety glass, structural rigidity, and a prototype Volvo SUV, and a fucking sweet concept car (dead sexy glass all around, including the roof, insane active safety features, etc).

There are two more sessions in addition to the hot lap. One is a 'winter' course, the other a 'summer' course. I went to the winter course next. There, you get to drive, in succession, a Volvo S60 2.5T (front-wheel drive), and an S60 AWD, down a course with various slick portions, pop-up cardboard mooses that you have to avoid, and other fun things. Fucking sweet, that was.

But then. Oh my. There was the summer course. You drive a Volvo S60 T5 (single-turbo), an S80 T6 (twin-turbo), and an S80 Executive (just like the T6, but with a TV and a drinks fridge in the back). Holy fucking crap. The T5 and T6 are insanely sweet cars. I have to say I preferred the T5, because while it was a little less powerful than the T6, it was a lot lighter, and so it had better acceleration and handling. The cars take off ridiculously fast, with a nice, throaty roar off the starting line, and the T5 stays absolutely glued to the track no matter how crazy you drive (the T6 was sweet as well, but wasn't quite as tight as the T5). I drove the T5 five or six times down this summer track, and the T6 once. I didn't drive the Executive, because the lines for it were longer, and seeing as I would be driving, I didn't care about the TV in the back.

And this was all free. It was an amazing experience, and very smart marketing on the part of Volvo: you can bet that if I ever come by forty or so thousand bucks that I can blow, I will give the S60 T5 a very, very serious look. And I got a very nice yacht-racing cap at the end, complete with a clip-to-shirt thingie so it doesn't get blown away by the wind. If you've got one of these Fire and Ice demos going on in a city near you, go!
## Friday, June 07, 2002

I found some interesting XXX material today. It's called Static Negative Energies Near a Domain Wall. I'm going to do a summary of a part of the paper, as I understand it right now, for future reference and laughs.

It's been known for a while that if you want relativity to allow 'closed timelike curves' (time-travel) or faster-than-light motion, the 'weak energy condition' must be violated. This just means that stuff like energy density is positive. Quantum fields can violate this weak energy condition, while garden-variety classical fields can't. However, the stereotypical quantum field that has apparently been used in such calculations is what's called a 'free field', and those have to obey something called the 'averaged weak energy condition'. That means that you simply don't have negative energy on average. That is, if you've got some, it's damn temporary. So these aren't all that interesting, if you're interested in time travel.

Now, the most famous example of a thingie with negative energy is the 'Casimir problem'. That's that thing where you've got two metal plates real close to each other getting squeezed together by interesting quantum effects. I've written about it before (but not here). The interesting thing here is that that problem is 'static' - it can just sit pretty, with its naughty negative energy density. Which means that it violates the averaged weak energy condition.

What the paper in question does is look at a toy model, a '2+1' dimension (two space dimensions, one time) setup, with a 'domain wall' taking the place of a Casimir setup. And they show that close to this domain wall, you get negative energy densities. But wait! We ain't meetin' real furry creatures from Alpha Centauri any time soon. Because this 'toy system' does obey something called the 'averaged null energy condition', which is enough to rule out goodies like time-travel and faster-than-light transportation. Ya just can't win. But there's a small glimmer of hope, because the authors speculate that there's a chance the more complicated, but also more realistic, 3+1 dimension model can violate this other condition. Work on a paper exploring that is 'subject of our future work', the authors say. Should be an interesting read.

I'm just about ready to beat my head on the nearest wall. It has to do with that Schwarzian derivative I mentioned earlier. Here's the problem. I'm supposed to show that there's a certain 'chain rule' for it. That is, assign w = f(z), f being analytic. The question then becomes, what is the Schwarzian derivative of g(w), where g is another analytic function. Now, in Needham, the rule is given:

{g(w), z} = (f'(z))^2 * {g(w), w} + {f(z), z}

The problem is that I can't for the life of me show that to be true. I just get lost in an endless swamp of algebra and calculus, and never really come out. Normally, I'd just blame myself for my ineptness. There is, however, a small chance that the above equation actually isn't quite right - there might be a typo. I've found a couple of places that give something that looks a lot like it, but with one (1) extra parenthesis. So those sources definitely have typos. But the question is, did they want to have a couple of extra parentheses, or none? I'm confused. What's even worse is that I can show something almost, but not quite, like the given formula. Aarrrrrrgh.
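Update of sorts: the rule as printed can at least be spot-checked symbolically. A few lines of sympy, with arbitrary test functions of my own choosing (so this checks instances, it proves nothing):

```python
import sympy as sp

z, w = sp.symbols('z w')

def S(expr, var):
    # Schwarzian derivative {expr, var} = f'''/f' - (3/2) * (f''/f')**2
    d1, d2, d3 = (sp.diff(expr, var, k) for k in (1, 2, 3))
    return d3 / d1 - sp.Rational(3, 2) * (d2 / d1) ** 2

f = z**2 + 1   # w = f(z); an arbitrary analytic test function
g = w**3 + w   # another arbitrary test function of w

lhs = S(g.subs(w, f), z)                                   # {g(f(z)), z}
rhs = sp.diff(f, z) ** 2 * S(g, w).subs(w, f) + S(f, z)    # claimed chain rule
print(sp.simplify(lhs - rhs))                              # prints 0
```

It prints 0 for every pair of test functions I've fed it, which suggests Needham's version is fine and the extra parentheses in those other sources are the typos.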
Here's something you don't see every day, but I forgot to mention. I saw the neighbourhood bunny rabbit, whom I call Mr. Thumpy, on my jog today.

Here's a snippet from T.S. Eliot's "The Love Song of J. Alfred Prufrock". This particular part seemed to resonate with me tonight:

...
And indeed there will be time
For the yellow smoke that slides along the street,
Rubbing its back upon the window-panes;
There will be time, there will be time
To prepare a face to meet the faces that you meet;
There will be time to murder and create,
And time for all the works and days of hands
That lift and drop a question on your plate;
Time for you and time for me,
And time yet for a hundred indecisions,
And for a hundred visions and revisions,
Before the taking of a toast and tea.

I do my best work sitting on the shitter. I swear. Today was another example. I've been struggling since yesterday with a very interesting problem from Ch. 4 of Needham, #18, I think. It has to do with the following interesting question. Say you've got two curves touching (gently, non-pathologically) at a point. Is it possible to define a meaningful 'angle' between these two curves at this point?

Well, maybe. Certainly, any such 'angle' that we end up defining had better behave itself under conformal maps. That is, if we apply a conformal transformation to the plane containing the two curves, it will preserve any normal angles, like those that are part of triangles, say. ('Conformal' is just a fancy way of saying 'angle-preserving'. An important bit of trivia is that analytic transformations are always conformal, at least wherever their derivative isn't zero.) So our made-up definition for the meeting "angle" of two curves damn well better act like normal angles do, or it doesn't deserve to be called an 'angle'.

This actually turns out to be a fairly hard problem, and it is only fully solved in the last chapter of Needham's book. In chapter 4, though, I'm asked to follow a couple of doomed attempts at a definition. The first attempt at a definition was done by Newton. Unfortunately, with the modern tools of complex analysis, it is fairly simple to show that it is not conformal. So it doesn't work. There is another attempt to define this 'angle', building on Newton's attempt. It fails too, but for a different reason. (As I said, this won't be solved until like chapter 12 or 14 or whatever.)

But never mind that. What's important is that last night I got stuck on a bit of geometry in showing that this second attempt doesn't quite work. I've been thinking about it all day, to no great benefit. About an hour ago, while resting on the porcelain throne in preparation for a nice, fat-arse-shaking jog, the answer to my quandary hit me like a lightning bolt from a minor Greek god, or more poetically like a toilet-alligator bite on the ass. See, there's the often-used fact that for small theta, sin(theta) is approximately theta. And that is the lever that got me unstuck.

I wonder if it's unusual for people to do their best thinking while sitting on the crapper, and if it isn't, how many of the great discoveries throughout history were made while taking a dump? Perhaps Archimedes wasn't in the bathtub when he bellowed his now-famous cry, "Eureka!"

Here's a poetic version of the above post, mostly due to underthumb's help:

Lo! Though I poop,
My mind doth not droop,
And from it--for sooth!
Ideas now go 'gloop'!

## Thursday, June 06, 2002

Who is the bunny arse that thought up the Schwarzian derivative*, and more importantly, why? It looks like this, for a function f(z), with respect to z:

{f(z), z} = f'''/f' - 3/2 * (f''/f')^2

Now, it turns out this 'derivative' has some nice properties, for instance all Mobius transformations have a vanishing Schwarzian derivative, and the reverse also works. This is cool, undeniably. But where the bloody hell does it come from?
* - Well, yes, obviously, some mother-bunny named Schwarz thought of it. Probably the same Schwarz that did work in analytic continuation. But that's not the important part.

I have no academic focus. Not now, not ever. I'm interested in just about everything. Hell, I've got a disturbing hankering to read the new edition of Molecular Biology of the Cell, all 1400 pages worth. I want to read some kind of book-length introduction to evolutionary psychology. I want to read Gould's life's work, aka The Structure of Evolutionary Theory, or whatever the actual name is. I want to read Chaucer's Canterbury Tales, in the original. I want to learn Japanese (because it's not a Latin-related language). I want to learn differential geometry. I want to learn about linguistics. I'd like to read about international relations theory. I want to learn to write well. I want to learn quantum mechanics. I'd like to learn to be a witty and engaging conversationalist. I want a lot of things, and I'll probably dabble a bit in a lot of them over my life. But I'm unlikely to ever have a well-defined academic focus. There is a price to pay for the pleasure of having a wide-ranging curiosity.

I have had to resign myself to the fact that I will forever be a dilettante in almost all of my areas of interest. I have wasted large parts of my teenage years in slacking and learning what I wanted, not what I should have been learning. There are consequences. I'm extremely unlikely to make any worthwhile contributions to any field I do choose to learn in depth. Take physics, for instance. It's a well-known fact, backed by assorted studies, that physicists most often make their big contributions as graduate students, or freshly-minted post-docs, in their 20s. This statistic applies to many other fields of creative endeavour.

I think I've finally learned to accept it. I'll learn what I want, when I want, how I want, on my own, to the best of my limited abilities. Hopefully I'll earn enough money along the way to eat and afford a few toys. The grandiose plans of my teenage years, built on a foundation of slacking, daydreaming, and reading, now look like trifling (yet dearly regarded) little optimistic far-away sand castles being washed away by the tide of cold, hard, and bracing reality. There's something to be said for reality. It is cold, and it may occasionally be bleak, but it is real, and swimming in it can be invigorating.

I have learned to accept all that. I think. Perhaps. It's arguably a curious form of defeatism, I suppose, but I've learned I prefer it to optimism which constantly accumulates evidence against itself. It allows for some measure of joy, occasionally, and that's all I can really ask for.

## Wednesday, June 05, 2002

Ok, this is really, really cool. From Finite Sets to Feynman Diagrams, written by John Baez and James Dolan. It's not nearly as forbidding as it sounds, because it starts out by talking about basic arithmetic, and throwing out some provocative thoughts on the very nature of equations in general. It's very, very cool. Just read it.

I'm done messing with DOCTYPEs and CSS and so on for the evening. Oh yeah. If the CSS link-rollover effect looks like bunny arse in your browser, making text reflow in stupid ways and so on, then it's cause you're using IE. IE has issues with perfectly legal CSS. Go figure. Mozilla has no problems that I've found so far with the page. And Mozilla hit 1.0 today. On the other hand, the color-related garishness on this page is entirely my fault.
I really ought to do something about that link color...

I use various derivatives of the 'f-word' too much in my posts here. Perhaps I'll now use the word 'bunny' whenever I feel the urge to use the Worst Expletive In Existence. Or something like that.

Well. In a stunning display of deranged bumfuckery, I have yet again got my basic algebra wrong. I need to go back to middle school. That, or crawl under a carpet in shame. Or both. Because, if you get the algebra right, as I did in my next-to-last post, then the problem actually makes some fucking sense, and is really simple. Because we now have

k_image = (1 + Re[z*f''/f']) / Length[z*f']

and substituting in the proper derivatives for the case that f(z) = z^m,

k_image = (1 + m - 1)/Length[m*z^m] = 1/Length[z^m]

Which actually makes some fucking sense, and matches the intuitive picture. Problem solved. Woo-hoo, and a case of whiskey.

Cthulhu. Say f(z) = z^m. Then f' = m*z^(m-1), f'' = m*(m-1)*z^(m-2), and f''/f' = (m-1)*z^(m-2-m+1) = (m-1)*z^(-1). Call K the complex curvature of f(z). Then -i*Conj[K] = (m-1)/(z*Length[z]). Ok. So that pile of crap right there is an intrinsic property of the mapping f: z -> z^m. It tells us that even if we were to apply the mapping to a straight line, with zero curvature, the image curve would have non-zero curvature. For m=1, a linear mapping, it produces zero, as expected.

So. Take a circle on a plane. Make it a complex plane, just for kicks. The unit tangent to the circle is i*z/Length[z]. That's because if our coordinate on the circle is z, then to get a unit vector we can divide by the length of z, and then we turn it by Pi/2 by multiplying by i, and now we've got a unit tangent.

Now, we're interested in how the curvature of a given shape changes under an analytic mapping. It's possible to show that if we pick the above-mentioned circle as our 'shape', then the image curvature under an analytic mapping f(z) is given by

k_image = (1 + Re[z*f''/f'])/(z*f')

Now, the question is, without actually cranking out a calculation, what should the image curvature be if our mapping is given by f(z) = z^m? And then check your prediction using the formula. So. z^m is certainly analytic. Hmm. Well. z^m basically dilates the plane, more or less, right? I mean, it takes each point, and pushes it out to z^m. Which suggests that the curvature, which starts out as 1/Length[z], is going to be something like 1/Length[z^m]. Ok. Doing the calculation, though, leads to a rather different conclusion. The image curvature is instead:

image_curvature = 1/Length[m*z^m] + m*(m-1)/Length[m*z^m]

WTF? The scary thing is that this bizarre result matches intuition if m = 1. Because then it's just 1/Length[z], which is the same as the initial curvature, which damn well better be the case, because for m=1 we're just doing a linear mapping, that is, we're not changing jack shit. I've got a sneaking suspicion I'm misunderstanding how z^m works geometrically, which would be embarrassing...
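The fixed formula from the next-to-last post also survives a brute-force test: parametrize the image of the circle |z| = r under z -> z^m and grind out the curvature by finite differences (my own throwaway check, with arbitrary m and r):

```python
import numpy as np

# Image of the circle |z| = r under z -> z^m, traced as a parametric curve.
m, r = 3, 1.5
t = np.linspace(0.0, 2 * np.pi, 20001)
w = (r * np.exp(1j * t)) ** m
x, y = w.real, w.imag

# Curvature of a parametric curve: |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2).
dt = t[1] - t[0]
xp, yp = np.gradient(x, dt), np.gradient(y, dt)
xpp, ypp = np.gradient(xp, dt), np.gradient(yp, dt)
kappa = np.abs(xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5

print(kappa[len(t) // 2], 1 / r**m)   # both ~0.2963, i.e. 1/Length[z^m]
```

(Sample the middle of the array; np.gradient is sloppier at the endpoints.)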
"But in addition to being some of the smartest Einstein-y stuff around, it is undeniably a really stupid, pointless thing to study, something you could never actually use in the real world. This paradoxical dual state may one day lead to a new understanding of physics as a way to confuse and bore people." As I said, comedy gold. Also, some day I might write a polemical devil's advocate essay, complete with such unusual things as references and carefully framed arguments, regarding assorted 'copyright protection' laws being pushed by the entertainment industry, the half-hearted resistance to this legislation by hardware corporations, and the apparent (and, if one thinks about it, entirely understanble) lack of opposition from software companies to the legislation. If one hears of any reaction by software corporations to current copyright topics at all, it's effectively cheerful support of the legislation. (See: Microsoft.) Of course, techie outrage tends to be directed more towards the entertainment lobby, rather than the software industry. Not very surprising, given that many techies are employed by said software industry, like getting paid, and also like free movies and music. But 'some day' is not today, and the above incoherent and unsupported babble doesn't qualify as an essay. For some reason, I'm having trouble following Needham's description of the 'quick way' (as opposed to the painful, confusing, yet naive and simple way) of figuring out winding numbers. FWIW, a 'winding number' is an integer describing how many times a (directed) squiggle loops around some point. One can just, well, count, to find the winding number, but for complicated curves, this can be difficult and frustrating. And Needham shows a way of doing it 'just like that'. And I'm having trouble seeing why it works. Harrumph. What else is new? In other news, the South Park episode entitled 'Proper Condom Use' is a masterpiece of comedy. Just to give you a hint of the delightful flavor of this engrossing tale, the episode both begins and ends with one of the South Park kids (Cartman, if memory serves), digitally* stimulating a neighbourhood canine. A delightful and gut-busting episode. It should also be noted that it would be perfect for a middle-school level health and human sexuality course. * - By digitally, I mean to refer to the root word 'digit', meaning finger, rather than 'digit', meaning a number. Ah, curvature. It was something of a bane for me in Vector Calculus class. I understood it enough to pass, but I didn't really grok it. I think I do, now. So. Say you've got a squiggly (but not pathologically so) curve that you've drawn on a piece of paper. Say further that you want to be able to talk about how curved it is in various areas. Now, obviously, to do that, you've got to figure out what you mean by curvature. That's actually easier than it looks, if you squint just right. We can use our intuitive idea of what 'how curved something is' means as a guide. Now. Here's the key insight. Look at a small part of your squiggle. Generally speaking, it's going to look like a small arc - like a piece of a circle. And that's all we need to get going. We're basically going to characterize the curvature of a piece of our squiggle by fitting a circle to it. We're going to characterize how curved a squiggle is at a point by saying how big a circle we can fit to it at that point. And we'll call that circle the 'circle of curvature', for future reference. Let's think about circles for a bit. 
## Monday, June 03, 2002

Well. To imitate another blog, I'm going to try to sketch a rough intellectual 'plan' for myself.

To make a long story short, my latest obsession was kicked off when I read a book. This is how almost all of my obsessions start, actually, except obviously ones involving pretty young ladies, which aren't the subject of this blog (if you've got a hankering for that, there are plenty of websites specifically devoted to such matters).*

The book is called The Life of the Cosmos, by Lee Smolin. It's a mindblowing book. There's a bunch of mysteries in physical science today. One of the biggies is the mismatch between quantum physics and general relativity. Another is in cosmology - why the hell is the universe the way it is, and why are we possible in it? Smolin tackles the latter question head-on, along with a huge array of related philosophical and science issues. He avoids the unsatisfying anthropic arguments, instead presenting a radical, yet highly seductive and persuasive argument. He argues that there are an infinite number of universes, with varying properties. And they can 'reproduce' - when a black hole is formed, a universe is created. So a kind of evolution on the scale of universes operates, tending to favor universes which have physical laws conducive to the creation of many black holes. And he then argues that universes that make a lot of black holes also end up being favorable for life as we know it.

Now, this all sounds like kookish handwaving bullshit, but it should be emphasized that Smolin is far from a kook, but is instead a leading theoretical physicist in the field of loop quantum gravity (LQG) and related areas. He manages to marshal arguments which make all the above far-fetched ideas sound plausible, and lays out a way to actually 'test' his arguments. For more, read the book. Highly recommended.

As I noted above, Smolin works in LQG, and mentions some work in that field in his book. It was real interesting, but I kind of left it at that. A few months later, I bumped into a paper by Seth Major on spin networks (another way of talking about LQG), and I tried (and failed) to work through it. But I became very interested in the topic.
A few months after that, I bought a [math textbook](http://www.amazon.com/exec/obidos/ASIN/0198534469), on impulse, from a local Borders. It turned out to be an absolutely fantastic book, and it's the one I'm currently reading. First time I've ever really enjoyed math. (Well, there was that abortive attempt at learning tensor calculus a couple of years back, but that ended disastrously, and I don't want to talk about it.)

So, err, now I finally get to the Plan. It's a crappy plan, very hazy, and won't ever actually be followed. Its main purpose is an attempt at motivation. The plan is to learn enough mathematics and physics to read papers on quantum gravity research. And understand them. To that end, I'll be studying complex analysis (in progress), real analysis (in progress), groups, categories, group representations, differential geometry, general relativity, electrodynamics, mechanics, and other things. This is all intended to be worked through on my own. You can see why this is a crappy plan.

I also intend to finally read some T.S. Eliot, Milton, Keats, and generally try to become a bit more cultured.

* - Well, ok, not specifically about my favored young ladies (I don't run, or contribute material to, any adult sites, despite constant offers for over a year now to one of my email addresses to join the growing industry of adult sites devoted to assorted randy barnyard animals and their relations with sexy lolita sluts.)

This just in: I've reached the level of understanding of a fifth grader and I might now understand the aforementioned triangle inequality. I'll have to think about it some more.

Things to do:

1) Get over my lack of desire to go to UMD again, bust into the library there, and look at a few books (Spivak's geometry and calculus texts, also Gallian, perhaps Herstein, more as inspired).
2) Get over to Bor. and look at Abbott's 'Understanding Analysis'. Is it really as good as all that?
3) Errr.

## Sunday, June 02, 2002

Yay. Turns out if you assume \partial_\theta v = 0 for an analytic function, that function is going to be the complex log, up to assorted constants. Or so I think. Neat. One more problem solved, then. Only thirty or so more to go for this section...

Also, for some reason I've forgotten why the hell the triangle inequality for the real numbers works. This is, sadly, rather inconvenient *cough*, as it's used ALL THE DAMN TIME in the anal-retentive construction of the real number system that my analysis text is doing. It also makes me feel like a complete retard. And all the online 'proofs' of the triangle inequality that I've found focus on vectors and/or complex numbers, not the real numbers. And I know it should be trivial to get the real proof from the complex one, say. As I said, this makes me feel retarded.
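For the record, the real-number proof needs nothing but -|a| <= a <= |a|. A sketch of the standard argument:

```latex
% Triangle inequality on the reals: |a+b| <= |a| + |b|.
% The only input: -|s| <= s <= |s| for every real s.
\begin{align*}
  -|a| &\le a \le |a| \\
  -|b| &\le b \le |b| \\
  \intertext{Adding the two lines:}
  -(|a|+|b|) &\le a+b \le |a|+|b|.
\end{align*}
% Finally, $-c \le s \le c$ is equivalent to $|s| \le c$ (check the two
% cases $s \ge 0$ and $s < 0$), which gives $|a+b| \le |a|+|b|$.
```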
I suppose that's because I think pay sites are contrary to the entire ideal of the Net. Hurray for the commercialization of the WWW. [/Maj. Gen. Obvious mode] Oh good lord. There's something called a 'Fourier-Wiener transform'. On a terribly juvenile level, that's hilarious (I wonder if people forced to say that still pronounce Fourier as 'Furry-yer'?). Also, apparently blog entries are 'supposed to' contain links to other blogs. Fuck that. I try to link to reasonably reputable and useful sources of info, when I do create links, and some fucktard's ravings on the latest political scandal or a math problem they are too stupid to solve are hardly 'reputable and useful' sources of information. Lord knows anyone who links to this blog for anything other than entertainment is nuts.
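For the record, here's the real-number proof of the triangle inequality I was hunting for - just a sketch, and it needs nothing beyond the fact that $|x| \le c$ exactly when $-c \le x \le c$:

$$-|a| \le a \le |a|, \qquad -|b| \le b \le |b|.$$

Adding the two lines gives $-(|a|+|b|) \le a+b \le |a|+|b|$, and by the fact above that is exactly $|a+b| \le |a|+|b|$. No vectors, no complex numbers.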
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5157583951950073, "perplexity": 1405.4465048324776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814105.6/warc/CC-MAIN-20180222120939-20180222140939-00679.warc.gz"}
http://math.stackexchange.com/questions/186081/proving-pointwise-convergence-to-a-dirichlet-like-function
# Proving pointwise convergence to a “Dirichlet-like” function

Question: Let $\{r_1, r_2, \dots\}$ be the set of rationals in the interval $[0,1]$. For $x \in [0,1]$ and $n \in \Bbb N$, let $f_n(x)$ and $f(x)$ be given by the following: $$f_n(x) = \begin{cases} 1 & \text{ if } x= r_1, \dots, r_n \\ 0 & \text{ otherwise } \end{cases} \qquad f(x) = \begin{cases} 1 & \text{ if } x \text{ rational}\\ 0 & \text{ if } x \text{ irrational} \end{cases}$$ Prove that $f_n \to f$ pointwise, but not uniformly. My Thoughts: I'm not sure how to show either convergence result. For pointwise convergence, $| f_n(x) - f(x) |$ becomes $0$ at $x$ irrational or $x \in \{r_1,\dots, r_n\}$, and $1$ at all the rationals not yet enumerated. How can I work this into my proof? Edit: Let's suppose to the contrary that there is a sufficiently large $N_0$ so that, for some $\epsilon_0$ and all $x \in [0,1]$, $|f_{N_0}(x) - f(x)| \ge \epsilon_0$. Here is where I am stuck now. Edit 2: I have found it! I went through my textbook, and I found the following key sentence: In pointwise convergence, one might have to choose a different $N$ for each different $x$. In uniform convergence, there is an $N$ which works for all $x$ in the set $E$. So the proofs follow: Proof of pointwise convergence: Let $\epsilon > 0$ be given. Then let $x_0$ be the $N$th rational number in $[0,1]$. Taking $n = N$, we have $|f_n(x) - f(x)| \le \epsilon$ for all $x \le x_0$, and we can successively take larger and larger $n$ to always guarantee that $|f_n(x) - f(x)| \le \epsilon$. Proof of lack of uniform convergence: Let $\epsilon > 0$ be given. Then if $f_n \to f$ uniformly, there exists an $M$ so that $n \ge M$ implies $|f_n(x) - f(x)| \le \epsilon$ for all $x$. Taking $x$ to be the $(n+1)$th rational number, we have that for sufficiently small $\epsilon$, $|f_n(x) - f(x)| \ge \epsilon$, so $f_n \not\to f$ uniformly. - Do I appeal to the countability of $\Bbb Q$ and show that $|\Bbb Q| = |\Bbb N|$ implies pointwise convergence? – KingOliver Aug 23 '12 at 23:31 I don't understand, the set of all rationals in $[0,1]$ is not finite, so how can you list them up to $n$? – user38268 Aug 24 '12 at 12:08 @BenjaLim I think that is the point as to why it does not converge uniformly, but we can always take $n$ sufficiently large as to make the difference less than $\epsilon$. We list up to $n$ simply by AC, I believe – KingOliver Aug 24 '12 at 12:20 Pick an $x\in[0,1]$. Then can you say that $f_n(x)=f(x)$ when $n$ is sufficiently large? Take any $n$. Then can you say that there is $x\in[0,1]$ such that $f_n(x)=0$ but $f(x)=1$? Building off of this, would I suppose to the contrary that there is a sufficiently large $N_0$ so that there is an $x \in [0,1]$ for which $f_{N_0}(x) = 0, f(x) = 1$. Taking $n \ge N_0$, I have that for all $\epsilon > 0$, $|f_n(x) - f(x)| \le \epsilon$. This can be done as many times as necessary to achieve arbitrary precision. Is this the lines along which you would approach? – KingOliver Aug 23 '12 at 23:58 @jmi4: You don't really need proof by contradiction. Answer my questions then you will see. – timur Aug 24 '12 at 0:01 I am not sure what you mean. Can you elaborate further on your hint? I thought I had answered your questions. – KingOliver Aug 24 '12 at 2:47 Pick $x\in[0,1]$. If $x$ is irrational, then $f_n(x)=f(x)=0$ regardless of what $n$ is. If $x$ is rational, say $x=r_k$, then $f_n(x)=f(x)=1$ for all $n\geq k$. This is pointwise convergence.
– timur Aug 24 '12 at 14:15 Uniform convergence would mean $\sup_{x\in[0,1]}|f_n(x)-f(x)|$ converges to $0$ as $n\to\infty$. Take any $n$. Then for $x=r_{n+1}$, we have $f_n(x)=0$, and of course $f(x)=1$. This means $\sup_{x\in[0,1]}|f_n(x)-f(x)|\geq 1$. As $n$ can be chosen as large as we want, this shows $\sup_{x\in[0,1]}|f_n(x)-f(x)|$ does not go to $0$, so no uniform convergence. – timur Aug 24 '12 at 14:19
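Not part of the thread, but timur's argument is easy to sanity-check numerically. The enumeration of the rationals below is one arbitrary choice (any listing works); the point is that the pointwise error at a fixed rational dies once $n$ passes its index, while the sup-error stays pinned at $1$:

```python
from fractions import Fraction
from itertools import islice

def rationals01():
    """Enumerate the rationals in [0,1] with no repeats: 0, 1, 1/2, 1/3, 2/3, ..."""
    yield Fraction(0)
    yield Fraction(1)
    q = 2
    while True:
        for p in range(1, q):
            r = Fraction(p, q)
            if r.denominator == q:  # skip reducible fractions already listed
                yield r
        q += 1

rs = list(islice(rationals01(), 50))  # r_1, ..., r_50

def f_n(x, n):
    """Indicator of the first n rationals in the enumeration."""
    return 1 if x in rs[:n] else 0

def f(x):
    """Limit function: 1 on rationals, 0 on irrationals (x is rational here)."""
    return 1

# Pointwise convergence at a fixed point: x = r_10 has error 0 once n >= 10.
x = rs[9]
print([abs(f_n(x, n) - f(x)) for n in (5, 10, 20)])          # -> [1, 0, 0]

# No uniform convergence: for each n, x = r_{n+1} witnesses an error of 1.
print([abs(f_n(rs[n], n) - f(rs[n])) for n in (5, 10, 20)])  # -> [1, 1, 1]
```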
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9732900261878967, "perplexity": 135.14847739419125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928831.69/warc/CC-MAIN-20150521113208-00184-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.science.gov/topicpages/0-9/3d+elastic+wave.html
#### Sample records for 3D elastic wave 1. Elastic wave modelling in 3D heterogeneous media: 3D grid method Jianfeng, Zhang; Tielin, Liu 2002-09-01 We present a new numerical technique for elastic wave modelling in 3D heterogeneous media with surface topography, which is called the 3D grid method in this paper. This work is an extension of the 2D grid method that models P-SV wave propagation in 2D heterogeneous media. Similar to the finite-element method in the discretization of a numerical mesh, the proposed scheme is flexible in incorporating surface topography and curved interfaces; moreover it satisfies the free-surface boundary conditions of 3D topography naturally. The algorithm, developed from a parsimonious staggered-grid scheme, solves the problem using integral equilibrium around each node, instead of satisfying elastodynamic differential equations at each node as in the conventional finite-difference method. The computational cost and memory requirements for the proposed scheme are approximately the same as those used by the same order finite-difference method. In this paper, a mixed tetrahedral and parallelepiped grid method is presented; and the numerical dispersion and stability criteria on the tetrahedral grid method and parallelepiped grid method are discussed in detail. The proposed scheme is successfully tested against an analytical solution for the 3D Lamb problem and a solution of the boundary method for the diffraction of a hemispherical crater. Moreover, examples of surface-wave propagation in an elastic half-space with a semi-cylindrical trench on the surface and 3D plane-layered model are presented. 2. 3D mapping of elastic modulus using shear wave optical micro-elastography Zhu, Jiang; Qi, Li; Miao, Yusi; Ma, Teng; Dai, Cuixia; Qu, Yueqiao; He, Youmin; Gao, Yiwei; Zhou, Qifa; Chen, Zhongping 2016-10-01 Elastography provides a powerful tool for histopathological identification and clinical diagnosis based on information from tissue stiffness. Benefiting from high resolution, three-dimensional (3D), and noninvasive optical coherence tomography (OCT), optical micro-elastography has the ability to determine elastic properties with a resolution of ~10 μm in a 3D specimen. The shear wave velocity measurement can be used to quantify the elastic modulus. However, in current methods, shear waves are measured near the surface with an interference of surface waves. In this study, we developed acoustic radiation force (ARF) orthogonal excitation optical coherence elastography (ARFOE-OCE) to visualize shear waves in 3D. This method uses acoustic force perpendicular to the OCT beam to excite shear waves in internal specimens and uses the Doppler variance method to visualize shear wave propagation in 3D. The measured propagation of shear waves agrees well with the simulation results obtained from finite element analysis (FEA). Orthogonal acoustic excitation allows this method to measure the shear modulus in a deeper specimen which extends the elasticity measurement range beyond the OCT imaging depth. The results show that the ARFOE-OCE system has the ability to noninvasively determine the 3D elastic map.
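The link in record 2 between shear-wave speed and modulus is worth making concrete. As a rough sketch (not from the paper; tissue-like numbers assumed), the shear modulus follows from $\mu = \rho c_s^2$, and for nearly incompressible soft tissue the Young's modulus is approximately $E \approx 3\mu$:

```python
def shear_modulus(rho_kg_m3, cs_m_s):
    """Shear modulus mu = rho * cs^2 (Pa) from density and shear-wave speed."""
    return rho_kg_m3 * cs_m_s ** 2

# Assumed tissue-like values: density ~1000 kg/m^3, shear-wave speed ~2 m/s.
rho, cs = 1000.0, 2.0
mu = shear_modulus(rho, cs)   # 4000 Pa = 4 kPa
E = 3.0 * mu                  # ~12 kPa, assuming near-incompressibility (nu ~ 0.5)
print(f"mu = {mu/1e3:.1f} kPa, E ~ {E/1e3:.1f} kPa")
```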
4. 3D Discontinuous Galerkin elastic seismic wave modeling based upon a grid injection method Monteiller, V. 2015-12-01 Full waveform inversion (FWI) is a seismic imaging method that estimates the sub-surface physical properties with a spatial resolution of the order of the wavelength. FWI is generally recast as the iterative optimization of an objective function that measures the distance between modeled and recorded data. In the framework of local descent methods, FWI requires performing at least two seismic modelings per source and per FWI iteration. Due to the resulting computational burden, applications of elastic FWI have usually been restricted to 2D geometries. Despite the continuous growth of high-performance computing facilities, applications of 3D elastic FWI to real-scale problems remain computationally too expensive. To perform elastic seismic modeling within a reasonable amount of time, we consider a reduced computational domain embedded in a larger background model in which seismic sources are located. Our aim is to compute repeatedly the full wavefield in the targeted domain after model alteration, once the incident wavefield has been computed once and for all in the background model. To achieve this goal, we use a grid injection method referred to as the Total-Field/Scattered-Field (TF/SF) technique in the electromagnetic community. We implemented the Total-Field/Scattered-Field approach in the Discontinuous Galerkin Finite Element method (DG-FEM) that is used to perform modeling in the local domain. We show how to interface the DG-FEM with any modeling engine (analytical solution, finite difference or finite element methods) that is suitable for the background simulation. One advantage of the Total-Field/Scattered-Field approach is related to the fact that the scattered wavefield instead of the full wavefield enters the PMLs, hence making the absorption of the outgoing waves at the outer edges of the computational domain more efficient. The domain reduction in which the DG-FEM is applied allows us to use modest computational resources, opening the way for high-resolution imaging by full 5. Exploring the resolution capabilities of subduction zone guided waves: 2D visco-elastic and 3D wave simulations Garth, T.; Rietbrock, A.
2011-12-01 Dispersion of body wave arrivals observed in the fore-arc has been attributed to high frequency guided waves being retained and delayed by a low velocity layer (LVL) in the subducted crust. Lower frequency seismic waves travel at higher velocities in the surrounding mantle. These subduction zone guided waves have the potential to offer unique insights into subducting oceanic crust. Two- and three-dimensional finite difference (FD) wave propagation models are used to investigate the factors controlling guided wave dispersion and to test which features of the subducted crust can be resolved by guided waves. Other factors that may affect the frequency content of arrivals in the fore-arc, such as elevated attenuation, are also investigated. Modeling results are compared to observed guided wave dispersion in the Japan, Aleutian and Central American subduction zones. Modeling has shown that trade-offs occur between the velocity contrast and the thickness of the waveguide, with both parameters potentially affecting the frequency content that is delayed. We combine amplitude spectra plots with displacement spectrograms so that the relative amplitudes and relative arrival times of different frequencies can be compared. This allows the specific effects of given parameters to be understood. The effect of elevated attenuation on the frequency content of arrivals in the fore-arc is investigated using a visco-elastic FD wave propagation model (Bohlen 2002). The sensitivity of observed dispersion to variations in the Vp/Vs ratio of the waveguide material is also investigated. Understanding the relative dispersion of P and S waves as well as the relative importance of attenuation in the subduction system may allow us to understand more about the hydrous conditions in subduction zones. Systematic variations in the contrast between the LVL and the surrounding material are investigated. Modeling is designed to test if guided wave dispersion can resolve down-dip velocity changes in the 6. 3D Simulation of Elastic Wave Propagation in Heterogeneous Anisotropic Media in Laplace Domain for Electromagnetic-Seismic Inverse Modeling Petrov, P.; Newman, G. A. 2011-12-01 averaging elastic coefficients and three averaging densities are necessary to describe the heterogeneous medium with VTI anisotropy. The resulting system is solved with iterative Krylov methods. The developed method will be incorporated in an inversion scheme for joint seismic-electromagnetic imaging. References. Brown, B.M., M. Jais, I.W. Knowles, 2005, A variational approach to an elastic inverse problem: Inverse Problems, 21, 1953-1973. Commer, M., G. Newman, 2008, New advances in three-dimensional controlled-source electromagnetic inversion: Geophysical Journal International, 172, 513-535. Newman, G. A., M. Commer and J.J. Carazzone, 2010, Imaging CSEM data in the presence of electrical anisotropy: Geophysics, 75, 51-61. Petrov, P.V., G. A. Newman (2010), Using 3D Simulation of Elastic Wave Propagation in Laplace Domain for Electromagnetic-Seismic Inverse Modeling, Abstract T21A-2140 presented at 2010 Fall Meeting, AGU, San Francisco, Calif., 13-17 Dec. Shin, C., W. Ha, 2008, A comparison between the behavior of objective functions for waveform inversion in the frequency and Laplace domains: Geophysics, 73, 119-133. Shin, C., Y. H. Cha, 2008. Waveform inversion in the Laplace domain: Geophysical Journal International, 173, 922-931.
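A recurring practical constraint in the grid-based solvers described in these records (record 1's grid method, the FD models of record 5) is the bookkeeping that controls numerical dispersion and stability. A minimal back-of-envelope sketch, with assumed numbers and the usual rules of thumb (roughly 10 grid points per minimum wavelength for low-order schemes; an explicit-scheme 3D CFL limit of dx/(v_max*sqrt(3)) for a standard second-order staggered grid) - not any cited paper's actual scheme:

```python
import math

# Assumed model parameters (illustrative only)
v_min, v_max = 1500.0, 6000.0   # slowest / fastest wave speeds (m/s)
f_max = 20.0                    # highest frequency to propagate (Hz)
ppw = 10                        # grid points per minimum wavelength

dx = v_min / (ppw * f_max)      # grid spacing controlling numerical dispersion
# CFL stability limit for an explicit 2nd-order 3D scheme
dt = dx / (v_max * math.sqrt(3.0))

print(f"dx <= {dx:.1f} m, dt <= {dt*1e3:.2f} ms")  # dx <= 7.5 m, dt <= 0.72 ms
```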
7. Using 3D Simulation of Elastic Wave Propagation in Laplace Domain for Electromagnetic-Seismic Inverse Modeling Petrov, P.; Newman, G. A. 2010-12-01 In the Laplace-Fourier domain we have developed a 3D code for full-wavefield simulation in elastic media which takes into account the nonlinearity introduced by free-surface effects. Our approach is based on the velocity-stress formulation. In contrast to the conventional formulation, we defined the material properties, such as density and Lamé constants, not at nodal points but within cells. This second-order finite-difference method, formulated on the cell-based grid, generates numerical solutions compatible with analytical ones within the error range determined by dispersion analysis. Our simulator will be embedded in an inversion scheme for joint seismic-electromagnetic imaging. It also offers possibilities for preconditioning the seismic wave propagation problems in the frequency domain. References. Shin, C. & Cha, Y. (2009), Waveform inversion in the Laplace-Fourier domain, Geophys. J. Int. 177(3), 1067-1079. Shin, C. & Cha, Y. H. (2008), Waveform inversion in the Laplace domain, Geophys. J. Int. 173(3), 922-931. Commer, M. & Newman, G. (2008), New advances in three-dimensional controlled-source electromagnetic inversion, Geophys. J. Int. 172(2), 513-535. Newman, G. A., Commer, M. & Carazzone, J. J. (2010), Imaging CSEM data in the presence of electrical anisotropy, Geophysics, in press. 8. Modeling of elastic and plastic waves for HCP single crystals in a 3D formulation based on zinc single crystal Krivosheina, Marina; Kobenko, Sergey; Tuch, Elena; Kozlova, Maria 2016-11-01 This paper investigates elastic and plastic waves in HCP single crystals through the numerical simulation of strain processes in anisotropic materials based on a zinc single crystal. Velocity profiles for compression waves in the back surfaces of single-crystal zinc plates with impact loading oriented in $[0001]$ and $[10\bar{1}0]$ are presented in this work as a part of results obtained in numerical simulations. The mathematical model implemented in this study reflects the following characteristics of the mechanical properties inherent in anisotropic (transtropic) materials: varying degree of anisotropy of elastic and plastic properties, which includes reverse anisotropy, dependence of the distribution of all types of waves on the velocity orientation, and the anisotropy of compressibility. Another feature of elastic and plastic waves in HCP single crystals is that the shock wave does not split into an elastic precursor and a "plastic" compression shock wave, which is inherent in zinc single crystals with loading oriented in $[0001]$. The study compares numerical results obtained in a three-dimensional formulation with the results of velocity profiles from the back surfaces of target plates obtained in real experiments. These results demonstrate that the mathematical model is capable of describing the properties of the above-mentioned anisotropic (transtropic) materials. 9. Lapse-time-dependent coda-wave depth sensitivity to local velocity perturbations in 3-D heterogeneous elastic media Obermann, Anne; Planès, Thomas; Hadziioannou, Céline; Campillo, Michel 2016-10-01 In the context of seismic monitoring, recent studies made successful use of seismic coda waves to locate medium changes on the horizontal plane. Locating the depth of the changes, however, remains a challenge.
In this paper, we use 3-D wavefield simulations to address two problems: first, we evaluate the contribution of surface- and body-wave sensitivity to a change at depth. We introduce a thin layer with a perturbed velocity at different depths and measure the apparent relative velocity changes due to this layer at different times in the coda and for different degrees of heterogeneity of the model. We show that the depth sensitivity can be modelled as a linear combination of body- and surface-wave sensitivity. The lapse-time-dependent sensitivity ratio of body waves and surface waves can be used to build 3-D sensitivity kernels for imaging purposes. Second, we compare the lapse-time behaviour in the presence of a perturbation in horizontal and vertical slabs to address, for instance, the origin of the velocity changes detected after large earthquakes. 10. 3D elastic control for mobile devices. PubMed Hachet, Martin; Pouderoux, Joachim; Guitton, Pascal 2008-01-01 To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications. 11. Investigation of surface wave amplitudes in 3-D velocity and 3-D Q models Ruan, Y.; Zhou, Y. 2010-12-01 It has long been recognized that seismic amplitudes depend on both wave speed structures and anelasticity (Q) structures. However, the effects of lateral heterogeneities in wave speed and Q structures on seismic amplitudes have not been well understood. We investigate the effects of 3-D wave speed and 3-D anelasticity (Q) structures on surface-wave amplitudes based upon wave propagation simulations of twelve globally-distributed earthquakes and 801 stations in Earth models with and without lateral heterogeneities in wave speed and anelasticity using a Spectral Element Method (SEM). Our tomographic-like 3-D Q models are converted from a velocity model S20RTS using a set of reasonable mineralogical parameters, assuming lateral perturbations in both velocity and Q are due to temperature perturbations. Surface-wave amplitude variations of SEM seismograms are measured in the period range of 50-200 s using boxcar taper, cosine taper and Slepian multi-tapers. We calculate ray-theoretical predictions of surface-wave amplitude perturbations due to elastic focusing, attenuation, and anelastic focusing, which respectively depend upon the second spatial derivative ("roughness") of perturbations in phase velocity, 1/Q, and the roughness of perturbations in 1/Q. Both numerical experiments and theoretical calculations show that (1) for short-period (~ 50 s) surface waves, the effects of amplitude attenuation due to 3-D Q structures are comparable with elastic focusing effects due to 3-D wave speed structures; (2) for long-period (> 100 s) surface waves, the effects of attenuation become much weaker than elastic focusing; (3) elastic focusing effects are correlated with anelastic focusing at all periods due to the correlation between velocity and Q models; and (4) amplitude perturbations depend on measurement techniques and therefore cannot be directly compared with ray-theoretical predictions because ray theory does not account for the effects of measurement 12. Modeling and Processing of Continuous 3D Elastic Wavefield Data Milkereit, B.; Bohlen, T.
2001-12-01 Continuous seismic wavefields are excited by earthquake clustering, induced seismicity in reservoirs, and mining. In hydrocarbon reservoirs, for example, pore pressure changes and fluid flow (mass transfer) will cause incremental deviatoric stresses sufficient to trigger and sustain seismic activity. Here we address three aspects of seismic wavefields in three-dimensional heterogeneous media triggered by distributed sources in space and time: forward modeling, multichannel data processing, and source location imaging. A power law distribution of seismic sources (such as the Gutenberg-Richter law) is used for the modeling of viscoelastic/elastic wave propagation through a realistic earth model. 3D modeling provides new insight into the interaction of multi-source wavefields and the role of scale-dependent elastic model parameters on transmitted and reflected/back-scattered wavefields. There exists a strong correlation between the spatial properties of the compressional, shear wave and density perturbations and the lateral correlation length of the resulting reflected or transmitted seismic wavefields. Modeling is based on the implementation of 3D elastic/viscoelastic FD codes on massively parallel and/or distributed computing resources using MPI (message passing interface). For parallelization, large grid 3D earth models are decomposed into subvolume processing elements, whereby each processing element updates the wavefield within its portion of the grid. Processing of continuous seismic wavefields excited by multiple distributed sources is based on a combination of crosscorrelated or slowness-transformed array data and Kirchhoff or reverse time migration for source location or source volume imaging. The appearance of slowness in both migration and array data processing suggests the possibility of combining them into a single process. In order to place further constraints on the migration, the directivity properties of 3-component receiver arrays can be included in 13. 3D Ultrasonic Wave Simulations for Structural Health Monitoring NASA Technical Reports Server (NTRS) Campbell Leckey, Cara A.; Miller, Corey A.; Hinders, Mark K. 2011-01-01 Structural health monitoring (SHM) for the detection of damage in aerospace materials is an important area of research at NASA. Ultrasonic guided Lamb waves are a promising SHM damage detection technique since the waves can propagate long distances. For complicated flaw geometries experimental signals can be difficult to interpret. High performance computing can now handle full 3-dimensional (3D) simulations of elastic wave propagation in materials. We have developed and implemented parallel 3D elastodynamic finite integration technique (3D EFIT) code to investigate ultrasound scattering from flaws in materials. EFIT results have been compared to experimental data and the simulations provide unique insight into details of the wave behavior. This type of insight is useful for developing optimized experimental SHM techniques. 3D EFIT can also be expanded to model wave propagation and scattering in anisotropic composite materials. 14. Filament-length-controlled elasticity in 3D fiber networks. PubMed Broedersz, C P; Sheinman, M; Mackintosh, F C 2012-02-17 We present a model for disordered 3D fiber networks to study their linear and nonlinear elasticity. In contrast to previous 2D models, these 3D networks with binary crosslinks are underconstrained with respect to fiber stretching elasticity, suggesting that bending may dominate their response.
We find that such networks exhibit a bending-dominated elastic regime controlled by fiber length, as well as a crossover to a stretch-dominated regime for long fibers. Finally, by extending the model to the nonlinear regime, we show that these networks become intrinsically nonlinear with a vanishing linear response regime in the limit of flexible or long filaments. 15. An unsplit Convolutional perfectly matched layer technique improved at grazing incidence for the differential anisotropic elastic wave equation: application to 3D heterogeneous near surface slices. Martin, R.; Komatitsch, D. 2007-05-01 In geophysical exploration, the high computational cost of the full waveform inverse problem can be drastically reduced by implementing efficient boundary conditions. In many regions of interest for the oil industry or geophysical exploration, nearly tabular geological structures can be handled and analyzed by setting receivers in wells and/or at large offset. Then, the numerical modelling of waves travelling in thin slices along wells and near surface structures can provide very fast responses if highly accurate absorbing conditions around the slice are introduced in the wave propagation modelling. Here we propose a Convolutional version of the well-known Perfectly Matched Layer technique. This optimized version allows the generation of seismic waves travelling close to the boundary layer at almost grazing incidence, which allows the treatment of thin 3D slices. The Perfectly Matched Layer (PML) technique, introduced in 1994 by Bérenger for Maxwell's equations, has become classical in the context of numerical simulations in electromagnetics, in particular for 3D finite difference in the time domain (FDTD) calculations. One of the most attractive properties of a PML model is that no reflection occurs at the interface between the physical domain and the absorbing layer before truncation to a finite-size layer and discretization by a numerical scheme. Therefore, the absorbing layer does not send spurious energy back into the medium. This property holds for any frequency and angle of incidence. However, the layer must be truncated in order to be able to perform numerical simulations, and such truncation creates a reflected wave whose amplitude is amplified by the discretization process. In 2001, Collino and Tsogka introduced a PML model for the elastodynamics equation written as a first-order system in velocity and stress with split unknowns, and discretized it based on the standard 2D staggered-grid finite-difference scheme of Virieux (1986). Then in 2001 and 2004 16. Linear Elastic Waves Revenaugh, Justin Elastic waves propagating in simple media manifest a surprisingly rich collection of phenomena. Although some can't withstand the complexities of Earth's structure, the majority only grow more interesting and more important as remote sensing probes for seismologists studying the planet's interior. To fully mine the information carried to the surface by seismic waves, seismologists must produce accurate models of the waves. Great strides have been made in this regard. Problems that were entirely intractable a decade ago are now routinely solved on inexpensive workstations. The mathematical representations of waves coded into algorithms have grown vastly more sophisticated and are troubled by many fewer approximations, enforced symmetries, and limitations. They are far from straightforward, and seismologists using them need a firm grasp on wave propagation in simple media.
Linear Elastic Waves, by applied mathematician John G. Harris, responds to this need. 17. Laplace-domain waveform modeling and inversion for the 3D acoustic-elastic coupled media Shin, Jungkyun; Shin, Changsoo; Calandra, Henri 2016-06-01 Laplace-domain waveform inversion reconstructs long-wavelength subsurface models by using the zero-frequency component of damped seismic signals. Despite the computational advantages of Laplace-domain waveform inversion over conventional frequency-domain waveform inversion, an acoustic assumption and an iterative matrix solver have been used to invert 3D marine datasets to mitigate the intensive computing cost. In this study, we develop a Laplace-domain waveform modeling and inversion algorithm for 3D acoustic-elastic coupled media by using a parallel sparse direct solver library (MUltifrontal Massively Parallel Solver, MUMPS). We precisely simulate a real marine environment by coupling the 3D acoustic and elastic wave equations with the proper boundary condition at the fluid-solid interface. In addition, we can extract the elastic properties of the Earth below the sea bottom from the recorded acoustic pressure datasets. As a matrix solver, the parallel sparse direct solver is used to factorize the non-symmetric impedance matrix in a distributed memory architecture and rapidly solve the wave field for a number of shots by using the lower and upper matrix factors. Using both synthetic datasets and real datasets obtained by a 3D wide azimuth survey, the long-wavelength component of the P-wave and S-wave velocity models is reconstructed, and the proposed modeling and inversion algorithm is verified. A cluster of 80 CPU cores is used for this study. 18. Nonstationary 3D motion of an elastic spherical shell Tarlakovskii, D. V.; Fedotenkov, G. V. 2015-03-01 A 3D model of motion of a thin elastic spherical Timoshenko shell under the action of arbitrarily distributed nonstationary pressure is considered. An approach for splitting the system of equations of 3D motion of the shell is proposed. The integral representations of the solution with kernels in the form of influence functions, which can be determined analytically by using series expansions in the eigenfunctions and the Laplace transform, are constructed. An algorithm for solving the problem on the action of nonstationary normal pressure on the shell is constructed and implemented. The obtained results find practical use in aircraft and rocket construction and in many other industrial fields where thin-walled shell structural members under nonstationary working conditions are widely used. 19. Simulation of 3D Global Wave Propagation Through Geodynamic Models Schuberth, B.; Piazzoni, A.; Bunge, H.; Igel, H.; Steinle-Neumann, G. 2005-12-01 This project aims at a better understanding of the forward problem of global 3D wave propagation. We use the spectral element program "SPECFEM3D" (Komatitsch and Tromp, 2002a,b) with varying input models of seismic velocities derived from mantle convection simulations (Bunge et al., 2002). The purpose of this approach is to obtain seismic velocity models independently from seismological studies. In this way one can test the effects of varying parameters of the mantle convection models on the seismic wave field. In order to obtain the seismic velocities from the temperature field of the geodynamical simulations we follow a mineral physics approach. Assuming a certain mantle composition (e.g. pyrolite with CMASF composition) we compute the stable phases for each depth (i.e.
pressure) and temperature by system Gibbs free energy minimization. Elastic moduli and density are calculated from the equations of state of the stable mineral phases. For this we use a mineral physics database derived from calorimetric experiments (enthalpy and entropy of formation, heat capacity) and EOS parameters. 20. Elastic Properties of 3D-Printed Rock Models: Dry and Saturated Cracks Huang, L.; Stewart, R.; Dyaur, N. 2014-12-01 Many regions of subsurface interest are, or will be, fractured. In addition, these zones may be subject to varying saturations and stresses. New 3D printing techniques using different materials and structures provide opportunities to understand porous or fractured materials and fluid effects on their elastic properties. We use a 3D printer (Stratasys Dimension SST 768) to print two rock models: a solid octahedral prism and a porous cube with thousands of penny-shaped cracks. The printing material is ABS thermal plastic with a density of 1.04 g/cm3. After printing, we measure the elastic properties of the models, both dry and 100% saturated with water. Both models exhibit VTI (Vertical Transverse Isotropic) symmetry due to layering (about 0.25 mm thick) from the printing process. The prism has a density of 0.96 g/cm3 before saturation and 1.00 g/cm3 after saturation. Its effective porosity is calculated to be 4 %. We use ultrasonic transducers (500 kHz) to measure both P- and shear-wave velocities, and the raw material has a P-wave velocity of 1.89 km/s and a shear-wave velocity of 0.91 km/s. P-wave velocity in the unsaturated prism increases from 1.81 km/s to 1.84 km/s after saturation in the direction parallel to layering and from 1.73 km/s to 1.81 km/s in the direction perpendicular to layering. The fast shear-wave velocity decreases from 0.88 km/s to 0.87 km/s and the slow shear-wave velocity decreases from 0.82 km/s to 0.81 km/s. The cube, printed with penny-shaped cracks, gives a density of 0.79 g/cm3 and a porosity of 24 %. We measure its P-wave velocity as 1.78 km/s and 1.68 km/s in the direction parallel and perpendicular to the layering, respectively. Its fast shear-wave velocity is 0.88 km/s and slow shear-wave velocity is 0.70 km/s. The penny-shaped cracks have significant influence on the elastic properties of the 3D-printed rock models. To better understand and explain the fluid effects on the elastic properties of the models, we apply the extended 1. Diagnosis and control of 3D elastic mechanical structures Krajcin, Idriz; Soeffker, Dirk 2005-05-01 In this paper, a model-based approach for fault detection and vibration control of flexible structures is proposed and applied to 3D-structures. Faults like cracks or impacts acting on a flexible structure are considered as unknown inputs acting on the structure. The Proportional-Integral-Observer (PI-Observer) is used to estimate the system states as well as unknown inputs acting on a system. Also the effects of structural changes are understood as external effects (related to the unchanged structure) and are considered as fictitious external forces or moments. The paper deals with the design of the PI-Observer for practical applications when measurement noise and model uncertainties are present and shows its performance in experimental results. As examples, impacts acting upon a one-side-clamped elastic beam and on a thin plate structure are estimated using displacement or strain measurements.
To control the vibration of the flexible plate, two piezoelectric patches bonded on the structure are used as actuators. The control algorithm introduced in this contribution contains a state feedback control and additionally a disturbance rejection. The disturbances are estimated using the PI-Observer. Experimental results show the performance and the robustness properties of the control strategy for the vibration control of a very thin plate. 2. High pressure system for 3-D study of elastic anisotropy Lokajicek, T.; Pros, Z.; Klima, K. 2003-04-01 A new high pressure system was designed for the study of elastic anisotropy of condensed matter under high confining pressure up to 700 MPa. Dynamic and static parameters could be measured simultaneously: a) dynamic parameters by ultrasonic sounding, b) static parameters by measuring the deformation of the spherical sample. The measurement is carried out on spherical samples of diameter 50 ± 0.01 mm. A higher value of confining pressure was reached due to the new construction of the sample positioning unit. The positioning unit is equipped with two Portecap step motors, which are located inside the vessel and make it possible to rotate the sphere and a couple of piezoceramic transducers. Sample deformation is measured in the same direction as the ultrasonic signal travel time. Only electric leads connect the inner part of the high pressure vessel with the surrounding environment. The experimental setup enables simultaneous P-wave ultrasonic sounding, measurement of the current sample deformation at the sounding points, measurement of the current value of the confining pressure, and measurement of the current stress-medium temperature. An air-driven Haskel high pressure pump is used to produce high values of confining pressure up to 700 MPa. Ultrasonic signals are recorded by an Agilent 54562 digital scope with a sampling frequency of 100 MHz. Control and measuring software was developed in the Agilent VEE environment running under the MS Windows 2000 operating system. The measuring setup was tested by measurements on monomineral spherical samples of quartz and corundum. Both of them have trigonal symmetry. The measurements showed that the P-wave velocity range of quartz was between 5.7-7.0 km/s and the velocity range of corundum was between 9.7-10.9 km/s. High-pressure-resistant Mesing LVDT transducers together with an Intronix electronic unit were used to monitor sample deformation. Sample deformation is monitored with an accuracy of 0.1 micron. All test measurements proved the good accuracy of the whole measuring setup. This 3. A 3D staggered-grid finite difference scheme for poroelastic wave equation Zhang, Yijie; Gao, Jinghuai 2014-10-01 Three dimensional numerical modeling has been a viable tool for understanding wave propagation in real media. The poroelastic media can better describe the phenomena of hydrocarbon reservoirs than acoustic and elastic media. However, the numerical modeling in 3D poroelastic media demands significantly more computational capacity, including both computational time and memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. During the procedure, parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using message passing interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves the simulation efficiency, and that 3D domain decomposition is the most efficient.
We also analyze the numerical dispersion and stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D numerical simulation can provide a realistic description of wave propagation. 4. 3-D FDTD simulation of shear waves for evaluation of complex modulus imaging. PubMed Orescanin, Marko; Wang, Yue; Insana, Michael 2011-02-01 The Navier equation describing shear wave propagation in 3-D viscoelastic media is solved numerically with a finite-difference time-domain (FDTD) method. Solutions are formed in terms of transverse scatterer velocity waves and then verified via comparison to measured wave fields in heterogeneous hydrogel phantoms. The numerical algorithm is used as a tool to study the effects on complex shear modulus estimation from wave propagation in heterogeneous viscoelastic media. We used an algebraic Helmholtz inversion (AHI) technique to solve for the complex shear modulus from simulated and experimental velocity data acquired in 2-D and 3-D. Although 3-D velocity estimates are required in general, there are object geometries for which 2-D inversions provide accurate estimations of the material properties. Through simulations and experiments, we explored artifacts generated in elastic and dynamic-viscous shear modulus images related to the shear wavelength and average viscosity. 5. Elastic waves in quasiperiodic structures Velasco, V. R.; Zárate, J. E. 2001-08-01 We study the transverse and sagittal elastic waves in different quasiperiodic structures by means of the full transfer-matrix technique and the surface Green-function matching method. The quasiperiodic structures follow Fibonacci, Thue-Morse and Rudin-Shapiro sequences, respectively. We consider finite structures having stress-free bounding surfaces and different generation orders, including up to more than 1000 interfaces. We obtain the dispersion relations for elastic waves and the spatial localization of the different modes. The fragmentation of the spectrum for different sequences is evident for intermediate generation orders in the case of transverse elastic waves, whereas, for sagittal elastic waves, higher generation orders are needed to show the spectrum fragmentation clearly. The results of Fibonacci and Thue-Morse sequences exhibit similarities not present in the results of Rudin-Shapiro sequences. 6. 3D Guided Wave Motion Analysis on Laminated Composites NASA Technical Reports Server (NTRS) Tian, Zhenhua; Leckey, Cara; Yu, Lingyu 2013-01-01 Ultrasonic guided waves have proved useful for structural health monitoring (SHM) and nondestructive evaluation (NDE) due to their ability to propagate long distances with less energy loss compared to bulk waves and due to their sensitivity to small defects in the structure. Analysis of actively transmitted ultrasonic signals has long been used to detect and assess damage. However, there remain many challenging tasks for guided wave based SHM due to the complexity involved with propagating guided waves, especially in the case of composite materials. The multimodal nature of the ultrasonic guided waves complicates the related damage analysis. This paper presents results from parallel 3D elastodynamic finite integration technique (EFIT) simulations used to acquire 3D wave motion in the subject laminated carbon fiber reinforced polymer composites. The acquired 3D wave motion is then analyzed by frequency-wavenumber analysis to study the wave propagation and interaction in the composite laminate.
The frequency-wavenumber analysis enables the study of individual modes and the visualization of mode conversion. Delamination damage has been incorporated into the EFIT model to generate "damaged" data. The potential for damage detection in laminated composites is discussed at the end. 7. A 3D Orthotropic Elastic Continuum Damage Material Model SciTech Connect English, Shawn Allen; Brown, Arthur A. 2013-08-01 A three dimensional orthotropic elastic constitutive model with continuum damage is implemented for polymer matrix composite lamina. Damage evolves based on a quadratic homogeneous function of thermodynamic forces in the orthotropic planes. A small strain formulation is used to assess damage. In order to account for large deformations, a Kirchhoff material formulation is implemented and coded for numerical simulation in Sandia's Sierra Finite Element code suite. The theoretical formulation is described in detail. An example of material parameter determination is given and a worked example is presented. 8. 3D volumetric radar using 94-GHz millimeter waves Takács, Barnabás 2006-05-01 This article describes a novel approach to the real-time visualization of 3D imagery obtained from a 3D millimeter wave scanning radar. The MMW radar system employs a spinning antenna to generate a fan-shaped scanning pattern of the entire scene. The beams formed this way provide all-weather 3D distance measurements (range/azimuth display) of objects as they appear on the ground. The beam width of the antenna and its side lobes are optimized to produce the best possible resolution even at distances of up to 15 km. To create a full 3D data set the fan pattern is tilted up and down with the help of a controlled stepper motor. For our experiments we collected data at 0.1 degree increments while using both bi-static and mono-static antennas in our arrangement. The data collected formed a stack of range-azimuth images in the shape of a cone. This information is displayed using our high-end 3D visualization engine capable of displaying high-resolution volumetric models at 30 frames per second. The resulting 3D scenes can then be viewed from any angle and subsequently processed to integrate, fuse or match them against real-life sensor imagery or 3D model data stored in a synthetic database. 9. Frozen Gaussian approximation for 3-D seismic wave propagation Chai, Lihui; Tong, Ping; Yang, Xu 2017-01-01 We present a systematic introduction to applying the frozen Gaussian approximation (FGA) to compute synthetic seismograms in 3-D earth models. In this method, the seismic wavefield is decomposed into frozen (fixed-width) Gaussian functions, which propagate along ray paths. Rather than a coherent state solution to the wave equation, this method is rigorously derived by asymptotic expansion on the phase plane, with its accuracy determined by the ratio of short wavelength over large domain size. Similar to other ray-based beam methods (e.g. Gaussian beam methods), one can use a relatively small number of Gaussians to get accurate approximations of the high-frequency wavefield. The algorithm is embarrassingly parallel, which can drastically speed up the computation with a multicore-processor computer station. We illustrate the accuracy and efficiency of the method by comparing it to the spectral element method for 3-D seismic wave propagation in homogeneous media, where one has the analytical solution as a benchmark.
As another demonstration of the methodology, simulations of high-frequency seismic wave propagation in heterogeneous media are performed for a 3-D waveguide model and a smoothed Marmousi model, respectively. The second contribution of this paper is that we incorporate Snell's law into the FGA formulation, and asymptotically derive reflection, transmission and free surface conditions for FGA to compute high-frequency seismic wave propagation in high contrast media. We numerically test these conditions by computing traveltime kernels of different phases in the 3-D crust-over-mantle model. 10. Importance of a 3D forward modeling tool for surface wave analysis methods Pageot, Damien; Le Feuvre, Mathieu; Donatienne, Leparoux; Philippe, Côte; Yann, Capdeville 2016-04-01 In recent years, seismic surface wave analysis methods (SWM) have been widely developed and tested in the context of subsurface characterization and have demonstrated their effectiveness for sounding and monitoring purposes, e.g., high-resolution tomography of the principal geological units of California or real-time monitoring of the Piton de la Fournaise volcano. Historically, these methods were mostly developed under the assumption of a semi-infinite 1D layered medium without topography. The forward modeling is generally based on a Thomson-Haskell matrix-based modeling algorithm and the inversion is driven by Monte-Carlo sampling. Given their efficiency, SWM have been transferred to several scales, including civil engineering structures, in order to, e.g., determine the so-called Vs30 parameter or assess other critical constructional parameters in pavement engineering. However, at this scale, many structures may often exhibit 3D surface variations which drastically limit the efficiency of SWM application. Indeed, even in the case of a homogeneous structure, 3D geometry can bias the dispersion diagram of Rayleigh waves to the point of producing discontinuous phase velocity curves, which drastically impacts the 1D mean velocity model obtained from dispersion inversion. Taking advantage of the accessibility of high-performance computing centers and developments in wave propagation modeling algorithms, it is now possible to consider the use of a 3D elastic forward modeling algorithm instead of the Thomson-Haskell method in the SWM inversion process. We use a parallelized 3D elastic modeling code based on the spectral element method which allows us to obtain accurate synthetic data with very low numerical dispersion and a reasonable numerical cost. In this study, we choose dike embankments as an illustrative example. We first show that their longitudinal geometry may have a significant effect on dispersion diagrams of Rayleigh waves. Then, we demonstrate the necessity of 3D elastic modeling as a forward 11. 3D Modeling of Ultrasonic Wave Interaction with Disbonds and Weak Bonds NASA Technical Reports Server (NTRS) Leckey, C.; Hinders, M. 2011-01-01 Ultrasonic techniques, such as the use of guided waves, can be ideal for finding damage in the plate and pipe-like structures used in aerospace applications. However, the interaction of waves with real flaw types and geometries can lead to experimental signals that are difficult to interpret. 3-dimensional (3D) elastic wave simulations can be a powerful tool in understanding the complicated wave scattering involved in flaw detection and for optimizing experimental techniques. We have developed and implemented parallel 3D elastodynamic finite integration technique (3D EFIT) code to investigate Lamb wave scattering from realistic flaws.
This paper discusses simulation results for an aluminum-aluminum diffusion disbond and an aluminum-epoxy disbond and compares results from the disbond case to the common artificial flaw type of a flat-bottom hole. The paper also discusses the potential for extending the 3D EFIT equations to incorporate physics-based weak bond models for simulating wave scattering from weak adhesive bonds. 12. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids SciTech Connect Duru, Kenneth; Dunham, Eric M. 2016-01-15 Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band-limited self-similar fractal faults revealing the complexity of rupture 13. Ice-shelf tidal deflections modelled with a full 3D elastic model Konovalov, Yuri 2014-05-01 Ice-shelf flexure modelling was performed using a full 3D finite-difference elastic model, which takes into account sub-ice-shelf seawater flow. The numerical experiments were carried out for a thin plate of ice with changing ice thickness (with a trapezoidal profile along the center line). The sub-ice seawater flow was described by the wave channel equation (Holdsworth and Glynn, 1978). In the model, ice-shelf flexures result from variations in the incoming (outgoing) sea water flux, which flows into (out of) the sub-ice-shelf channel. The numerical experiments were carried out for harmonic incoming seawater fluxes and the ice-shelf flexures were obtained for tidal ocean impacts and for different ice-shelf spatial extents.
References Bassis J.N., Fricker H.A., Coleman R., Minster J.-B.: An investigation into the forces that drive ice-shelf rift propagation on the Amery Ice Shelf, East Antarctica. J. of Glaciol. 54 (184): 17-27, 2008. Holdsworth G and Glynn J.: Iceberg calving from floating glaciers by a vibrating mechanism. Nature. 274, 464-466, 1978. Konovalov Y. V.: Ice-shelf resonance deflections modelled with a 2D elastic centre-line model. Physical Review & Research International, 4(1), 9-29, 2014. Vaughan D.G.: Tidal flexure at ice shelf margins. J. Geophys. Res. 100(B4), 6213-6224, 2002. 14. Fast wave current drive antenna performance on DIII-D Mayberry, M. J.; Pinsker, R. I.; Petty, C. C.; Chiu, S. C.; Jackson, G. L.; Lippmann, S. I.; Prater, R.; Porkolab, M. 1991-10-01 Fast wave current drive (FWCD) experiments at 60 MHz are being performed on the DIII-D tokamak for the first time in high electron temperature, high-β target plasmas. A four-element phased-array antenna is used to launch a directional wave spectrum with the peak $n_\parallel$ value (approximately 7) optimized for strong single-pass electron absorption due to electron Landau damping. For this experiment, high power FW injection (2 MW) must be accomplished without voltage breakdown in the transmission lines or antenna, and without significant impurity influx. In addition, there is the technological challenge of impedance matching a four-element antenna while maintaining equal currents and the correct phasing (90 degrees) in each of the straps for a directional spectrum. We describe the performance of the DIII-D FWCD antenna during initial FW electron heating and current drive experiments in terms of these requirements. 15. Fast 3D elastic micro-seismic source location using new GPU features Xue, Qingfeng; Wang, Yibo; Chang, Xu 2016-12-01 In this paper, we describe new GPU features and their applications in passive seismic (micro-seismic) location. Locating micro-seismic events is quite important in seismic exploration, especially when searching for unconventional oil and gas resources. Different from traditional ray-based methods, the wave equation method, such as the method we use in our paper, has a remarkable advantage in adapting to low signal-to-noise ratio conditions and does not require manual data selection. However, because of its conspicuous computational cost, this class of methods is not widely used in industrial fields. To make the method useful, we implement imaging-like wave equation micro-seismic location in 3D elastic media and use GPUs to accelerate our algorithm. We also introduce some new GPU features into the implementation to solve the data transfer and GPU utilization problems. Numerical and field data experiments show that our method can achieve a more than 30% performance improvement in the GPU implementation just by using these new features. 16. Instability and Wave Propagation in Structured 3D Composites Kaynia, Narges; Fang, Nicholas X.; Boyce, Mary C. 2014-03-01 Many structured composites found in nature possess undulating and wrinkled interfacial layers that regulate mechanical, chemical, acoustic, adhesive, thermal, electrical and optical functions of the material. This research focused on the complex instability and wrinkling pattern arising in 3D structured composites and the effect of the buckling pattern on the overall structural response. The 3D structured composites consisted of stiffer plates supported by a soft matrix on both sides.
Compression beyond the critical strain led to complex buckling patterns in the initially straight plates. Our work is motivated by the formation of systems of prescribed periodic scatterers (metamaterials) through buckling, and by the effect of such buckling on wave propagation through the metamaterial structures. Such metamaterials made from elastomers enable large reversible deformation and, as a result, significant changes in the wave propagation properties. We developed analytical and finite element models to capture various aspects of the instability mechanism. Mechanical experiments were designed to further explore the modeling results. The ability to actively alter the 3D composite structure can enable on-demand tunability of many different functions, such as active control of wave propagation to create band-gaps and waveguides. 17. Protrusive waves guide 3D cell migration along nanofibers PubMed Central Guetta-Terrier, Charlotte; Monzo, Pascale; Zhu, Jie; Long, Hongyan; Venkatraman, Lakshmi; Zhou, Yue; Wang, PeiPei; Chew, Sing Yian; Mogilner, Alexander 2015-01-01 In vivo, cells migrate on complex three-dimensional (3D) fibrous matrices, which has made investigation of the key molecular and physical mechanisms that drive cell migration difficult. Using reductionist approaches based on 3D electrospun fibers, we report for various cell types that single-cell migration along fibronectin-coated nanofibers is associated with lateral actin-based waves. These cyclical waves have a fin-like shape and propagate up to several hundred micrometers from the cell body, extending the leading edge and promoting highly persistent directional movement. Cells generate these waves through balanced activation of the Rac1/N-WASP/Arp2/3 and Rho/formins pathways. The waves originate from one major adhesion site at the leading end of the cell body, which is linked through actomyosin contractility to another site at the back of the cell, allowing force generation, matrix deformation and cell translocation. By combining experimental and modeling data, we demonstrate that cell migration in a fibrous environment requires the formation and propagation of dynamic, actin-based fin-like protrusions. PMID:26553933 18. Protrusive waves guide 3D cell migration along nanofibers. PubMed Guetta-Terrier, Charlotte; Monzo, Pascale; Zhu, Jie; Long, Hongyan; Venkatraman, Lakshmi; Zhou, Yue; Wang, PeiPei; Chew, Sing Yian; Mogilner, Alexander; Ladoux, Benoit; Gauthier, Nils C 2015-11-09 In vivo, cells migrate on complex three-dimensional (3D) fibrous matrices, which has made investigation of the key molecular and physical mechanisms that drive cell migration difficult. Using reductionist approaches based on 3D electrospun fibers, we report for various cell types that single-cell migration along fibronectin-coated nanofibers is associated with lateral actin-based waves. These cyclical waves have a fin-like shape and propagate up to several hundred micrometers from the cell body, extending the leading edge and promoting highly persistent directional movement. Cells generate these waves through balanced activation of the Rac1/N-WASP/Arp2/3 and Rho/formins pathways. The waves originate from one major adhesion site at the leading end of the cell body, which is linked through actomyosin contractility to another site at the back of the cell, allowing force generation, matrix deformation and cell translocation.
By combining experimental and modeling data, we demonstrate that cell migration in a fibrous environment requires the formation and propagation of dynamic, actin-based fin-like protrusions. 19. Generation of 3D Periodic Internal Wave Beams Chashechkin, Yuli D.; Vasiliev, Alexey Yu. We study the generation of 2D and 3D periodic internal wave beams in a continuously stratified viscous liquid, based on a complete set of governing equations and exact boundary conditions, that is, no-slip for velocity and attenuation of all disturbances at infinite distance from the source. The linearized governing equations are solved by an integral transform method. The total set of dispersion equation roots contains terms corresponding to internal waves and additional roots describing two kinds of periodic boundary layers. The first is a viscous boundary layer whose analogue is the periodic, or Stokes, layer in a homogeneous fluid. Its thickness is defined by the kinematic viscosity coefficient and the buoyancy frequency. The second, the internal boundary layer, is a specific feature of stratified flows. Besides the Stokes scale, its thickness contains an additional factor depending on the relative wave frequency and the geometry of the problem, that is, on the local slope of the emitting surface and the direction of wave propagation. We have constructed exact solutions of linear problems describing the generation of 2D waves by a strip and of 3D waves by a rectangle with an arbitrary ratio of sides, moving along or normal to a sloping plane. We also calculated the wave pattern generated by part of a vertical cylinder surface for different ratios of the intrinsic scales, that is, the cylinder radius, the boundary layer thickness and the internal viscous scale. All solutions match regularly with one another in limiting cases. The spatial decay of the waves depends on the dimension and geometry of the problem. Non-linear generation of internal waves by the Stokes boundary layer on a periodically rotating horizontal disk, or by interacting boundary layers on an arbitrarily moving strip, is investigated. We found the conditions for generation of the main frequency and its second harmonic. In experiments, periodic wave beams from different sources are visualised by the 20. Computation of elastic properties of 3D digital cores from the Longmaxi shale Zhang, Wen-Hui; Fu, Li-Yun; Zhang, Yan; Jin, Wei-Jun 2016-06-01 The dependence of the elastic moduli of shales on their mineralogy and microstructure is important for the prediction of sweet spots and shale gas production. Based on 3D digital images of the microstructure of Longmaxi black shale samples using X-ray CT, we built detailed 3D digital images of cores with porosity properties and mineral contents. Next, we used finite-element (FE) methods to derive the elastic properties of the samples. The FE method can accurately model the shale mineralogy. Particular attention is paid to the derived elastic properties and their dependence on porosity and kerogen. The elastic moduli generally decrease with increasing porosity and kerogen, and there is a critical porosity (0.75) and kerogen content (ca. ≤3%) over which the elastic moduli decrease rapidly and slowly, respectively. The derived elastic moduli of gas- and oil-saturated digital cores differ little, probably because of the low porosity (4.5%) of the Longmaxi black shale.
Clearly, the numerical experiments demonstrated the feasibility of combining microstructure images of shale samples with elastic moduli calculations to predict shale properties. 1. Second order Method for Solving 3D Elasticity Equations with Complex Interfaces PubMed Central Wang, Bao; Xia, Kelin; Wei, Guo-Wei 2015-01-01 Elastic materials are ubiquitous in nature and indispensable components in man-made devices and equipment. When a device or equipment involves composite or multiple elastic materials, elasticity interface problems come into play. The solution of three dimensional (3D) elasticity interface problems is significantly more difficult than that of elliptic counterparts due to the coupled vector components and cross derivatives in the governing elasticity equation. This work introduces the matched interface and boundary (MIB) method for solving 3D elasticity interface problems. The proposed MIB elasticity interface scheme utilizes fictitious values on irregular grid points near the material interface to replace function values in the discretization so that the elasticity equation can be discretized using the standard finite difference schemes as if there were no material interface. The interface jump conditions are rigorously enforced on the intersecting points between the interface and the mesh lines. Such an enforcement determines the fictitious values. A number of new techniques have been developed to construct efficient MIB elasticity interface schemes for dealing with the cross derivatives in the coupled governing equations. The proposed method is extensively validated over both weak and strong discontinuity of the solution, both piecewise constant and position-dependent material parameters, both smooth and nonsmooth interface geometries, and both small and large contrasts in the Poisson’s ratio and shear modulus across the interface. Numerical experiments indicate that the present MIB method is of second order convergence in both L∞ and L2 error norms for handling arbitrarily complex interfaces, including biomolecular surfaces. To the best of our knowledge, this is the first elasticity interface method able to deliver second-order convergence for the molecular surfaces of proteins. PMID:25914422 2. ZIP3D: An elastic and elastic-plastic finite-element analysis program for cracked bodies NASA Technical Reports Server (NTRS) Shivakumar, K. N.; Newman, J. C., Jr. 1990-01-01 ZIP3D is an elastic and an elastic-plastic finite element program to analyze cracks in three-dimensional solids. The program may also be used to analyze uncracked bodies or multi-body problems involving contacting surfaces. For crack problems, the program has several unique features including the calculation of mixed-mode strain energy release rates using the three-dimensional virtual crack closure technique, the calculation of the J integral using the equivalent domain integral method, the capability to extend the crack front under monotonic or cyclic loading, and the capability to close or open the crack surfaces during cyclic loading. The theories behind the various aspects of the program are explained briefly. Line-by-line data preparation is presented. Input data and results for an elastic analysis of a surface crack in a plate and for an elastic-plastic analysis of a single-edge-crack-tension specimen are also presented. 3.
Traveling Lamb wave in elastic metamaterial layer Shu, Haisheng; Xu, Lihuan; Shi, Xiaona; Zhao, Lei; Zhu, Jie 2016-10-01 The propagation of traveling Lamb waves in a single layer of elastic metamaterial is investigated in this paper. We first categorized the traveling Lamb wave modes inside an elastic metamaterial layer according to different combinations (positive or negative) of effective medium parameters. Then the impact of the frequency dependence of the effective parameters on the dispersion characteristics of traveling Lamb waves was studied. Distinct differences could be observed when comparing the traveling Lamb wave along an elastic metamaterial layer with that in a traditional elastic layer. We further examined in detail the traveling Lamb wave mode supported in an elastic metamaterial layer when the effective P and S wave velocities were simultaneously imaginary. It was found that the effective modulus ratio is the key factor for the existence of this special traveling wave mode, and the main results were verified by FEM simulations at two levels: that of the effective medium and that of the microstructure unit cell. 4. Reconstruction of a 3D stereotactic brain atlas and its contour-to-contour elastic deformation Kimura, Masahiko; Otsuki, Taisuke 1993-06-01 We describe a refined method for estimating the 3-D geometry of cerebral structures of a patient's brain from magnetic resonance (MR) images by adapting a 3-D atlas to the images. The 3-D atlas represents the figures of anatomical subdivisions of deep cerebral structures as series of contours reconstructed from a stereotactic printed atlas. The method correlates corresponding points and curve segments that are recognizable in both the atlas and the image, by elastically deforming the atlas two-dimensionally, while maintaining the point-to-point and contour-to-contour correspondence, until equilibrium is reached. We have used the method experimentally for a patient with Parkinson's disease, and successfully estimated the substructures of the thalamus to be treated. 5. 3D reconstruction method from biplanar radiography using non-stereocorresponding points and elastic deformable meshes. PubMed Mitton, D; Landry, C; Véron, S; Skalli, W; Lavaste, F; De Guise, J A 2000-03-01 Standard 3D reconstruction of bones using stereoradiography is limited by the number of anatomical landmarks visible in more than one projection. The proposed technique enables the 3D reconstruction of additional landmarks that can be identified in only one of the radiographs. The principle of this method is the deformation of an elastic object that respects stereocorresponding and non-stereocorresponding observations available in different projections. This technique is based on the principle that any non-stereocorresponding point belongs to a line joining the X-ray source and the projection of the point in one view. The aim is to determine the 3D position of these points on their line of projection when subjected to geometrical and topological constraints. This technique is used to obtain the 3D geometry of 18 cadaveric upper cervical vertebrae. The reconstructed geometry obtained is compared with direct measurements using a magnetic digitiser. The precision, determined from the point-to-surface distance between the reconstruction obtained with this technique and the reference measurements, is about 1 mm, depending on the vertebra studied. Comparison results indicate that the obtained reconstruction is close to the actual vertebral geometry.
This method can therefore be proposed to obtain the 3D geometry of vertebrae. 6. Faraday wave lattice as an elastic metamaterial. PubMed Domino, L; Tarpin, M; Patinet, S; Eddi, A 2016-05-01 Metamaterials enable the emergence of novel physical properties due to the existence of an underlying subwavelength structure. Here, we use the Faraday instability to shape the fluid-air interface with a regular pattern. This pattern undergoes an oscillating secondary instability and exhibits spontaneous vibrations that are analogous to transverse elastic waves. By locally forcing these waves, we fully characterize their dispersion relation and show that a Faraday pattern presents an effective shear elasticity. We propose a physical mechanism combining surface tension with the Faraday structured interface that quantitatively predicts the elastic wave phase speed, revealing that the liquid interface behaves as an elastic metamaterial. 7. Investigation of 3D surface acoustic waves in granular media with 3-color digital holography Leclercq, Mathieu; Picart, Pascal; Penelet, Guillaume; Tournat, Vincent 2017-01-01 This paper reports the implementation of digital color holography to investigate elastic waves propagating along a layer of a granular medium. The holographic set-up provides simultaneous recording and measurement of the 3D dynamic displacement at the surface. Full-field measurements of the acoustic amplitude and phase at different excitation frequencies are obtained. It is shown that the experimental data can be used to obtain the dispersion curve of the modes propagating in this granular medium layer. The experimental dispersion curve and that obtained from a finite element modeling of the problem are found to be in good agreement. In addition, full-field images of the interaction of an acoustic wave guided in the granular layer with a buried object are also shown. 8. Robust and Elastic Lunar and Martian Structures from 3D-Printed Regolith Inks Jakus, Adam E.; Koube, Katie D.; Geisendorfer, Nicholas R.; Shah, Ramille N. 2017-03-01 Here, we present a comprehensive approach for creating robust, elastic, designer Lunar and Martian regolith simulant (LRS and MRS, respectively) architectures using ambient condition, extrusion-based 3D-printing of regolith simulant inks. The LRS and MRS powders are characterized by distinct, highly inhomogeneous morphologies and sizes, where LRS powder particles are highly irregular and jagged and MRS powder particles are rough, but primarily rounded. The inks are synthesized via simple mixing of evaporant, surfactant, and plasticizer solvents, polylactic-co-glycolic acid (30% by solids volume), and regolith simulant powders (70% by solids volume). Both LRS and MRS inks exhibit similar rheological and 3D-printing characteristics, and can be 3D-printed at linear deposition rates of 1–150 mm/s using 300 μm- to 1.4 cm-diameter nozzles. The resulting LRS and MRS 3D-printed materials exhibit similar, but distinct internal and external microstructures and material porosity (~20–40%). These microstructures contribute to the rubber-like quasi-static and cyclic mechanical properties of both materials, with Young's moduli ranging from 1.8 to 13.2 MPa and extension to failure exceeding 250% over a range of strain rates (10⁻¹–10² min⁻¹). Finally, we discuss the potential for LRS and MRS ink components to be reclaimed and recycled, as well as be synthesized in resource-limited, extraterrestrial environments. 9.
Robust and Elastic Lunar and Martian Structures from 3D-Printed Regolith Inks PubMed Central Jakus, Adam E.; Koube, Katie D.; Geisendorfer, Nicholas R.; Shah, Ramille N. 2017-01-01 Here, we present a comprehensive approach for creating robust, elastic, designer Lunar and Martian regolith simulant (LRS and MRS, respectively) architectures using ambient condition, extrusion-based 3D-printing of regolith simulant inks. The LRS and MRS powders are characterized by distinct, highly inhomogeneous morphologies and sizes, where LRS powder particles are highly irregular and jagged and MRS powder particles are rough, but primarily rounded. The inks are synthesized via simple mixing of evaporant, surfactant, and plasticizer solvents, polylactic-co-glycolic acid (30% by solids volume), and regolith simulant powders (70% by solids volume). Both LRS and MRS inks exhibit similar rheological and 3D-printing characteristics, and can be 3D-printed at linear deposition rates of 1–150 mm/s using 300 μm- to 1.4 cm-diameter nozzles. The resulting LRS and MRS 3D-printed materials exhibit similar, but distinct internal and external microstructures and material porosity (~20–40%). These microstructures contribute to the rubber-like quasi-static and cyclic mechanical properties of both materials, with Young's moduli ranging from 1.8 to 13.2 MPa and extension to failure exceeding 250% over a range of strain rates (10⁻¹–10² min⁻¹). Finally, we discuss the potential for LRS and MRS ink components to be reclaimed and recycled, as well as be synthesized in resource-limited, extraterrestrial environments. PMID:28317904 10. Robust and Elastic Lunar and Martian Structures from 3D-Printed Regolith Inks. PubMed Jakus, Adam E; Koube, Katie D; Geisendorfer, Nicholas R; Shah, Ramille N 2017-03-20 Here, we present a comprehensive approach for creating robust, elastic, designer Lunar and Martian regolith simulant (LRS and MRS, respectively) architectures using ambient condition, extrusion-based 3D-printing of regolith simulant inks. The LRS and MRS powders are characterized by distinct, highly inhomogeneous morphologies and sizes, where LRS powder particles are highly irregular and jagged and MRS powder particles are rough, but primarily rounded. The inks are synthesized via simple mixing of evaporant, surfactant, and plasticizer solvents, polylactic-co-glycolic acid (30% by solids volume), and regolith simulant powders (70% by solids volume). Both LRS and MRS inks exhibit similar rheological and 3D-printing characteristics, and can be 3D-printed at linear deposition rates of 1–150 mm/s using 300 μm- to 1.4 cm-diameter nozzles. The resulting LRS and MRS 3D-printed materials exhibit similar, but distinct internal and external microstructures and material porosity (~20–40%). These microstructures contribute to the rubber-like quasi-static and cyclic mechanical properties of both materials, with Young's moduli ranging from 1.8 to 13.2 MPa and extension to failure exceeding 250% over a range of strain rates (10⁻¹–10² min⁻¹). Finally, we discuss the potential for LRS and MRS ink components to be reclaimed and recycled, as well as be synthesized in resource-limited, extraterrestrial environments. 11.
Jammed elastic shells - a 3D experimental soft frictionless granular system Jose, Jissy; Blab, Gerhard A.; van Blaaderen, Alfons; Imhof, Arnout 2015-03-01 We present a new experimental system of monodisperse, soft, frictionless, fluorescently labelled elastic shells for the characterization of structure, universal scaling laws and force networks in 3D jammed matter. The interesting fact about these elastic shells is that they can reversibly deform and therefore serve as sensors of local stress in jammed matter. Similar to other soft particles, like emulsion droplets and bubbles in foam, the shells can be packed to volume fractions close to unity, which allows us to characterize the contact force distribution and universal scaling laws as a function of volume fraction, and to compare them with theoretical predictions and numerical simulations. However, our shells, unlike other soft particles, deform rather differently at large stresses. They deform without conserving their inner volume, by forming dimples at contact regions. At each contact, one of the shells buckled, forming a dimple, while the other remained spherical, closely resembling overlapping spheres. We conducted 3D quantitative analysis using confocal microscopy and image analysis routines specially developed for these particles. In addition, we analysed the randomness of the process of dimpling, which was found to be volume fraction dependent. 12. Shielding of elastic nonstationary waves by interfaces Gulyaev, V. I.; Lugovoi, P. Z.; Zayets, Yu. A. 2012-07-01 The ray method is used to solve the problem of the propagation of discontinuous (weak shock) waves in inhomogeneous elastic media. A procedure for drawing the fronts of reflected and refracted waves at interfaces and calculating their intensities is proposed. The effect of shielding discontinuous waves by one or two interfaces is studied. The cases of slipping and non-slipping contact are examined. 13. 3D elastic full waveform inversion: case study from a land seismic survey Kormann, Jean; Marti, David; Rodriguez, Juan-Esteban; Marzan, Ignacio; Ferrer, Miguel; Gutierrez, Natalia; Farres, Albert; Hanzich, Mauricio; de la Puente, Josep; Carbonell, Ramon 2016-04-01 Full Waveform Inversion (FWI) is one of the most advanced processing methods, recently reaching a mature state after years of work on theoretical and technical issues, such as the non-uniqueness of the solution, and on harnessing the huge computational power required by realistic scenarios. BSIT (Barcelona Subsurface Imaging Tools, www.bsc.es/bsit) includes an FWI algorithm that can tackle very complex problems involving large datasets. We present here the application of this system to a 3D dataset acquired to constrain the shallow subsurface. This is where the wavefield is the most complicated, because most of the wavefield conversions take place in the shallow region and because the medium is much more laterally heterogeneous there. With this in mind, at least an isotropic elastic approximation is a suitable kernel engine for FWI. The current study explores the possibility of applying elastic isotropic FWI using only the vertical component of the recorded seismograms. The survey covers an area of 500 × 500 m², and consists of a receiver grid of 10 m × 20 m combined with a 250 kg accelerated weight-drop source on a displaced grid of 20 m × 20 m. One of the main challenges in this case study is the costly 3D modeling that includes topography and substantial free surface effects.
FWI is applied to a data subset (shooting lines 4 to 12), and is performed for three frequencies ranging from 15 to 25 Hz. The starting models are obtained from travel-time tomography, and the whole computation runs on 75 nodes of the MareNostrum supercomputer for three days. The resulting models provide a higher resolution of the subsurface structures, and show a good correlation with the available borehole measurements. FWI thus allows this 1D (borehole) knowledge to be extended reliably to 3D. 14. A modified elastic foundation contact model for application in 3D models of the prosthetic knee. PubMed Pérez-González, Antonio; Fenollosa-Esteve, Carlos; Sancho-Bru, Joaquín L; Sánchez-Marín, Francisco T; Vergara, Margarita; Rodríguez-Cervantes, Pablo J 2008-04-01 Different models have been used in the literature for the simulation of surface contact in biomechanical knee models. However, there is a lack of systematic comparisons of these models applied to the simulation of a common case, which would provide relevant information about their accuracy and suitability for application in models of the implanted knee. In this work a comparison of the Hertz model (HM), the elastic foundation model (EFM) and the finite element model (FEM) for the simulation of the elastic contact in a 3D model of the prosthetic knee is presented. From the results of this comparison it is found that although the nature of the EFM offers advantages when compared with that of the HM for its application to realistic prosthetic surfaces, and when compared with the FEM in CPU time, its predictions can differ from those of the FEM in some circumstances. These differences are considerable if the comparison is performed for prescribed displacements, although they are less important for prescribed loads. To solve these problems, a new modified elastic foundation model (mEFM) is proposed that essentially maintains the simplicity of the original model while producing much more accurate results. In this paper it is shown that this new mEFM calculates the pressure distribution and contact area accurately and with short computation times for toroidal contacting surfaces. Although further work is needed to confirm its validity for more complex geometries, the mEFM is envisaged as a good option for application in 3D knee models to predict prosthetic knee performance. 15. The scattering potential of partial derivative wavefields in 3-D elastic orthorhombic media: an inversion prospective Oh, Ju-Won; Alkhalifah, Tariq 2016-09-01 Multiparameter full waveform inversion (FWI) applied to an elastic orthorhombic model description of the subsurface requires in theory a nine-parameter representation of each pixel of the model. Even with optimal acquisition on the Earth surface that includes large offsets, full azimuth, and multicomponent sensors, the potential for trade-off between the elastic orthorhombic parameters is large. The first step to understanding such trade-offs is analysing the scattering potential of each parameter, and specifically, its scattering radiation patterns. We investigate such radiation patterns for diffraction and for scattering from a horizontal reflector, assuming an isotropic background model. The radiation patterns show considerable potential for trade-off between the parameters and potentially limited resolution in their recovery. The radiation patterns of C11, C22, and C33 are well separated, so that we expect to recover these parameters with limited trade-offs.
However, the resolution of their recovery, represented by the recovered range of model wavenumbers, varies between these parameters. We can invert only for the short-wavelength components (reflection) of C33, while we can mainly invert for the long-wavelength components (transmission) of the elastic coefficients C11 and C22 if we have large enough offsets. The elastic coefficients C13, C23, and C12 suffer from strong trade-offs with C55, C44, and C66, respectively. The trade-offs between C13 and C55, as well as C23 and C44, can be partially mitigated if we acquire P-SV and SV-SV waves. However, to reduce the trade-offs between C12 and C66, we require credible SH-SH waves. The analytical radiation patterns of the elastic constants are supported by numerical gradients of these parameters. 16. Jamming of a soft granular system of hollow elastic shells in 3D using confocal microscopy Jose, Jissy; van Blaaderen, Alfons; Imhof, Arnout 2014-03-01 We introduce a new system for jammed matter research consisting of monodisperse, fluorescent, hollow deformable shells, dispersed in an index-matched solvent. The interesting fact about these elastic shells is that they undergo buckling: in each contact one of the shells receives an indentation from its neighbor under compressive stress. This kind of deformation is different from the soft granular systems experimentally studied so far, such as photoelastic disks, emulsions and foams, where the particles are flattened in the region of contact and conserve their volume. Using confocal microscopy and image analysis routines (ImageJ software) we identified the 3D position of the particles with sub-pixel resolution. The force law to find the contact forces between pairs of particles is derived from the theory of elasticity of thin shells, where the force is proportional to the square root of the indentation depth. The distribution of normalized contact forces showed a trend similar to other jammed systems, with a peak around the mean and a tail that decayed faster than exponentially away from the jamming threshold. Further, we also investigated the structure of the jammed packings and the contact number distribution with distance to jamming. 17. Computing elastic moduli on 3-D X-ray computed tomography image stacks Garboczi, E. J.; Kushch, V. I. 2015-03-01 A numerical task of current interest is to compute the effective elastic properties of a random composite material by operating on a 3D digital image of its microstructure obtained via X-ray computed tomography (CT). The 3-D image is usually sub-sampled, since an X-ray CT image is typically of order 1000³ voxels or larger, which is considered to be a very large finite element problem. Two main questions for the validity of any such study are then: can the sub-sample size be made sufficiently large to capture enough of the important details of the random microstructure so that the computed moduli can be thought of as accurate, and what boundary conditions should be chosen for these sub-samples? This paper contributes to the answer of both questions by studying a simulated X-ray CT cylindrical microstructure with three phases, cut from a random model system with known elastic properties. A new hybrid numerical method is introduced, which makes use of finite element solutions coupled with exact solutions for elastic moduli of square arrays of parallel cylindrical fibers. The new method allows, in principle, all of the microstructural data to be used when the X-ray CT image is in the form of a cylinder, which is often the case.
Appendix A describes a similar algorithm for spherical sub-samples, which may be of use when examining the mechanical properties of particles. Cubic sub-samples are also taken from this simulated X-ray CT structure to investigate the effect of two different kinds of boundary conditions: forced periodic and fixed displacements. It is found that using forced periodic displacements on the non-geometrically periodic cubic sub-samples always gave more accurate results than using fixed displacements, although with about the same precision. The larger the cubic sub-sample, the more accurate and precise was the elastic computation, and using the complete cylindrical sample with the new method gave still more accurate and precise results. Fortran 90 18. Application of 3D and 2D quantitative shear wave elastography (SWE) to differentiate between benign and malignant breast masses PubMed Central Tian, Jie; Liu, Qianqi; Wang, Xi; Xing, Ping; Yang, Zhuowen; Wu, Changjun 2017-01-01 As breast cancer tissues are stiffer than normal tissues, shear wave elastography (SWE) can locally quantify tissue stiffness and provide histological information. Moreover, tissue stiffness can be observed on three-dimensional (3D) colour-coded elasticity maps. Our objective was to evaluate the diagnostic performance of quantitative features in differentiating breast masses by two-dimensional (2D) and 3D SWE. Two hundred ten consecutive women with 210 breast masses were examined with B-mode ultrasound (US) and SWE. Quantitative features of 3D and 2D SWE were assessed, including elastic modulus standard deviation (ESDE) measured on SWE mode images and ESDU measured on B-mode images, as well as maximum elasticity (Emax). Adding quantitative features to B-mode US improved the diagnostic performance (p < 0.05) and reduced false-positive biopsies (p < 0.0001). The area under the receiver operating characteristic curve (AUC) of 3D SWE was similar to that of 2D SWE for ESDE (p = 0.026) and ESDU (p = 0.159) but inferior to that of 2D SWE for Emax (p = 0.002). Compared with ESDU, ESDE showed a higher AUC on 2D (p = 0.0038) and 3D SWE (p = 0.0057). Our study indicates that quantitative features of 3D and 2D SWE can significantly improve the diagnostic performance of B-mode US, especially 3D SWE ESDE, which shows considerable clinical value. PMID:28106134 19. Crack identification by 3D time-domain elastic or acoustic topological sensitivity Bellis, Cédric; Bonnet, Marc 2009-03-01 The topological sensitivity analysis, based on the asymptotic behavior of a cost functional associated with the creation of a small trial flaw in a defect-free solid, provides a computationally fast, non-iterative approach for identifying flaws embedded in solids. This concept is here considered for crack identification using time-dependent measurements on the external boundary. The topological derivative of a cost function under the nucleation of a crack of infinitesimal size is established, in the framework of time-domain elasticity or acoustics. The simplicity and efficiency of the proposed formulation are enhanced by the recourse to an adjoint solution. Numerical results obtained on a 3-D elastodynamic example using the conventional FEM demonstrate the usefulness of the topological derivative as a crack indicator function. To cite this article: C. Bellis, M. Bonnet, C. R. Mecanique 337 (2009). 20. Swelling and folding as mechanisms of 3D shape formation in thin elastic sheets Dias, Marcelo A.
We work with two different mechanisms to generate geometric frustration on thin elastic sheets: isotropic differential growth and folding. We describe how controlled growth and prescribing folding patterns are useful tools for designing three-dimensional objects from information printed in two dimensions. The first mechanism is inspired by the possibility of controlling shapes by swelling polymer films, where we propose a solution for the problem of shape formation by asking the question, “what 2D metric should be prescribed to achieve a given 3D shape?”, namely the reverse problem. We choose two different types of initial configurations of sheets, disk-like with one boundary and annular with two boundaries. We demonstrate our technique by choosing four examples of 3D axisymmetric shapes and finding the respective swelling factors to achieve the desired shape. Second, we present a mechanical model for a single curved fold that explains both the buckled shape of a closed fold and its mechanical stiffness. The buckling arises from the geometrical frustration between the prescribed crease angle and the bending energy of the sheet away from the crease. This frustration increases as the sheet's area increases. Stiff folds result in creases with constant space curvature while softer folds inherit the broken symmetry of the buckled shape. We extend the application of our numerical model to show the potential to study multiple fold structures. 1. Elastic shape analysis of cylindrical surfaces for 3D/2D registration in endometrial tissue characterization. PubMed Samir, Chafik; Kurtek, Sebastian; Srivastava, Anuj; Canis, Michel 2014-05-01 We study the problem of joint registration and deformation analysis of endometrial tissue using 3D magnetic resonance imaging (MRI) and 2D trans-vaginal ultrasound (TVUS) measurements. In addition to the different imaging techniques involved in the two modalities, this problem is complicated due to: 1) different patient pose during MRI and TVUS observations, 2) the 3D nature of MRI and 2D nature of TVUS measurements, 3) the unknown intersecting plane for TVUS in the MRI volume, and 4) the potential deformation of endometrial tissue during the TVUS measurement process. Focusing on the shape of the tissue, we use expert manual segmentation of its boundaries in the two modalities and apply, with modification, recent developments in shape analysis of parametric surfaces to this problem. First, we extend the 2D TVUS curves to generalized cylindrical surfaces through replication, and then we compare them with MRI surfaces using elastic shape analysis. This shape analysis provides a simultaneous registration (optimal reparameterization) and deformation (geodesic) between any two parametrized surfaces. Specifically, it provides optimal curves on MRI surfaces that match with the original TVUS curves. This framework results in an accurate quantification and localization of the deformable endometrial cells for radiologists, and growth characterization for gynecologists and obstetricians. We present experimental results using semi-synthetic data and real data from patients to illustrate these ideas. 2. 3D Full-Wave Simulations of Reflectometry SciTech Connect Valeo, E. J.; Kramer, G. J.; Nazikian, R. 2009-11-26 The characterization of fluctuation amplitudes, spatial correlation lengths, and wave vectors through measurement of the correlation properties of reflected microwave diagnostic signals depends on a quantitative knowledge of propagation in toroidal, magnetized plasma.
The disparity between the radiation wavelength (mm) and the plasma size makes full wave computations challenging. We extend a two-dimensional model, which computes propagation in a poloidal plane, to include toroidal variation. The model reduces the computational burden compared to that of solving the full-wave equation everywhere, but retains both diffraction and refraction, by merging a description appropriate to the underdense plasma (paraxial) with the required full-wave description near the reflection layer. Initial results for ITER-like profiles demonstrate the utility of the tool as an aid in specifying antenna positioning and setting sensitivity requirements. 3. Analysis of wave propagation in periodic 3D waveguides Schaal, Christoph; Bischoff, Stefan; Gaul, Lothar 2013-11-01 Structural Health Monitoring (SHM) is a growing research field in the realm of civil engineering. SHM concepts are implemented using integrated sensors and actuators to evaluate the state of a structure. Within this work, wave-based techniques are addressed. Dispersion effects for propagating waves in waveguides of different materials are analyzed for various cross-sections. Since analytical theory is limited, a general approach based on the Waveguide Finite Element Method is applied. Numerical results are verified experimentally. 4. Solitary waves in a peridynamic elastic solid DOE PAGES Silling, Stewart A. 2016-06-23 The propagation of large amplitude nonlinear waves in a peridynamic solid is analyzed. With an elastic material model that hardens in compression, sufficiently large wave pulses propagate as solitary waves whose velocity can far exceed the linear wave speed. In spite of their large velocity and amplitude, these waves leave the material they pass through with no net change in velocity and stress. They are nondissipative and nondispersive, and they travel unchanged over large distances. An approximate solution for solitary waves is derived that reproduces the main features of these waves observed in computational simulations. We demonstrate, by numerical studies, that waves interact only weakly with each other when they collide. Finally, we found that wavetrains composed of many non-interacting solitary waves form and propagate under certain boundary and initial conditions. 5. Solitary waves in a peridynamic elastic solid SciTech Connect Silling, Stewart A. 2016-06-23 The propagation of large amplitude nonlinear waves in a peridynamic solid is analyzed. With an elastic material model that hardens in compression, sufficiently large wave pulses propagate as solitary waves whose velocity can far exceed the linear wave speed. In spite of their large velocity and amplitude, these waves leave the material they pass through with no net change in velocity and stress. They are nondissipative and nondispersive, and they travel unchanged over large distances. An approximate solution for solitary waves is derived that reproduces the main features of these waves observed in computational simulations. We demonstrate, by numerical studies, that waves interact only weakly with each other when they collide. Finally, we found that wavetrains composed of many non-interacting solitary waves form and propagate under certain boundary and initial conditions. 6. Solitary waves in a peridynamic elastic solid Silling, S. A. 2016-11-01 The propagation of large amplitude nonlinear waves in a peridynamic solid is analyzed.
With an elastic material model that hardens in compression, sufficiently large wave pulses propagate as solitary waves whose velocity can far exceed the linear wave speed. In spite of their large velocity and amplitude, these waves leave the material they pass through with no net change in velocity and stress. They are nondissipative and nondispersive, and they travel unchanged over large distances. An approximate solution for solitary waves is derived that reproduces the main features of these waves observed in computational simulations. It is demonstrated by numerical studies that the waves interact only weakly with each other when they collide. Wavetrains composed of many non-interacting solitary waves are found to form and propagate under certain boundary and initial conditions. 7. Elastic model-based segmentation of 3-D neuroradiological data sets. PubMed Kelemen, A; Székely, G; Gerig, G 1999-10-01 This paper presents a new technique for the automatic model-based segmentation of three-dimensional (3-D) objects from volumetric image data. The development closely follows the seminal work of Taylor and Cootes on active shape models, but is based on a hierarchical parametric object description rather than a point distribution model. The segmentation system includes both the building of statistical models and the automatic segmentation of new image data sets via a restricted elastic deformation of shape models. Geometric models are derived from a sample set of image data which have been segmented by experts. The surfaces of these binary objects are converted into parametric surface representations, which are normalized to get an invariant object-centered coordinate system. Surface representations are expanded into series of spherical harmonics which provide parametric descriptions of object shapes. It is shown that invariant object surface parametrization provides a good approximation to automatically determine object homology in terms of corresponding sets of surface points. Gray-level information near object boundaries is represented by 1-D intensity profiles normal to the surface. Considering automatic segmentation of brain structures as our driving application, our choice of coordinates for object alignment was the well-accepted stereotactic coordinate system. Major variations of object shapes around the mean shape, also referred to as shape eigenmodes, are calculated in shape parameter space rather than the feature space of point coordinates. Segmentation makes use of the object shape statistics by restricting possible elastic deformations to the range of the training shapes. The mean shapes are initialized in a new data set by specifying the landmarks of the stereotactic coordinate system. The model elastically deforms, driven by the displacement forces across the object's surface, which are generated by matching local intensity profiles. Elastic 8. Effect of Kayak Ergometer Elastic Tension on Upper Limb EMG Activity and 3D Kinematics. PubMed Fleming, Neil; Donne, Bernard; Fletcher, David 2012-01-01 Despite the prevalence of shoulder injury in kayakers, limited published research examining associated upper limb kinematics and recruitment patterns exists. Altered muscle recruitment patterns for on-ergometer vs. on-water kayaking were recently reported; however, the mechanisms underlying these changes remain to be elucidated. The current study assessed the effect of ergometer recoil tension on upper limb recruitment and kinematics during the kayak stroke.
Male kayakers (n = 10) performed 4 × 1 min on-ergometer exercise bouts at 85% VO2max at varying elastic recoil tensions; EMG, stroke force and three-dimensional (3D) kinematic data were recorded. While stationary recoil forces significantly increased across investigated tensions (125% increase, p < 0.001), no significant differences were detected in assessed force variables during the stroke cycle. In contrast, increasing tension induced significantly higher Anterior Deltoid (AD) activity in the latter stages (70 to 90%) of the cycle (p < 0.05). No significant differences were observed across tension levels for Triceps Brachii or Latissimus Dorsi. Kinematic analysis revealed that overhead arm movements accounted for 39 ± 16% of the cycle. Elbow angle at stroke cycle onset was 144 ± 10°; maximal elbow angle (151 ± 7°) occurred at 78 ± 10% into the cycle. All kinematic markers moved to a more anterior position as tension increased. No significant change in wrist marker elevation was observed, while elbow and shoulder marker elevations significantly increased across tension levels (p < 0.05). In conclusion, data suggested that kayakers maintained normal upper limb kinematics via additional AD recruitment despite ergometer-induced recoil forces. Key points: Kayak ergometer elastic tension significantly alters Anterior Deltoid recruitment patterns. Kayakers maintain optimal arm kinematics despite changing external forces via altered shoulder muscle recruitment. Overhead arm movements account for a high proportion of the kayak 9. Effect of Kayak Ergometer Elastic Tension on Upper Limb EMG Activity and 3D Kinematics PubMed Central Fleming, Neil; Donne, Bernard; Fletcher, David 2012-01-01 Despite the prevalence of shoulder injury in kayakers, limited published research examining associated upper limb kinematics and recruitment patterns exists. Altered muscle recruitment patterns for on-ergometer vs. on-water kayaking were recently reported; however, the mechanisms underlying these changes remain to be elucidated. The current study assessed the effect of ergometer recoil tension on upper limb recruitment and kinematics during the kayak stroke. Male kayakers (n = 10) performed 4 × 1 min on-ergometer exercise bouts at 85% VO2max at varying elastic recoil tensions; EMG, stroke force and three-dimensional (3D) kinematic data were recorded. While stationary recoil forces significantly increased across investigated tensions (125% increase, p < 0.001), no significant differences were detected in assessed force variables during the stroke cycle. In contrast, increasing tension induced significantly higher Anterior Deltoid (AD) activity in the latter stages (70 to 90%) of the cycle (p < 0.05). No significant differences were observed across tension levels for Triceps Brachii or Latissimus Dorsi. Kinematic analysis revealed that overhead arm movements accounted for 39 ± 16% of the cycle. Elbow angle at stroke cycle onset was 144 ± 10°; maximal elbow angle (151 ± 7°) occurred at 78 ± 10% into the cycle. All kinematic markers moved to a more anterior position as tension increased. No significant change in wrist marker elevation was observed, while elbow and shoulder marker elevations significantly increased across tension levels (p < 0.05). In conclusion, data suggested that kayakers maintained normal upper limb kinematics via additional AD recruitment despite ergometer-induced recoil forces.
Key points: Kayak ergometer elastic tension significantly alters Anterior Deltoid recruitment patterns. Kayakers maintain optimal arm kinematics despite changing external forces via altered shoulder muscle recruitment. Overhead arm movements account for a high proportion of the kayak 10. High Resolution WENO Simulation of 3D Detonation Waves DTIC Science & Technology 2012-02-27 pocket behind the detonation front was not observed in their results because the rotating transverse detonation completely consumed the unburned gas. Dou...three-dimensional detonations We add source terms (functions of x, y, z and t) to the PDE system so that the following functions are exact solutions to... detonation rotates counter-clockwise, opposite to that in [48]. It can be seen that the triple lines and transverse waves collide with the walls, and strong 11. Embedding SAS approach into conjugate gradient algorithms for asymmetric 3D elasticity problems SciTech Connect Chen, Hsin-Chu; Warsi, N.A.; Sameh, A. 1996-12-31 In this paper, we present two strategies to embed the SAS (symmetric-and-antisymmetric) scheme into conjugate gradient (CG) algorithms to make solving 3D elasticity problems, with or without global reflexive symmetry, more efficient. The SAS approach is physically a domain decomposition scheme that takes advantage of reflexive symmetry of discretized physical problems, and algebraically a matrix transformation method that exploits special reflexivity properties of the matrix resulting from discretization. In addition to offering large-grain parallelism, which is valuable in a multiprocessing environment, the SAS scheme also has the potential for reducing arithmetic operations in the numerical solution of a reasonably wide class of scientific and engineering problems. This approach can be applied directly to problems that have global reflexive symmetry, yielding smaller and independent subproblems to solve, or indirectly to problems with partial symmetry, resulting in loosely coupled subproblems. The decomposition is achieved by separating the reflexive subspace from the antireflexive one, possessed by a special class of matrices A ∈ C^(n×n) that satisfy the relation A = PAP, where P is a reflection matrix (a symmetric signed permutation matrix). 12. 3D dynamic simulation of crack propagation in extracorporeal shock wave lithotripsy Wijerathne, M. L. L.; Hori, Muneo; Sakaguchi, Hide; Oguni, Kenji 2010-06-01 Some experimental observations of Shock Wave Lithotripsy (SWL), which include 3D dynamic crack propagation, are simulated with the aim of reproducing fragmentation of kidney stones with SWL. Extracorporeal shock wave lithotripsy (ESWL) is the fragmentation of kidney stones by focusing an ultrasonic pressure pulse onto the stones. 3D models with fine discretization are used to accurately capture the high amplitude shear shock waves. For solving the resulting large scale dynamic crack propagation problem, PDS-FEM is used; it provides numerically efficient failure treatments. With a distributed memory parallel code of PDS-FEM, experimentally observed 3D photoelastic images of transient stress waves and crack patterns in cylindrical samples are successfully reproduced. The numerical crack patterns are in good quantitative agreement with the experimental ones. The results show that the high amplitude shear waves induced in the solid by the lithotriptor-generated shock wave play a dominant role in stone fragmentation. 13.
Tailored complex 3D vortex lattice structures by perturbed multiples of three-plane waves. PubMed Xavier, Jolly; Vyas, Sunil; Senthilkumaran, Paramasivam; Joseph, Joby 2012-04-20 As three-plane waves are the minimum number required for the formation of vortex-embedded lattice structures by plane wave interference, we present our experimental investigation on the formation of complex 3D photonic vortex lattice structures by a designed superposition of multiples of phase-engineered three-plane waves. The unfolding of the generated complex photonic lattice structures with higher order helical phase is realized by perturbing the superposition of a relatively phase-encoded, axially equidistant multiple of three noncoplanar plane waves. Through a programmable spatial light modulator assisted single step fabrication approach, the unfolded 3D vortex lattice structures are experimentally realized, well matched to our computer simulations. The formation of higher order intertwined helices embedded in these 3D spiraling vortex lattice structures by the interference of multiples of phase-engineered three-plane waves is also studied. 14. Simultaneous elastic parameter inversion in 2-D/3-D TTI medium combined later arrival times Bai, Chao-ying; Wang, Tao; Yang, Shang-bei; Li, Xing-wang; Huang, Guo-jiao 2016-04-01 Traditional traveltime inversion for an anisotropic medium is, in general, based on an assumption of "weak" anisotropy, which simplifies both the forward part (ray tracing is performed once only) and the inversion part (a linear inversion solver is possible). But for some real applications, a general (both "weak" and "strong") anisotropic medium should be considered. In such cases, one has to develop a ray tracing algorithm to handle the general (including "strong") anisotropic medium and also to design a non-linear inversion solver for the subsequent tomography. Meanwhile, it is constructive to investigate how much the tomographic resolution can be improved by introducing the later arrivals. With this motivation, we combined our newly developed ray tracing algorithm (multistage irregular shortest-path method) for general anisotropic media with a non-linear inversion solver (a damped minimum norm, constrained least squares problem with a conjugate gradient approach) to formulate a non-linear traveltime inversion scheme for anisotropic media. This anisotropic traveltime inversion procedure is able to incorporate the later (reflected) arrival times. Both 2-D/3-D synthetic inversion experiments and comparison tests show that (1) the proposed anisotropic traveltime inversion scheme is able to recover the high contrast anomalies and (2) it is possible to improve the tomographic resolution by introducing the later (reflected) arrivals, but not as much as expected from the isotropic case, because the sensitivities (or derivatives) of the different velocities (qP, qSV and qSH) with respect to the different elastic parameters are not the same and also depend on the inclination angle. 15. Wave propagation analysis of quasi-3D FG nanobeams in thermal environment based on nonlocal strain gradient theory 2016-09-01 This article examines the application of nonlocal strain gradient elasticity theory to the wave dispersion behavior of a size-dependent functionally graded (FG) nanobeam in a thermal environment. The theory contains two scale parameters corresponding to both nonlocal and strain gradient effects.
A quasi-3D sinusoidal beam theory considering shear and normal deformations is employed to present the formulation. The Mori-Tanaka micromechanical model is used to describe the functionally graded material properties. Hamilton's principle is employed to obtain the governing equations of the nanobeam, accounting for the thickness stretching effect. These equations are solved analytically to find the wave frequencies and phase velocities of the FG nanobeam. It is indicated that the wave dispersion behavior of FG nanobeams is significantly affected by temperature rise, nonlocality, length scale parameter and material composition. 16. Elastic Wave Propagation Mechanisms in Underwater Acoustic Environments DTIC Science & Technology 2015-09-30 Elastic wave propagation mechanisms in underwater acoustic environments Scott D. Frank Marist College Department of Mathematics Poughkeepsie...conversion from elastic propagation to acoustic propagation, and intense interface waves on underwater acoustic environments with elastic bottoms... acoustic energy in the water column. Elastic material parameters will be varied for analysis of the dissipation of water column acoustic energy 17. Bulk solitary waves in elastic solids Samsonov, A. M.; Dreiden, G. V.; Semenova, I. V.; Shvartz, A. G. 2015-10-01 A short and object-oriented conspectus of bulk solitary wave theory, numerical simulations and real experiments in condensed matter is given. After a brief description of the soliton history and development, we focus on bulk solitary waves of strain, also known as waves of density and, sometimes, as elastic and/or acoustic solitons. We consider the problem of nonlinear bulk wave generation and detection in basic structural elements, rods, plates and shells, which are exhaustively studied and widely used in physics and engineering. However, this theory is mostly valid for linear elasticity, whereas the dynamic nonlinear theory of these elements is still far from complete. In order to show how the nonlinear waves can be used in various applications, we studied solitary elastic wave propagation along lengthy waveguides, and remarkably small attenuation of elastic solitons was proven in physical experiments. Both the theory and the generation of strain solitons in a shell, however, remained unsolved problems until recently, and we consider in more detail the nonlinear bulk wave propagation in a shell. We studied an axially symmetric deformation of an infinite nonlinearly elastic cylindrical shell without torsion. The problem for bulk longitudinal waves is shown to be reducible to a single equation if a relation between the transversal displacement and the longitudinal strain is found. It is found that both the 1+1D and even the 1+2D problems for long travelling waves in nonlinear solids can be reduced to the Weierstrass equation for elliptic functions, which provides the solitary wave solutions as appropriate limits. We show that the accuracy in the boundary conditions on free lateral surfaces is of crucial importance for the solution, derive a single equation for the longitudinal nonlinear strain wave, and show that the equation has, amongst others, a bidirectional solitary wave solution, which led us to successful physical experiments. We first observed the compression solitary wave in the 18.
18. Exact 3D elasticity solution for free vibrations of an eccentric hollow sphere 2011-01-01 An exact three-dimensional elastodynamic analysis for describing the natural oscillations of a freely suspended, isotropic, and homogeneous elastic sphere with an eccentrically located inner spherical cavity is developed. The translational addition theorem for spherical vector wave functions is employed to impose the zero-traction boundary conditions, leading to frequency equations in the form of exact determinantal equations involving spherical Bessel functions and Wigner 3j symbols. Extensive numerical calculations have been carried out for the first five clusters of eigenfrequencies associated with both the axisymmetric and non-axisymmetric spheroidal as well as toroidal oscillation modes, for selected inner-outer radii ratios in a wide range of cavity eccentricities. The corresponding three-dimensional deformed mode shapes are also illustrated in vivid graphical form for selected eccentricities. The numerical results demonstrate the strong influence of cavity eccentricity, mode type, and radii ratio on the vibrational characteristics of the hollow sphere. The existence of "multiple degeneracies" and the onset of "frequency splitting" are demonstrated and discussed. The accuracy of the solution is checked through appropriate convergence studies, and the validity of the results is established with the aid of a commercial finite element package as well as by comparison with data in the existing literature.
19. Elastic wave invariants for acoustic emission Pardee, W. J. 1981-07-01 It is shown that there are four conserved properties of an elastic wave in an infinite isotropic plate: the energy, the two components of wave momentum parallel to the surface, and the wave angular momentum normal to the surface. All four invariants are volume integrals of quadratic functions of the spatial (Eulerian) coordinates. The canonical energy-momentum density tensor and the orbital, spin, and total angular momentum density tensors are constructed, and sufficient conditions for their conservation are demonstrated. A procedure for measuring the wave momentum of a surface wave is proposed. It is argued that these invariants are likely to be particularly useful characterizations of acoustic emission, e.g., from a growing crack. Experimental tests are proposed, and possible applications to practical monitoring problems are described.
20. Elastic waves in ice-covered ocean Presnov, Dmitriy; Zhostkow, Ruslan; Gusev, Vladimir; Shurup, Andrey; Sobisevich, Alex 2014-05-01 The problem of propagation of acoustic waves in a shallow ice-covered sea is considered within the framework of a mathematical model of a layered medium: an ice sheet over a liquid layer (shallow sea) positioned on an elastic half-space (seabed). As a result of the analytical solution, a simplified dispersion equation has been derived and used for further analytical and numerical analysis. It has been shown that five types of waves can propagate in the layered model medium: flexural waves of the ice cover, a Rayleigh-type wave at the boundary between the elastic half-space and the liquid layer, normal modes in the ice (as in a waveguide), hydro-acoustic normal modes, and a quasi-longitudinal wave in the ice plate. Varying the initial conditions as well as the source parameters allows a solution for the acoustic pressure to be obtained.
Field experiments with geophones, hydrophones and microphones were carried out on Lake Ladoga (Leningrad Oblast in northwestern Russia) using small controlled explosions as source signals. The experiments showed satisfactory agreement with the theoretical results. Analysis of the dispersion equation for various parameters of the model provides an opportunity to estimate characteristics of the geophysical medium based on the experimentally registered wave velocities. It has been shown that it is possible to extract valuable information from the flexural and Rayleigh-type waves in the low-frequency domain of the recorded data via spatial-temporal analysis. Separate study of these waves allows measurement of the ice thickness (which is important because of ice melting and the ecological situation in the Arctic) and of the velocity of transverse waves in the seabed (which can help to determine the type of material and can be useful in mineral prospecting).
1. Efficient global wave propagation adapted to 3-D structural complexity: a pseudospectral/spectral-element approach Leng, Kuangdai; Nissen-Meyer, Tarje; van Driel, Martin 2016-12-01 We present a new, computationally efficient numerical method to simulate global seismic wave propagation in realistic 3-D Earth models. We characterize the azimuthal dependence of 3-D wavefields in terms of Fourier series, such that the 3-D equations of motion reduce to an algebraic system of coupled 2-D meridian equations, which is then solved by a 2-D spectral element method (SEM). The computational efficiency of such a hybrid method stems from the lateral smoothness of 3-D Earth models and the axial singularity of seismic point sources, which jointly confine the Fourier modes of the wavefields to a few lower orders. We present novel benchmarks of global wave solutions in 3-D structures, comparing our method with an independent, fully discretized 3-D SEM, with remarkable agreement. Performance comparisons are carried out on three state-of-the-art tomography models, with seismic period ranging from 34 s down to 11 s. Our method runs up to two orders of magnitude faster than the 3-D SEM, with a computational advantage that grows with seismic frequency.
2. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves PubMed Central Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng 2016-01-01 In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT-based method is comparable in both efficiency and accuracy in the fully sampled case, and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications.
Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements. PMID:27657066
4. Numerical Investigation of 3D multichannel analysis of surface wave method Wang, Limin; Xu, Yixian; Luo, Yinhe 2015-08-01 The multichannel analysis of surface waves (MASW) method is an efficient tool for obtaining near-surface S-wave velocity, and it has gained popularity in engineering practice. Up to now, most examples of using the MASW technique have focused on 2D models or data from a 1D linear receiver spread. We propose a 3D MASW scheme. A finite-difference (FD) method is used to investigate the scheme using linear and fan-shaped receiver spreads. Results show that 3D topography strongly affects the propagation of Rayleigh waves: the energy concentration of the dispersion image is distorted and bifurcated by the influence of free-surface topography. These effects are reduced with the 3D MASW method. Lastly, we investigate the relation between array size and the resolution of the dispersion measurement.
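For context, a minimal sketch of one standard way to build the dispersion image that MASW interprets, the phase-shift (slant-stack) transform for a 1-D linear spread; this is not necessarily the exact processing used by the authors, and the synthetic dispersive wavefield and its velocity law are illustrative assumptions.

```python
# Phase-shift dispersion imaging for a linear receiver spread: for each
# frequency, steer the per-receiver spectral phasors with trial phase
# velocities and stack; the stack peaks at the true phase velocity.
import numpy as np

def dispersion_image(traces, dt, offsets, freqs, velocities):
    """Return |image|(n_freq, n_vel) from traces of shape (n_rec, n_t)."""
    n_t = traces.shape[1]
    spec = np.fft.rfft(traces, axis=1)
    f_axis = np.fft.rfftfreq(n_t, dt)
    image = np.zeros((len(freqs), len(velocities)))
    for i, f in enumerate(freqs):
        k = np.argmin(np.abs(f_axis - f))
        u = spec[:, k] / (np.abs(spec[:, k]) + 1e-12)  # unit phasors
        for j, v in enumerate(velocities):
            steer = np.exp(1j * 2 * np.pi * f * offsets / v)
            image[i, j] = np.abs(np.sum(steer * u))
    return image

# synthetic dispersive wavefield: one mode with v(f) = 400 + 2*f (toy law)
dt, n_t = 1e-3, 2048
offsets = np.arange(24) * 2.0 + 5.0
t = np.arange(n_t) * dt
traces = np.zeros((len(offsets), n_t))
for f in np.arange(10, 60, 1.0):
    v = 400.0 + 2.0 * f
    traces += np.cos(2 * np.pi * f * (t[None, :] - offsets[:, None] / v))

freqs = np.arange(10, 60, 2.0)
vels = np.linspace(200, 700, 101)
img = dispersion_image(traces, dt, offsets, freqs, vels)
picked = vels[img.argmax(axis=1)]
print(list(zip(freqs[:5], picked[:5])))  # should track ~400 + 2*f
```

A 3D MASW scheme generalizes the steering to 2-D receiver geometries such as the fan-shaped spreads mentioned above.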
5. Bubbles attenuate elastic waves at seismic frequencies Tisato, Nicola; Quintal, Beatriz; Chapman, Samuel; Podladchikov, Yury; Burg, Jean-Pierre 2016-04-01 The vertical migration of multiphase fluids in the crust can cause hazardous events such as eruptions, explosions, pollution and earthquakes. Although seismic tomography could potentially provide a detailed image of such fluid-saturated regions, the interpretation of the tomographic signals is often controversial and fails to provide a conclusive map of the subsurface saturation. Seismic tomography should be improved by considering seismic wave attenuation (1/Q) and the dispersive elastic moduli, which account for the energy lost by the propagating elastic wave. In particular, in saturated media a significant portion of the energy carried by the propagating wave is dissipated by the wave-induced fluid flow and the wave-induced gas exsolution-dissolution (WIGED) mechanisms. The WIGED mechanism describes how a propagating wave modifies the thermodynamic equilibrium between different fluid phases, causing exsolution and dissolution of the gas in the liquid, which in turn causes a significant frequency-dependent 1/Q and modulus dispersion. The WIGED theory was initially postulated for bubbly magmas but was only recently extended to bubbly water and experimentally demonstrated. Here we report this theory and the corresponding laboratory experiments. Specifically, we present (i) attenuation measurements performed by means of the Broad Band Attenuation Vessel on porous media saturated with water and different gases, and (ii) numerical experiments validating the laboratory observations. Finally, we extend the theory to fluids and to pressure-temperature conditions typical of phreatomagmatic and hydrocarbon domains, and we compare the propagation of seismic waves in bubble-free and bubble-bearing subsurface domains. With the present contribution we extend the knowledge of attenuation in rocks saturated with multiphase fluids, demonstrating that the WIGED mechanism could be extremely important for imaging subsurface gas plumes.
6. 3D extension of Tensorial Polar Decomposition. Application to (photo-)elasticity tensors Desmorat, Rodrigue; Desmorat, Boris 2016-06-01 The orthogonalized harmonic decomposition of symmetric fourth-order tensors (i.e. having major and minor indicial symmetries, such as elasticity tensors) is completed by a representation of harmonic fourth-order tensors H by means of only two second-order harmonic (symmetric deviatoric) tensors. A similar decomposition is obtained for non-symmetric tensors (i.e. having minor indicial symmetry only, such as photo-elasticity tensors or elasto-plasticity tangent operators) by introducing a fourth-order major antisymmetric traceless tensor Z. The tensor Z is represented by means of one harmonic second-order tensor and one antisymmetric second-order tensor only. Representations of totally symmetric (rari-constant), symmetric and major antisymmetric fourth-order tensors are simple particular cases of the proposed general representation. Closed-form expressions for the tensor decomposition are given in the monoclinic case. Practical applications to monoclinic elasticity and photo-elasticity tensors are finally presented.
7. Extensions of 1d Bgk Electron Solitary Wave Solutions To 3d Magnetized and Unmagnetized Plasmas Chen, Li-Jen; Parks, George K. This paper compares the key results for BGK electron solitary waves in 3D magnetized and unmagnetized plasmas. For 3D magnetized plasmas with highly magnetic-field-aligned electrons, our results predict that the parallel widths of the solitary waves can be smaller than one Debye length, that the solitary waves can be large-scale features of the magnetosphere, and that the parallel width-amplitude relation depends on the perpendicular size. We can thus obtain an estimate of the typical perpendicular size of the observed solitary waves by assuming that a series of consecutive solitary waves are in the same flux tube with a particular perpendicular span.
In 3D unmagnetized plasma systems such as the neutral sheet and magnetic reconnection sites, our theory indicates that although mathematical solutions can be constructed as time-stationary solutions of the nonlinear Vlasov-Poisson equations, there does not exist a parameter range for which the solutions are physical. We conclude that single-humped solitary potential pulses cannot be self-consistently supported by charged particles in 3D unmagnetized plasmas.
8. Numerical simulations of full-wave fields and analysis of channel wave characteristics in 3-D coal mine roadway models Yang, Si-Tong; Wei, Jiu-Chuan; Cheng, Jiu-Long; Shi, Long-Qing; Wen, Zhi-Jie 2016-12-01 Currently, numerical simulations of seismic channel waves for the advance detection of geological structures in coal mine roadways focus mainly on modeling two-dimensional wave fields and therefore cannot accurately simulate three-dimensional (3-D) full-wave fields or seismic records in a full-space observation system. In this study, we use the first-order velocity-stress staggered-grid finite-difference algorithm to simulate 3-D full-wave fields with P-wave sources in front of coal mine roadways. We determine the three velocity components V_x, V_y and V_z at the same node in the 3-D staggered-grid finite-difference models by calculating the average values of V_y and V_z of the nodes around that node. We ascertain the wave patterns and their propagation characteristics in both symmetric and asymmetric coal mine roadway models. Our simulation results indicate that the Rayleigh channel wave is stronger than the Love channel wave in front of the roadway face. The Rayleigh waves reflected from the roadway face are concentrated in the coal seam, release less energy into the roof and floor, and propagate for a longer distance. There are surface waves and refracted head waves around the roadway. In the seismic records, the Rayleigh wave energy is stronger than that of the Love channel wave along the coal walls of the roadway, and the interference of the head waves and surface waves with the Rayleigh channel wave is weaker than with the Love channel wave. It is thus difficult to identify the Love channel wave in the seismic records. Increasing the depth of the receivers in the coal walls can effectively weaken the interference of surface waves with the Rayleigh channel wave, but cannot weaken their interference with the Love channel wave. Our results also suggest that the Love channel wave, which is often used to detect geological structures in coal mine stopes, is not suitable for detecting geological structures in front of coal mine roadways.
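A minimal sketch of the velocity-stress staggered-grid family of schemes named above, reduced to one dimension (a Virieux-type SH update); the 3-D roadway solver is far more elaborate, and all parameters here are illustrative assumptions.

```python
# Minimal 1-D first-order velocity-stress staggered-grid finite-difference
# scheme: velocity lives at integer nodes, stress at half nodes, and the
# two fields are updated in leapfrog fashion.
import numpy as np

nx, dx, dt, nt = 600, 1.0, 2e-4, 1500          # CFL = c*dt/dx = 0.5
rho = np.full(nx, 2000.0)                      # density (kg/m^3)
mu = np.full(nx, 2000.0 * 2500.0**2)           # shear modulus -> 2500 m/s

v = np.zeros(nx)                               # particle velocity
tau = np.zeros(nx - 1)                         # shear stress (half nodes)
src = nx // 2

for it in range(nt):
    # stress update from the velocity gradient
    tau += dt * mu[:-1] * (v[1:] - v[:-1]) / dx
    # velocity update from the stress gradient
    v[1:-1] += dt / rho[1:-1] * (tau[1:] - tau[:-1]) / dx
    # soft Ricker-like source injected into the velocity field
    t0, fp = 0.03, 60.0
    a = (np.pi * fp * (it * dt - t0)) ** 2
    v[src] += dt * (1 - 2 * a) * np.exp(-a)

print("peak |v| away from source:", np.abs(v[:src - 50]).max())
```

The 3-D version keeps the same leapfrog structure but staggers the three velocity components and six stress components on different sub-grids, which is why co-locating V_x, V_y and V_z at a node requires the averaging described above.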
9. Efficient methods to model the scattering of ultrasonic guided waves in 3D Moreau, L.; Velichko, A.; Wilcox, P. D. 2010-03-01 The propagation of ultrasonic guided waves and their interaction with defects is of interest to the nondestructive testing community. There is no general solution to the scattering problem, and it remains an ongoing research topic. Owing to the complexity of guided-wave scattering problems, most existing models address the 2D case. However, thanks to the increase in computing power, specific 3D problems can also be studied with the help of numerical or semi-analytical methods. This paper describes two efficient methods for modeling 3D scattering problems. The first is the use of Huygens' principle to reduce the size of finite element models. This principle allows the area of interest to be restricted to the very near field of the defect, for both the generation of the incident field and the modal decomposition of the scattered field. The second method consists of separating the 3D problem into two 2D problems whose solutions are calculated and used to approximate the 3D solution. This can be used at low frequency-thickness products, where Lamb waves behave similarly to bulk waves. These two methods are presented briefly and compared on simple scattering cases.
10. HEMP 3D -- a finite difference program for calculating elastic-plastic flow SciTech Connect Wilkins, M.L. 1993-05-26 The HEMP 3D program can be used to solve problems in solid mechanics involving dynamic plasticity and time-dependent material behavior, as well as problems in gas dynamics. The equations of motion, the conservation equations, and the constitutive relations are solved by finite difference methods following the format of the HEMP computer simulation program formulated in two space dimensions and time. Presented here is an update of the 1975 report on the HEMP 3D numerical technique. The present report includes the sliding surface routines programmed by Robert Gulliford.
11. Elastic waves in structurally chiral composites SciTech Connect Yang, Shiuhkuang. 1990-01-01 Elastic wave propagation through structurally chiral (handed) media was studied. The primary objectives are to construct structurally chiral composites and to characterize their properties. Structurally chiral composites are constructed by stacking identical uniaxial plates whose consecutive symmetry axes describe either a right- or a left-handed spiral. A matrix representation method is used to solve the elastic wave propagation in such layered composites. Numerical computation of the plane wave reflection and transmission characteristics for chiral arrangements is compared with that for the non-chiral one. It is concluded that the co-polarized characteristics are unaffected by the structural chirality, while the cross-polarized reflected and transmitted fields are greatly influenced by it. Numerical modeling is also applied to the real samples. The polarization ellipse of the transmitted field of each sample is calculated. To verify the form chirality, four glass-reinforced chiral and non-chiral composite samples were made from helix tape, molded, debulked, and cured individually under identical temperature and pressure histories. The spiral composites are characterized using shear and longitudinal wave transducers in ultrasonic experiments. Both the material properties and the polarization ellipse of the transmitted field of each sample are measured. It is proved conclusively that left and right handedness in the microstructures of a material rotates the plane of polarization of a propagating shear wave in opposite directions. Thus it is now possible to say that, by reducing the length scale of the handed microstructures to one more appropriate to the propagating wavelength, a medium is obtained that gives rise to effects similar to optical rotation and optical dichroism.
12. GPU-accelerated elastic 3D image registration for intra-surgical applications. PubMed Ruijters, Daniel; ter Haar Romeny, Bart M; Suetens, Paul 2011-08-01 Local motion within intra-patient biomedical images can be compensated for by using elastic image registration. The application of B-spline based elastic registration during interventional treatment is seriously hampered by its considerable computation time.
The graphics processing unit (GPU) can be used to accelerate the calculation of such elastic registrations by using its parallel processing power, and by employing the hardwired tri-linear interpolation capabilities to efficiently perform the cubic B-spline evaluation. In this article it is shown that the similarity measure and its derivatives can also be calculated on the GPU, using a two-pass approach. On average, a speedup factor of 50 compared to a straightforward CPU implementation was reached.
13. Validation of a 3D computational fluid-structure interaction model simulating flow through an elastic aperture PubMed Central Quaini, A.; Canic, S.; Glowinski, R.; Igo, S.; Hartley, C.J.; Zoghbi, W.; Little, S. 2011-01-01 This work presents a validation of a fluid-structure interaction computational model simulating the flow conditions in an in vitro mock heart chamber modeling mitral valve regurgitation during the ejection phase, during which the trans-valvular pressure drop and valve displacement are not as large. The mock heart chamber was developed to study the use of 2D and 3D color Doppler techniques in imaging the clinically relevant complex intra-cardiac flow events associated with mitral regurgitation. Computational models are expected to play an important role in supporting, refining, and reinforcing the emerging 3D echocardiographic applications. We have developed a 3D computational fluid-structure interaction algorithm based on a semi-implicit, monolithic method, combined with an arbitrary Lagrangian-Eulerian approach to capture the fluid domain motion. The mock regurgitant mitral valve, corresponding to an elastic plate with a geometric orifice, was modeled using 3D elasticity, while the blood flow was modeled using the 3D Navier-Stokes equations for an incompressible, viscous fluid. The two are coupled via the kinematic and dynamic conditions describing the two-way coupling. The pressure, the flow rate, and the orifice plate displacement were measured and compared with the numerical simulation results. An in-line flow meter was used to measure the flow, pressure transducers were used to measure the pressure, and a Doppler method developed by one of the authors was used to measure the axial displacement of the orifice plate. The maximum recorded difference between experiment and numerical simulation was 4% for the flow rate, 3.6% for the pressure, and 15% for the orifice displacement, showing excellent agreement between the two. PMID:22138194
14. Evolution of a Directional Wave Spectrum in a 3D Marginal Ice Zone with Random Floe Size Distribution Montiel, F.; Squire, V. A. 2013-12-01 A new ocean wave/sea-ice interaction model is proposed that simulates how a directional wave spectrum evolves as it travels through a realistic marginal ice zone (MIZ), where wave/ice dynamics are entirely governed by coherent conservative wave scattering effects. Field experiments conducted by Wadhams et al. (1986) in the Greenland Sea generated important data on wave attenuation in the MIZ and, particularly, on whether the wave spectrum spreads directionally or collimates with distance from the ice edge. The data suggest that angular isotropy, arising from multiple scattering by ice floes, occurs close to the edge and thenceforth dominates wave propagation throughout the MIZ. Although several attempts have been made to replicate this finding theoretically, including by the use of numerical models, none have confronted this problem in a 3D MIZ with fully randomised floe distribution properties.
We construct such a model by subdividing the discontinuous ice cover into adjacent infinite slabs of finite width parallel to the ice edge. Each slab contains an arbitrary (but finite) number of circular ice floes with randomly distributed properties. Ice floes are modeled as thin elastic plates with uniform thickness and finite draught. We consider a directional wave spectrum with harmonic time dependence incident on the MIZ from the open ocean, defined as a continuous superposition of plane waves traveling at different angles. The scattering problem within each slab is then solved using Graf's interaction theory for an arbitrary incident directional plane wave spectrum. Using an appropriate integral representation of the Hankel function of the first kind (see Cincotti et al., 1993), we map the outgoing circular wave field from each floe on the slab boundaries into a directional spectrum of plane waves, which characterizes the slab reflected and transmitted fields. Discretizing the angular spectrum, we can obtain a scattering matrix for each slab. Standard recursive
15. Elastic Wave Propagation and Generation in Seismology Lees, Jonathan M. The majority of mature seismologists of my generation were introduced to theoretical seismology via classic textbooks written in the early 1980s. Since this generation has matured and taken up the mantle of teaching seismology to a new generation, several new books have been put forward as replacements for, or alternatives to, the original classical texts. The target readers of the new texts range from beginner through intermediate to more advanced, although all have been attempts to improve upon what is now considered standard convention in quantitative seismology. To this plethora of choices we now have a new addition by Jose Pujol, titled Elastic Wave Propagation and Generation in Seismology.
16. Superfast elastic registration of histologic images of a whole rat brain for 3D reconstruction Wirtz, Stefan; Fischer, Bernd; Modersitzki, Jan; Schmitt, Oliver 2004-05-01 We present a super-fast and parameter-free algorithm for non-rigid elastic registration of images of a serially sectioned whole rat brain. The purpose is to produce a three-dimensional high-resolution reconstruction. The registration is modelled as a minimization problem for a functional consisting of a distance measure and a regularizer based on the elastic potential of the displacement field. The minimization of the functional leads to a system of non-linear partial differential equations, the so-called Navier-Lamé equations (NLE). Discretization of the NLE and a fixed-point-type iteration method lead to a linear system of equations, which has to be solved at each iteration step. We not only present a super-fast solution technique for this system, but also come up with sound strategies for accelerating the outer iteration. This includes a multi-scale approach based on a Gaussian pyramid as well as a careful estimation of the material constants for the elastic potential. The results of the registration process were checked by an expert, who was able to recognize histological details such as laminations, which was not possible before. Therefore, it is essential to apply elastic registration to this kind of imaging problem. Finally, the visually pleasing results were quantified by a distance measure, showing an improvement of about 79% after just 35 iteration steps.
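A hedged sketch of the coarse-to-fine (Gaussian pyramid) strategy used above to accelerate registration, reduced here to rigid translation estimated by FFT cross-correlation; the paper's elastic Navier-Lamé solver is far more involved, and the test images below are synthetic.

```python
# Coarse-to-fine registration on a Gaussian pyramid: estimate the shift
# at the coarsest level, then propagate and refine it at finer levels.
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def downsample(img):
    return gaussian_filter(img, sigma=1.0)[::2, ::2]

def xcorr_shift(a, b):
    """Integer shift (to feed nd_shift) that best aligns b to a."""
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    sy = iy if iy <= a.shape[0] // 2 else iy - a.shape[0]
    sx = ix if ix <= a.shape[1] // 2 else ix - a.shape[1]
    return sy, sx

def pyramid_register(fixed, moving, levels=3):
    pyr = [(fixed, moving)]
    for _ in range(levels - 1):
        f, m = pyr[-1]
        pyr.append((downsample(f), downsample(m)))
    total = np.zeros(2)
    for f, m in reversed(pyr):              # coarse to fine
        total *= 2                          # rescale the running estimate
        m_warp = nd_shift(m, total, order=1)
        total += xcorr_shift(f, m_warp)
    return total

rng = np.random.default_rng(0)
fixed = gaussian_filter(rng.standard_normal((256, 256)), 4)
moving = nd_shift(fixed, (17, -9), order=1)
# expect approx [-17, 9]: the correction that aligns moving back to fixed
print(pyramid_register(fixed, moving))
```

The same pyramid logic carries over to elastic registration: the coarse levels are cheap and capture large deformations, leaving the fine levels only small corrections, which is one source of the speed-up claimed above.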
17. Inverse obstacle scattering for elastic waves Li, Peijun; Wang, Yuliang; Wang, Zewen; Zhao, Yue 2016-11-01 Consider the scattering of a time-harmonic plane wave by a rigid obstacle embedded in an open space filled with a homogeneous and isotropic elastic medium. An exact transparent boundary condition is introduced to reduce the scattering problem to a boundary value problem in a bounded domain. Given the incident field, the direct problem is to determine the displacement of the wave field from the known obstacle; the inverse problem is to determine the obstacle's surface from measurements of the displacement on an artificial boundary enclosing the obstacle. In this paper, we consider both the direct and inverse problems. The direct problem is shown to have a unique weak solution by examining its variational formulation. The domain derivative is derived for the displacement with respect to the variation of the surface. A continuation method with respect to the frequency is developed for the inverse problem. Numerical experiments are presented to demonstrate the effectiveness of the proposed method.
18. Far-field subwavelength imaging for ultrasonic elastic waves in a plate using an elastic hyperlens Lee, Hyung Jin; Kim, Hoe Woong; Kim, Yoon Young 2011-06-01 Subwavelength imaging was experimentally performed for ultrasonic elastic waves by using an angularly stratified plate, an elastic plate hyperlens. It consists of alternating layers of aluminum and air, exhibiting a large contrast in elastic stiffness. A specially configured experimental setup is used to locate two sources within half a wavelength of each other at 100 kHz. To explain the observed phenomenon, homogenization of the elasticity coefficients of the stratified structure is employed. Because of the strong cylindrical anisotropy, the equifrequency contour becomes nearly flat along the angular wave vector, so that evanescent waves carrying high angular resolution are converted into propagating waves.
19. Force sensing using 3D displacement measurements in linear elastic bodies Feng, Xinzeng; Hui, Chung-Yuen 2016-07-01 In cell traction microscopy, the mechanical forces exerted by a cell on its environment are usually determined from experimentally measured displacements by solving an inverse problem in elasticity. In this paper, an innovative numerical method is proposed which finds the "optimal" traction for the inverse problem. When sufficient regularization is applied, we demonstrate that the proposed method significantly improves upon the widely used approach based on Green's functions. Motivated by real cell experiments, the equilibrium condition of a slowly migrating cell is imposed as a set of equality constraints on the unknown traction. Our validation benchmarks demonstrate that the numerical solution to the constrained inverse problem recovers the actual traction well when the optimal regularization parameter is used. The proposed method can thus be applied to study general force-sensing problems, which utilize displacement measurements to sense inaccessible forces in linear elastic bodies with a priori constraints.
20. A 3D Orthotropic Strain-Rate Dependent Elastic Damage Material Model. SciTech Connect English, Shawn Allen 2014-09-01 A three-dimensional orthotropic elastic constitutive model with continuum damage and cohesive-based fracture is implemented for a general polymer matrix composite lamina. The formulation assumes the possibility of distributed (continuum) damage followed by localized damage.
The current damage activation functions are simply partially interactive quadratic strain criteria. However, the code structure allows for changes in the functions without extraordinary effort. The material model formulation, implementation, characterization and use cases are presented.
1. 3D analysis of interaction of Lamb waves with defects in loaded steel plates. PubMed Kazys, R; Mazeika, L; Barauskas, R; Raisutis, R; Cicenas, V; Demcenko, A 2006-12-22 The objective of the research presented here is the investigation of the interaction of guided waves with welds, defects and other non-uniformities in steel plates loaded by liquid. The investigation has been performed using numerical simulation for the 2D and 3D cases by the finite difference and finite element methods, and by measurement of 3D distributions of acoustic fields. Propagation of the S(0) mode in a steel plate and its interaction with non-uniformities was investigated. It was shown that, using the leaky wave signals measured in the water loading the steel plate and applying signal processing, the 3D ultrasonic field structure inside and outside the plate can be reconstructed. The presence of leaky wave signals over the defect, caused by the mode conversion of Lamb waves, has been proved using numerical modelling and experimental investigations. The developed signal and data processing enables visualisation of the dynamics of ultrasonic fields over the plate, as well as estimation of the spatial positions of defects inside the steel plates.
2. Fast myocardial strain estimation from 3D ultrasound through elastic image registration with analytic regularization Chakraborty, Bidisha; Heyde, Brecht; Alessandrini, Martino; D'hooge, Jan 2016-04-01 Image registration techniques using free-form deformation models have shown promising results for 3D myocardial strain estimation from ultrasound. However, the use of this technique has mostly been limited to research institutes due to the high computational demand, which is primarily due to the computational load of the regularization term ensuring spatially smooth cardiac strain estimates. Indeed, this term typically requires evaluating derivatives of the transformation field numerically in each voxel of the image during every iteration of the optimization process. In this paper, we replace this time-consuming step with a closed-form solution directly associated with the transformation field, resulting in a speed-up factor of ~10-60,000 for typical 3D B-mode images of 250³ and 500³ voxels, depending on the size and the parametrization of the transformation field. The performance of the numeric and the analytic solutions was contrasted by computing tracking and strain accuracy on two realistic synthetic 3D cardiac ultrasound sequences, mimicking two ischemic motion patterns. Mean and standard deviation of the displacement errors over the cardiac cycle for the numeric and analytic solutions were 0.68±0.40 mm and 0.75±0.43 mm, respectively. Correlations for the radial, longitudinal and circumferential strain components at end-systole were 0.89, 0.83 and 0.95 versus 0.90, 0.88 and 0.92 for the numeric and analytic regularization, respectively. The analytic solution matched the performance of the numeric solution, as no statistically significant differences (p>0.05) were found when expressed in terms of bias or limits of agreement.
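The core idea behind that speed-up can be sketched in one dimension: for a B-spline transformation, a quadratic smoothness penalty R(c) = ∫(u'')² dx equals cᵀQc for a matrix Q that depends only on the basis, so Q can be built once instead of re-evaluating derivatives on the voxel grid at every iteration. The sketch below is a hedged 1-D toy under that assumption; the paper treats 3-D fields and a specific closed-form Q.

```python
# Compare evaluating a bending-energy regularizer on a dense grid every
# "iteration" against a precomputed quadratic form in the B-spline
# coefficients.  Both use the same rectangle-rule quadrature, so the two
# values agree to rounding error while the quadratic form is far cheaper.
import numpy as np
import time

h, n_ctrl, n_dense = 1.0, 60, 20000
x = np.linspace(0, n_ctrl * h, n_dense)
dx = x[1] - x[0]

def bspline3(t):
    """Cubic B-spline basis, support [-2, 2]."""
    t = np.abs(t)
    out = np.where(t < 1, 2/3 - t**2 + t**3 / 2, 0.0)
    return np.where((t >= 1) & (t < 2), (2 - t)**3 / 6, out)

# second derivative of every basis function on the dense grid
B2 = np.empty((n_ctrl, n_dense))
eps = 1e-4
for k in range(n_ctrl):
    t = x / h - k
    B2[k] = (bspline3(t + eps) - 2 * bspline3(t) + bspline3(t - eps)) / eps**2

Q = B2 @ B2.T * dx            # built once: Q_kl = int B_k'' B_l'' dx

rng = np.random.default_rng(1)
c = rng.standard_normal(n_ctrl)

t0 = time.perf_counter()
for _ in range(100):                     # "numeric": dense grid each time
    r_num = np.sum((c @ B2) ** 2) * dx
t1 = time.perf_counter()
for _ in range(100):                     # "analytic": tiny quadratic form
    r_ana = c @ Q @ c
t2 = time.perf_counter()
print("relative difference:", abs(r_num - r_ana) / r_num)
print("speed-up factor:", (t1 - t0) / (t2 - t1))
```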
3. Gravitational Wave Signals from 2D and 3D Core Collapse Supernova Explosions Yakunin, Konstantin; Mezzacappa, Anthony; Marronetti, Pedro; Bruenn, Stephen; Hix, W. Raphael; Lentz, Eric J.; Messer, O. E. Bronson; Harris, J. Austin; Endeve, Eirik; Blondin, John 2016-03-01 We study two- and three-dimensional (2D and 3D) core-collapse supernovae (CCSN) using our first-principles CCSN simulations performed with the neutrino hydrodynamics code CHIMERA. The following physics is included: Newtonian hydrodynamics with a nuclear equation of state capable of describing matter in both NSE and non-NSE, MGFLD neutrino transport with realistic neutrino interactions, an effective GR gravitational potential, and a nuclear reaction network. Both our 2D and 3D models achieve explosion, which in turn enables us to determine their complete gravitational wave signals. In this talk, we present these signals and analyze the similarities and differences between the 2D and 3D cases.
4. 3D printed elastic honeycombs with graded density for tailorable energy absorption Bates, Simon R. G.; Farrow, Ian R.; Trask, Richard S. 2016-04-01 This work describes the development and experimental analysis of hyperelastic honeycombs with graded densities, for the purpose of energy absorption. Hexagonal arrays are manufactured from thermoplastic polyurethane (TPU) via fused filament fabrication (FFF) 3D printing, and the density is graded by varying the cell wall thickness through the structures. Manufactured samples are subjected to static compression tests, and their energy-absorbing potential is analysed via the formation of energy absorption diagrams. It is shown that by grading the density through the structure, the energy absorption profile of these structures can be manipulated such that a wide range of compression energies can be efficiently absorbed.
5. 3D Elastic Solutions for Laterally Loaded Discs: Generalised Brazilian and Point Load Tests Serati, Mehdi; Alehossein, Habib; Williams, David J. 2014-07-01 This paper investigates the application of a double Fourier series technique to the construction of an elastic stress field in a cylindrical bar subject to lateral boundary loads. The lateral loads, including the constant-load boundary conditions, are represented by two Fourier series: one on the perimeter of the circular section (r₀, θ) and the other on the longitudinal curved surface parallel to the bar axis (z). The technique invokes acceptable potential functions of the Papkovich-Neuber displacement field, satisfying the governing partial differential equations, to assign appropriate odd and even trigonometric Fourier terms in cylindrical coordinates (r, θ, z). The generic solution decomposes the problem of interest into a state of stress caused by two independent boundary conditions along the z-axis and the θ polar angle, both superimposed on a solution for which these potentials are the product of the trigonometric terms of the independent variables (θ, z). Constants appearing in the resulting second-order partial differential equations are determined from the generally mixed (tractions and/or displacements) boundary conditions. While the solutions are satisfied exactly at the ends of an infinite bar, they are satisfied weakly, on average, in the light of Saint-Venant's approximation at the two ends of a finite bar.
The application of the proposed analysis is verified against available elastic solutions for axisymmetric and non-axisymmetric engineering problems such as the indirect Brazilian Tensile Strength and Point Load Strength tests.
6. The Vajont disaster: a 3D numerical simulation for the slide and the waves Rubino, Angelo; Androsov, Alexey; Vacondio, Renato; Zanchettin, Davide; Voltzinger, Naum 2016-04-01 A very high resolution, O(5 m), 3D hydrostatic nonlinear numerical model was used to simulate the dynamics of both the slide and the surface waves produced during the Vajont disaster (northern Italy, 1963), one of the major landslide-induced tsunamis ever documented. Different simulated wave phenomena, e.g., maximum run-up on the opposite shore, maximum height, and water velocity, were analyzed and compared with data available in the literature, including the results of a fully 3D simulation obtained with a Smoothed Particle Hydrodynamics code. The difference between the measured and simulated after-slide bathymetries was calculated and used in an attempt to quantify the relative magnitude and extent of the rigid and fluid motion components during the event.
7. Full-wave Moment Tensor and Tomographic Inversions Based on 3D Strain Green Tensor DTIC Science & Technology 2010-01-31 Only fragments of this record survive, mixing report text (on Fresnel-zone widths of ~1-Hz P/Pn waves recorded ~1000 km from the source) with reference-list entries.
8. 3D Modeling of Antenna Driven Slow Waves Excited by Antennas Near the Plasma Edge Smithe, David; Jenkins, Thomas 2016-10-01 Prior work with the 3D finite-difference time-domain (FDTD) plasma and sheath model used to model ICRF antennas in fusion plasmas has highlighted the possibility of slow wave excitation at the very low end of the SOL density range, and thus the prudent need for a slow-time evolution model to treat SOL density modifications due to the RF itself. At higher frequency, the DIII-D helicon antenna has much easier access to a parasitic slow wave excitation, and in this case the Faraday screen provides the dominant means of controlling the content of the launched mode, with antenna end-effects remaining a concern. In both cases, the danger is the same, with the slow wave propagating into a lower-hybrid resonance layer a short distance (of order cm) away from the antenna, which would parasitically absorb power, transferring energy to the SOL edge plasma, primarily through electron-neutral collisions. We will present 3D modeling of antennas at both ICRF and helicon frequencies. We have added a slow-time evolution capability for the SOL plasma density to include ponderomotive-force-driven rarefaction from the strong fields in the vicinity of the antenna, and show an initial application to NSTX antenna geometry and plasma configurations. The model is based on a Scalar Ponderomotive Potential method, using self-consistently computed local field amplitudes from the 3D simulation.
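For readers unfamiliar with the FDTD method named above, here is a minimal 1-D Yee-style vacuum update showing the staggered leapfrog structure; it is only a sketch of the algorithm family, not the authors' plasma/sheath code, which adds plasma currents, sheath boundary conditions and 3-D geometry. All parameters are illustrative.

```python
# 1-D FDTD for an electromagnetic wave in vacuum: E at integer nodes,
# H at half nodes, updated in leapfrog fashion (Courant factor 0.5).
import numpy as np

c0 = 299792458.0
eta0 = 376.730313            # impedance of free space
nx, nt = 800, 700
dx = 0.01                    # 1 cm cells
dt = 0.5 * dx / c0

Ey = np.zeros(nx)
Hz = np.zeros(nx - 1)

for it in range(nt):
    # dH/dt = (1/mu0) dE/dx, with dt/(mu0*dx) = (dt*c0/dx)/eta0
    Hz += (dt * c0 / dx) * (Ey[1:] - Ey[:-1]) / eta0
    # dE/dt = (1/eps0) dH/dx, with dt/(eps0*dx) = (dt*c0/dx)*eta0
    Ey[1:-1] += (dt * c0 / dx) * eta0 * (Hz[1:] - Hz[:-1])
    # soft Gaussian-pulse source in the middle of the grid
    Ey[nx // 2] += np.exp(-((it - 60) / 20.0) ** 2)

print("pulse energy proxy:", np.sum(Ey**2))
```

A plasma extension would add a current-density update coupled to Ey through the local plasma frequency, which is exactly where density evolution models of the kind described above feed back into the field solve.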
9. 3-D Inverse Teleseismic Scattered Wave Imaging using the Kirchhoff Approximation Liu, K.; Levander, A. 2012-04-01 We have developed a 3-D teleseismic imaging technique for scattered elastic wavefields using the Kirchhoff approximation. Kirchhoff migration/inversion has been well developed in exploration seismology within the inverse scattering framework (e.g., Miller et al., 1987; Beylkin and Burridge, 1990) to image subsurface structure that generates secondary wavefields caused by localized heterogeneities. Application of this method in global seismology has been largely limited to 2-D images made with 1-D reference models, due to high computational cost and the lack of adequately dense receiver arrays (Bostock, 2002; Poppeliers and Pavlis, 2003; Frederiksen and Revenaugh, 2004; Cao et al., 2010). The deployment of the USArray Transportable and Flexible Arrays in the United States and dense array recordings in other countries motivate developing teleseismic scattered-wavefield imaging with the Kirchhoff approximation for 3-D velocity models, for both scalar and vector wavefields, to improve upper mantle imaging. Following Bostock's development of the 2-D problem (2002), we derive the 3-D P-to-S scattering inversion formula by phrasing the inverse problem in terms of the generalized Radon transform (GRT) and singular functions of discontinuity surfaces. In the forward scattering modeling, we extend the method to utilize a 3-D migration velocity model by calculating 3-D finite-difference traveltimes, backprojected from the receivers using an eikonal solver. To demonstrate the accuracy of the inversion, we examine several synthetic cases with a variety of discontinuity surfaces (sinuous, dipping, dome- and crater-shaped discontinuity interfaces, point scatterers, etc.). The Kirchhoff GRT imaging successfully recovers the shapes of these structures. We compare our Kirchhoff-approximation imaging with the Born-approximation results, as well as with common-conversion-point (CCP) stacked receiver function imaging, for the various synthetic cases, and show a field
10. Effects of obliquely opposing and following currents on wave propagation in a new 3D wave-current basin Lieske, Mike; Schlurmann, Torsten 2016-04-01 INTRODUCTION & MOTIVATION The design of structures in coastal and offshore areas and their maintenance are key components of coastal protection. Usually, assessments of processes and loads on coastal structures are derived from experiments with flow and wave parameters in separate physical models. However, Peregrine (1976) already pointed out that flow and sea state processes in natural shallow coastal waters do not occur separately, but influence each other nonlinearly. Kemp & Simons (1982) performed 2D laboratory tests and studied the interactions between a turbulent flow and following waves. They highlight the significance of wave-induced changes in the current properties, especially in the mean flow profiles, and draw attention to turbulent fluctuations and bottom shear stresses. Kemp & Simons (1983) also studied these processes and features with opposing waves. Studies of the wave-current interaction in three-dimensional space for a given wave height, wave period and water depth were conducted by MacIver et al. (2006). The research focus is set on the investigation of long-crested waves on obliquely opposing and following currents in the new 3D wave-current basin. METHODOLOGY In a first step, the flow analysis without waves is carried out, comprising measurements of flow profiles at predefined measurement positions in the sweet spot of the basin. Five measuring points in the water column have been delineated at different water depths in order to obtain vertical flow profiles.
For the characterization of the undisturbed flow properties in the basin, a uniformly distributed flow was generated in the wave basin. In the second step, wave analysis without current, the unidirectional wave propagation and wave height were investigated for long-crested waves under intermediate-depth conditions. In the sweet spot of the wave basin, waves with three different wave directions, three wave periods and uniform wave steepness were examined. For evaluation, we applied a common
11. Software to compute elastostatic Green's functions for sources in 3D homogeneous elastic layers above a (visco)elastic halfspace 2012-12-01 We describe software, in development, to calculate elastostatic displacement Green's functions and their derivatives for point and polygonal dislocations in three-dimensional homogeneous elastic layers above an elastic or a viscoelastic halfspace. The steps to calculate a Green's function for a point source at depth z_s are as follows. 1. A grid in wavenumber space is chosen. 2. A six-element complex rotated stress-displacement vector x is obtained at each grid point by solving a two-point boundary value problem (2P-BVP). If the halfspace is viscoelastic, the solution is inverse Laplace transformed. 3. For each receiver, x is propagated to the receiver depth z_r (often z_r = 0) and then, 4, inverse Fourier transformed, with the Fourier component corresponding to the receiver's horizontal position. 5. The six elements are linearly combined into displacements and their derivatives. The dominant work is in step 2. The grid is chosen to represent the wavenumber-space solution with as few points as possible. First, the wavenumber space is transformed to increase the sampling density near zero wavenumber. Second, a tensor-product grid of Chebyshev points of the first kind is constructed in each quadrant of the transformed wavenumber space. Moment-tensor-dependent symmetries further reduce the work. The numerical solution of the 2P-BVP in step 2 involves solving a linear equation Ax = b. Half of the elements of x are of geophysical interest; the subset depends on whether z_r ≤ z_s. Denote these x̂. As wavenumber k increases, x̂ can become inaccurate in finite-precision arithmetic for two reasons: 1. The condition number of A becomes too large. 2. The norm-wise relative error (NWRE) in x̂ is large even though it is small in x. To address this problem, a number of researchers have used determinants to obtain x. This may be the best approach for 6-dimensional or smaller 2P-BVPs, where the combinatorial increase in work is still moderate. But there is an alternative
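A hedged sketch of the gridding steps just described: Chebyshev points of the first kind, mapped through a transform that concentrates samples near zero wavenumber, then combined into a tensor-product grid per quadrant. The particular mapping below (a simple power stretch) is an assumption for illustration, not the software's actual transform.

```python
# Build a per-quadrant tensor-product Chebyshev grid in wavenumber space,
# with samples clustered toward k = 0 by a densifying coordinate map.
import numpy as np

def cheb1(n):
    """Chebyshev points of the first kind, mapped to (0, 1)."""
    k = np.arange(n)
    x = np.cos((2 * k + 1) * np.pi / (2 * n))   # in (-1, 1)
    return 0.5 * (x + 1.0)

def densify_near_zero(u, kmax, p=3.0):
    """Map u in (0,1) to wavenumbers in (0, kmax), clustered near 0."""
    return kmax * u ** p

n, kmax = 16, 10.0
k1 = densify_near_zero(cheb1(n), kmax)          # 1-D wavenumber samples
# tensor-product grid for the (+,+) quadrant; the other quadrants follow
# by moment-tensor symmetries, reducing the number of 2P-BVP solves
KX, KY = np.meshgrid(k1, k1, indexing="ij")
print(KX.shape, np.sort(k1)[:4])   # 16x16 nodes, clustered toward k = 0
```

Each grid node then costs one 2P-BVP solve (the Ax = b system discussed above), which is why minimizing the node count matters.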
12. Bioconductive 3D nano-composite constructs with tunable elasticity to initiate stem cell growth and induce bone mineralization. PubMed Sagar, Nitin; Khanna, Kunal; Sardesai, Varda S; Singh, Atul K; Temgire, Mayur; Kalita, Mridula Phukan; Kadam, Sachin S; Soni, Vivek P; Bhartiya, Deepa; Bellare, Jayesh R 2016-12-01 Bioactive 3D composites play an important role in advanced biomaterial design by providing molecular coupling and improving integrity with the cellular environment of the native bone. In the present study, a hybrid lyophilized polymer composite blend of the anionic charged sodium salt of carboxymethyl chitin and gelatin (CMChNa-GEL), reinforced with nano-rod agglomerated hydroxyapatite (nHA), has been developed with enhanced biocompatibility and tunable elasticity. The scaffolds have an open, uniform and interconnected porous structure, with an average pore diameter of 157±30 μm and 89.47±0.03% porosity determined by X-ray analysis. The aspect ratio of the ellipsoidal pores decreases from 4.4 to 1.2 with increasing gelatin concentration, and from 2.14 to 1.93 with decreasing gelling temperature. The samples were resilient, with the elastic strain at 1.2 MPa of stress also decreasing from 0.33 to 0.23 with increasing gelatin concentration. The crosslinker HMDI (hexamethylene diisocyanate) yielded more resilient samples at 1.2 MPa than glutaraldehyde. Increasing the crosslinking time from 2 to 4 h in a continuous compression cycle showed no improvement in the maximum elastic strain at 1.2 MPa stress. This surface elasticity of the scaffold enables adherent self-renewal and cultivation of NTERA-2 cL.D1 (NT2/D1) pluripotent embryonal carcinoma cells on the biomechanical surface, as is shown here. Proliferation with MG-63 cells, ALP activity and Alizarin red mineralization assays on the optimized scaffold demonstrated ***p<0.001 between different time points, thus showing its potential for bone healing. In a pre-clinical study, the histological bone response to the scaffold construct displayed improved bone-regeneration activity in comparison to the self-healing control groups (sham) up to week 7 after implantation in a rabbit tibia critical-size defect. Therefore, this nHA-CMChNa-GEL scaffold composite exhibits inherent and efficient physicochemical, mechanical and biological
13. Nonlinear dynamics of the 3D FMS and Alfven wave beams propagating in plasma of ionosphere and magnetosphere Belashov, Vasily We study the formation, structure, stability and dynamics of the multidimensional soliton-like beam structures forming on the low-frequency branch of oscillation in the ionospheric and magnetospheric plasma for the cases β = 4πnT/B² << 1 and β > 1. In the first case FMS waves are excited, and their dynamics under the conditions k_x² >> k_{yz}² is described by the 3D Belashov-Karpman (BK) equation [1] for the magnetic field h = B_wave/B, with due account of the higher-order dispersive correction defined by the plasma parameters and the angle Θ = (B, k) [2]. In the other case, the dynamics of finite-amplitude Alfvén waves propagating in the ionosphere and magnetosphere near-to-parallel to the field B is described by the 3D derivative nonlinear Schrödinger (3-DNLS) equation for the magnetic field of the wave, h = (B_y + iB_z)/(2B|1 − β|) [3]. To study the stability of the multidimensional solitons in both cases we use the method developed in [2] and investigate the boundedness of the Hamiltonian under its deformations conserving momentum, by solving the corresponding variational problem. To study the evolution of the solitons and their collision dynamics, the governing equations were integrated numerically using codes specially developed and described in detail in [3]. As a result, we have found that in both cases, for single solitons, alongside wave spreading and collapse, the formation of multidimensional solitons can be observed. These results may be interpreted in terms of the self-focusing phenomenon for the FMS and Alfvén wave beams: stationary beam formation, scattering, and self-focusing of the wave beam. Soliton collisions, alongside the known elastic interactions, can lead to the formation of complex structures, including multisoliton bound states. For all cases the problem of multidimensional soliton dynamics in the ionospheric and
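As a hedged illustration of the numerics behind such soliton studies, not Belashov's BK or 3-DNLS solvers: the split-step Fourier method applied to the 1-D focusing nonlinear Schrödinger equation, whose sech soliton is the textbook analogue of the soliton-like structures discussed above.

```python
# Split-step Fourier integrator for i u_t + u_xx + 2|u|^2 u = 0.
# The exact soliton u = sech(x) e^{it} should keep |u|max near 1.
import numpy as np

n, L, dt, steps = 512, 40.0, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

u = 1.0 / np.cosh(x)                 # soliton initial condition
lin = np.exp(-1j * k**2 * dt)        # exact linear (dispersive) step

for _ in range(steps):
    u = np.fft.ifft(lin * np.fft.fft(u))       # dispersion in Fourier space
    u *= np.exp(2j * np.abs(u) ** 2 * dt)      # nonlinear phase rotation

print("max|u| =", np.abs(u).max())   # expect approx 1.0 (splitting error aside)
```

Multidimensional generalizations of this scheme exhibit exactly the competition the abstract describes: dispersive spreading, collapse, or stable soliton formation depending on the dispersion law and dimensionality.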
14. 3D resolution tests of two-plane wave approach using synthetic seismograms Ceylan, S.; Larmat, C. S.; Sandvol, E. A. 2012-12-01 Two-plane wave tomography (TPWT) is becoming a standard approach for obtaining fundamental-mode Rayleigh wave phase velocities in a variety of tectonic settings. A recent study by Ceylan et al. (2012) applied this method to eastern Tibet, using data from the INDEPTH-IV and Namche-Barwa seismic experiments. TPWT assumes that the distortion of wavefronts at each station can be expressed as the sum of two plane waves. However, there is currently no robust or complete resolution test for TPWT that addresses its limitations, such as wavefront healing. In this study, we test the capabilities of TPWT and the resolution of the INDEPTH-IV seismic experiment by performing 3D resolution tests using synthetic seismograms. Utilizing the SPECFEM3D software, we compute synthetic data sets resolving periods down to ~30 s. We implement a checkerboard upper mantle (for depths between 50 and 650 km) with variable cell sizes, superimposed on PREM as the background model. We then calculate fundamental-mode surface wave phase velocities using TPWT for periods between 33 and 143 s, using synthetic seismograms computed from our three-dimensional hypothetical model. Assuming a constant Poisson's ratio, we use partial derivatives from Saito (1988) to invert for shear wave velocities. We show that the combination of the TPWT and Saito (1988) methods is capable of retrieving anomalies down to depths of ~200 km for Rayleigh waves. Below these depths, we observe evidence of both lateral and vertical smearing. We also find that the traditional method for estimating the resolution of TPWT consistently overestimates phase velocity resolution. Love waves exhibit adequate resolution down to depths of ~100 km. At depths greater than 100 km, smearing is more evident in the SH-wave results than in those for SV waves. The increased smearing of SH waves is most probably due to the propagation characteristics and shallower sensitivity of Love waves. Our results imply that TPWT can be applied to Love waves, making future investigations of
15. Modeling shock waves in orthotropic elastic materials 2008-08-01 A constitutive relationship for modeling shock wave propagation in orthotropic materials is proposed for nonlinear explicit transient large-deformation computer codes (hydrocodes). A procedure for separating material volumetric compression (compressibility effects, equation of state) from deviatoric strain effects is formulated, which allows for the consistent calculation of stresses in the elastic regime as well as in the presence of shock waves. According to this procedure, the pressure is defined as the state of stress that results in only volumetric deformation, and consequently is a diagonal second-order tensor. As reported by Anderson et al. [Comput. Mech. 15, 201 (1994)], the shock response of an orthotropic material cannot be accurately predicted using the conventional decomposition of the stress tensor into isotropic and deviatoric parts. This paper presents two different stress decompositions based on the assumption that the stress tensor is split into two components: one component due to volumetric strain and the other due to deviatoric strain. Both decompositions are rigorously derived. In order to test their ability to describe shock propagation in orthotropic materials, both algorithms were implemented in a hydrocode and their predictions were compared to experimental plate impact data.
The material considered was a carbon-fiber-reinforced epoxy, which was tested in both the through-thickness and longitudinal directions. The ψ decomposition showed good agreement with the physical behavior of the material, while the ζ decomposition significantly overestimated the longitudinal stresses.
16. Coseismic deformation due to the 2011 Tohoku-oki earthquake: influence of 3-D elastic structure around Japan Hashima, Akinori; Becker, Thorsten W.; Freed, Andrew M.; Sato, Hiroshi; Okaya, David A. 2016-09-01 We investigated the effects of elastic heterogeneity on the coseismic deformation associated with the 2011 Tohoku-oki earthquake, Japan, using a 3-D finite element model incorporating the geometry of the regional plate boundaries. Using a forward approach, we computed displacement fields for different elastic models with a given slip distribution. Three main structural models are considered to separate the effects of different kinds of heterogeneity: a homogeneous model, a two-layered model with crust-mantle stratification, and a crust-mantle layered model with a strong subducting slab. We observe two counteracting effects: (1) on large spatial scales, elastic layering with rigidity increasing with depth leads to a decrease in surface displacement; (2) an increase in rigidity from above the slab interface to below it causes an increase in surface displacement, because the weaker hanging wall deforms to accommodate coseismic slip. Results of slip inversions for the Tohoku-oki earthquake show that slip patterns are modified when comparing homogeneous and heterogeneous models. However, the maximum slip changes only slightly: it increases from 38.5 m in the homogeneous case to 39.6 m in the layered case, and decreases to 37.3 m when slabs are introduced. The potency, i.e. the product of slip and fault area, changes accordingly. Layering leads to inferred slip distributions that are broader and deeper compared to the homogeneous case, particularly to the south of the overall slip maximum. The introduction of a strong slab leads to a reduction in slip around the slip maximum near the trench. We also find that details of the vertical deformation patterns for heterogeneous models are sensitive to Poisson's ratio. While elastic heterogeneity therefore does not have a dramatic effect on bulk quantities such as inferred potency, the mechanical response of a layered medium with a slab does lead to a systematically modified slip response, and such effects may bias studies of
17. Ultra wide band millimeter wave holographic 3-D imaging of concealed targets on mannequins SciTech Connect Collins, H.D.; Hall, T.E.; Gribble, R.P. 1994-08-01 Ultra-wide-band (chirp-frequency) millimeter wave 3-D holography is a unique technique for imaging concealed targets on human subjects with extremely high lateral and depth resolution. Recent 3-D holographic images of full-size mannequins with concealed weapons illustrate the efficacy of this technique for airport security. A chirp-frequency (24 GHz to 40 GHz) holographic system was used to construct extremely high resolution images (of optical quality) using polyrod antennas in a bi-static configuration with an x-y scanner. Millimeter wave chirp-frequency holography can be simply described as a multi-frequency detection and imaging technique in which the target's reflected signals are decomposed into discrete frequency holograms and reconstructed into a single composite 3-D image.
The implementation of this technology for security at airports, government installations, etc., will require real-time (video rate) data acquisition and computer image reconstruction of large volumetric data sets. This implies rapid scanning techniques or large, complex 2-D arrays and high-speed computing for successful commercialization of this technology. 18. A global 3-D MHD model of the solar wind with Alfven waves NASA Technical Reports Server (NTRS) Usmanov, A. V. 1995-01-01 A fully three-dimensional solar wind model that incorporates momentum and heat addition from Alfven waves is developed. The proposed model upgrades the previous one by considering self-consistently the total system consisting of Alfven waves propagating outward from the Sun and the mean polytropic solar wind flow. The simulation region extends from the coronal base (1 R_s) out to beyond 1 AU. The fully 3-D MHD equations written in spherical coordinates are solved in the frame of reference corotating with the Sun. At the inner boundary, photospheric magnetic field observations are taken as the boundary condition, and the wave energy influx is prescribed to be proportional to the magnetic field strength. The results of the model application for several time intervals are presented. 19. 3D P-Wave Velocity Structure of the Deep Galicia Rifted Margin Bayrakci, Gaye; Minshull, Timothy; Davy, Richard; Sawyer, Dale; Klaeschen, Dirk; Papenberg, Cord; Reston, Timothy; Shillington, Donna; Ranero, Cesar 2015-04-01 The combined wide-angle reflection-refraction and multi-channel seismic (MCS) experiment, Galicia 3D, was carried out in 2013 at the Galicia rifted margin in the northeast Atlantic Ocean, west of Spain. The main geological features within the 64 by 20 km (1280 km²) 3D box investigated by the survey are the peridotite ridge (PR), the fault-bounded, rotated basement blocks and the S reflector, which has been interpreted to be a low-angle detachment fault. 44 short-period four-component ocean bottom seismometers and 28 ocean bottom hydrophones were deployed in the 3D box. 3D MCS profiles sampling the whole box were acquired with two airgun arrays of 3300 cu.in. fired alternately every 37.5 m. We present the results from 3D first-arrival time tomography that constrains the P-wave velocity in the 3D box, for the entire depth sampled by reflection data. Results are validated by synthetic tests and by comparison with Galicia 3D MCS lines. The main outcomes are as follows: 1- The 3.5 km/s iso-velocity contour mimics the top of the acoustic basement observed on MCS profiles. Block-bounding faults are imaged as velocity contrasts and basement blocks exhibit 3D topographic variations. 2- On the southern profiles, the top of the PR rises up to 5.5 km depth whereas, 20 km northward, its basement expression (at 6.5 km depth) nearly disappears. 3- The 6.5 km/s iso-velocity contour matches the topography of the S reflector where the latter is visible on MCS profiles. Within a depth interval of 0.6 km (on average), velocities beneath the S reflector increase from 6.5 km/s to 7 km/s, which would correspond to a decrease in the degree of serpentinization from ~45 % to ~30 % if these velocity variations are caused solely by variations in hydration. At the intersections between the block-bounding normal faults and the S reflector, this decrease happens over a larger depth interval (> 1 km), suggesting that faults act as conduits for the water flow in the upper mantle.
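The velocity-to-serpentinization mapping quoted in the Galicia abstract above pairs ~6.5 km/s with ~45% and ~7.0 km/s with ~30%. Purely as an illustration, a linear interpolation between those two calibration points (our assumption, not the authors' petrophysical model) looks like this:

```python
import numpy as np

def serpentinization_percent(vp_km_s):
    # Hypothetical linear mapping through the two quoted points.
    return np.interp(vp_km_s, [6.5, 7.0], [45.0, 30.0])

for vp in (6.5, 6.75, 7.0):
    print(f"{vp:.2f} km/s -> ~{serpentinization_percent(vp):.0f} % serpentinization")
```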
1. Characterisation of particle assemblies by 3D cross correlation light scattering and diffusing wave spectroscopy Scheffold, Frank 2014-08-01 To characterize the structural and dynamic properties of soft materials and small particles, information on the relevant mesoscopic length scales is required. Such information is often obtained from traditional static and dynamic light scattering (SLS/DLS) experiments in the single scattering regime. In many dense systems, however, these powerful techniques frequently fail due to strong multiple scattering of light. Here I will discuss some experimental innovations that have emerged over the last decade. New methods such as 3D static and dynamic light scattering (3D LS) as well as diffusing wave spectroscopy (DWS) can cover a much extended range of experimental parameters, ranging from dilute polymer solutions and colloidal suspensions to extremely opaque viscoelastic emulsions. 2. Total-Field Technique for 3-D Modeling of Short Period Teleseismic Waves Monteiller, V.; Beller, S.; Operto, S.; Nissen-Meyer, T.; Tago Pacheco, J.; Virieux, J. 2014-12-01 The massive development of dense seismic arrays and the rapid increase in computing capacity make it possible today to consider the application of full waveform inversion of teleseismic data for high-resolution lithospheric imaging.
We present a hybrid numerical method that allows for the modelling of short-period teleseismic waves in a 3D lithospheric target with both the discontinuous Galerkin finite-element method and the finite-difference method, opening the possibility of performing waveform inversion of seismograms recorded by dense regional broadband arrays. However, despite available supercomputing power, the forward problem remains expensive at the global scale for teleseismic configurations, especially when 3D numerical methods are considered. In order to perform the forward problem in a reasonable amount of time, we reduce the computational domain in which full waveform modelling is performed. We define a 3D regional domain located below the seismological network that is embedded in a homogeneous background or axisymmetric model, in which the seismic wavefield can be computed efficiently. The background wavefield is used to compute the full wavefield in the 3D regional domain using the so-called total-field/scattered-field technique. This method relies on the decomposition of the wavefield into background and scattered wavefields. The computational domain is subdivided into three sub-domains: an outer domain formed by the perfectly-matched absorbing layers, an intermediate domain in which only the outgoing wavefield scattered by the lithospheric heterogeneities is computed, and the inner domain formed by the lithospheric target in which the full wavefield is computed. In this study, we present simulations in a realistic lithospheric target in which the axisymmetric background wavefield is computed with the AxiSEM software and the 3D simulation in the lithospheric target model is performed with the discontinuous Galerkin or finite-difference method. 3. 3D frequency-domain finite-difference modeling of acoustic wave propagation Operto, S.; Virieux, J. 2006-12-01 We present a 3D frequency-domain finite-difference method for acoustic wave propagation modeling. This method is developed as a tool to perform 3D frequency-domain full-waveform inversion of wide-angle seismic data. For wide-angle data, frequency-domain full-waveform inversion can be applied to only a few discrete frequencies to develop a reliable velocity model. Frequency-domain finite-difference (FD) modeling of wave propagation requires the solution of a huge sparse system of linear equations. If this system can be solved with a direct method, solutions for multiple sources can be computed efficiently once the underlying matrix has been factorized. The drawback of the direct method is the memory requirement resulting from the fill-in of the matrix during factorization. We assess in this study whether representative problems can be addressed in 3D geometry with such an approach. We start from the velocity-stress formulation of the 3D acoustic wave equation. The spatial derivatives are discretized with a second-order accurate staggered-grid stencil on different coordinate systems such that the axes span as many directions as possible. Once the discrete equations have been developed on each coordinate system, the particle velocity fields are eliminated from the first-order hyperbolic system (following the so-called parsimonious staggered-grid method), leading to second-order elliptic wave equations in pressure. The second-order wave equations discretized on each coordinate system are combined linearly to mitigate the numerical anisotropy.
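A toy illustration of the "factorize once, solve cheaply for many sources" economics described above, on a small 2-D Helmholtz-type operator with a sparse direct solver (grid size, velocity, frequency and damping are invented numbers; the paper's 27-point stencil and the MUMPS solver are not reproduced here):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h = 80, 10.0                          # grid points per side, step (m)
v, freq = 2000.0, 5.0                    # wavespeed (m/s), frequency (Hz)
k2 = ((2 * np.pi * freq / v) ** 2) * (1 + 0.05j)  # small damping added

lap1d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
eye = sp.identity(n)
A = (sp.kron(lap1d, eye) + sp.kron(eye, lap1d)
     + k2 * sp.identity(n * n)).tocsc()

lu = spla.splu(A)                        # one expensive LU factorization
for src in (n * n // 2, n * n // 3):     # cheap re-solves, one per source
    b = np.zeros(n * n, dtype=complex); b[src] = 1.0
    u = lu.solve(b)
    print(f"source {src}: max |u| = {np.abs(u).max():.3e}")
```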
Secondly, grid dispersion is minimized by replacing the mass term at the collocation point by its weighted average over all the grid points of the stencil. The use of a second-order accurate staggered-grid stencil reduces the bandwidth of the matrix to be factorized. The final stencil incorporates 27 points. Absorbing boundary conditions are implemented with PML. The system is solved using the parallel direct solver MUMPS developed for distributed 4. 3D Numerical Simulation of the Sloshing Waves Excited by Seismic Shaking Zhang, Lin; Wu, Tso-Ren 2016-04-01 In the 2015 Nepal earthquake, a video clip broadcast worldwide showed violent water spilling in a hotel swimming pool. This sloshing phenomenon indicates that potential water loss in sensitive facilities, e.g. the spent-fuel pools in nuclear power plants, has to be taken into account carefully under seismically induced ground acceleration. In previous studies, the simulation of sloshing mainly focused on the pressure force on the structure by using a simplified spring-mass method developed in the field of solid mechanics. However, restricted by the assumptions of a plane water surface and limited wave height, such approaches incur significant error in evaluating the amount of water loss from the tank. In this paper, the computational fluid dynamics model Splash3D was adopted to study the sloshing problem accurately. Splash3D solves the 3D Navier-Stokes equations directly with a large-eddy simulation (LES) turbulence closure. The volume-of-fluid (VOF) method with piecewise linear interface calculation (PLIC) was used to track the complex breaking water surface. The time-series acceleration of a design earthquake was applied to excite the water. With few restrictions from the assumptions, the accuracy of the simulation results was improved dramatically. A series of model validations was conducted by comparison with a 2D theoretical solution and 3D experimental data, with good agreement. After the validation, we performed a simulation of sloshing in a rectangular water tank 12 m long, 8 m wide, and 8 m deep, containing water to a depth of 7 m. The seismic movement was imposed as time-series accelerations in three dimensions, about 0.5 g to 1.2 g in the horizontal directions and 0.3 g to 1 g in the vertical direction. We focused the discussions on the kinematics of the water surface, wave breaking, velocity field, pressure field, water force on the side walls, and, most 5. Mach-wave coherence in 3D media with random heterogeneities Vyas, Jagdish C.; Mai, P. Martin; Galis, Martin; Dunham, Eric M.; Imperatori, Walter 2016-04-01 We investigate Mach-wave coherence for complex super-shear ruptures embedded in 3D random media that lead to seismic scattering. We simulate Mach waves using kinematic earthquake sources that include fault regions over which the rupture propagates at super-shear speed. The local slip rate is modeled with the regularized Yoffe function. The medium heterogeneities are characterized by a Von Karman correlation function. We consider various realizations of 3D random media from combinations of different values of correlation length (0.5 km, 2 km, 5 km), standard deviation (5%, 10%, 15%) and Hurst exponent (0.2). Simulations in a homogeneous medium serve as a reference case.
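One realization of such a random medium can be generated by spectral filtering of white noise; a 2-D sketch (the study itself is 3-D), in which the von Karman power-spectrum exponent H + 1 and all parameter values are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx = 256, 100.0                      # grid points, spacing (m)
a, sigma, hurst = 2000.0, 0.10, 0.2     # corr. length (m), std, Hurst exp.

k = 2 * np.pi * np.fft.fftfreq(n, dx)
k2 = k[:, None] ** 2 + k[None, :] ** 2
psd = (1.0 + k2 * a**2) ** (-(hurst + 1.0))   # von Karman spectral shape

noise = rng.standard_normal((n, n))
field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real
field *= sigma / field.std()            # rescale to target std (10% here)
print(f"std of velocity perturbation: {field.std():.3f}")
```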
The ground-motion simulations (maximum resolved frequency of 5 Hz) are conducted by solving the elastodynamic equations of motion using a generalized finite-difference method, assuming a vertical strike-slip fault. The seismic wavefield is sampled at numerous locations within the Mach-cone region to study the properties and evolution of the Mach waves in scattering media. We find that medium scattering from random heterogeneities significantly diminishes the coherence of Mach waves in terms of both amplitude and frequency content. We observe that Mach waves are considerably scattered at distances RJB > 20 km for random media with a standard deviation of 10%. The scattering efficiency of the medium for small Hurst exponents (H <= 0.2) is mainly controlled by the standard deviation of the velocity heterogeneities rather than their correlation length, as both theoretical considerations and numerical experiments indicate. Based on our simulations, we propose that local super-shear ruptures may be more common in nature than reported, but are very difficult to detect due to the strong seismic scattering. We suggest that if an earthquake is recorded within 10-15 km fault-perpendicular distance and has high PGA, then inversion should be carried out by allowing rupture speed variations from sub 6. 3D Simulation of an Audible Ultrasonic Electrolarynx Using Difference Waves PubMed Central Mills, Patrick; Zara, Jason 2014-01-01 A total laryngectomy removes the vocal folds, which are fundamental in forming the voiced sounds that make speech possible. Although implanted prosthetics are commonly used in developed countries, simple handheld vibrating electrolarynxes are still common worldwide. These devices are easy to use but suffer from many drawbacks, including the dedication of a hand, a mechanical-sounding voice, and sound leakage. To address some of these drawbacks, we introduce a novel electrolarynx that uses vibro-acoustic interference of dual ultrasonic waves to generate an audible fundamental frequency. A 3D simulation of the principles of the device is presented in this paper. PMID:25401965 7. 3D dynamic rupture with anelastic wave propagation using an hp-adaptive Discontinuous Galerkin method Tago, J.; Cruz-Atienza, V. M.; Etienne, V.; Virieux, J.; Benjemaa, M.; Sanchez-Sesma, F. J. 2010-12-01 Simulating any realistic seismic scenario requires incorporating a physical basis into the model. Considering both the dynamics of the rupture process and the anelastic attenuation of seismic waves is essential for this purpose and, therefore, we choose to extend the hp-adaptive Discontinuous Galerkin finite-element method to integrate these physical aspects. The 3D elastodynamic equations in an unstructured tetrahedral mesh are solved with a second-order time-marching approach in a high-performance computing environment. The first extension incorporates a viscoelastic rheology so that the intrinsic attenuation of the medium is considered in terms of frequency-dependent quality factors (Q). On the other hand, the extension related to dynamic rupture is integrated through explicit boundary conditions over the crack surface. For this visco-elastodynamic formulation, we introduce an original discrete scheme that preserves the optimal code performance of the elastodynamic equations. A set of relaxation mechanisms describes the behavior of a generalized Maxwell body.
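The continuation below selects relaxation frequencies and anelastic coefficients so that such a generalized Maxwell body approximates a constant Q over a band. A minimal sketch of the least-squares flavour of that fit, using the simplified weak-attenuation kernel 1/Q(ω) ≈ Σ_l y_l ω ω_l / (ω² + ω_l²); the kernel choice and all numbers are our assumptions, not the paper's exact scheme:

```python
import numpy as np

q_target = 50.0
w = 2 * np.pi * np.logspace(-1, 1, 200)    # band: 0.1-10 Hz
w_l = 2 * np.pi * np.logspace(-1, 1, 3)    # 3 relaxation frequencies

# Linear system: each column is one mechanism's contribution to 1/Q.
A = (w[:, None] * w_l[None, :]) / (w[:, None] ** 2 + w_l[None, :] ** 2)
y, *_ = np.linalg.lstsq(A, np.full(w.size, 1.0 / q_target), rcond=None)

q_err = np.abs(1.0 / (A @ y) / q_target - 1.0).max()
print(f"{len(w_l)} mechanisms: max Q misfit {100 * q_err:.1f}%")
```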
We approximate almost constant Q in a wide frequency range by selecting both suitable relaxation frequencies and the anelastic coefficients characterizing these mechanisms. In order to do so, we solve an optimization problem, which is critical to minimizing the number of relaxation mechanisms. Two strategies are explored: 1) a least squares method and 2) a genetic algorithm (GA). We found that the improvement provided by the heuristic GA method is negligible. Both optimization strategies yield Q values within 5% of the target constant Q. Anelastic functions (i.e. memory variables) are introduced to efficiently evaluate the time convolution terms involved in the constitutive equations and thus to minimize the computational cost. The incorporation of anelastic functions introduces new terms governed by ordinary differential equations in the mathematical formulation. We solve these equations using the same order 8. A 3D unstructured non-hydrostatic ocean model for internal waves Ai, Congfang; Ding, Weiye 2016-10-01 A 3D non-hydrostatic model is developed to compute internal waves. A novel grid arrangement is incorporated in the model. This not only ensures that the homogeneous Dirichlet boundary condition for the non-hydrostatic pressure can be precisely and easily imposed, but also renders the model relatively simple in its discretized form. The Perot scheme, which is based on staggered grids and has the conservative property, is employed to discretize the horizontal advection terms in the horizontal momentum equations. Building on previous water wave models, the main contributions of the present paper are to (1) utilize a semi-implicit, fractional step algorithm to solve the Navier-Stokes equations (NSE); (2) develop a second-order flux-limiter method satisfying the max-min property; (3) incorporate a density equation, which is solved by a high-resolution finite volume method ensuring mass conservation and the max-min property, based on a vertical boundary-fitted coordinate system; and (4) validate the developed model using four tests, including two internal seiche waves, lock-exchange flow, and internal solitary wave breaking. Comparisons of the numerical results with analytical solutions, experimental data or other model results show reasonably good agreement, demonstrating the model's capability to resolve internal waves involving complex non-hydrostatic phenomena. 9. Measurement of elastic waves induced by the reflection of light. PubMed Požar, Tomaž; Možina, Janez 2013-11-01 The reflection of light from the surface of an elastic solid gives rise to various types of elastic waves that propagate inside the solid. The weakest waves are generally those that are generated by the radiation pressure acting during the reflection of the light. Here, we present the first quantitative measurement of such light-pressure-induced elastic waves inside an ultrahigh-reflectivity mirror. Amplitudes of a few picometers were observed at the rear side of the mirror with a displacement-measuring conical piezoelectric sensor when laser pulses with a fluence of 1 J/cm² were reflected from the front side of the mirror. 10. Wave optics theory and 3-D deconvolution for the light field microscope. PubMed Broxton, Michael; Grosenick, Logan; Yang, Samuel; Cohen, Noy; Andalman, Aaron; Deisseroth, Karl; Levoy, Marc 2013-10-21 Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens.
It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method. 12. Effect of background rotation on the evolution of 3D internal gravity wave beams Fan, Boyu; Akylas, T. R. 2016-11-01 The effect of background rotation on the 3D propagation of internal gravity wave beams (IGWB) is studied, assuming that variations in the along-beam and transverse directions are of long length scale relative to the beam width.
The present study generalizes the asymptotic model of KA (Kataoka & Akylas 2015), who considered the analogous problem in the absence of rotation. It is shown that the role of mean vertical vorticity in the earlier analysis is now taken by the flow mean potential vorticity (MPV). Specifically, 3D variations enable resonant transfer of energy to the flow MPV, resulting in strong nonlinear coupling between a 3D IGWB and its induced mean flow. This coupling mechanism is governed by a system of two nonlinear equations of the same form as those derived in KA. Accordingly, the induced mean flow features a purely inviscid modulational component, as well as a viscous one akin to acoustic streaming; the latter grows linearly with time for a quasi-steady IGWB. On the other hand, owing to background rotation, the induced mean flow in the vicinity of the IGWB is no longer purely horizontal and develops an asymmetric behavior. Supported by NSF. 13. 3D Plenoptic PIV Measurements of a Shock Wave Boundary Layer Interaction Thurow, Brian; Bolton, Johnathan; Arora, Nishul; Alvi, Farrukh 2016-11-01 Plenoptic particle image velocimetry (PIV) is a relatively new technique that uses the computational refocusing capability of a single plenoptic camera and volume illumination with a double-pulsed light source to measure the instantaneous 3D/3C velocity field of a flow seeded with particles. In this work, plenoptic PIV is used to perform volumetric velocity field measurements of a shock-wave turbulent boundary layer interaction (SBLI). Experiments were performed in a Mach 2.0 flow with the SBLI produced by an unswept fin at 15° angle of attack. The measurement volume was 38 x 25 x 32 mm³ and was illuminated with a 400 mJ/pulse Nd:YAG laser with a 1.7 microsecond inter-pulse time. Conventional planar PIV measurements along two planes within the volume are used for comparison. 3D visualizations of the fin-generated shock and subsequent SBLI are presented. The growth of the shock foot and separation region with increasing distance from the fin tip is observed and agrees with observations made using planar PIV. Instantaneous images depict 3D fluctuations in the position of the shock foot from one image to the next. The authors acknowledge the support of the Air Force Office of Scientific Research. 14. 3D P-wave Velocity Structure Beneath the Eastern Canadian Shield and Northern Appalachian Region Villemaire, M.; Darbyshire, F. A.; Bastow, I. D. 2010-12-01 Previous seismic studies of the upper mantle of the Canadian Shield have indicated some low-velocity anomalies within the cratonic lithosphere in the Abitibi-Grenville region. The lack of seismograph station coverage to the east and south-east of the studied area prevented definition of the 3D geometry of these anomalies. Adding new stations from the province of Quebec and from the northeastern United States allows us to carry out new studies of the P-wave velocity structure of the upper mantle, in order to better understand the complexity of the region and the interaction of the lithosphere with possible thermal anomalies in the underlying mantle. We analysed teleseismic P wave arrivals from almost 200 earthquakes, recorded at 45 stations deployed across the provinces of Quebec and Ontario and across the northeastern US. The relative arrival times of teleseismic P waves across the array were measured using the cross-correlation method of VanDecar & Crosson (1990).
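The essence of such a cross-correlation delay measurement (the cited method adds multichannel least-squares refinements that are not sketched here) fits in a few lines:

```python
import numpy as np

dt = 0.05                                      # sample interval (s)
t = np.arange(0.0, 20.0, dt)
pulse = lambda tau: np.exp(-((t - tau) / 0.5) ** 2)   # toy P arrival
tr_ref, tr_late = pulse(8.0), pulse(8.65)      # true relative delay 0.65 s

xc = np.correlate(tr_late - tr_late.mean(), tr_ref - tr_ref.mean(), "full")
lag = (np.argmax(xc) - (len(t) - 1)) * dt      # lag of the peak, in seconds
print(f"estimated relative delay: {lag:.2f} s")   # ~0.65 s
```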
The travel time data were then inverted to estimate the 3D P-wave velocity structure beneath the region, using the least-squares tomographic inversion code of VanDecar (1991). The model shows some interesting features. We see a diffuse low-velocity structure beneath New England that extends to at least 500 km depth, and that may be related to the Appalachian Mountain belt. There is also a linear low-velocity structure, flanked by higher velocities, perpendicular to the Grenville Front and along the Ottawa Valley. We interpret this feature as a mantle signature of the Great Meteor Hotspot track. We have looked for systematic differences between the mantle underlying the Archean Superior craton and the Proterozoic Grenville Province but did not find a significant difference in the upper mantle. We investigate the role of thermal and compositional effects to interpret the velocity models and to relate the patterns of the anomalies to past and present tectonic structures. 15. Poroelastic Wave Propagation With a 3D Velocity-Stress-Pressure Finite-Difference Algorithm Aldridge, D. F.; Symons, N. P.; Bartel, L. C. 2004-12-01 Seismic wave propagation within a three-dimensional, heterogeneous, isotropic poroelastic medium is numerically simulated with an explicit, time-domain, finite-difference algorithm. A system of thirteen coupled, first-order, partial differential equations is solved for the particle velocity vector components, the stress tensor components, and the pressure associated with the solid and fluid constituents of the two-phase continuum. These thirteen dependent variables are stored on staggered temporal and spatial grids, analogous to the scheme utilized for the solution of the conventional velocity-stress system of isotropic elastodynamics. Centered finite-difference operators possess 2nd-order accuracy in time and 4th-order accuracy in space. Seismological utility is enhanced by an optional stress-free boundary condition applied on a horizontal plane representing the earth's surface. Absorbing boundary conditions are imposed on the flanks of the 3D spatial grid via a simple wavefield amplitude taper approach. A massively parallel computational implementation, utilizing the spatial domain decomposition strategy, allows investigation of large-scale earth models and/or broadband wave propagation within reasonable execution times. Initial algorithm testing indicates that a point force density and/or moment density source activated within a poroelastic medium generates diverging fast and slow P waves (and possibly an S wave) in accord with Biot theory. Solid and fluid particle velocities are in phase for the fast P-wave, whereas they are out of phase for the slow P-wave. Conversions between all wave types occur during reflection and transmission at interfaces. Thus, although the slow P-wave is regarded as difficult to detect experimentally, its presence is strongly manifest within the complex of waves generated at a lithologic or fluid boundary. Very fine spatial and temporal gridding is required for high-fidelity representation of the slow P-wave, without inducing excessive 16. Rayleigh–Bloch waves along elastic diffraction gratings PubMed Central Colquitt, D. J.; Craster, R. V.; Antonakakis, T.; Guenneau, S. 2015-01-01 Rayleigh–Bloch (RB) waves in elasticity, in contrast to those in scalar wave systems, appear to have had little attention.
Despite the importance of RB waves in applications, their connections to trapped modes and the ubiquitous nature of diffraction gratings, there has been no investigation of whether such waves occur within elastic diffraction gratings for the in-plane vector elastic system. We identify boundary conditions that support such waves and numerical simulations confirm their presence. An asymptotic technique is also developed to generate effective medium homogenized equations for the grating that allows us to replace the detailed microstructure by a continuum representation. Further numerical simulations confirm that the asymptotic scheme captures the essential features of these waves. PMID:25568616 17. Radially anisotropic 3-D shear wave structure of the Australian lithosphere and asthenosphere from multi-mode surface waves Yoshizawa, K. 2014-10-01 A new radially anisotropic shear wave speed model for the Australasian region is constructed from multi-mode phase dispersion of Love and Rayleigh waves. An automated waveform fitting technique based on a global optimization with the Neighbourhood Algorithm allows the exploitation of large numbers of three-component broad-band seismograms to extract path-specific dispersion curves covering the entire continent. A 3-D shear wave model including radial anisotropy is constructed from a set of multi-mode phase speed maps for both Love and Rayleigh waves. These maps are derived from an iterative inversion scheme incorporating the effects of ray-path bending due to lateral heterogeneity, as well as the finite frequency of the surface waves for each mode. The new S wave speed model exhibits major tectonic features of this region that are in good agreement with earlier shear wave models derived primarily from Rayleigh waves. The lateral variations of the depth and thickness of the lithosphere-asthenosphere transition (LAT) are estimated from the isotropic (Voigt average) S wave speed model and its vertical gradient, which reveals correlations between the lateral variations of the LAT and radial anisotropy. The LAT is very thick beneath the Archean cratons in western Australia, whereas it is thinner beneath south Australia. The radial anisotropy model shows faster SH than SV wave speed beneath eastern Australia and the Coral Sea at lithospheric depths. The faster SH anomaly in the lithosphere is also seen in the suture zone between the three cratonic blocks of Australia. One of the most conspicuous features of fast SH anisotropy is found in the asthenosphere beneath central Australia, suggesting anisotropy induced by shear flow in the asthenosphere beneath the fast-drifting Australian continent. 18. 3D reconstruction and particle acceleration properties of Coronal Shock Waves During Powerful Solar Particle Events Plotnikov, Illya; Vourlidas, Angelos; Tylka, Allan J.; Pinto, Rui; Rouillard, Alexis; Tirole, Margot 2016-07-01 Identifying the physical mechanisms that produce the most energetic particles is a long-standing observational and theoretical challenge in astrophysics. Strong pressure waves have been proposed as efficient accelerators, both in the solar and astrophysical contexts, via various mechanisms such as diffusive-shock/shock-drift acceleration and betatron effects.
In diffusive-shock acceleration, the efficacy of the process relies on shock waves being super-critical, moving several times faster than the characteristic speed of the medium they propagate through (a high Alfven Mach number), and on the orientation of the magnetic field upstream of the shock front. High-cadence, multipoint imaging using the NASA STEREO, SOHO and SDO spacecraft now permits the 3-D reconstruction of pressure waves formed during the eruption of coronal mass ejections. Using these unprecedented capabilities, some recent studies have provided new insights into the timing and longitudinal extent of solar energetic particles, including the first derivations of the time-dependent 3-dimensional distribution of the expansion speed and Mach number of coronal shock waves. We will review these recent developments by focusing on particle events that occurred between 2011 and 2015. These new techniques also provide the opportunity to investigate the enigmatic long-duration gamma-ray events. 19. A high-order discontinuous Galerkin method for wave propagation through coupled elastic-acoustic media Wilcox, Lucas C.; Stadler, Georg; Burstedde, Carsten; Ghattas, Omar 2010-12-01 We introduce a high-order discontinuous Galerkin (dG) scheme for the numerical solution of three-dimensional (3D) wave propagation problems in coupled elastic-acoustic media. A velocity-strain formulation is used, which allows for the solution of the acoustic and elastic wave equations within the same unified framework. Careful attention is directed at the derivation of a numerical flux that preserves high-order accuracy in the presence of material discontinuities, including elastic-acoustic interfaces. Explicit expressions for the 3D upwind numerical flux, derived as an exact solution for the relevant Riemann problem, are provided. The method supports h-non-conforming meshes, which are particularly effective at allowing local adaptation of the mesh size to resolve strong contrasts in the local wavelength, as well as dynamic adaptivity to track solution features. The use of high-order elements controls numerical dispersion, enabling propagation over many wave periods. We prove consistency and stability of the proposed dG scheme. To study the numerical accuracy and convergence of the proposed method, we compare against analytical solutions for wave propagation problems with interfaces, including Rayleigh, Lamb, Scholte, and Stoneley waves as well as plane waves impinging on an elastic-acoustic interface. Spectral rates of convergence are demonstrated for these problems, which include a non-conforming mesh case. Finally, we present scalability results for a parallel implementation of the proposed high-order dG scheme for large-scale seismic wave propagation in a simplified earth model, demonstrating high parallel efficiency for strong scaling to the full size of the Jaguar Cray XT5 supercomputer.
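For the plane-wave validation cases mentioned in the dG abstract above, the normal-incidence limit reduces to the familiar impedance-contrast reflection coefficient; a minimal check with invented water/rock properties:

```python
rho1, c1 = 1000.0, 1500.0     # acoustic side (water-like): density, P speed
rho2, c2 = 2700.0, 6000.0     # elastic side (rock-like): density, P speed
z1, z2 = rho1 * c1, rho2 * c2 # normal-incidence P-wave impedances
r = (z2 - z1) / (z2 + z1)     # pressure reflection coefficient
print(f"R = {r:.2f}")         # ~0.83: a strong reflection at this interface
```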
1. Energy in elastic fiber embedded in elastic matrix containing incident SH wave NASA Technical Reports Server (NTRS) Williams, James H., Jr.; Nagem, Raymond J. 1989-01-01 A single elastic fiber embedded in an infinite elastic matrix is considered. An incident plane SH wave is assumed in the infinite matrix, and an expression is derived for the total energy in the fiber due to the incident SH wave. A nondimensional form of the fiber energy is plotted as a function of the nondimensional wavenumber of the SH wave. It is shown that the fiber energy attains maximum values at specific values of the wavenumber of the incident wave. The results obtained here are interpreted in the context of phenomena observed in acousto-ultrasonic experiments on fiber reinforced composite materials. 2. Passive retrieval of Rayleigh waves in disordered elastic media. PubMed Larose, Eric; Derode, Arnaud; Clorennec, Dominique; Margerin, Ludovic; Campillo, Michel 2005-10-01 When averaged over sources or disorder, cross correlation of diffuse fields yields the Green's function between two passive sensors. This technique is applied to elastic ultrasonic waves in an open scattering slab mimicking seismic waves in the Earth's crust. It appears that the Rayleigh wave reconstruction depends on the scattering properties of the elastic slab. Special attention is paid to the specific role of bulk-to-Rayleigh wave coupling, which may result in unexpected phenomena, such as a persistent time asymmetry in the diffuse regime. 3. Hydrodynamic analysis of elastic floating collars in random waves Bai, Xiao-dong; Zhao, Yun-peng; Dong, Guo-hai; Li, Yu-cheng 2015-06-01 As the main load-bearing component of fish cages, the floating collar supports the whole cage and undergoes large deformations. In this paper, a mathematical method is developed to study the motions and elastic deformations of elastic floating collars in random waves. The irregular wave is simulated by the random phase method, and the statistical approach and Fourier transform are applied to analyze the elastic response in both the time and frequency domains.
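The random phase method mentioned above synthesizes an irregular wave record as a sum of cosines whose amplitudes follow a prescribed spectrum and whose phases are uniformly random; a sketch with a stand-in spectral shape, not the paper's sea state:

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.linspace(0.4, 1.4, 200)           # frequency band (Hz)
df = f[1] - f[0]
s = np.exp(-((f - 0.8) / 0.15) ** 2)     # toy spectral density, m^2 s
amp = np.sqrt(2.0 * s * df)              # component amplitudes (m)
phase = rng.uniform(0.0, 2.0 * np.pi, f.size)

t = np.arange(0.0, 120.0, 0.05)          # two-minute record (s)
eta = (amp * np.cos(2 * np.pi * np.outer(t, f) + phase)).sum(axis=1)
print(f"significant wave height ~ {4.0 * eta.std():.2f} m")
```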
The governing equations of motion are established from Newton's second law, and the governing equations of deformation are obtained based on curved beam theory and the modal superposition method. In order to validate the numerical model of the floating collar attacked by random waves, a series of physical model tests is conducted. Good agreement between the numerical simulations and experimental observations is obtained. The numerical results indicate that the transfer functions of the out-of-plane and in-plane deformations increase with increasing wave frequency. In the frequency range between 0.6 Hz and 1.1 Hz, a linear relationship exists between the wave elevations and the deformations. The average phase difference between the wave elevation and the out-of-plane deformation is 60° with waves leading, and the phase between the wave elevation and the in-plane deformation is 10° with waves lagging. In addition, the effect of the fish net on the elastic response is analyzed. The results suggest that the deformation of the floating collar with a fish net is a little larger than that without a net. 4. Shear wave elastography quantification of blood elasticity during clotting. PubMed Bernal, Miguel; Gennisson, Jean-Luc; Flaud, Patrice; Tanter, Mickael 2012-12-01 Deep venous thrombosis (DVT) affects millions of people worldwide. A fatal complication occurs when the thrombi detach and create a pulmonary embolism. The diagnosis and treatment of DVT depend on the clot's age, and the elasticity of a thrombus is closely related to its age. Blood was collected from pigs and anticoagulated using ethylenediaminetetraacetic acid (EDTA). Coagulation was initiated using calcium ions. Supersonic shear wave imaging was used to generate shear waves using 100 μs tone bursts at 8 MHz. Tracking of the shear waves was done by ultrafast imaging. Postprocessing of the data was done using Matlab®. Two-dimensional (2-D) maps of elasticity were obtained by calculating the speed of shear wave propagation. Elasticity varied with time from around 50 Pa at coagulation to 1600 Pa at 120 min, after which the elasticity showed a natural decrease (17%) because of the thrombolytic action of plasmin. Ejection of the serum from the clot showed a significant decrease in the elasticity of the clot next to the liquid pool (65% decrease), corresponding to the detachment of the clot from the beaker wall. The use of a thrombolytic agent (urokinase) on the coagulated blood decreased the shear elasticity close to the point of injection, which varied with time and distance. Supersonic imaging proved useful for mapping the clot's 2-D elasticity. It allowed the visualization of the heterogeneity of the mechanical properties of thrombi and has potential use in predicting thrombus breakage as well as in monitoring thrombolytic therapy. 5. Full 3D dispersion curve solutions for guided waves in generally anisotropic media Hernando Quintanilla, F.; Lowe, M. J. S.; Craster, R. V. 2016-02-01 Dispersion curves of guided waves provide valuable information about the physical and elastic properties of waves propagating within a given waveguide structure. Algorithms to accurately compute these curves are an essential tool for engineers working in non-destructive evaluation and for scientists studying wave phenomena. Dispersion curves are typically computed for low or zero attenuation and presented in two- or three-dimensional plots.
The former do not always provide a clear and complete picture of the dispersion loci, and the latter are very difficult to obtain when high values of attenuation are involved and arbitrary anisotropy is considered in single- or multi-layered systems. As a consequence, drawing correct and reliable conclusions is a challenging task in modern applications, which often utilize multi-layered anisotropic viscoelastic materials. These challenges are overcome here by using a spectral collocation method (SCM) to robustly find dispersion curves in the most complicated cases of high attenuation and arbitrary anisotropy. Solutions are then plotted in three-dimensional frequency-complex wavenumber space, thus gaining much deeper insight into the nature of these problems. The cases studied range from classical examples, which validate this approach, to new ones involving materials up to the most general triclinic class, for both flat and cylindrical geometry in multi-layered systems. The apparent crossing of modes within the same symmetry family in viscoelastic media is also explained and clarified by the results. Finally, the consequences for the solutions of the centre of symmetry, present in every crystal class, are discussed. 6. Critical speed and free vibration analysis of spinning 3D single-walled carbon nanotubes resting on elastic foundations 2017-01-01 In this article, the influence of the critical speed on the free vibration behavior of spinning 3D single-walled carbon nanotubes (SWCNT) is investigated using modified couple stress theory (MCST). Moreover, the elastic medium surrounding the SWCNT is described by a Winkler model, characterized by springs. Taking into consideration first-order shear deformation theory (FSDT), the rotating SWCNT is modeled and its equations of motion are derived using Hamilton's principle. The formulation includes Coriolis, centrifugal and initial hoop tension effects due to the rotation of the SWCNT. The accuracy of the presented model is validated against cases in the literature. The novelty of this study is considering the effects of rotation and MCST, in addition to various boundary conditions of the SWCNT. The generalized differential quadrature method (GDQM) is used to discretize the model and to approximate the equation of motion. The critical speed and natural frequency of the rotating SWCNT are then investigated with respect to the initial hoop tension, material length scale parameter, spring constant, frequency mode number, angular velocity, length-to-radius ratio, radius-to-thickness ratio and boundary conditions. 7. Visco-elastic effects on wave dispersion in three-phase acoustic metamaterials Krushynska, A. O.; Kouznetsova, V. G.; Geers, M. G. D. 2016-11-01 This paper studies the wave attenuation performance of dissipative solid acoustic metamaterials (AMMs) with local resonators possessing subwavelength band gaps. The metamaterial is composed of dense rubber-coated inclusions of circular shape embedded periodically in a matrix medium. Visco-elastic material losses present in the matrix and/or resonator coating are introduced by either the Kelvin-Voigt or generalized Maxwell model. Numerical solutions are obtained in the frequency domain by means of the k(ω) approach combined with the finite element method. Spatially attenuating waves are described by real frequencies ω and complex-valued wave vectors k.
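As a one-dimensional illustration of the real-ω, complex-k description used above: for a Kelvin-Voigt solid the complex modulus M(ω) = M₀(1 + iωτ) gives k(ω) = ω√(ρ/M(ω)), whose imaginary part is the spatial attenuation. The material values are invented, and the sign of Im k depends on the e^{±iωt} convention:

```python
import numpy as np

rho, m0, tau = 1200.0, 3.0e9, 2.0e-6     # density, modulus, relaxation time
for f in (1e3, 1e4, 1e5):
    w = 2 * np.pi * f
    m = m0 * (1 + 1j * w * tau)          # Kelvin-Voigt complex modulus
    k = w * np.sqrt(rho / m)             # complex wavenumber k(w)
    print(f"{f:8.0f} Hz: c = {w / abs(k.real):7.1f} m/s, "
          f"alpha = {abs(k.imag):.2e} 1/m")
```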
Complete 3D band structure diagrams, including complex-valued pass bands, are evaluated for the undamped linear elastic case and several visco-elastic AMM cases. The changes in the band diagrams due to visco-elasticity are discussed in detail; a comparison is performed between the two visco-elastic models, representing artificial (Kelvin-Voigt model) and experimentally characterized (generalized Maxwell model) damping. The interpretation of the results is facilitated by using attenuation and transmission spectra. Two mechanisms of energy absorption, i.e. the resonance of the inclusions and dissipative effects in the materials, are discussed separately. It is found that visco-elastic damping of the matrix material decreases the attenuation performance of AMMs within band gaps; however, if the matrix material is only slightly damped, it can be modeled as linear elastic without loss of accuracy, provided the resonator coating is dissipative. This study also demonstrates that visco-elastic losses properly introduced in the resonator coating improve the attenuation bandwidth of AMMs, although the attenuation at the resonance peaks is reduced. 8. 3D P and S Wave Velocity Structure and Tremor Locations in the Parkfield Region Zeng, X.; Thurber, C. H.; Shelly, D. R.; Bennington, N. L.; Cochran, E. S.; Harrington, R. M. 2014-12-01 We have assembled a new dataset to refine the 3D seismic velocity model in the Parkfield region. The S arrivals from 184 earthquakes recorded by the Parkfield Experiment to Record Microseismicity and Tremor (PERMIT) array during 2010-2011 were picked by a new S wave picker based on machine learning. 74 blasts have been assigned to four quarries, whose locations were identified with Google Earth. About 1000 P and S wave arrivals from these blasts at permanent seismic network stations were also incorporated. Low frequency earthquakes (LFEs) occurring within non-volcanic tremor (NVT) are valuable for improving the precision of NVT locations and the seismic velocity model at greater depths. Based on previous work (Shelly and Hardebeck, 2010), waveforms of hundreds of LFEs in the same family were stacked to improve signal quality. In a previous study (McClement et al., 2013), stacked traces of more than 30 LFE families at the Parkfield Array Seismic Observatory (PASO) were picked. We expanded our work to include LFEs recorded by the PERMIT array. The time-frequency phase-weighted stacking (tf-PWS) method was introduced to improve the stack quality, as direct stacking does not produce clear S-wave arrivals on the PERMIT stations. This technique uses the coherence of the instantaneous phase among the stacked signals to enhance the signal-to-noise ratio (SNR) of the stack. We found that it is extremely effective for picking LFE arrivals (Thurber et al., 2014). More than 500 P and about 1000 S arrivals from 58 LFE families were picked at the PERMIT and PASO arrays. Since LFEs are much deeper than the earthquakes, we are able to extend the model resolution to lower-crustal depths. Both P and S wave velocity structures have been obtained with the tomoDD method. The result suggests that there is a low velocity zone (LVZ) in the lower crust and the location of the LVZ is consistent with the high conductivity zone beneath the southern segment of the Rinconada fault that 9.
3D P-wave velocity structure of the deep Galicia rifted margin: A first analysis of the Galicia 3D wide-angle seismic dataset Bayrakci, Gaye; Minshull, Timothy A.; Davy, Richard G.; Karplus, Marianne S.; Klaeschen, Dirk; Papenberg, Cord; Krabbenhoeft, Anne; Sawyer, Dale; Reston, Timothy J.; Shillington, Donna J.; Ranero, César R. 2014-05-01 Galicia 3D, a reflection-refraction and long offset seismic experiment, was carried out from May through September 2013 at the Galicia rifted margin (in the northeast Atlantic Ocean, west of Spain) as a collaboration between US, UK, German and Spanish groups. The 3D multichannel seismic acquisition conducted by R/V Marcus Langseth covered a 64 km by 20 km (1280 km²) zone where the main geological features are the Peridotite Ridge (PR), composed of serpentinized peridotite and thought to be upper mantle exhumed to the seafloor during rifting, and the S reflector, which has been interpreted to be a low-angle detachment fault overlain by fault-bounded, rotated, continental crustal blocks. In the 3D box, two airgun arrays of 3300 cu.in. were fired alternately (in flip-flop configuration) every 37.5 m. All shots were recorded by 44 short-period four-component ocean bottom seismometers (OBS) and 26 ocean bottom hydrophones (OBH) deployed and recovered by R/V Poseidon, as well as by four 6 km hydrophone streamers with 12.5 m channel spacing towed by R/V Marcus Langseth. We present the preliminary results of the first-arrival time tomography study, which is carried out with a subset of the wide-angle dataset in order to generate a 3D P-wave velocity volume for the entire depth sampled by the reflection data. After the relocation of the OBSs and OBHs, an automatic first-arrival time picking approach is applied to a subset of the dataset, which comprises more than 5.5 million source-receiver pairs. The first-arrival times are then checked visually, in three dimensions. The a priori model used for the first-arrival time tomography is built up using information from previous seismic surveys carried out at the Galicia margin (e.g. ISE, 1997). The FAST algorithm of Zelt and Barton (1998) is used for the first-arrival time inversion. The 3D P-wave velocity volume can be used in interpreting the reflection dataset, as a starting point for migration, to quantify the thinning of the crustal layers 10. Elastic Domain Wall Waves in Ferroelectric Ceramics and Single Crystals DTIC Science & Technology 1988-07-01 This report reviews research on acoustic guided waves along poling transitions in counterpoled ferroelectric ceramics, aimed at a better understanding of new ferroelectric materials. The initial phase of this project was an in-depth study of elastic wave 11. Laboratory observation of elastic waves in solids Rossing, Thomas D.; Russell, Daniel A. 1990-12-01 Compressional, torsional, and bending waves in bars and plates can be studied with simple apparatus in the laboratory. Although compressional and torsional waves show little or no dispersion, bending waves propagate at a speed proportional to f^(1/2). Reflections at boundaries lead to standing waves that determine the vibrational mode shapes and mode frequencies. Boundary conditions include free edges, simply supported edges, and clamped edges.
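The √f dispersion quoted above follows from thin-bar flexural wave theory, where the phase speed is c_B = (EIω²/ρA)^(1/4), while compressional bar waves travel at the frequency-independent c_L = √(E/ρ); a quick numerical check with brass-like values (our numbers):

```python
import numpy as np

E, rho, d = 1.0e11, 8500.0, 0.01        # modulus (Pa), density, diameter (m)
A = np.pi * d**2 / 4                    # cross-sectional area
I = np.pi * d**4 / 64                   # area moment of inertia
print(f"compressional: c_L = {np.sqrt(E / rho):.0f} m/s at all frequencies")
for f in (100.0, 400.0, 1600.0):        # quadrupling f doubles c_B
    c_b = (E * I * (2 * np.pi * f) ** 2 / (rho * A)) ** 0.25
    print(f"bending at {f:6.0f} Hz: c_B = {c_b:6.1f} m/s")
```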
Typical mode shapes and mode frequencies for rectangular bars, circular plates, and square plates are described. 12. Intersymbol Interference Investigations Using a 3D Time-Dependent Traveling Wave Tube Model NASA Technical Reports Server (NTRS) Kory, Carol L.; Andro, Monty 2002-01-01 For the first time, a time-dependent, physics-based computational model has been used to provide a direct description of the effects of the traveling wave tube amplifier (TWTA) on modulated digital signals. The TWT model comprehensively takes into account the effects of frequency-dependent AM/AM and AM/PM conversion; gain and phase ripple; drive-induced oscillations; harmonic generation; intermodulation products; and backward waves. Thus, signal integrity can be investigated in the presence of these sources of potential distortion as a function of the physical geometry and operating characteristics of the high power amplifier and the operational digital signal. This method promises superior predictive fidelity compared to methods using TWT models based on swept-amplitude and/or swept-frequency data. First, the TWT model using the three-dimensional (3D) electromagnetic code MAFIA is presented. Then, this comprehensive model is used to investigate approximations made in conventional TWT black-box models used in communication system-level simulations. To quantitatively demonstrate the effects these approximations have on digital signal performance predictions, including intersymbol interference (ISI), the MAFIA results are compared to those from the system-level analysis tool Signal Processing Workstation (SPW), using high-order modulation schemes including 16- and 64-QAM. 13. Intersymbol Interference Investigations Using a 3D Time-Dependent Traveling Wave Tube Model NASA Technical Reports Server (NTRS) Kory, Carol L.; Andro, Monty; Downey, Alan (Technical Monitor) 2001-01-01 For the first time, a physics-based computational model has been used to provide a direct description of the effects of the TWT (Traveling Wave Tube) on modulated digital signals. The TWT model comprehensively takes into account the effects of frequency-dependent AM/AM and AM/PM conversion; gain and phase ripple; drive-induced oscillations; harmonic generation; intermodulation products; and backward waves. Thus, signal integrity can be investigated in the presence of these sources of potential distortion as a function of the physical geometry of the high power amplifier and the operational digital signal. This method promises superior predictive fidelity compared to methods using TWT models based on swept-amplitude and/or swept-frequency data. The fully three-dimensional (3D), time-dependent TWT interaction model using the electromagnetic code MAFIA is presented. This model is used to investigate assumptions made in TWT black-box models used in communication system-level simulations. In addition, digital signal performance, including intersymbol interference (ISI), is compared using direct data input into the MAFIA model and using the system-level analysis tool SPW (Signal Processing Worksystem). 14. Elastic wave propagation in bone in vivo: methodology. PubMed Cheng, S; Timonen, J; Suominen, H 1995-04-01 The purpose of this study was to investigate the usefulness of elastic wave propagation (EWP) in estimating the mechanical properties (elasticity) of the human tibia.
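Velocity-based elasticity estimates of this kind rest, in the simplest thin-rod compressional case, on E = ρc²; the bending waves actually used in the study additionally involve the area moment of inertia, so the following is only an order-of-magnitude sketch with assumed values:

```python
rho = 1900.0    # assumed cortical bone density, kg/m^3
c = 3500.0      # assumed bar-wave velocity, m/s
E = rho * c**2  # thin-rod relation, not the study's bending-wave analysis
print(f"E ~ {E / 1e9:.1f} GPa")   # ~23 GPa, plausible for cortical bone
```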
14. Elastic wave propagation in bone in vivo: methodology.
PubMed
Cheng, S; Timonen, J; Suominen, H
1995-04-01
The purpose of this study was to investigate the usefulness of elastic wave propagation (EWP) in estimating the mechanical properties (elasticity) of human tibia. The test group was composed of 78-yr-old women assigned to high (n = 19) and low (n = 17) bone mineral density (BMD) groups as measured at the calcaneus by the 125I-photon absorption method. The EWP apparatus consisted of an impact-producing hammer with a force strain gauge and two accelerometers positioned on the bone. Results for nylon and acrylic were used to calibrate the apparatus. Polyvinyl chloride (PVC) solid rods and tubes of various diameters were used to evaluate the relationship between the elastic wave velocity and cross-sectional area. The density and the cross-sectional area of the tibia were measured by the computerized tomographic (CT) method at the same intersection points as the velocity recordings. The velocities in the tibia of bending waves produced by the mechanical hammer were found to depend on the density, area moment of inertia, and density-dependent elastic constants of bone. It is important to account for the changes of these quantities along the bone. It is suggested that the velocity of elastic waves and various indices derived therefrom provide inexpensive ways of evaluating the elastic properties of bone.

15. 3D geological to geophysical modelling and seismic wave propagation simulation: a case study from the Lalor Lake VMS (Volcanogenic Massive Sulphides) mining camp
Miah, Khalid; Bellefleur, Gilles
2014-05-01
The global demand for base metals, uranium and precious metals has been pushing mineral exploration to greater depths. Seismic techniques and surveys have become essential in finding and extracting mineral-rich ore bodies, especially for deep VMS mining camps. Geophysical parameters collected from borehole logs and laboratory measurements of core samples provide preliminary information about the nature and type of subsurface lithologic units. Alteration halos formed during the hydrothermal alteration process contain ore bodies, which are of primary interest to geologists and the mining industry. It is known that the alteration halos are easier to detect than the ore bodies themselves. Many 3D geological models are merely projections of 2D surface geology based on outcrop inspections and geochemical analysis of a small number of core samples collected from the area. Since a large-scale 3D multicomponent seismic survey can be prohibitively expensive, performance analysis of such geological models can be helpful in reducing exploration costs. In this abstract, we discuss challenges and constraints encountered in geophysical modelling of ore bodies and surrounding geologic structures from the available coarse 3D geological models of the Lalor Lake mining camp, located in northern Manitoba, Canada. Ore bodies in the Lalor Lake VMS camp are rich in gold, zinc, lead and copper, and have an approximate weight of 27 Mt. For better understanding of the physical parameters of these known ore bodies and potentially unknown ones at greater depth, we constructed a fine-resolution 3D seismic model with dimensions: 2000 m (width), 2000 m (height), and 1500 m (vertical depth). Seismic properties (P-wave and S-wave velocities, and density) were assigned based on a previous rock properties study of the same mining camp. 3D finite-difference elastic wave propagation simulation was performed in the model using appropriate parameters. The generated synthetic 3D seismic data was then compared to
16. Stress evolution during 3D single-layer visco-elastic buckle folding: Implications for the initiation of fractures
Liu, Xiaolong; Eckert, Andreas; Connolly, Peter
2016-06-01
Buckle folds of sedimentary strata commonly feature a variety of different fracture sets. Some fracture sets, including outer-arc tensile fractures and inner-arc shear fractures at the fold hinge zones, are well understood in terms of the extensional and compressional strain/stress pattern. However, other commonly observed fracture sets, including tensile fractures parallel to the fold axis, tensile fractures cutting through the limb, extensional faults at the fold hinge, and other shear fractures of various orientations in the fold limb, fail to be intuitively explained by the strain/stress regimes during the buckling process. To obtain a better understanding of the conditions for the initiation of the various fracture sets associated with single-layer cylindrical buckle folds, a 3D finite element modeling approach using a Maxwell visco-elastic rheology is utilized. The influence of three model parameters on fracture initiation is considered: burial depth, viscosity, and permeability. It is concluded that these parameters are critical for the initiation of major fracture sets at the hinge zone, to varying degrees. The numerical simulation results further show that the buckling process fails to explain most of the fracture sets occurring in the limb unless the process of erosional unloading as a post-fold phenomenon is considered. For fracture sets that only develop under unrealistic boundary conditions, the results demonstrate that their development is realistic for a periclinal fold geometry. In summary, a more thorough understanding of fracture sets associated with buckle folds is obtained based on the simulation of in-situ stress conditions during the structural development of buckle folds.

17. Anomalously low amplitude of S waves produced by the 3D structures in the lower mantle
To, Akiko; Capdeville, Yann; Romanowicz, Barbara
2016-07-01
Direct S and Sdiff phases with anomalously low amplitudes are recorded for earthquakes in Papua New Guinea by seismographs in North America. According to the prediction of a standard 1D model, the amplitudes are lowest at stations in southern California, at a distance and azimuth of around 95° and 55°, respectively, from the earthquake. The amplitude anomaly is more prominent at frequencies higher than 0.03 Hz. We checked and ruled out the possibility of the anomalies appearing because of errors in the focal mechanism used in the reference synthetic waveform calculations. The observed anomaly distribution changes drastically with a relatively small shift in the location of the earthquake. The observations indicate that the amplitude reduction is likely due to the 3D shear velocity (Vs) structure, which deflects the wave energy away from the original ray paths. Moreover, some previous studies suggested that some of the S and Sdiff phases in our dataset are followed by a prominent postcursor and show a large travel time delay, which was explained by placing a large ultra-low velocity zone (ULVZ) on the core-mantle boundary southwest of Hawaii. In this study, we evaluated the extent of the amplitude anomalies that can be explained by the lower mantle structures in the existing models, including the previously proposed ULVZ. In addition, we modified and tested some models and searched for the possible causes of the low amplitudes.
Full 3D synthetic waveforms were calculated and compared with the observations. Our results show that while the existing models explain the trends of the observed amplitude anomalies, the size of such anomalies remains underpredicted, especially at large distances. Adding a low velocity zone, which is spatially larger and has less Vs reduction than the ULVZ, on the southwest side of the ULVZ helps to explain the low amplitudes observed at distances larger than 100° from the earthquake. The newly proposed low velocity zone

18. A goal-oriented adaptive finite-element approach for plane wave 3-D electromagnetic modelling
Ren, Zhengyong; Kalscheuer, Thomas; Greenhalgh, Stewart; Maurer, Hansruedi
2013-08-01
We have developed a novel goal-oriented adaptive mesh refinement approach for finite-element methods to model plane wave electromagnetic (EM) fields in 3-D earth models based on the electric field differential equation. To handle complicated models of arbitrary conductivity, magnetic permeability and dielectric permittivity involving curved boundaries and surface topography, we employ an unstructured grid approach. The electric field is approximated by linear curl-conforming shape functions which guarantee the divergence-free condition of the electric field within each tetrahedron and continuity of the tangential component of the electric field across the interior boundaries. Based on the non-zero residuals of the approximated electric field and the yet to be satisfied boundary conditions of continuity of both the normal component of the total current density and the tangential component of the magnetic field strength across the interior interfaces, three a posteriori error estimators are proposed as a means to drive the goal-oriented adaptive refinement procedure. The first a posteriori error estimator relies on a combination of the residual of the electric field, the discontinuity of the normal component of the total current density and the discontinuity of the tangential component of the magnetic field strength across the interior faces shared by tetrahedra. The second a posteriori error estimator is expressed in terms of the discontinuity of the normal component of the total current density (conduction plus displacement current). The discontinuity of the tangential component of the magnetic field forms the third a posteriori error estimator. Analytical solutions for magnetotelluric (MT) and radiomagnetotelluric (RMT) fields impinging on a homogeneous half-space model are used to test the performance of the newly developed goal-oriented algorithms using the above three a posteriori error estimators. A trapezoidal topographical model, using normally incident EM waves

19. Plate-type elastic metamaterials for low-frequency broadband elastic wave attenuation.
PubMed
Li, Yinggang; Zhu, Ling; Chen, Tianning
2017-01-01
In this paper, we numerically and experimentally demonstrate low-frequency broadband elastic wave attenuation and vibration suppression using a plate-type elastic metamaterial, which is constituted of periodic double-sided stepped resonators deposited on a two-dimensional phononic plate with a steel matrix. The dispersion relations, the power transmission spectra, and the displacement fields of the eigenmodes are calculated by using the finite element method. In contrast to the typical phononic plates consisting of periodic stepped resonators deposited on a homogeneous steel plate, the proposed elastic metamaterial can yield a large band gap in the low-frequency range, resulting in low-frequency broadband elastic wave attenuation. The formation mechanisms of the band gap as well as the effects of material and geometrical parameters on the band gap are further explored numerically. Numerical results show that the formation mechanism of opening the low-frequency band gap is attributed to the coupling between the local resonant Lamb modes of the two-dimensional phononic plate and the resonant modes of the stepped resonators. The band gap can be significantly modulated by the material and geometrical parameters. The properties of the broadband gaps of the proposed subwavelength-scale elastic metamaterials can potentially be applied to vibration and noise reduction in the audio regime as well as broadband elastic wave confinement and modulation in the ultrasonic region.
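The coupling mechanism invoked above — plate modes hybridizing with resonator modes — is the same one that opens the gap in the textbook 1D mass-in-mass lattice, which makes for a compact illustration (a generic sketch, not the paper's double-sided stepped-resonator plate; all parameter values are arbitrary):

```python
import numpy as np

# Bloch dispersion of a 1D "mass-in-mass" chain: outer masses m coupled by
# springs k; each carries an internal resonator (m_r, k_r). Eliminating the
# resonator degree of freedom gives:
#   cos(q*a) = 1 - (m*w^2 + m_r*w^2*k_r/(k_r - m_r*w^2)) / (2*k)
m, k = 1.0, 1000.0        # host mass (kg) and coupling spring (N/m), assumed
m_r, k_r = 0.5, 250.0     # resonator mass and spring, assumed

w = 2 * np.pi * np.linspace(1.0, 20.0, 2000)       # angular frequency sweep
cos_qa = 1 - (m * w**2 + m_r * w**2 * k_r / (k_r - m_r * w**2)) / (2 * k)
in_gap = np.abs(cos_qa) > 1                         # no real Bloch wavenumber

f = w / (2 * np.pi)
edges = f[np.flatnonzero(np.diff(in_gap.astype(int)))]
print("band-gap edges (Hz):", np.round(edges, 2))
# a gap opens just above the resonator frequency sqrt(k_r/m_r)/(2*pi) ~ 3.6 Hz
```

Frequencies where |cos(qa)| > 1 admit no real Bloch wavenumber, so waves are evanescent there — the 1D analogue of the attenuation bands the paper computes with finite elements.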
20. Wave Phase-Sensitive Transformation of 3D-Straining of Mechanical Fields
Smirnov, I. N.; Speranskiy, A. A.
2015-11-01
This work belongs to the study of oscillatory processes in elastic mechanical systems. The technical result of the innovation is the creation of a spectral set of multidimensional images which reflect time-correlated three-dimensional vector metrological, and/or estimated, and/or design parameters of oscillations in mechanical systems. Reconstructed images of different dimensionality, integrated in various combinations depending on their objective function, can be used as a homeostatic profile or cybernetic image of oscillatory processes in mechanical systems for an objective estimation of current operational conditions in real time. The innovation can be widely used to enhance the efficiency of monitoring and research of oscillation processes in mechanical systems (objects) in construction, mechanical engineering, acoustics, etc. The concept method of vector vibrometry, based on the application of vector 3D phase-sensitive vibro-transducers, permits unique evaluation of the real stressed-strained states of power aggregates and loaded constructions, and opens fundamental innovation opportunities: continuous (on-line) reliable monitoring of turbo-aggregates of electrical machines, compressor installations, bases, supports, pipelines and other objects subjected to the damaging effect of vibrations; control of the operational safety of technical systems at all stages of the life cycle, including design, test production, tuning, testing, operational use, repairs and resource enlargement; and creation of vibro-diagnostic systems for authentic non-destructive control of the anisotropic resistance characteristics of materials in power aggregates and loaded constructions under outer effects and operational flaws. The described technology is revolutionary, universal and common to all branches of the engineering industry and construction objects.

1. Theoretical relationship between elastic wave velocity and electrical resistivity
Lee, Jong-Sub; Yoon, Hyung-Koo
2015-05-01
Elastic wave velocity and electrical resistivity have been commonly applied to estimate stratum structures and obtain subsurface soil design parameters. Both elastic wave velocity and electrical resistivity are related to the void ratio; the objective of this study is therefore to suggest a theoretical relationship between the two physical parameters. Gassmann theory and Archie's equation are applied to propose a new theoretical equation, which relates the compressional wave velocity to the shear wave velocity and electrical resistivity. The piezo disk element (PDE) and bender element (BE) are used to measure the compressional and shear wave velocities, respectively. In addition, the electrical resistivity is obtained by using the electrical resistivity probe (ERP). The elastic wave velocity and electrical resistivity are recorded in several types of soils including sand, silty sand, silty clay, silt, and clay-sand mixture. The appropriate input parameters are determined based on the error norm in order to increase the reliability of the proposed relationship. The compressional wave velocities predicted from the shear wave velocity and electrical resistivity are similar to the measured compressional velocities. This study demonstrates that the new theoretical relationship may be effectively used to predict an unknown geophysical property from the measured values.
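Since both properties enter through the void ratio, the gist of such a relationship can be sketched by chaining two classical porosity laws (a toy stand-in assuming a water-saturated sand, with Wyllie's time-average in place of the paper's Gassmann-based equation; all constants are illustrative):

```python
import numpy as np

# Step 1: invert Archie's law, R = a * phi**(-m) * Rw, for porosity.
a, m, Rw = 1.0, 2.0, 0.2      # Archie constants and brine resistivity (ohm-m), assumed

def porosity_from_resistivity(R):
    return (a * Rw / R) ** (1.0 / m)

# Step 2: Wyllie time-average as a simple porosity -> Vp relation
# (the paper itself derives the velocity link from Gassmann theory).
v_matrix, v_fluid = 5500.0, 1500.0   # m/s, quartz matrix and pore water (assumed)

def vp_from_porosity(phi):
    return 1.0 / ((1 - phi) / v_matrix + phi / v_fluid)

for R in (2.0, 5.0, 20.0):           # hypothetical measured resistivities, ohm-m
    phi = porosity_from_resistivity(R)
    print(f"R = {R:5.1f} ohm-m -> phi = {phi:0.2f}, Vp ~ {vp_from_porosity(phi):6.0f} m/s")
```

Higher resistivity maps to lower porosity and hence a faster compressional velocity — the qualitative trend that the proposed theoretical equation formalizes.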
2. Some Properties of the Transverse Elastic Waves in Quasiperiodic Structures
Tutor, J.; Velasco, V. R.
We have studied the integrated density of states and fractal dimension of the transverse elastic wave spectrum in quasiperiodic systems following the Fibonacci, Thue-Morse and Rudin-Shapiro sequences. Because the quasiperiodic generations are finite, in spite of the high number of materials included, we have studied the possible influence of the boundary conditions (infinite periodic or finite systems), together with that of the different ways to generate the constituent blocks of the quasiperiodic systems, on the transverse elastic wave spectra. No relevant differences have been found for the different boundary conditions, but the different ways of generating the building blocks produce appreciable consequences in the properties of the transverse elastic wave spectra of the quasiperiodic systems studied here.

3. Optimized Equivalent Staggered-grid FD Method for Elastic Wave Modeling Based on Plane Wave Solutions
Yong, Peng; Huang, Jianping; Li, Zhenchun; Liao, Wenyuan; Qu, Luping; Li, Qingyang; Liu, Peijun
2016-12-01
In the finite difference (FD) method, numerical dispersion is the dominant factor influencing the accuracy of seismic modeling. Various optimized FD schemes for scalar wave modeling have been proposed to reduce grid dispersion, while optimized time-space domain FD schemes for elastic wave modeling have not been fully investigated yet. In this paper, an optimized FD scheme with Equivalent Staggered Grid (ESG) for elastic modeling has been developed. We start from the constant P- and S-wave speed elastic wave equations and then deduce analytical plane wave solutions in the wavenumber domain with the eigenvalue decomposition method. Based on the elastic plane wave solutions, three new time-space domain dispersion relations of ESG elastic modeling are obtained, which are represented by three equations corresponding to the P-, S- and converted-wave terms in the elastic equations, respectively. By using these new relations, we can study the dispersion errors of the different spatial FD terms independently. The dispersion analysis showed that different spatial FD terms have different errors. It is therefore suggested that different FD coefficients be used to approximate the three spatial derivative terms. In addition, the relative dispersion error in L2-norm is minimized through optimizing the FD coefficients using Newton's method. Synthetic examples have demonstrated that these new optimal FD schemes have superior accuracy for elastic wave modeling compared to Taylor-series expansion and optimized space domain FD schemes.
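The optimization step described above is easy to mimic in one dimension. For a staggered-grid first derivative with half-stencil length M, the normalized dispersion requirement is Σ_m c_m sin((m − 1/2)κ) = κ/2 with κ = kh; fitting the coefficients over a band of wavenumbers, instead of matching Taylor terms at κ = 0, trades small-κ exactness for a flatter error. A sketch with plain least squares standing in for the paper's Newton iteration (Python; the stencil length and fitting band are arbitrary choices):

```python
import numpy as np

# Staggered-grid first-derivative stencil: with kappa = k*h, the dispersion
# requirement is  sum_m c_m * sin((m - 0.5)*kappa) = kappa/2.
M = 4                                        # half-stencil length (8th order)
kap = np.linspace(1e-4, 0.8 * np.pi, 400)    # wavenumber band kept in the fit

# The residual is linear in the coefficients c, so one lstsq call suffices.
G = np.sin(np.outer(kap, np.arange(M) + 0.5))
c_opt, *_ = np.linalg.lstsq(G, kap / 2, rcond=None)

# Conventional Taylor-series coefficients for comparison (M = 4)
c_tay = np.array([1225/1024, -245/3072, 49/5120, -5/7168])

def max_phase_error(c):
    k_eff = 2 * G @ c                        # effective wavenumber on the grid
    return np.max(np.abs(k_eff / kap - 1))

print("Taylor    max relative dispersion error:", max_phase_error(c_tay))
print("optimized max relative dispersion error:", max_phase_error(c_opt))
```

Over the fitted band the optimized coefficients give a markedly smaller worst-case phase-velocity error, which is the same trade the elastic ESG schemes exploit, term by term, for the P-, S- and converted-wave operators.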
4. Optimized equivalent staggered-grid FD method for elastic wave modelling based on plane wave solutions
Yong, Peng; Huang, Jianping; Li, Zhenchun; Liao, Wenyuan; Qu, Luping; Li, Qingyang; Liu, Peijun
2017-02-01
In the finite-difference (FD) method, numerical dispersion is the dominant factor influencing the accuracy of seismic modelling. Various optimized FD schemes for scalar wave modelling have been proposed to reduce grid dispersion, while optimized time-space domain FD schemes for elastic wave modelling have not been fully investigated yet. In this paper, an optimized FD scheme with Equivalent Staggered Grid (ESG) for elastic modelling has been developed. We start from the constant P- and S-wave speed elastic wave equations and then deduce analytical plane wave solutions in the wavenumber domain with the eigenvalue decomposition method. Based on the elastic plane wave solutions, three new time-space domain dispersion relations of ESG elastic modelling are obtained, which are represented by three equations corresponding to the P-, S- and converted-wave terms in the elastic equations, respectively. By using these new relations, we can study the dispersion errors of the different spatial FD terms independently. The dispersion analysis showed that different spatial FD terms have different errors. It is therefore suggested that different FD coefficients be used to approximate the three spatial derivative terms. In addition, the relative dispersion error in L2-norm is minimized through optimizing the FD coefficients using Newton's method. Synthetic examples have demonstrated that these new optimal FD schemes have superior accuracy for elastic wave modelling compared to Taylor-series expansion and optimized space domain FD schemes.

5. Decay of elastic waves in alumina
Marom, H.; Sherman, D.; Rosenberg, Z.
2000-11-01
The dynamic response of alumina under shock compression was studied using planar impact experiments with different tile thicknesses. Stress-time measurements were made with manganin gauges backed by different backing materials in order to optimize gauge response. The results show an apparent decay in the Hugoniot elastic limit with propagation distance. However, further analysis reveals that this phenomenon is probably a measurement artifact, resulting from the relatively slow response times of manganin gauges.

6. Seismic waves in 3-D: from mantle asymmetries to reliable seismic hazard assessment
Panza, Giuliano F.; Romanelli, Fabio
2014-10-01
A global cross-section of the Earth parallel to the tectonic equator (TE) path, the great circle representing the equator of net lithosphere rotation, shows a difference in shear wave velocities between the western and eastern flanks of the three major oceanic rift basins. The low-velocity layer in the upper asthenosphere, at a depth range of 120 to 200 km, is assumed to represent the decoupling between the lithosphere and the underlying mantle. Along the TE-perturbed (TE-pert) path, a ubiquitous LVZ, about 1,000 km wide and 100 km thick, occurs in the asthenosphere. The existence of the TE-pert is a necessary prerequisite for the existence of a continuous global flow within the Earth.
Ground-shaking scenarios were constructed using a scenario-based method for seismic hazard analysis (NDSHA), using realistic and duly validated synthetic time series, and generating a data bank of several thousands of seismograms that account for source, propagation, and site effects. In accordance with basic self-organized criticality concepts, NDSHA permits the integration of available information provided by the most updated seismological, geological, geophysical, and geotechnical databases for the site of interest, as well as advanced physical modeling techniques, to provide a reliable and robust background for the development of a design basis for cultural heritage and civil infrastructures. Estimates of seismic hazard obtained using the NDSHA and standard probabilistic approaches are compared for the Italian territory, and a case study is discussed. In order to enable a reliable estimation of the ground motion response to an earthquake, three-dimensional velocity models have to be considered, resulting in a new, very efficient, analytical procedure for computing the broadband seismic wavefield in a 3-D anelastic Earth model.

7. 3D Anisotropic structure of the south-central Mongolia from Rayleigh and Love wave tomography
Yu, D.; Wu, Q.; Montagner, J. P.
2014-12-01
A better understanding of the geodynamics of the crust and mantle below Baikal-Mongolia is required to identify the role of mantle processes versus that of far-field tectonic effects from the India-Asia collision. Anisotropy tomography can provide a new perspective on the continental growth mechanism. In order to study the 3D anisotropic structure of the upper mantle in south-central Mongolia, we collected the vertical and transverse components of seismograms recorded at 69 broadband seismic stations. We have measured inter-station phase velocities of 7181 Rayleigh waves and 901 Love waves using the frequency-time analysis of the wavelet transformation method for the fundamental mode in the period range 10-80 s. The lateral phase velocity variations are computed by using a regionalization method. These phase velocities have been inverted to obtain the first anisotropic model including Sv velocities and azimuthal and radial anisotropy. The Middle Gobi is associated with low velocity. Based on the distribution of the Cenozoic basalts in the Middle Gobi, this suggests that the low-velocity anomaly is related to the Cenozoic volcanism. In the northern domain, near the Baikal zone, the azimuthal anisotropy is normal to the Baikal rift and consistent with the fast directions of previous SKS splitting measurements. In the South Gobi, north of the Main Mongolian Lineament, the azimuthal anisotropy is NEE-SWW in the crust and NW-SE in the mantle. This indicates that the crust and mantle are decoupled. We propose that the crustal deformation is related to the far-field effects of the India-Asia collision and that the mantle flow is correlated with the Baikal rift activity. Further study in progress will provide more evidence and insight to better understand the geodynamics of this region.

8. Coal Thickness Gauging Using Elastic Waves
NASA Technical Reports Server (NTRS)
Nazarian, Soheil; Bar-Cohen, Yoseph
1999-01-01
The efforts of a mining crew can be optimized if the thickness of the coal layers to be excavated is known before excavation. Wave propagation techniques can be used to estimate the thickness of the layer based on the contrast in the wave velocity between coal and the rock beyond it. Another advantage of repeated wave measurement is that the state of the stress within the mine can be estimated. The state of the stress can be used in many safety-related decisions made during the operation of the mine. Given these two advantages, a study was carried out to determine the feasibility of the methodology. The results are presented herein.
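As a back-of-the-envelope illustration of the gauging idea (hypothetical numbers, not from the study): the layer thickness follows from the two-way travel time of the echo at the coal/rock velocity contrast, while the pulse wavelength bounds the thinnest resolvable layer:

```python
# Layer thickness from a reflection echo inside coal, plus the classical
# quarter-wavelength resolution limit (all values are illustrative).
v_coal = 2400.0        # P-wave speed in coal, m/s (assumed)
twt = 2.5e-3           # two-way travel time of the coal/rock echo, s (assumed)
f_dom = 500.0          # dominant frequency of the probing pulse, Hz (assumed)

thickness = v_coal * twt / 2.0
resolution = v_coal / f_dom / 4.0      # lambda/4 tuning thickness

print(f"estimated coal thickness: {thickness:5.2f} m")
print(f"thinnest resolvable bed : {resolution:5.2f} m")
```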
9. Threshold response using modulated continuous wave illumination for multilayer 3D optical data storage
Saini, A.; Christenson, C. W.; Khattab, T. A.; Wang, R.; Twieg, R. J.; Singer, K. D.
2017-01-01
In order to achieve a high-capacity 3D optical data storage medium, a nonlinear or threshold writing process is necessary to localize data in the axial dimension. To this end, commercial multilayer discs use thermal ablation of metal films or phase change materials to realize such a threshold process. This paper addresses a threshold writing mechanism relevant to recently reported fluorescence-based data storage in dye-doped co-extruded multilayer films. To gain understanding of the essential physics, single-layer spun-coat films were used so that the data is easily accessible by analytical techniques. Data were written by attenuating the fluorescence using nanosecond-range exposure times from a 488 nm continuous wave laser overlapping with the single-photon absorption spectrum. The threshold writing process was studied over a range of exposure times and intensities, and with different fluorescent dyes. It was found that all of the dyes have a common temperature threshold where fluorescence begins to attenuate, and the physical nature of the thermal process was investigated.

10. Nonhydrostatic granular flow over 3-D terrain: New Boussinesq-type gravity waves?
Castro-Orgaz, Oscar; Hutter, Kolumban; Giraldez, Juan V.; Hager, Willi H.
2015-01-01
Modeling granular mass flow is a basic step in the prediction and control of natural or man-made disasters related to avalanches on the Earth. Savage and Hutter (1989) pioneered the mathematical modeling of these geophysical flows, introducing Saint-Venant-type mass and momentum depth-averaged hydrostatic equations using the continuum mechanics approach. However, Denlinger and Iverson (2004) found that vertical accelerations in granular mass flows are of the same order as the gravity acceleration, requiring the consideration of nonhydrostatic modeling of granular mass flows. Although free-surface water flow simulations based on nonhydrostatic depth-averaged models have been commonly used since the works of Boussinesq (1872, 1877), they have not yet been applied to the modeling of debris flow. Can granular mass flow be described by Boussinesq-type gravity waves? This is a fundamental question to which an answer is required, given the potential to expand the successful Boussinesq-type water theory to granular flow over 3-D terrain. This issue is explored in this work by generalizing the basic Boussinesq-type theory used in civil and coastal engineering for more than a century to an arbitrary granular mass flow using the continuum mechanics approach. Using simple test cases, it is demonstrated that the above question can be answered in the affirmative, thereby opening a new framework for the physical and mathematical modeling of granular mass flow in geophysics, whereby the effect of vertical motion is mathematically included without the need for ad hoc assumptions.

11. Characterization of an SRF gun: a 3D full wave simulation
SciTech Connect
Wang, E.; Ben-Zvi, I.; Wang, J.
2011-03-28
We characterized a BNL 1.3 GHz half-cell SRF gun to be tested with a GaAs photocathode. The gun was already simulated several years ago via two-dimensional (2D) numerical codes (i.e., Superfish and Parmela) with and without the beam. In this paper, we discuss our investigation of its characteristics using a three-dimensional (3D) full-wave code (CST STUDIO SUITE™). The input/pickup couplers are sited symmetrically on the same side of the gun at an angle of 180°. In particular, the inner conductor of the pickup coupler is considerably shorter than that of the input coupler. We evaluated the cross-talk between the beam (trajectory) and the signal on the input coupler and compared our findings with published results based on analytical models. The CST STUDIO SUITE™ was also used to predict the field within the cavity; in particular, a combination of transient/eigenmode solvers was employed to accurately construct the RF field for the particles, which also includes the effects of the couplers. Finally, we explored the beam's dynamics with a particle-in-cell (PIC) simulation, validated the results, and compared them with the 2D code results.

12. Measurements of radiated elastic wave energy from dynamic tensile cracks
NASA Technical Reports Server (NTRS)
Boler, Frances M.
1990-01-01
The role of fracture velocity, microstructure, and fracture-energy barriers in elastic wave radiation during dynamic fracture was investigated in experiments in which dynamic tensile cracks in two fracture configurations of double-cantilever-beam geometry propagated in glass samples. The first configuration, referred to as primary fracture, consisted of fractures of intact glass specimens; the second configuration, referred to as secondary fracture, consisted of a refracture of primary fracture specimens which were rebonded with an intermittent pattern of adhesive to produce variations in fracture surface energy along the crack path. For primary fracture cases, measurable elastic waves were generated in 31 percent of the 16 fracture events observed; the condition for radiation of measurable waves appears to be a local abrupt change in the fracture path direction, such as occurs when the fracture intersects a surface flaw. For secondary fractures, 100 percent of events showed measurable elastic waves; in these fractures, the ratio of radiated elastic wave energy in the measured component to fracture surface energy was 10 times greater than for primary fracture.

13. Lamellipodin promotes invasive 3D cancer cell migration via regulated interactions with Ena/VASP and SCAR/WAVE
PubMed Central
Carmona, Guillaume; Perera, Upamali; Gillett, Cheryl; Naba, Alexandra; Law, Ah-Lai; Sharma, Ved P.; Wang, Jian; Wyckoff, Jeffrey; Balsamo, Michele; Mosis, Fuad; De Piano, Mario; Monypenny, James; Woodman, Natalie; McConnell, Russell E.; Mouneimne, Ghassan; Van Hemelrijck, Mieke; Cao, Yihai; Condeelis, John; Hynes, Richard O.; Gertler, Frank B.; Krause, Matthias
2016-01-01
Cancer invasion is a hallmark of metastasis. The mesenchymal mode of cancer cell invasion is mediated by elongated membrane protrusions driven by the assembly of branched F-actin networks. How deregulation of actin regulators promotes cancer cell invasion is still enigmatic. We report that increased expression and membrane localization of the actin regulator Lamellipodin correlate with reduced metastasis-free survival and poor prognosis in breast cancer patients.
In agreement, we find that Lamellipodin depletion reduced lung metastasis in an orthotopic mouse breast cancer model. Invasive 3D cancer cell migration, invadopodia formation, and matrix degradation were impaired upon Lamellipodin depletion. Mechanistically, we show that Lamellipodin promotes invasive 3D cancer cell migration via both actin-elongating Ena/VASP proteins and the Scar/WAVE complex, which stimulates actin branching. In contrast, Lamellipodin interaction with Scar/WAVE but not Ena/VASP is required for random 2D cell migration. We identify a phosphorylation-dependent mechanism that regulates selective recruitment of these effectors to Lamellipodin: Abl-mediated Lamellipodin phosphorylation promotes its association with both Scar/WAVE and Ena/VASP, while Src-dependent phosphorylation enhances binding to Scar/WAVE but not Ena/VASP. Through these selective, regulated interactions Lamellipodin mediates directional sensing of EGF gradients and invasive 3D migration of breast cancer cells. Our findings imply that increased Lamellipodin levels enhance Ena/VASP and Scar/WAVE activities at the plasma membrane to promote 3D invasion and metastasis. PMID:26996666

14. Transverse instability and viscous dissipation of forced 3-D gravity-capillary solitary waves on deep water
Cho, Yeunwoo
2014-11-01
The shedding phenomena of 3-D viscous gravity-capillary solitary waves generated by a moving air forcing on the surface of deep water are investigated. Near the resonance, where the forcing speed is close to 23 cm/s, two kinds of shedding modes are possible: anti-symmetric and symmetric modes. A relevant theoretical model equation is numerically solved for the identification of the shedding of solitary waves, and is analytically studied in terms of their linear stability to transverse perturbations. Furthermore, by tracing the trajectories of shed solitary waves, the decay rate of a 3-D solitary wave due to viscous dissipation is estimated. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2014R1A1A1002441).

15. Nonlinear evolution of 3D-inertial Alfvén wave and turbulent spectra in Auroral region
Rinawa, M. L.; Modi, K. V.; Sharma, R. P.
2014-10-01
In the present paper, we have investigated the nonlinear interaction of a three-dimensional (3D) inertial Alfvén wave and a perpendicularly propagating magnetosonic wave for a low-β plasma (β ≪ m_e/m_i). We have developed the set of dimensionless equations in the presence of ponderomotive nonlinearity due to the 3D inertial Alfvén wave in the dynamics of the perpendicularly propagating magnetosonic wave. Stability analysis and numerical simulation have been carried out to study the effect of nonlinear coupling on the formation of localized structures and turbulent spectra, applicable to the auroral region. The results reveal that the localized structures become more and more complex as the nonlinear interaction progresses. Further, we have studied the turbulent spectrum, which follows a spectral index of ∼k^(-3.57) at smaller scales. The relevance of the obtained results has been shown with the observations received by various spacecraft like FAST, Hawkeye and Heos 2.

16. Vibration band gaps for elastic metamaterial rods using wave finite element method
Nobrega, E. D.; Gautier, F.; Pelat, A.; Dos Santos, J. M. C.
2016-10-01
Band gaps in elastic metamaterial rods with spatial periodic distribution and periodically attached local resonators are investigated. New techniques for analyzing metamaterial systems use a combination of analytical or numerical methods with wave propagation. One of them, called here the wave spectral element method (WSEM), consists of combining the spectral element method (SEM) with Floquet-Bloch's theorem. A modern methodology called the wave finite element method (WFEM), developed to calculate the dynamic behavior of periodic acoustic and structural systems, utilizes a similar approach in which SEM is substituted by the conventional finite element method (FEM). In this paper, it is proposed to use WFEM to calculate band gaps in elastic metamaterial rods with spatial periodic distribution and periodically attached local resonators of multiple degrees of freedom (M-DOF). Simulated examples with band gaps generated by Bragg scattering and local resonators are calculated by WFEM and verified with WSEM, which is used as a reference method. Results are presented in the form of the attenuation constant, vibration transmittance and frequency response function (FRF). For all cases, WFEM and WSEM results are in agreement, provided that the number of elements used in WFEM is sufficient for convergence. An experimental test was conducted with a real elastic metamaterial rod, manufactured in plastic on a 3D printer, without local resonance-type effects. The experimental results for the metamaterial rod with band gaps generated by Bragg scattering are compared with the simulated ones. Both numerical methods (WSEM and WFEM) can localize the band gap position and width very close to the experimental results. A hybrid approach combining WFEM with the commercial finite element software ANSYS is proposed to model complex metamaterial systems. Two examples illustrating its efficiency and accuracy in modeling an elastic metamaterial rod unit cell using 1D simple rod elements and 3D solid elements are

17. Dynamics of periodic mechanical structures containing bistable elastic elements: From elastic to solitary wave propagation
Nadkarni, Neel; Daraio, Chiara; Kochmann, Dennis M.
2014-08-01
We investigate the nonlinear dynamics of a periodic chain of bistable elements consisting of masses connected by elastic springs whose constraint arrangement gives rise to a large-deformation snap-through instability. We show that the resulting negative-stiffness effect produces three different regimes of (linear and nonlinear) wave propagation in the periodic medium, depending on the wave amplitude. At small amplitudes, linear elastic waves experience dispersion that is controllable by the geometry and by the level of precompression. At moderate to large amplitudes, solitary waves arise in the weakly and strongly nonlinear regime. For each case, we present closed-form analytical solutions and we confirm our theoretical findings by specific numerical examples. The precompression reveals a class of wave propagation for a partially positive and negative potential. The presented results highlight opportunities in the design of mechanical metamaterials based on negative-stiffness elements, which go beyond current concepts primarily based on linear elastic wave propagation. Our findings shed light on the rich effective dynamics achievable by nonlinear small-scale instabilities in solids and structures.
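A minimal numerical sketch of snap-through dynamics in such a chain (a toy model with linear coupling springs and an on-site double-well potential standing in for the paper's constraint-based bistable elements; all parameters are arbitrary):

```python
import numpy as np

# Chain of N masses: linear nearest-neighbor springs plus an on-site
# double-well (bistable) force f(u) = -dV/du with V = (u^2 - 1)^2 / 4,
# integrated with velocity Verlet. A strong kick at one end can drive
# masses across the barrier from the u = -1 well to the u = +1 well.
N, k, m, dt, steps = 200, 0.5, 1.0, 0.05, 4000

u = -np.ones(N)                 # start in the u = -1 well
v = np.zeros(N)
v[0] = 3.0                      # kick the first mass

def accel(u):
    spring = k * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
    spring[0] = k * (u[1] - u[0])          # free ends
    spring[-1] = k * (u[-2] - u[-1])
    onsite = -u * (u**2 - 1.0)             # bistable restoring force
    return (spring + onsite) / m

a = accel(u)
for _ in range(steps):
    u += v * dt + 0.5 * a * dt**2
    a_new = accel(u)
    v += 0.5 * (a + a_new) * dt
    a = a_new

front = int(np.argmax(u < 0.0)) if (u < 0.0).any() else N
print(f"masses snapped to the u = +1 well: {front} of {N}")
```

The amplitude dependence is visible even in this toy: a weak kick only excites dispersive linear waves about u = -1, while a kick strong enough to cross the barrier produces the large-amplitude, strongly nonlinear regime the paper analyzes in closed form.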
18. Influence of 3D Teleseismic Body Waves in the Finite-Fault Source Inversion of Subduction Earthquakes
2014-12-01
Most large earthquakes are generated in subduction zones. To study the complexity of these events, teleseismic body waves offer many advantages over other types of data: they allow study of both the temporal and spatial evolution of slip during the rupture, they do not depend on the presence of nearby land, and they allow earthquakes to be studied regardless of their location. Since the development of teleseismic finite-fault inversion in the 1980s, teleseismic body waves have been simulated using 1D velocity models to take into account propagation effects at the source. Yet subduction zones are known to be highly heterogeneous: they are characterized by curved and dipping structures, strong seismic velocity contrasts, and strong variations of topography and height of the water column. The main reason for relying on a 1D approximation is the computational cost of 3D simulations. And while forward simulations of teleseismic waves in a 3D Earth are only starting to be tractable on modern computers at the frequency range of interest (0.1 Hz and above), finite-fault source studies require a large number of these simulations. In this work, we present a new and efficient approach to computing 3D teleseismic body waves, in which the full 3D propagation is only computed in a regional domain using a discontinuous Galerkin finite-element method, while the rest of the seismic wavefield is propagated in a background axisymmetric Earth. The regional and global wavefields are matched using the so-called Total-Field/Scattered-Field technique. This new simulation approach allows us to study the waveform complexities resulting from 3D propagation and investigate how they could improve the resolution and reduce the non-uniqueness of finite-fault inversions.

19. SHEAR WAVE SEISMIC STUDY COMPARING 9C3D SV AND SH IMAGES WITH 3C3D C-WAVE IMAGES
SciTech Connect
John Beecherl; Bob A. Hardage
2004-07-01
The objective of this study was to compare the relative merits of shear-wave (S-wave) seismic data acquired with nine-component (9-C) technology and with three-component (3-C) technology. The original proposal was written as if the investigation would be restricted to a single 9-C seismic survey in southwest Kansas (the Ashland survey), on the basis of the assumption that both 9-C and 3-C S-wave images could be created from that one data set. The Ashland survey was designed as a 9-C seismic program. We found that although the acquisition geometry was adequate for 9-C data analysis, the source-receiver geometry did not allow 3-C data to be extracted on an equitable and competitive basis with 9-C data. To do a fair assessment of the relative value of 9-C and 3-C seismic S-wave data, we expanded the study beyond the Ashland survey and included multicomponent seismic data from surveys done in a variety of basins. These additional data were made available through the Bureau of Economic Geology, our research subcontractor. Bureau scientists have added theoretical analyses to this report that provide valuable insights into several key distinctions between 9-C and 3-C seismic data. These theoretical considerations about distinctions between 3-C and 9-C S-wave data are presented first, followed by a discussion of differences between processing 9-C common-midpoint data and 3-C common-conversion-point data. Examples of 9-C and 3-C data are illustrated and discussed in the last part of the report.
The key findings of this study are that each S-wave mode (SH-SH, SV-SV, or PSV) involves a different subsurface illumination pattern and a different reflectivity behavior, and that each mode senses a different Earth fabric along its propagation path because of the unique orientation of its particle-displacement vector. As a result of the distinct orientation of each mode's particle-displacement vector, one mode may react to a critical geologic condition in a more optimal way than do

20. Estimation of local stresses and elastic properties of a mortar sample by FFT computation of fields on a 3D image
SciTech Connect
Escoda, J.; Willot, F.; Jeulin, D.; Sanahuja, J.; Toulemonde, C.
2011-05-15
This study concerns the prediction of the elastic properties of a 3D mortar image, obtained by micro-tomography, using a combined image segmentation and numerical homogenization approach. The microstructure is obtained by segmentation of the 3D image into aggregates, voids and cement paste. Full-field computations of the elastic response of mortar are undertaken using the Fast Fourier Transform method. Emphasis is placed on highly contrasted properties between aggregates and matrix, to anticipate needs for creep or damage computation. The representative volume element, i.e. the volume size necessary to compute the effective properties with a prescribed accuracy, is given. Overall, the volumes used in this work were sufficient to estimate the effective response of mortar with a precision of 5%, 6% and 10% for contrast ratios of 100, 1000 and 10,000, respectively. Finally, a statistical and local characterization of the component of the stress field parallel to the applied loading is carried out.

1. Linking snow microstructure to its macroscopic elastic stiffness tensor: A numerical homogenization method and its application to 3-D images from X-ray tomography
Wautier, A.; Geindreau, C.; Flin, F.
2015-10-01
The full 3-D macroscopic mechanical behavior of snow is investigated by solving kinematically uniform boundary condition problems derived from homogenization theories over 3-D images obtained by X-ray tomography. Snow is modeled as a porous cohesive material, and its mechanical stiffness tensor is computed within the framework of the elastic behavior of ice. The size of the optimal representative elementary volume, expressed in terms of correlation lengths, is determined through a convergence analysis of the computed effective properties. A wide range of snow densities is explored, and power laws with high regression coefficients are proposed to link the Young's and shear moduli of snow to its density. The degree of anisotropy of these properties is quantified, and Poisson's ratios are also provided. Finally, the influence of the main types of metamorphism (isothermal, temperature gradient, and wet snow metamorphism) on the elastic properties of snow and on their anisotropy is reported.
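The density power laws reported for snow are the kind of fit that drops out of a log-log regression; a sketch of the procedure (the data points below are synthetic placeholders, not the study's values):

```python
import numpy as np

# Fit a power law E = a * rho**b by linear regression in log-log space,
# as done for the snow Young's modulus / density relation. The "data"
# below are synthetic placeholders, not values from the study.
rho = np.array([150.0, 250.0, 350.0, 450.0])      # snow density, kg/m^3
E = np.array([0.3e6, 3.0e6, 15.0e6, 50.0e6])      # Young's modulus, Pa

b, log_a = np.polyfit(np.log(rho), np.log(E), 1)
a = np.exp(log_a)
print(f"E ~ {a:.3e} * rho^{b:.2f}")

E_fit = a * rho**b
r2 = 1 - np.sum((np.log(E) - np.log(E_fit))**2) / np.sum(
    (np.log(E) - np.log(E).mean())**2)
print(f"log-log R^2 = {r2:.3f}")
```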
2. Noncontact Elastic Wave Imaging Optical Coherence Elastography for Evaluating Changes in Corneal Elasticity Due to Crosslinking.
PubMed
Singh, Manmohan; Li, Jiasong; Vantipalli, Srilatha; Wang, Shang; Han, Zhaolong; Nair, Achuth; Aglyamov, Salavat R; Twa, Michael D; Larin, Kirill V
2016-01-01
The mechanical properties of tissues can provide valuable information about tissue integrity and health and can assist in detecting and monitoring the progression of diseases such as keratoconus. Optical coherence elastography (OCE) is a rapidly emerging technique, which can assess localized mechanical contrast in tissues with micrometer spatial resolution. In this work we present a noncontact method of optical coherence elastography to evaluate the changes in the mechanical properties of the cornea after UV-induced collagen cross-linking. A focused air pulse induced a low-amplitude (μm scale) elastic wave, which then propagated radially and was imaged in three dimensions by a phase-stabilized swept source optical coherence tomography (PhS-SSOCT) system. The elastic wave velocity was translated to Young's modulus in agar phantoms of various concentrations. Additionally, the speed of the elastic wave changed significantly in porcine cornea before and after UV-induced corneal collagen cross-linking (CXL). Moreover, different layers of the cornea, such as the anterior stroma, posterior stroma, and inner region, could be discerned from the phase velocities of the elastic wave. Therefore, because of noncontact excitation and imaging, this method may be useful for in vivo detection of ocular diseases such as keratoconus and evaluation of therapeutic interventions such as CXL.

3. Elastic waves trapped by a homogeneous anisotropic semicylinder
SciTech Connect
Nazarov, S A
2013-11-30
It is established that the problem of elastic oscillations of a homogeneous anisotropic semicylinder (console) with traction-free lateral surface (Neumann boundary condition) has no eigenvalues when the console is clamped at one end (Dirichlet boundary condition). If the end is free, under additional requirements of elastic and geometric symmetry, simple sufficient conditions are found for the existence of an eigenvalue embedded in the continuous spectrum and generating a trapped elastic wave, that is, one which decays at infinity at an exponential rate. The results are obtained by generalizing the methods developed for scalar problems, which however require substantial modification for the vector problem in elasticity theory. Examples are given and open questions are stated. Bibliography: 53 titles.

4. Converted-Wave Processing of a 3D-3C Reflection Seismic Survey of Soda Lake Geothermal Field
Louie, J. N.; Kent, T.; Echols, J.
2012-12-01
This 3D-3C seismic survey greatly improves the structural model of the Soda Lake, Nevada geothermal system. The picked top of a mudstone interval above reservoir levels reveals a detailed fault map. The geothermal reservoir is within a complex of nested grabens. Determining a "geothermal indicator" for the deeper reservoir in the seismic signal, and processing of the 3D converted-wave data, have been unsuccessful to date. Due to a high near-surface Vp/Vs ratio, the shear-wave energy is under-sampled with 220 ft receiver spacing and 550 ft (168 m) line spacing. The 2D converted-wave data that we can image show encouraging similarity to the deep structural features in the P-wave sections, but have little resolution of shallow structures. Higher-density receivers and a better shallow shear-wave model are needed in conjunction with this deep reflection study to effectively image the 3D converted waves.

5. Nonlinear dynamics of 3D beams of fast magnetosonic waves propagating in the ionospheric and magnetospheric plasma
Belashov, V. Yu.; Belashova, E. S.
2016-11-01
On the basis of the model of the three-dimensional (3D) generalized Kadomtsev-Petviashvili equation for the magnetic field h = B̃/B, the formation, stability, and dynamics of 3D soliton-like structures, such as the beams of fast magnetosonic (FMS) waves generated in ionospheric and magnetospheric plasma on the low-frequency branch of oscillations when β = 4πnT/B² ≪ 1 and β > 1, are studied. The study takes into account the highest dispersion correction, determined by the values of the plasma parameters and the angle θ = (B, k), which plays a key role in FMS beam propagation at those angles to the magnetic field that are close to π/2. The stability of multidimensional solutions is studied by an investigation of the boundedness of the Hamiltonian under its deformations, on the basis of solving the corresponding variational problem. The evolution and dynamics of the 3D FMS wave beam are studied by numerical integration of the equations with the use of specially developed methods. The results can be interpreted in terms of the self-focusing phenomenon, as the formation of a stationary beam and the scattering and self-focusing of the solitary beam of FMS waves. These cases were studied with a detailed investigation of all evolutionary stages of the 3D FMS wave beams in the ionospheric and magnetospheric plasma.

6. A time-space domain stereo finite difference method for 3D scalar wave propagation
Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie
2016-11-01
Time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients depend on the Courant numbers, leading to significant extra time costs for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for the 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information about the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that, to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of that of the 4th-order and 8th-order Lax-Wendroff correction (LWC) methods, the 4th-order staggered grid method (SG), and the 8th-order optimal finite difference method (OFD), respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. The efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).
7. Elastic wave propagation in finitely deformed layered materials
Galich, Pavel I.; Fang, Nicholas X.; Boyce, Mary C.; Rudykh, Stephan
2017-01-01
We analyze elastic wave propagation in highly deformable layered media with isotropic hyperelastic phases. Band gap structures are calculated for the periodic laminates undergoing large deformations. Compact explicit expressions for the phase and group velocities are derived for the long waves propagating in the finitely deformed composites. Elastic wave characteristics and band gaps are shown to be highly tunable by deformation. The influence of deformation on shear and pressure wave band gaps for materials with various compositions and constituent properties is studied, finding advantageous compositions for producing highly tunable complete band gaps in low-frequency ranges. The shear wave band gaps are influenced through the deformation-induced changes in effective material properties, whereas pressure wave band gaps are mostly influenced by deformation-induced geometry changes. Wide shear wave band gaps are found in laminates with small volume fractions of a soft phase embedded in a stiffer material; pressure wave band gaps of the low-frequency range appear in laminates with thin highly compressible layers embedded in a nearly incompressible phase. Thus, by constructing composites with a small amount of a highly compressible phase, wide complete band gaps at the low-frequency range can be achieved; furthermore, these band gaps are shown to be highly tunable by deformation.

8. Normal waves in elastic bars of rectangular cross section.
PubMed
Krushynska, Anastasiia A; Meleshko, Viatcheslav V
2011-03-01
This paper addresses a theoretical study of guided normal waves in elastic isotropic bars of rectangular cross-section by an analytical superposition method. Dispersion properties of propagating and evanescent modes for four families are analyzed in detail at various geometric and physical parameters of the bar. A comparison of the obtained results with the well-known properties of waves in infinite plates and circular cylinders is provided. The complicated structure of the dispersion spectra is explained. High-frequency limiting values for phase and group velocities of normal waves are established for the first time. Calculated data agree well with the available experimental results.

9. Frequency spectra of nonlinear elastic pulse-mode waves
SciTech Connect
Kadish, A.; TenCate, J.A.; Johnson, P.A.
1996-09-01
The frequency spectrum of simple waves is used to derive a closed-form analytical representation for the frequency spectrum of damped nonlinear pulses in elastic materials. The damping modification of simple wave theory provides an efficient numerical method for calculating propagating wave forms. The spectral representation, which is neither pulse-length nor amplitude limited, is used to obtain estimates for parameters of the nonlinear state relation for a sandstone sample from published experimental data, and the results are compared with those of other theories. The method should have broad application to many solids.

10. Propagation of elastic waves through textured polycrystals: application to ice.
PubMed
Maurel, Agnès; Lund, Fernando; Montagnat, Maurine
2015-05-08
The propagation of elastic waves in polycrystals is revisited, with an emphasis on configurations relevant to the study of ice. Randomly oriented hexagonal single crystals are considered with specific, non-uniform probability distributions for their major axis. Three typical textures or fabrics (i.e. preferred grain orientations) are studied in detail: one cluster fabric and two girdle fabrics, as found in ice recovered from deep ice cores. After computing the averaged elasticity tensor for the considered textures, wave propagation is studied using a wave equation with elastic constants c = 〈c〉 + δc that are equal to an average plus deviations, presumed small, from that average. This allows for the use of the Voigt average in the wave equation, and velocities are obtained by solving the appropriate Christoffel equation. The velocity for vertical propagation, as appropriate to interpret sonic logging measurements, is analysed in more detail. Our formulae are shown to be accurate at the 0.5% level, and they provide a rationale for previous empirical fits to wave propagation velocities with a quantitative agreement at the 0.07-0.7% level. We conclude that, within the formalism presented here, it is appropriate to use, with confidence, velocity measurements to characterize ice fabrics.
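The Christoffel step used above reduces, for a fixed propagation direction n, to a 3×3 eigenvalue problem Γu = ρv²u with Γ_ik = C_ijkl n_j n_l. A compact sketch for vertical propagation in single-crystal ice (the hexagonal stiffness constants below are approximate literature values; the texture-averaging step of the paper is omitted here):

```python
import numpy as np

# Phase velocities from the Christoffel equation: for direction n,
# Gamma_ik = C_ijkl * n_j * n_l, and rho*v^2 are its eigenvalues.
# Single-crystal ice (hexagonal), approximate literature values in GPa:
C11, C33, C44, C12, C13 = 13.93, 15.01, 3.01, 7.08, 5.77
rho = 917.0                                    # kg/m^3

# Build the full stiffness tensor from the Voigt 6x6 matrix.
C66 = (C11 - C12) / 2
Cv = np.array([[C11, C12, C13, 0,   0,   0],
               [C12, C11, C13, 0,   0,   0],
               [C13, C13, C33, 0,   0,   0],
               [0,   0,   0,   C44, 0,   0],
               [0,   0,   0,   0,   C44, 0],
               [0,   0,   0,   0,   0,   C66]]) * 1e9
voigt = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
         (1, 2): 3, (2, 1): 3, (0, 2): 4,
         (2, 0): 4, (0, 1): 5, (1, 0): 5}
C = np.zeros((3, 3, 3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            for l in range(3):
                C[i, j, k, l] = Cv[voigt[i, j], voigt[k, l]]

n = np.array([0.0, 0.0, 1.0])                  # vertical propagation (c-axis)
gamma = np.einsum('ijkl,j,l->ik', C, n, n)
v = np.sort(np.sqrt(np.linalg.eigvalsh(gamma) / rho))
print("qS1, qS2, qP velocities (m/s):", np.round(v, 1))
```

For propagation along the c-axis, Γ is diagonal, giving a quasi-P speed of about 4.0 km/s and degenerate shear speeds near 1.8 km/s — the right ballpark for sonic logs in ice.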
11. Propagation of elastic waves through textured polycrystals: application to ice
PubMed Central
Maurel, Agnès; Lund, Fernando; Montagnat, Maurine
2015-01-01
The propagation of elastic waves in polycrystals is revisited, with an emphasis on configurations relevant to the study of ice. Randomly oriented hexagonal single crystals are considered with specific, non-uniform probability distributions for their major axis. Three typical textures or fabrics (i.e. preferred grain orientations) are studied in detail: one cluster fabric and two girdle fabrics, as found in ice recovered from deep ice cores. After computing the averaged elasticity tensor for the considered textures, wave propagation is studied using a wave equation with elastic constants c = 〈c〉 + δc that are equal to an average plus deviations, presumed small, from that average. This allows for the use of the Voigt average in the wave equation, and velocities are obtained by solving the appropriate Christoffel equation. The velocity for vertical propagation, as appropriate to interpret sonic logging measurements, is analysed in more detail. Our formulae are shown to be accurate at the 0.5% level, and they provide a rationale for previous empirical fits to wave propagation velocities with a quantitative agreement at the 0.07–0.7% level. We conclude that, within the formalism presented here, it is appropriate to use, with confidence, velocity measurements to characterize ice fabrics. PMID:27547099

12. Torsional wave propagation in multiwalled carbon nanotubes using nonlocal elasticity
Arda, Mustafa; Aydogdu, Metin
2016-03-01
Torsional wave propagation in multiwalled carbon nanotubes is studied in the present work. The governing equation of motion of the multiwalled carbon nanotube is obtained using Eringen's nonlocal elasticity theory. The effect of the van der Waals interaction coefficient is considered between inner and outer nanotubes. Dispersion relations are obtained and discussed in detail. The effect of the nonlocal parameter and of the van der Waals interaction on the torsional wave propagation behavior of multiwalled carbon nanotubes is investigated. It is found that the torsional van der Waals interaction between adjacent tubes can change the rotational direction of the multiwalled carbon nanotube to in-phase or anti-phase. The group and escape velocities of the waves converge to a limit value in the nonlocal elasticity approach.
Elastic wave velocities of Apollo 14, 15, and 16 rocks NASA Technical Reports Server (NTRS) Mizutani, H.; Newbigging, D. F. 1973-01-01 Elastic wave velocities of two Apollo 14 rocks, 14053 and 14321, three Apollo 15 rocks, 15058, 15415, and 15545, and one Apollo 16 rock, 60315, have been determined at pressures up to 10 kb. For sample 14321, the variation of the compressional wave velocities with temperature has been measured over the temperature range from 27 to 200 C. The overall elastic properties of these samples, except sample 15415, are very similar to those of Apollo 11, 12, and 14 rocks and are concordant with Toksoz et al.'s (1972) interpretation that the lunar upper crust is of basaltic composition. The temperature derivative of the P wave velocity for sample 14321 is half an order to one order of magnitude larger than that for single-crystal minerals. This suggests that the seismic velocity in the lunar crust may be affected significantly by the temperature distribution. 14. Wide-angle elastic wave one-way propagation in heterogeneous media and an elastic wave complex-screen method Wu, Ru-Shan 1994-01-01 In this paper a system of equations for wide-angle one-way elastic wave propagation in arbitrarily heterogeneous media is formulated in both the space and wavenumber domains using elastic Rayleigh integrals and local elastic Born scattering theory. The wavenumber domain formulation leads to compact solutions to one-way propagation and scattering problems. It is shown that wide-angle scattering in heterogeneous elastic media cannot be formulated as passage through regular phase-screens, since the interaction between the incident wavefield and the heterogeneities is not local in both the space domain and the wavenumber domain. Our more generally valid formulation is called the 'thin-slab' formulation. After applying the small-angle approximation, the thin-slab effect degenerates to that of an elastic complex-screen (or generalized phase-screen). For the complex-screen method the cross-coupling term is neglected because it is a higher-order small quantity for small-angle scattering. Relative to prior derivations of the vector phase-screen method, our method can correctly treat the conversion between P and S waves and the cross-coupling between differently polarized S waves. A comparison with solutions from three-dimensional finite difference and exact solutions using eigenfunction expansion is made for two special cases. One is for a solid sphere with only P velocity perturbation; the other is with only S velocity perturbation. The elastic complex-screen method generally agrees well with the three-dimensional finite difference method and the exact solutions. In the limiting case of scalar waves, the derivation in this paper leads to a more generally valid new method, namely, a scalar thin-slab method. When making the small-angle approximation to the interaction term while keeping the propagation term unchanged, the thin-slab method approaches the currently available scalar wide-angle phase-screen method. 15. Moored Observations of Internal Waves in Luzon Strait: 3-D Structure, Dissipation, and Evolution DTIC Science & Technology 2013-09-30 variability, it may be due to waves propagating into Luzon Strait from remote sources. Lee Waves and Dissipation on Supercritical Slopes A profiling...variability of the internal wave field in the upper 1000 m of the water column. The phase progression of internal waves as they propagate away from their 16.
Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems Velioğlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey 2016-04-01 Tsunamis are huge waves with long wave periods and wavelengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA. It is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by the Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in the Marmara Region (JICA SATREPS - MarDiM Project), the 603839 ASTARTE Project of the EU, the UDAP-C-12-14 project of AFAD Turkey, the 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, and RAPSODI (CONCERT_Dis-021) of CONCERT. 17. Relation Between the 3D-Geometry of the Coronal Wave and Associated CME During the 26 April 2008 Event Temmer, M.; Veronig, A. M.; Gopalswamy, N.; Yashiro, S. We study the kinematical characteristics and 3D geometry of a large-scale coronal wave that occurred in association with the 26 April 2008 flare-CME event. The wave was observed with the EUVI instruments aboard both STEREO spacecraft (STEREO-A and STEREO-B) with a mean speed of ~240 km/s. The wave is more pronounced in the eastern propagation direction, and is thus better observable in STEREO-B images. From STEREO-B observations we derive two separate initiation centers for the wave, and their locations fit with the coronal dimming regions. Assuming a simple geometry of the wave we reconstruct its 3D nature from combined STEREO-A and STEREO-B observations. We find that the wave structure is asymmetric with an inclination toward the East. The associated CME has a deprojected speed of ~750±50 km/s, and it shows a non-radial outward motion toward the East with respect to the underlying source region location. Applying the forward fitting model developed by Thernisien, Howard, and Vourlidas (Astrophys. J. 652, 763, 2006), we derive the CME flux rope position on the solar surface to be close to the dimming regions. We conclude that the expanding flanks of the CME most likely drive and shape the coronal wave.
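As a minimal illustration of the depth-averaged equation class that NAMI DANCE solves in the tsunami benchmark abstract above, the following sketch advances the linearized 1D shallow-water equations over a flat bottom on a staggered grid. It is not the NAMI DANCE scheme itself (which is 2D, nonlinear, and handles real bathymetry), and all numbers are illustrative.

```python
import numpy as np

# Hedged sketch: staggered-grid forward-backward time stepping of the
# *linearized* 1D shallow-water equations over a flat bottom.

g, h = 9.81, 50.0            # gravity (m/s^2), still-water depth (m)
L, nx = 100e3, 1000          # domain length (m), number of eta points
dx = L / nx
c = np.sqrt(g * h)           # long-wave speed, ~22 m/s for h = 50 m
dt = 0.5 * dx / c            # CFL-limited time step

x = np.arange(nx) * dx
eta = np.exp(-((x - L / 2) / 5e3) ** 2)   # initial Gaussian hump (m)
u = np.zeros(nx + 1)                      # velocities on cell faces (rigid ends)

for _ in range(500):
    # momentum: du/dt = -g d(eta)/dx  (interior faces only)
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # continuity: d(eta)/dt = -h du/dx
    eta -= dt * h * (u[1:] - u[:-1]) / dx

print(f"long-wave speed sqrt(g h) = {c:.1f} m/s; max eta = {eta.max():.3f} m")
```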
18. Relation Between the 3D-Geometry of the Coronal Wave and Associated CME During the 26 April 2008 Event Temmer, M.; Veronig, A. M.; Gopalswamy, N.; Yashiro, S. 2011-11-01 We study the kinematical characteristics and 3D geometry of a large-scale coronal wave that occurred in association with the 26 April 2008 flare-CME event. The wave was observed with the EUVI instruments aboard both STEREO spacecraft (STEREO-A and STEREO-B) with a mean speed of ~240 km/s. The wave is more pronounced in the eastern propagation direction, and is thus better observable in STEREO-B images. From STEREO-B observations we derive two separate initiation centers for the wave, and their locations fit with the coronal dimming regions. Assuming a simple geometry of the wave we reconstruct its 3D nature from combined STEREO-A and STEREO-B observations. We find that the wave structure is asymmetric with an inclination toward the East. The associated CME has a deprojected speed of ~750±50 km/s, and it shows a non-radial outward motion toward the East with respect to the underlying source region location. Applying the forward fitting model developed by Thernisien, Howard, and Vourlidas (Astrophys. J. 652, 763, 2006), we derive the CME flux rope position on the solar surface to be close to the dimming regions. We conclude that the expanding flanks of the CME most likely drive and shape the coronal wave. 19. Relation Between the 3D-Geometry of the Coronal Wave and Associated CME During the 26 April 2008 Event NASA Technical Reports Server (NTRS) Temmer, M.; Veronig, A. M.; Gopalswamy, N.; Yashiro, S. 2011-01-01 We study the kinematical characteristics and 3D geometry of a large-scale coronal wave that occurred in association with the 26 April 2008 flare-CME event. The wave was observed with the EUVI instruments aboard both STEREO spacecraft (STEREO-A and STEREO-B) with a mean speed of approx 240 km/s. The wave is more pronounced in the eastern propagation direction, and is thus better observable in STEREO-B images. From STEREO-B observations we derive two separate initiation centers for the wave, and their locations fit with the coronal dimming regions. Assuming a simple geometry of the wave we reconstruct its 3D nature from combined STEREO-A and STEREO-B observations. We find that the wave structure is asymmetric with an inclination toward the East. The associated CME has a deprojected speed of approx 750 +/- 50 km/s, and it shows a non-radial outward motion toward the East with respect to the underlying source region location. Applying the forward fitting model developed by Thernisien, Howard, and Vourlidas, we derive the CME flux rope position on the solar surface to be close to the dimming regions. We conclude that the expanding flanks of the CME most likely drive and shape the coronal wave. 20. Stress waves in isotropic elastic plate excited by circular transducer NASA Technical Reports Server (NTRS) Williams, J. H., Jr.; Lee, S. S.; Karagulle, H. 1986-01-01 Steady-state harmonic stress waves in an isotropic elastic plate excited on one face by a circular transducer are analyzed theoretically. The transmitting transducer transforms an electrical voltage into a uniform normal stress at the top of the plate. To solve the boundary value problem, the radiation into a half-space is considered. The receiving transducer produces an electrical voltage proportional to the average spatially integrated normal stress over its face due to an incident wave.
A numerical procedure is given to evaluate the frequency response at a receiving point due to a multiply reflected wave in the near field. Its stability and convergence are discussed. Parameterization plots which determine the particular wave whose frequency response has maximum magnitude compared with other multiply reflected waves are given for a range of values of dimensionless parameters. The effects of changes in the values of the parameters are discussed. 1. Acoustic and elastic waves in metamaterials for underwater applications Titovich, Alexey S. Elastic effects in acoustic metamaterials are investigated. Water-based periodic arrays of elastic scatterers, sonic crystals, suffer from low transmission due to the impedance and index mismatch of typical engineering materials with water. A new type of acoustic metamaterial element is proposed that can be tuned to match the acoustic properties of water in the quasi-static regime. The element comprises a hollow elastic cylindrical shell fitted with an optimized internal substructure consisting of a central mass supported by an axisymmetric distribution of elastic stiffeners, which dictate the shell's effective bulk modulus and density. The derived closed-form scattering solution for this system shows that the subsonic flexural waves excited in the shell by the attachment of stiffeners are suppressed by including a sufficiently large number of such stiffeners. As an example of refraction-based wave steering, a cylindrical-to-plane wave lens is designed by varying the bulk modulus in the array according to the conformal mapping of a unit circle to a square. Elastic shells provide rich scattering properties, mainly due to their ability to support highly dispersive flexural waves. Analysis of flexural-borne waves on a pair of shells yields an analytical expression for the width of a flexural resonance, which is then used with the theory of multiple scattering to accurately predict the splitting of the resonance frequency. This analysis leads to the discovery of the acoustic Poisson-like effect in a periodic wave medium. This effect redirects an incident acoustic wave by 90° in an otherwise acoustically transparent sonic crystal. An unresponsive "deaf" antisymmetric mode locked to band gap boundaries is unlocked by matching Bragg scattering with a quadrupole flexural resonance of the shell. The dynamic effect causes normal unidirectional wave motion to strongly couple to perpendicular motion, analogous to the quasi-static Poisson effect in solids. The Poisson 2. Elastic wave from fast heavy ion irradiation on solids Kambara, T.; Kageyama, K.; Kanai, Y.; Kojima, T. M.; Nanai, Y.; Yoneda, A.; Yamazaki, Y. 2002-06-01 To study the time-dependent mechanical effects of fast heavy ion irradiations, we have irradiated various solids by a short-bunch beam of 95 MeV/u Ar ions and observed the elastic waves generated in the bulk. The irradiated targets were square-shaped plates of polycrystals of metals (Al and Cu), invar alloy, ceramic (Al2O3), fused silica (SiO2) and single crystals of KCl and LiF with a thickness of 10 mm. The beam was incident perpendicular to the surface and all ions were stopped in the target. Two piezoelectric ultrasonic sensors were attached to the surface of the target and detected the elastic waves. The elastic waveforms as well as the time structure and intensity of the beam bunch were recorded for each shot of a beam bunch.
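A hedged sketch of the quasi-static matching idea in the Titovich abstract above: the element becomes acoustically transparent at long wavelengths when its effective bulk modulus and density both equal those of water. The stiffener mechanics that actually set the effective modulus are not modelled here; K_eff and the toy masses below are assumptions of the sketch.

```python
import numpy as np

# Hedged sketch of the two quasi-static matching conditions for a water-matched
# element. K_eff is treated as a design input (an assumption of this sketch);
# in the paper it is dictated by the internal stiffener substructure.

rho_w, K_w = 1000.0, 2.25e9          # water density (kg/m^3) and bulk modulus (Pa)

def mismatch(m_total, R, K_eff):
    """Relative density and sound-speed mismatch of a cylindrical element of
    outer radius R (per unit length) with total mass m_total and modulus K_eff."""
    rho_eff = m_total / (np.pi * R**2)          # mass over displaced area
    c_eff = np.sqrt(K_eff / rho_eff)            # effective sound speed
    c_w = np.sqrt(K_w / rho_w)
    return rho_eff / rho_w - 1.0, c_eff / c_w - 1.0

# toy numbers: an under-massed shell would need added internal mass to match
drho, dc = mismatch(m_total=3.14, R=0.05, K_eff=2.25e9)
print(f"density mismatch {drho:+.1%}, sound-speed mismatch {dc:+.1%}")
```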
The sensor placed opposite to the beam spot recorded a clear waveform of the longitudinal wave across the material, except for the invar and fused silica targets. From its propagation time, along with the sound velocity and the thickness of the target, the depth of the wave source was estimated. The result was compared with ion ranges calculated for these materials by the TRIM code. 3. Wave propagation in elastic medium with heterogeneous quadratic nonlinearity SciTech Connect Tang Guangxin; Jacobs, Laurence J.; Qu Jianmin 2011-06-23 This paper studies one-dimensional wave propagation in an elastic medium with spatially non-uniform quadratic nonlinearity. Two problems are solved analytically. One is for a time-harmonic wave propagating in a half-space where the displacement is prescribed on the surface of the half-space. It is found that spatial non-uniformity of the material nonlinearity causes backscattering of the second order harmonic, which, when combined with the forward propagating waves, generates a standing wave in steady-state wave motion. The second problem solved is the reflection from and transmission through a layer of finite thickness embedded in an otherwise linearly elastic medium of infinite extent, where it is assumed that the layer has a spatially non-uniform quadratic nonlinearity. The results show that the transmission coefficient for the second order harmonic is proportional to the spatial average of the nonlinearity across the thickness of the layer, independent of the spatial distribution of the nonlinearity. On the other hand, the coefficient of reflection is proportional to a weighted average of the nonlinearity across the layer thickness. The weight function in this weighted average is related to the propagating phase, thus making the coefficient of reflection dependent on the spatial distribution of the nonlinearity. Finally, the paper concludes with some discussions on how to use the reflected and transmitted second harmonic waves to evaluate the variance and autocorrelation length of the nonlinear parameter β when the nonlinearity distribution in the layer is a stochastic process. 4. Lamellipodin promotes invasive 3D cancer cell migration via regulated interactions with Ena/VASP and SCAR/WAVE. PubMed Carmona, G; Perera, U; Gillett, C; Naba, A; Law, A-L; Sharma, V P; Wang, J; Wyckoff, J; Balsamo, M; Mosis, F; De Piano, M; Monypenny, J; Woodman, N; McConnell, R E; Mouneimne, G; Van Hemelrijck, M; Cao, Y; Condeelis, J; Hynes, R O; Gertler, F B; Krause, M 2016-09-29 Cancer invasion is a hallmark of metastasis. The mesenchymal mode of cancer cell invasion is mediated by elongated membrane protrusions driven by the assembly of branched F-actin networks. How deregulation of actin regulators promotes cancer cell invasion is still enigmatic. We report that increased expression and membrane localization of the actin regulator Lamellipodin correlate with reduced metastasis-free survival and poor prognosis in breast cancer patients. In agreement, we find that Lamellipodin depletion reduced lung metastasis in an orthotopic mouse breast cancer model. Invasive 3D cancer cell migration as well as invadopodia formation and matrix degradation was impaired upon Lamellipodin depletion. Mechanistically, we show that Lamellipodin promotes invasive 3D cancer cell migration via both actin-elongating Ena/VASP proteins and the Scar/WAVE complex, which stimulates actin branching.
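The averaging result stated in the Tang et al. abstract above lends itself to a short numerical check. In the sketch below, the transmitted second harmonic is taken proportional to the plain spatial average of the nonlinearity beta(z), and the reflected one to a phase-weighted average; the exp(2ikz) weight is a plausible stand-in for the paper's weight function, not a quote of it, and the profile is a toy.

```python
import numpy as np

# Hedged sketch of the two averages controlling the second harmonic of a layer
# of thickness d with spatially varying quadratic nonlinearity beta(z).

d = 0.01                          # layer thickness (m)
k = 2 * np.pi / 0.002             # fundamental wavenumber (2 mm wavelength)
z = np.linspace(0.0, d, 2001)
beta = 5.0 + 3.0 * np.sin(40 * np.pi * z / d)   # toy nonlinearity profile

beta_T = beta.mean()                            # plain spatial average
beta_R = (beta * np.exp(2j * k * z)).mean()     # assumed phase-weighted average

print(f"transmission-controlling average: {beta_T:.3f}")
print(f"reflection-controlling average:   |{abs(beta_R):.3f}|")
```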
In contrast, Lamellipodin interaction with Scar/WAVE, but not with Ena/VASP, is required for random 2D cell migration. We identified a phosphorylation-dependent mechanism that regulates selective recruitment of these effectors to Lamellipodin: Abl-mediated Lamellipodin phosphorylation promotes its association with both Scar/WAVE and Ena/VASP, whereas Src-dependent phosphorylation enhances binding to Scar/WAVE but not to Ena/VASP. Through these selective, regulated interactions, Lamellipodin mediates directional sensing of epidermal growth factor (EGF) gradients and invasive 3D migration of breast cancer cells. Our findings imply that increased Lamellipodin levels enhance Ena/VASP and Scar/WAVE activities at the plasma membrane to promote 3D invasion and metastasis. 5. Scattering of time-harmonic elastic waves by an elastic inclusion with quadratic nonlinearity. PubMed Tang, Guangxin; Jacobs, Laurence J; Qu, Jianmin 2012-04-01 This paper considers the scattering of a plane, time-harmonic wave by an inclusion with heterogeneous nonlinear elastic properties embedded in an otherwise homogeneous linear elastic solid. When the inclusion and the surrounding matrix are both isotropic, the scattered second harmonic fields are obtained in terms of the Green's function of the surrounding medium. It is found that the second harmonic fields depend on two independent acoustic nonlinearity parameters related to the third order elastic constants. Solutions are also obtained when these two acoustic nonlinearity parameters are given as spatially random functions. An inverse procedure is developed to obtain the statistics of these two random functions from the measured forward and backscattered second harmonic fields. 6. Scattering resonance of elastic wave and low-frequency equivalent slow wave Meng, X.; Liu, H.; Hu, T.; Yang, L. 2015-12-01 When seismic waves travel through inhomogeneous layers, the transmitted wave occurs as a fast P-wave and a slow P-wave under certain conditions. The energy of slow P-waves is strongest in a certain frequency band but rather weak at both higher and lower frequencies, a phenomenon called scattering resonance. In practical seismic exploration, the slow P-wave occurs at frequencies below 10 Hz, which cannot be explained by Biot's theory, since that theory predicts the existence of the slow P-wave in the ultrasonic band in porous media. A slow P-wave equation has been derived, but it is only suited to explaining slow P-waves in the ultrasonic band. Experimental observations show that slow P-waves also exist in nonporous media containing numerous low-velocity interbeds. At vertical incidence the elastic wave simplifies to a compressional wave, and the generation of slow waves is independent of the shear wave. For the case of flat interbeds and gas bubbles, Liu (2006) studied the transmission of acoustic waves and found that the slow waves below the 10 Hz band can be explained. For a general anisotropic elastic medium, theoretical research on the generation of slow waves is insufficient. Aiming at this problem, this paper presents an exponential mapping method based on the transmitted wave (Magnus 1954), which can successfully explain the generation of slow-wave transmission in that case. Using the prediction operator (Claerbout 1985) to represent the transmitted wave, a first-order partial differential equation can be derived.
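A minimal sketch of the propagator/exponential-map idea invoked in the Meng et al. abstract above (Magnus-type solutions, Frazer-Fryer propagators), reduced to the 1D time-harmonic acoustic case: within each homogeneous layer the state q = (pressure, particle velocity) obeys dq/dz = A q, so the layer propagator is expm(A h) and a stack is handled by matrix products. The exp(-iwt) sign convention and the material numbers are assumptions of the sketch; the elastic case replaces the 2x2 system with a larger one.

```python
import numpy as np
from scipy.linalg import expm

# Hedged sketch: exponential-map propagator for a time-harmonic 1D acoustic
# wave crossing a stack of homogeneous layers, q(bottom) = prod expm(A_j h_j) q(top).

def layer_matrix(omega, rho, K, h):
    """Propagator expm(A h) of one layer with density rho and bulk modulus K."""
    A = np.array([[0.0, 1j * omega * rho],     # dp/dz =  i w rho v
                  [1j * omega / K, 0.0]])      # dv/dz =  i w p / K
    return expm(A * h)

omega = 2 * np.pi * 5.0                        # 5 Hz, the low band discussed above
layers = [(2000.0, 8.0e9, 30.0),               # (rho, K, thickness): toy interbeds
          (1200.0, 0.5e9, 5.0),
          (2000.0, 8.0e9, 30.0)]

M = np.eye(2, dtype=complex)
for rho, K, h in layers:
    M = layer_matrix(omega, rho, K, h) @ M     # accumulate through the stack

print("total propagator matrix:\n", M)
```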
Using expansions in the frequency domain and the wavenumber domain, we find that the solutions have different expressions in the cases of weak scattering and strong scattering. In addition, the method combining the prediction operator and the exponential map needs to be extended to the elastic wave equation. Using the equation (Frazer and Fryer 1984, 1987), we derive the exponential mapping solution for the prediction operator of the general elastic medium 7. Three-dimensional two-fluid investigation of 3D-localized magnetic reconnection and its relation to whistler waves Yoon, Young Dae; Bellan, Paul M. 2016-10-01 A full three-dimensional computer code was developed in order to simulate a 3D-localized magnetic reconnection. We assume an incompressible two-fluid regime where the ions are stationary, and electron inertia and Hall effects are present. We solve a single dimensionless differential equation for perturbed magnetic fields with arbitrary background fields. The code has successfully reproduced both experimental and analytic solutions to resonance and Gendrin mode whistler waves in a uniform background field. The code was then modified to model 3D-localized magnetic reconnection as a 3D-localized perturbation on a hyperbolic-tangent background field. Three-dimensional properties that are asymmetric in the out-of-plane direction have been observed. These properties pertained to magnetic field lines, electron currents and their convection. Helicity and energy have also been examined, as well as the addition of a guide field. 8. Spherical Wave Propagation in a Nonlinear Elastic Medium SciTech Connect Korneev, Valeri A. 2009-07-01 Nonlinear propagation of spherical waves generated by a point-pressure source is considered for the cases of monochromatic and impulse primary waveforms. The nonlinear five-constant elastic theory advanced by Murnaghan is used, where general equations of motion are put in the form of vector operators, which are independent of the coordinate system choice. The ratio of the nonlinear field component to the primary wave in the far field is proportional to ln(r), where r is the propagation distance. Near-field components of the primary field do not contribute to the far field of the nonlinear component. 9. Multilevel fast multipole algorithm for elastic wave scattering by large three-dimensional objects Tong, Mei Song; Chew, Weng Cho 2009-02-01 A multilevel fast multipole algorithm (MLFMA) is developed for solving elastic wave scattering by large three-dimensional (3D) objects. Since the governing set of boundary integral equations (BIE) for the problem includes both compressional and shear waves with different wave numbers in one medium, the double-tree structure for each medium is used in the MLFMA implementation. When both the object and surrounding media are elastic, four wave numbers in total and thus four FMA trees are involved. We employ the Nyström method to discretize the BIE and generate the corresponding matrix equation. The MLFMA is used to accelerate the solution process by reducing the complexity of the matrix-vector product from O(N²) to O(N log N) in iterative solvers. The multiple-tree structure differs from the single-tree frame in electromagnetics (EM) and acoustics, and greatly complicates the MLFMA implementation due to the different definitions for well-separated groups in different FMA trees. Our Nyström method has made use of the cancellation of leading terms in the series expansion of integral kernels to handle hypersingularities in near terms.
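The complexity claim in the MLFMA abstract above concerns the matrix-vector product inside an iterative solver, and the plumbing is easy to show. In the sketch below, GMRES only ever calls a matvec callback; here the callback is a dense O(N²) product standing in for the O(N log N) hierarchical evaluation, and the toy matrix is illustrative, not the elastic-wave BIE.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Hedged sketch: a Krylov solver needs only the action v -> Z v. An MLFMA
# implementation would supply this product hierarchically; the dense product
# below is a stand-in so the example runs.

N = 500
rng = np.random.default_rng(0)
Z = np.eye(N) + 0.01 * rng.standard_normal((N, N))   # well-conditioned toy system
b = rng.standard_normal(N)

def matvec(v):
    # replace this dense O(N^2) product with the fast multipole evaluation
    return Z @ v

op = LinearOperator((N, N), matvec=matvec)
x, info = gmres(op, b, atol=1e-8)
print("converged" if info == 0 else f"info={info}",
      "| residual =", np.linalg.norm(Z @ x - b))
```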
This feature is kept in the MLFMA by seeking the common near patches in different FMA trees and treating the involved near terms synergistically. Due to the high cost of the multiple-tree structure, our numerical examples show that we can only solve elastic wave scattering problems with 0.3-0.4 million unknowns on our Dell Precision 690 workstation using one core. 10. Measurement of elastic wave dispersion on human femur tissue Strantza, M.; Louis, O.; Polyzos, D.; Boulpaep, F.; Van Hemelrijck, D.; Aggelis, D. G. 2014-03-01 Cortical bone is one of the most complex heterogeneous media, exhibiting strong wave dispersion. In such media, when a burst of energy goes into the formation of elastic waves, the different modes tend to separate according to the velocities of the frequency components, as usually occurs in waveguides. In this study human femur specimens were subjected to elastic wave measurements. The main objective of the study is to use broadband acoustic emission sensors to measure parameters like wave velocity dispersion and attenuation. Additionally, waveform parameters like the duration, rise time and average frequency are also examined relative to the propagation distance, as a preparation for acoustic emission monitoring during fracture. To do so, four sensors were placed at adjacent positions on the surface of the cortical bone in order to record the transient response after pencil lead break excitation. The results are compared to similar measurements on a bulk metal piece which does not exhibit heterogeneity at the scale of the propagating wavelengths. It is shown that the microstructure of the tissue imposes a dispersive behavior for frequencies below 1 MHz and care should be taken in the interpretation of the signals. 11. Application of RMS for damage detection by guided elastic waves Radzieński, M.; Doliński, Ł.; Krawczuk, M.; Żak, A.; Ostachowicz, W. 2011-07-01 This paper presents certain results of an experimental study related to damage detection in structural elements based on deviations in guided elastic wave propagation patterns. In order to excite guided elastic waves within the tested specimens, piezoelectric transducers have been applied. As excitation signals, 5 sine cycles modulated by a Hanning window have been used. Propagation of guided elastic waves has been monitored by a scanning Doppler laser vibrometer. The time signals recorded during measurement have been utilised to calculate the values of RMS. The RMS values in damaged areas have turned out to differ significantly from the values calculated for healthy ones. In this way it has become possible to pinpoint precisely the locations of damage over the entire measured surface. All experimental investigations have been carried out for thin aluminium or composite plates. Damage has been simulated by a small additional mass attached on the plate surface or by a narrow notch cut. It has been shown that the proposed method allows one to localise damage of various shapes and sizes within structural elements over the whole area under investigation. 12. 3-D frequency-domain seismic wave modelling in heterogeneous, anisotropic media using a Gaussian Quadrature Grid (GQG) approach Greenhalgh, Stewart; Zhou, Bing; Maurer, Hansruedi 2010-05-01 We have developed a modified version of the spectral element method (SEM), called the Gaussian Quadrature Grid (GQG) approach, for frequency-domain 3D seismic modelling in arbitrarily heterogeneous, anisotropic media.
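The RMS damage-imaging step of the Radzieński et al. abstract above reduces to a few lines: one RMS value per scanned point, with damage flagged where the value is anomalous. The synthetic wavefield below is a stand-in for the laser-vibrometer records.

```python
import numpy as np

# Hedged sketch of RMS damage mapping: given time histories on a grid of scan
# points, damage shows up as a local anomaly in the per-point RMS. The data
# here are synthetic and illustrative only.

nt, ny, nx = 2048, 60, 80
rng = np.random.default_rng(1)
signals = rng.standard_normal((nt, ny, nx))      # stand-in guided-wave records
signals[:, 28:32, 38:42] *= 3.0                  # stronger oscillation at a "defect"

rms = np.sqrt(np.mean(signals**2, axis=0))       # one RMS value per scan point

# flag points whose RMS deviates strongly from the plate-wide statistics
threshold = rms.mean() + 3 * rms.std()
damaged = np.argwhere(rms > threshold)
print(f"{len(damaged)} scan points flagged, around grid index "
      f"{damaged.mean(axis=0) if len(damaged) else 'n/a'}")
```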
The model may incorporate an arbitrary free-surface topography and irregular subsurface interfaces. Unlike the SEM, it does not require a powerful mesh generator such as Delaunay triangulation or TetGen. Rather, the GQG approach replaces the element mesh with Gaussian quadrature abscissae to directly sample the physical properties of the model parameters and compute the weighted residual or variational integral. This renders the model discretisation simple and easily matched to the model topography, and gives direct control of the model parameterisation for subsequent inversion. In addition, it offers high accuracy in numerical modelling provided that an appropriate density of the Gaussian quadrature abscissae is employed. The second innovation of the GQG is the incorporation of a new implementation of perfectly matched layers to suppress artificial reflections from the domain margins. We employ PML model parameters (specified complex-valued density and elastic moduli) rather than explicitly solving the governing wave equation with a complex co-ordinate system as in conventional approaches. Such an implementation is simple, general, effective and easily extendable to any class of anisotropy and other numerical modelling methods. The accuracy of the GQG approach is controlled by the number of Gaussian quadrature points per minimum wavelength, the so-called sampling density. The optimal sampling density should be the one which enables high definition of geological characteristics and high precision of the variational integral evaluation and spatial differentiation. Our experiments show that satisfactory results can be obtained using sampling densities of 5 points per minimum wavelength. Efficiency of the GQG approach mainly depends on the linear 13. Prediction of Tsunami Waves and Runup Generated by 3D Granular Landslides Mohammed, F.; Fritz, H. M. 2008-12-01 Subaerial and submarine landslides can trigger tsunamis with locally high amplitudes and runup, which can cause devastating effects in the near-field region. The 50th anniversary of the 1958 Lituya Bay landslide-generated mega-tsunami recalls the largest tsunami runup, 524 m, in recorded history. In contrast to earthquake-generated tsunamis, landslide-generated tsunami sources are not confined to active tectonic regions and therefore are of particular importance for the Atlantic Ocean. Landslide-generated tsunamis were studied in the three-dimensional NEES tsunami wave basin TWB at OSU based on the generalized Froude similarity. A novel pneumatic landslide generator was deployed to control the landslide geometry and kinematics. Granular materials were used to model deformable landslides. Measurement techniques such as particle image velocimetry (PIV), multiple above- and underwater video cameras, multiple acoustic transducer arrays (MTA), as well as resistance wave and runup gauges were applied. The wave generation was characterized by an extremely unsteady three-phase flow consisting of the slide granulate, water and air entrained into the flow. The underwater cameras and the MTA provide data on the landslide deformation as it impacts the water surface, penetrates the water and finally deposits on the bottom of the basin. The influence of the landslide volume, shape and impact speed on the generated tsunami wave characteristics was extensively studied. The experimental data provide prediction models for the generated tsunami wave characteristics based on the initial landslide characteristics and the final slide deposits.
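The sampling-density rule reported in the GQG abstract above translates directly into a grid-spacing bound, sketched below with illustrative numbers: with G points per minimum wavelength, node spacing must satisfy h <= v_min / (G f_max).

```python
# Hedged sketch of the sampling-density arithmetic; all numbers illustrative.

v_min = 1500.0      # slowest wavespeed in the model (m/s)
f_max = 25.0        # highest frequency to be modelled (Hz)
G = 5               # points per minimum wavelength (the paper's finding)

lam_min = v_min / f_max
h = lam_min / G
print(f"minimum wavelength {lam_min:.1f} m -> node spacing <= {h:.1f} m")
print(f"nodes needed per km of model: {1000.0 / h:.0f}")
```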
PIV provided instantaneous surface velocity vector fields, which gave insight into the kinematics of the landslide and wave generation process. At high impact velocities, flow separation occurred on the slide shoulder, resulting in a hydrodynamic impact crater. The recorded wave profiles yielded information on the wave propagation and 14. 3D finite element modelling of guided wave scattering at delaminations in composites Murat, Bibi Intan Suraya; Fromme, Paul 2016-02-01 Carbon fiber laminate composites are increasingly used for aerospace structures as they offer a number of advantages, including a good strength-to-weight ratio. However, impact during the operation and servicing of the aircraft can lead to barely visible and difficult-to-detect damage. Depending on the severity of the impact, delaminations can occur, reducing the load-carrying capacity of the structure. Efficient nondestructive testing of composite panels can be achieved using guided ultrasonic waves propagating along the structure. The guided wave (A0 Lamb wave mode) scattering at delaminations was modeled using full three-dimensional Finite Element (FE) simulations. The influence of the delamination size was systematically investigated in a parameter study. A significant influence of the delamination width on the guided wave scattering was found, especially on the angular dependency of the scattered guided wave amplitude. The sensitivity of guided ultrasonic waves for the detection of delamination damage in composite panels is discussed. 15. Using FUN3D for Aeroelastic, Sonic Boom, and AeroPropulsoServoElastic (APSE) Analyses of a Supersonic Configuration NASA Technical Reports Server (NTRS) Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Connolly, Joseph; Kopasakis, George 2016-01-01 An overview of recent applications of the FUN3D CFD code to computational aeroelastic, sonic boom, and aeropropulsoservoelasticity (APSE) analyses of a low-boom supersonic configuration is presented. The overview includes details of the computational models developed, including multiple unstructured CFD grids suitable for aeroelastic and sonic boom analyses. In addition, aeroelastic Reduced-Order Models (ROMs) are generated and used to rapidly compute the aeroelastic response and flutter boundaries at multiple flight conditions. 16. Making and Propagating Elastic Waves: Overview of the new wave propagation code WPP SciTech Connect McCandless, K P; Petersson, N A; Nilsson, S; Rodgers, A; Sjogreen, B; Blair, S C 2006-05-09 We are developing a new parallel 3D wave propagation code at LLNL called WPP (Wave Propagation Program). WPP is being designed to incorporate the latest developments in embedded boundary and mesh refinement technology for finite difference methods, as well as having an efficient portable implementation to run on the latest supercomputers at LLNL. We are currently exploring seismic wave applications, including a recent effort to compute ground motions for the 1906 Great San Francisco Earthquake. This paper will briefly describe the wave propagation problem, features of our numerical method to model it, implementation of the wave propagation code, and results from the 1906 Great San Francisco Earthquake simulation. 17. Wave Propagation from Complex 3D Sources using the Representation Theorem DTIC Science & Technology 2009-09-30 wavenumber integration. The equations for the Green’s functions for surface waves are given by Bache et al. (1982).
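A hedged sketch of the method class behind WPP in the abstract above: a staggered-grid velocity-stress finite-difference update, reduced here to 1D shear waves. WPP itself is 3D and parallel, with embedded boundaries and mesh refinement; none of that is modelled, and the medium below is a uniform toy.

```python
import numpy as np

# Hedged sketch: 1D staggered-grid velocity-stress finite differences,
# the building block of finite-difference wave propagation codes.

nx, dx = 2000, 10.0                   # grid points, spacing (m)
rho = np.full(nx, 2500.0)             # density (kg/m^3)
mu = np.full(nx, 2500.0 * 3000.0**2)  # shear modulus for Vs = 3000 m/s
dt = 0.4 * dx / 3000.0                # CFL-stable time step

v = np.zeros(nx)                      # particle velocity on integer nodes
s = np.zeros(nx - 1)                  # stress on the staggered half-grid
src = nx // 2

for it in range(1500):
    s += dt * mu[:-1] * (v[1:] - v[:-1]) / dx              # Hooke's law update
    v[1:-1] += dt * (s[1:] - s[:-1]) / (dx * rho[1:-1])    # momentum update
    v[src] += np.exp(-((it * dt - 0.5) / 0.05) ** 2)       # Gaussian source wavelet

print(f"peak |v| after {1500 * dt:.2f} s: {np.abs(v).max():.3e}")
```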
The Green’s functions for the...Green’s functions for body waves are generated by a procedure similar to that described by Bache and Harkrider (1976) using a saddle point...931-951. Bache, T. C. and D. G. Harkrider (1976). The Body Waves Due to a General Seismic Source in a Layered Earth Model, Bull. Seism. Soc. Am. 66 18. 3C3D VSP Imaging of Salt Flanks Using Converted Waves in the Gulf of Mexico Li, Y.; Doherty, F.; Jackson, J. 2005-05-01 Locating the salt boundary and imaging updip sediment structures flanking the salt domes are very important tasks for exploration in the Gulf of Mexico, since major petroleum reserves are often trapped underneath overhangs of diapiric salt domes. Although the top of salt and less steep structures can be well imaged using current surface seismic methods, the steep sides of a salt dome with irregular shapes are hard to image with adequate accuracy. Thus, Vertical Seismic Profiling (VSP) surveys with three-component (3C) receivers in wells are usually requested for improving images of subsurface structures. Conventional multi-offset VSP (OVSP) and refraction salt proximity (SP) surveys are widely applied in the Gulf of Mexico to improve images of salt interfaces, sub-salt and salt flank structures using P waves. In this paper, we will focus on using converted waves to image the steep salt-sediment boundary. A VSP dataset, including multi-OVSP and a SP survey, acquired in the Gulf of Mexico was used in this study. We analyzed 3C OVSP data to identify and separate converted waves, such as PS, P-SP, P-SS, generated at a salt boundary. Then both PP waves and converted waves were 3C3D depth migrated to generate images of the steep salt-sediment interface. Both transmitted P-P and P-S converted waves from the SP survey were used to calculate 3D salt exit points which delineate the steep salt face. The VSP results derived from both methods are abundant and a suitable 3D visualization tool is required for visual integration and interpretation. The image volumes and other available geophysical and geological data were integrated using a 3D visualization tool specially designed for VSP solutions. The migrated images using PP and converted waves provide a precise and complete definition of the steep salt face and reservoir sands flanking the salt dome. This study indicates that both reflection and refraction surveys can result in a consistent location of the steep salt flank 19. Propagation of ultrasonic Love waves in nonhomogeneous elastic functionally graded materials. PubMed Kiełczyński, P; Szalewski, M; Balcerzak, A; Wieja, K 2016-02-01 This paper presents a theoretical study of the propagation behavior of ultrasonic Love waves in nonhomogeneous functionally graded elastic materials, which is a vital problem in the mechanics of solids. The elastic properties (shear modulus) of a semi-infinite elastic half-space vary monotonically with the depth (distance from the surface of the material). The Direct Sturm-Liouville Problem that describes the propagation of Love waves in nonhomogeneous elastic functionally graded materials is formulated and solved by using two methods: i.e., (1) the Finite Difference Method, and (2) the Haskell-Thompson Transfer Matrix Method. The dispersion curves of phase and group velocity of surface Love waves in inhomogeneous elastic graded materials are evaluated. The integral formula for the group velocity of Love waves in nonhomogeneous elastic graded materials has been established.
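As a runnable aside to the Love-wave abstract above: in the classical limit of a homogeneous layer on a homogeneous half-space, the secular equation can be solved directly, and the group velocity follows numerically from vg = dw/dk. The graded profiles of the paper require its finite-difference or transfer-matrix machinery instead; the material numbers below are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch: fundamental-mode Love-wave dispersion of a homogeneous layer
# (shear speed b1, modulus mu1, thickness h) on a half-space (b2, mu2), from
# the classical secular equation tan(w h s1) = mu2 s2 / (mu1 s1), with
# s1 = sqrt(1/b1^2 - 1/c^2) and s2 = sqrt(1/c^2 - 1/b2^2).

b1, b2 = 2000.0, 3000.0            # layer / substrate shear speeds (m/s)
mu1, mu2 = 1.08e10, 2.97e10        # shear moduli (Pa)
h = 1.0e-3                         # layer thickness: a 1 mm guiding layer

def secular(c, w):
    s1 = np.sqrt(1.0 / b1**2 - 1.0 / c**2)
    s2 = np.sqrt(1.0 / c**2 - 1.0 / b2**2)
    return np.tan(w * h * s1) - mu2 * s2 / (mu1 * s1)

freqs = np.linspace(100e3, 600e3, 200)   # 100-600 kHz: fundamental mode only here
ws = 2 * np.pi * freqs
cs = np.array([brentq(secular, b1 + 1e-6, b2 - 1e-6, args=(w,)) for w in ws])

ks = ws / cs
vg = np.gradient(ws) / np.gradient(ks)   # group velocity dw/dk, numerically
print(f"phase velocity: {cs[0]:.0f} -> {cs[-1]:.0f} m/s over 100-600 kHz")
print(f"group velocity: {vg[0]:.0f} -> {vg[-1]:.0f} m/s")
```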
The effect of elastic non-homogeneities on the dispersion curves of Love waves is discussed. Two Love wave waveguide structures are analyzed: (1) a nonhomogeneous elastic surface layer deposited on a homogeneous elastic substrate, and (2) a semi-infinite nonhomogeneous elastic half-space. The phase and group velocity dispersion curves obtained in this work for Love waves propagating in the considered nonhomogeneous elastic waveguides have not previously been reported in the scientific literature. The results of this paper may give a deeper insight into the nature of Love wave propagation in elastic nonhomogeneous functionally graded materials, and can provide theoretical guidance for the design and optimization of Love-wave-based devices. 20. Rayleigh scattering and nonlinear inversion of elastic waves SciTech Connect Gritto, Roland 1995-12-01 Rayleigh scattering of elastic waves by an inclusion is investigated and the limitations determined. In the near field of the inhomogeneity, the scattered waves are up to a factor of 300 stronger than in the far field, excluding the application of the far-field Rayleigh approximation for this range. The investigation of the relative error as a function of parameter perturbation shows a range of applicability broader than previously assumed, with errors of 37% and 17% for perturbations of -100% and +100%, respectively. The validity range for the Rayleigh limit is controlled by large inequalities, and therefore the exact limit is determined as a function of various parameter configurations, resulting in surprisingly high values of up to k_pR = 0.9. The nonlinear scattering problem can be solved by inverting for equivalent source terms (moments) of the scatterer, before the elastic parameters are determined. The nonlinear dependence between the moments and the elastic parameters reveals a strong asymmetry around the origin, which will produce different results for weak scattering approximations depending on the sign of the anomaly. Numerical modeling of crosshole situations shows that near-field terms are important to yield correct estimates of the inhomogeneities in the vicinity of the receivers, while a few well-positioned sources and receivers considerably increase the angular coverage, and thus the model resolution of the inversion parameters. The pattern of energy scattered by an inhomogeneity is complicated and varies depending on the object, the wavelength of the incident wave, and the elastic parameters involved. Therefore, it is necessary to investigate the direction of scattered amplitudes to determine the best survey geometry. 1. An investigation of elastic guided waves for ceramic joint evaluation SciTech Connect Simpson, W.A. Jr.; McClung, R.W. 1989-10-01 The rapid development of ceramic technology has led to the widespread use of ceramics in applications traditionally reserved for metals. Many of these applications, however, require the use of ceramics bonded to other ceramics or to metals in order to achieve the requisite strength. The presence of unbonding in such ceramic joints can now be reliably detected by previously developed ultrasonic techniques, but what is needed is a nondestructive approach which is capable of assessing bond strength directly. A possible tool to achieve this goal is the use of guided elastic waves propagating in the braze layer of a typical ceramic joint. We describe the theory of guided waves in the center layer of a general three-layer solid.
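The Rayleigh-limit finding in the Gritto abstract above amounts to a one-line validity check on the size parameter k_p R, sketched here with illustrative crosshole numbers.

```python
import numpy as np

# Hedged sketch: checking the dimensionless size parameter k_p R against the
# ~0.9 limit reported above. Frequency, velocity and radius are illustrative.

def kpR(freq_hz, vp_ms, radius_m):
    """Size parameter k_p R of an inclusion for P waves."""
    return 2.0 * np.pi * freq_hz / vp_ms * radius_m

f, vp, R = 500.0, 4000.0, 0.5
x = kpR(f, vp, R)
print(f"k_p R = {x:.2f} ->", "Rayleigh regime" if x < 0.9 else "beyond Rayleigh limit")
```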
The secular determinant is found, and roots of the secular equation are determined numerically for cases of interest in ceramic joining. Both guided and leaky modes are described, and it is shown that dispersive Stoneley waves can occur for these materials. In addition, the evanescent nature of the guided waves in the bounding solids raises the possibility of determining the elastic properties of the oxygen depletion layer adjacent to the braze in oxide ceramics. Possible models for incomplete bonding and the effect of this condition on the ultrasonic parameters are also discussed. 16 refs., 10 figs. 2. Elastic Wave Radiation from a Line Source of Finite Length SciTech Connect Aldridge, D.F. 1998-11-04 Straightforward algebraic expressions describing the elastic wavefield produced by a line source of finite length are derived in circular cylindrical coordinates. The surrounding elastic medium is assumed to be both homogeneous and isotropic, and the source stress distribution is considered axisymmetric. The time- and space-domain formulae are accurate at all distances and directions from the source; no far-field or long-wavelength assumptions are adopted for the derivation. The mathematics yield a unified treatment of three different types of sources: an axial torque, an axial force, and a radial pressure. The torque source radiates only azimuthally polarized shear waves, whereas force and pressure sources generate simultaneous compressional and shear radiation polarized in planes containing the line source. The formulae reduce to more familiar expressions in the two limiting cases where the length of the line source approaches zero and infinity. Far-field approximations to the exact equations indicate that waves radiated parallel to the line source axis are attenuated relative to those radiated normal to the axis. The attenuation is more severe for higher frequencies and for lower wavespeeds. Hence, shear waves are affected more than compressional waves. This frequency- and direction-dependent attenuation is characterized by an extremely simple mathematical formula, and is readily apparent in example synthetic seismograms. 3. Analysis of non-linear partially standing waves from 3D velocity measurements Drevard, D.; Rey, V.; Svendsen, Ib; Fraunie, P. 2003-04-01 Surface gravity waves in the ocean exhibit an energy spectrum distributed in both frequency and direction of propagation. Wave data collection is of great importance in coastal zones for engineering and scientific studies. In particular, partially standing wave measurements near coastal structures and steep or barred beaches may be a requirement, for instance for morphodynamic studies. The aim of the present study is the analysis of partially standing surface waves including non-linear effects. According to 1st order Stokes theory, synchronous measurements of horizontal and vertical velocity components allow calculation of the rate of standing waves (Drevard et al, 2003). In the present study, it is demonstrated that for deep water conditions, the velocity field induced by partially standing 2nd order Stokes waves is still represented by the 1st order solution for the velocity potential, contrary to the surface elevation, which exhibits harmonic components. For intermediate water depth, harmonic components appear not only in the surface elevation but also in the velocity fields, but their weight remains much smaller, because of the vertically decreasing wave-induced motion.
For irregular waves, the influence of the spectrum width on the non-linear effects in the analysis is discussed. Keywords: Wave measurements; reflection; non-linear effects. Acknowledgements: This work was initiated during the stay of Prof. Ib Svendsen, as invited Professor, at LSEET in autumn 2002. This study is carried out in the framework of the Scientific French National Programmes PNEC ART7 and PATOM. Their financial support is acknowledged. References: Drevard, D., Meuret, A., Rey, V., Piazzola, J. and Dolle, A. (2002). "Partially reflected waves measurements using Acoustic Doppler Velocimeter (ADV)", submitted to ISOPE 03, Honolulu, Hawaii, May 2003. 4. WaveQ3D: Fast and accurate acoustic transmission loss (TL) eigenrays in littoral environments Reilly, Sean M. This study defines a new 3D Gaussian ray bundling acoustic transmission loss model in geodetic coordinates: latitude, longitude, and altitude. This approach is designed to lower the computational burden of computing accurate environmental effects in sonar training applications by eliminating the need to transform the ocean environment into a collection of Nx2D Cartesian radials. This approach also improves model accuracy by incorporating real-world 3D effects, like horizontal refraction, into the model. This study starts with derivations for a 3D variant of Gaussian ray bundles in this coordinate system. To verify the accuracy of this approach, acoustic propagation predictions of transmission loss, time of arrival, and propagation direction are compared to analytic solutions and other models. To validate the model's ability to predict real-world phenomena, predictions of transmission loss and propagation direction are compared to at-sea measurements, in an environment where strong horizontal refraction effects have been observed. This model has been integrated into U.S. Navy active sonar training system applications, where testing has demonstrated its ability to improve transmission loss calculation speed without sacrificing accuracy. 5. Numerical simulation of suspended sediment concentration by 3D coupled wave-current model in the Oujiang River Estuary, China Xu, Ting; You, Xue-yi 2017-04-01 A 3D sediment transport model based on the modified environmental fluid dynamics code (EFDC) and the nearshore wave simulation model (SWAN) is developed to study the change of suspended sediment concentration and bottom shear stress under the actions of pure current and wave-current. After being validated against field-measured data, the proposed sediment transport model is applied in the Oujiang River Estuary, China. The results show that the ratios of both bottom shear stress and suspended sediment concentration of pure current to those of wave-current show a gradual increase from shallow nearshore water to the deep open sea. The results also show that the proportions of the wave contribution to bottom shear stress and sediment concentration are above 60%, approximately 20-30%, and less than 10% for water depths of less than 5 m, 5-10 m, and more than 20 m, respectively. For the waters among islands, the proportion of the wave contribution to bottom shear stress and sediment concentration is reduced to 10-20% for a -5 m water depth, and this is more obvious for waves of large amplitude. The bottom stress and suspended sediment concentration between islands are mainly controlled by tidal currents, and the effect of waves is not significant. 6.
Investigation of Parametric Excitation of Whistler Waves Using 3D Particle-In-Cell Simulations Caplinger, James; Sotnikov, Vladimir; Main, Daniel; Rose, David; Paraschiv, Ioana 2016-10-01 Previous theoretical work has shown that a parametric interaction between quasi-electrostatic lower oblique resonance (LOR) waves and lower frequency (ω < ωLH) ion acoustic or extremely low frequency (ELF) waves can produce electromagnetic whistler waves in a cold magnetized plasma. It was also demonstrated theoretically that this interaction can generate electromagnetic whistler waves more efficiently than direct excitation by a conventional loop antenna operating at a single frequency. For the purpose of numerically validating the above result, a series of particle-in-cell simulations was carried out. We first demonstrate the ability to accurately model whistler wave excitation with a modeled loop antenna, producing the familiar resonant surfaces which comprise the LOR. Next we demonstrate the ability to generate ion acoustic waves as well as ELF waves, both of which are shown to agree with the expected linear dispersion relations. Finally, we investigate the existence of any nonlinear interaction which indicates the desired parametric excitation, and attempt to analyze the efficiency of this method of excitation and the radiated power going into the whistler part of the VLF wave spectrum. 7. 3D crustal structure of the Alpine belt and foreland basins as imaged by ambient-noise surface waves Molinari, Irene; Morelli, Andrea; Cardi, Riccardo; Boschi, Lapo; Poli, Piero; Kissling, Edi 2016-04-01 We derive a 3-D crustal structure (S wave velocity) underneath northern Italy and the wider Alpine region, from an extensive data set of measurements of Rayleigh-wave phase and group velocities from ambient noise correlation among all seismographic stations available to date in the region, via a constrained tomographic inversion made to honor detailed active-source reflection/refraction profiles and other geological information. We first derive a regional-scale surface wave tomography from ambient-noise-based phase- and group-velocity surface wave observations (Verbeke et al., 2012). Our regional 3D model (Molinari et al., 2015) shows the low-velocity area beneath the Po Plain and the Molasse basin; the contrast between the low-velocity crust of the Adriatic domain and the high-velocity crust of the Tyrrhenian domain is clearly seen, as well as an almost uniform crystalline crust beneath the Alpine belt. However, higher frequency data can be exploited to achieve higher resolution images of the Po Plain and Alpine foreland 3D crustal structure. We collected and analyzed one year of noise records (2011) from ~100 broadband seismic stations in northern Italy, derived the Green functions between each pair of stations, and measured the Rayleigh-wave phase and group velocities. We conduct a suite of linear least squares inversions of both phase- and group-velocity data, resulting in 2-D maps of Rayleigh-wave phase and group velocity at periods between 3 and 40 s with a resolution of 0.1x0.1 degrees. The maps are then inverted to get the 3D structure with unprecedented detail. We present here our results, we compare them with other studies, and we discuss geological/geodynamical implications.
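A minimal sketch of the ambient-noise step underlying the Molinari et al. abstract above: cross-correlating simultaneous noise records at two stations and stacking over many windows converges toward the inter-station Green's function, whose travel time (and hence dispersion) is then measured. Synthetic white noise with a built-in 5 s lag stands in for the real seismograms.

```python
import numpy as np
from scipy.signal import correlate

# Hedged sketch: stacked noise cross-correlation between two "stations".
# Real processing adds band-passing, whitening and one-bit normalization,
# none of which is modelled here.

fs = 20.0                          # sample rate (Hz)
nt = int(3600 * fs)                # one-hour windows
nwin = 25
rng = np.random.default_rng(2)

stack = np.zeros(2 * nt - 1)
for _ in range(nwin):
    noise = rng.standard_normal(nt + 200)
    a = noise[:nt]                 # station A record
    b = noise[100:100 + nt]        # station B sees the same field 5 s later
    stack += correlate(a, b, mode="full", method="fft")
stack /= nwin

lag = (np.argmax(stack) - (nt - 1)) / fs
print(f"peak of stacked cross-correlation at lag {lag:.2f} s (expected 5.00 s)")
```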
We believe that such a model represents the most up-to-date seismological information on the crustal structure of the Alpine belt and foreland basins, and it can represent a reliable reference for further, more detailed, studies to come, based on the high seismograph station density 8. Asymmetric wave transmission in a diatomic acoustic/elastic metamaterial Li, Bing; Tan, K. T. 2016-08-01 Asymmetric acoustic/elastic wave transmission has recently been realized using nonlinearity, wave diffraction, or bias effects, but always at the cost of frequency distortion, direction shift, large volumes, or external energy. Based on the self-coupling of dual resonators, we propose a linear diatomic metamaterial, consisting of several small-sized unit cells, to realize large asymmetric wave transmission in the low-frequency domain (below 1 kHz). The asymmetric transmission mechanism is theoretically investigated, and numerically verified by both mass-spring and continuum models. This passive system does not require any frequency conversion or external energy, and the asymmetric transmission band can be theoretically predicted and mathematically controlled, which extends the design concept of unidirectional transmission devices. 9. Wave Propagation from Complex 3D Sources Using the Representation Theorem DTIC Science & Technology 2008-09-30 functions for surface waves are given by Bache et al. (1982). The Green’s functions for the complete seismograms are computed using a ring load source...procedure similar to that described by Bache and Harkrider (1976), using a saddle point approximation to calculate a far-field plane wave for a given takeoff...space, Part II, Bull. Seism. Soc. Am. 73: 931-951. Bache, T. C. and D. G. Harkrider (1976). The body waves due to a general seismic source in a layered 10. Surface Acoustic Waves (SAW)-Based Biosensing for Quantification of Cell Growth in 2D and 3D Cultures PubMed Central Wang, Tao; Green, Ryan; Nair, Rajesh Ramakrishnan; Howell, Mark; Mohapatra, Subhra; Guldiken, Rasim; Mohapatra, Shyam Sundar 2015-01-01 Detection and quantification of cell viability and growth in two-dimensional (2D) and three-dimensional (3D) cell cultures commonly involve harvesting of cells and therefore require a parallel set-up of several replicates for time-lapse or dose-response studies. Thus, developing a non-invasive and touch-free detection of cell growth in longitudinal studies of 3D tumor spheroid cultures or of stem cell regeneration remains a major unmet need. Since surface acoustic waves (SAWs) permit mass loading-based biosensing and have been touted due to their many advantages including low cost, small size and ease of assembly, we examined the potential of SAW-biosensing to detect and quantify cell growth. Herein, we demonstrate that a shear horizontal-surface acoustic waves (SH-SAW) device comprising two pairs of resonators consisting of interdigital transducers and reflecting fingers can be used to quantify mass loading by the cells in suspension as well as within a 3D cell culture platform. A 3D COMSOL model was built to simulate the mass loading response of increasing concentrations of cells in suspension in the polydimethylsiloxane (PDMS) well in order to predict the characteristics and optimize the design of the SH-SAW biosensor. The simulated relative frequency shifts from the two oscillatory circuit systems (one of which functions as a control) were found to be concordant with experimental data generated with RAW264.7 macrophage and A549 cancer cells.
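A hedged sketch of the differential two-resonator readout described in the SAW abstract above: the mass signal is the relative frequency shift of the sensing channel against the reference channel, so common-mode drift cancels. The linear Sauerbrey-like loading law and the sensitivity value below are assumptions of this sketch, not results quoted from the paper.

```python
# Hedged sketch of a differential SAW mass-loading readout. f0 and the
# sensitivity S are illustrative assumptions; real devices are calibrated.

f0 = 100e6                 # unloaded resonance frequency (Hz), assumed
S = -2.0e-3                # assumed sensitivity: relative shift per kg/m^2

def relative_shift(f_sense, f_ref):
    """Drift-corrected relative frequency shift of the sensing channel."""
    return (f_sense - f_ref) / f_ref

def mass_loading(f_sense, f_ref):
    """Invert the assumed linear law df/f0 = S * sigma for areal density sigma."""
    return relative_shift(f_sense, f_ref) / S

# toy readings: both channels drift by +300 Hz; cells add an extra -2 kHz
print(f"areal mass density ~ {mass_loading(f0 + 300 - 2000, f0 + 300):.2e} kg/m^2")
```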
In addition, results showed that SAW measurements per se did not affect viability of cells. Further, SH-SAW biosensing was applied to A549 cells cultured on a 3D electrospun nanofiber scaffold that generates tumor spheroids (tumoroids), and the results showed the device's ability to detect changes in tumor spheroid growth over the course of eight days. Taken together, these results demonstrate the use of the SH-SAW device for detection and quantification of cell growth changes over time in 2D suspension cultures and in 3D cell 11. Surface Acoustic Waves (SAW)-Based Biosensing for Quantification of Cell Growth in 2D and 3D Cultures. PubMed Wang, Tao; Green, Ryan; Nair, Rajesh Ramakrishnan; Howell, Mark; Mohapatra, Subhra; Guldiken, Rasim; Mohapatra, Shyam Sundar 2015-12-19 Detection and quantification of cell viability and growth in two-dimensional (2D) and three-dimensional (3D) cell cultures commonly involve harvesting of cells and therefore require a parallel set-up of several replicates for time-lapse or dose-response studies. Thus, developing a non-invasive and touch-free detection of cell growth in longitudinal studies of 3D tumor spheroid cultures or of stem cell regeneration remains a major unmet need. Since surface acoustic waves (SAWs) permit mass loading-based biosensing and have been touted due to their many advantages including low cost, small size and ease of assembly, we examined the potential of SAW-biosensing to detect and quantify cell growth. Herein, we demonstrate that a shear horizontal-surface acoustic waves (SH-SAW) device comprising two pairs of resonators consisting of interdigital transducers and reflecting fingers can be used to quantify mass loading by the cells in suspension as well as within a 3D cell culture platform. A 3D COMSOL model was built to simulate the mass loading response of increasing concentrations of cells in suspension in the polydimethylsiloxane (PDMS) well in order to predict the characteristics and optimize the design of the SH-SAW biosensor. The simulated relative frequency shifts from the two oscillatory circuit systems (one of which functions as a control) were found to be concordant with experimental data generated with RAW264.7 macrophage and A549 cancer cells. In addition, results showed that SAW measurements per se did not affect viability of cells. Further, SH-SAW biosensing was applied to A549 cells cultured on a 3D electrospun nanofiber scaffold that generates tumor spheroids (tumoroids), and the results showed the device's ability to detect changes in tumor spheroid growth over the course of eight days. Taken together, these results demonstrate the use of the SH-SAW device for detection and quantification of cell growth changes over time in 2D suspension cultures and in 3D cell 12. Development of Scientific Simulation 3D Full Wave ICRF Code for Stellarators and Heating/CD Scenarios Development SciTech Connect Vdovin V.L. 2005-08-15 In this report we describe the theory and a 3D full wave code for wave excitation, propagation and absorption in three-dimensional (3D) stellarator equilibrium high-beta plasmas in the ion cyclotron range of frequencies (ICRF). This theory forms the basis for creating a 3D code, urgently needed for developing ICRF heating scenarios for the operating LHD, the constructed W7-X, NCSX, and the projected CSX3 stellarators, as well as for re-evaluating ICRF scenarios in operating tokamaks and in ITER.
12. Development of Scientific Simulation 3D Full Wave ICRF Code for Stellarators and Heating/CD Scenarios Development
SciTech Connect
Vdovin V.L.
2005-08-15
In this report we describe the theory behind, and the structure of, a 3D full wave code for wave excitation, propagation and absorption in three-dimensional (3D) stellarator equilibrium high-beta plasma in the ion cyclotron range of frequencies (ICRF). This theory forms the basis for creating a 3D code, urgently needed for developing ICRF heating scenarios for the operating LHD, the constructed W7-X, the NCSX and projected CSX3 stellarators, as well as for re-evaluation of ICRF scenarios in operating tokamaks and in ITER. The theory solves the 3D Maxwell-Vlasov antenna-plasma-conducting-shell boundary value problem in the non-orthogonal flux coordinates (Ψ, θ, φ), Ψ being the magnetic flux function, θ and φ the poloidal and toroidal angles, respectively. All the basic physics, such as wave refraction, reflection and diffraction, is self-consistently included, along with the fundamental ion and ion-minority cyclotron resonances, the two-ion hybrid resonance, electron Landau damping and TTMP absorption. The antenna reactive impedance and loading resistance are also calculated, as needed for antenna-generator matching. This is accomplished for a realistic confining magnetic field varying in the major-radius, toroidal and poloidal directions, making use of the wave-induced currents in the hot dense plasma with account of finite Larmor radius effects. We expand the solution in Fourier series over the toroidal (φ) and poloidal (θ) angles and solve the resulting ordinary differential equations in the radial-like Ψ coordinate by a finite difference method. The constructed discretization scheme is divergence-free, thus retaining the basic properties of the original equations. The Fourier expansion over the angle coordinates makes it possible to correctly construct the "parallel" wave number k∥, and thereby to correctly describe ICRF wave absorption by a hot plasma. The toroidal harmonics are tightly coupled with each

13. Elastic lattice modelling of seismic waves including a free surface
OBrien, Gareth S.
2014-06-01
Elastic lattice methods (ELMs) have been shown to accurately model seismic wave propagation in a heterogeneous medium. These methods represent an elastic solid as a series of interconnected springs arranged on a lattice and recover a continuum wave equation in the long-wavelength limit. However, in the case of a regular lattice, the recovery of the continuum equation depends on the symmetry of the lattice. By removing particles above a free surface this symmetry is broken, and this free-surface implementation therefore leads to errors when compared with a traction-free boundary condition. The error between a traction-free boundary condition and the ELM solution grows as the Poisson's ratio deviates from 0.25. By modifying the interaction constants with a scalar, the error can be reduced while keeping the flexibility of the nearest-neighbour interaction rule. We present results of simulations where modified spring constants reduce the misfit with a traction-free boundary solution and hence increase the accuracy of the elastic lattice method solution on the free surface.
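The scalar correction in entry 13 is easy to prototype in one dimension: build a chain of masses and springs, terminate it with a free end, and rescale the surface spring. The sketch below is only a cartoon of the idea; alpha and all other values are made up, and the paper works with 2-D/3-D lattices and calibrates the scalar against a traction-free solution.

    import numpy as np

    # Minimal 1D elastic lattice: a chain of masses and springs with a free
    # surface at node 0. alpha rescales the surface spring, mimicking the
    # scalar correction of the interaction constants; its value here is a
    # placeholder, not the paper's calibrated one.
    N, k, m, dt, steps, alpha = 400, 1.0, 1.0, 0.05, 5000, 0.9
    ks = np.full(N - 1, k)
    ks[0] *= alpha                              # modified surface interaction constant

    u = np.exp(-0.01 * (np.arange(N) - 200.0)**2)   # initial displacement pulse
    v = np.zeros(N)

    for _ in range(steps):
        df = ks * (u[1:] - u[:-1])              # spring forces between neighbours
        f = np.zeros(N)
        f[:-1] += df                            # node 0 feels a single spring: free end
        f[1:] -= df
        v += dt * f / m
        u += dt * v

    print("surface displacement after reflection:", u[0])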
14. Localization of 3D inertial Alfvén wave and generation of turbulence
Sharma, R. P.; Sharma, Prachi; Yadav, N.
2015-06-01
The present paper deals with the nonlinear interaction of the inertial Alfvén wave (IAW) and the fast magnetosonic wave in low-beta plasma, where beta is the ratio of thermal pressure to background magnetic pressure. In this paper, the localization and turbulent spectra of the IAW, along with the density dips correlated with the fast magnetosonic wave, have been investigated. The variation of the parallel electric field along and across the field lines has also been studied. Taking the ponderomotive nonlinear effect into the dynamics of the fast magnetosonic wave, a pair of coupled dimensionless equations has been derived. These coupled equations have been simulated numerically using the pseudo-spectral method. The obtained results reveal that the Kolmogorov scaling is followed by a steeper scaling in the magnetic power spectrum, which is consistent with observations by the FAST and Hawkeye spacecraft in the auroral region. The relevance of the present investigation to auroral plasmas is discussed.

15. Observation of 3D defect mediated dust acoustic wave turbulence with fluctuating defects and amplitude hole filaments
SciTech Connect
Chang, Mei-Chu; Tsai, Ya-Yi; I, Lin
2013-08-15
We experimentally demonstrate the direct observation of defect-mediated wave turbulence with fluctuating defects and low-amplitude hole filaments, from a 3D self-excited plane dust acoustic wave in a dusty plasma, by reducing dissipation. The waveform undulation is found to be the origin of the amplitude and phase modulations of the local dust density oscillation, the broadening of the sharp peaks in the frequency spectrum, and the fluctuating defects. The corrugated wave crest surface also causes the observed high- and low-density patches in the transverse (xy) plane. Low-oscillation-amplitude spots (holes) share the same positions with the defects. Their trajectories in xyt space appear in the form of chaotic filaments without long-term predictability, through uncertain pair generation, propagation, and pair annihilation.

16. Moored Observations of Internal Waves in Luzon Strait: 3-D Structure, Dissipation, and Evolution
DTIC Science & Technology
2016-03-01
advancing the performance of operational and climate models, as well as for understanding local problems such as pollutant dispersal and biological...Y.J. Yang, M.-H. Chang, and Q. Li. 2011. From Luzon Strait to Dongsha Plateau: Stages in the life of an internal wave. Oceanography 24(4):64-77...Knowledge of the general problems of internal waves and ocean mixing are important for advancing the performance of operational and climate models, as well

17. Bending analysis of a general cross-ply laminate using 3D elasticity solution and layerwise theory
Yazdani Sarvestani, H.; Naghashpour, A.; Heidari-Rarani, M.
2015-12-01
In this study, the analytical solution for interlaminar stresses near the free edges of a general (symmetric and unsymmetric layup) cross-ply composite laminate subjected to pure bending is presented, based on Reddy's layerwise theory (LWT), for the first time. First, the reduced form of the displacement field is obtained for a general cross-ply composite laminate subjected to a bending moment by elasticity theory. Then, first-order shear deformation theory of plates and LWT are utilized to determine the global and local deformation parameters appearing in the displacement fields, respectively. One of the main advantages of the developed LWT-based solution is the exact prediction of interlaminar stresses in the boundary-layer regions. To show the accuracy of this solution, the three-dimensional elasticity bending problem of a laminated composite is also solved for a special set of boundary conditions. Finally, LWT results are presented for edge-effect problems of several symmetric and unsymmetric cross-ply laminates under a bending moment. The obtained results indicate high gradients of interlaminar stresses near the edges of the laminates.
18. Propagation of 3D nonlinear waves over complex bathymetry using a High-Order Spectral method
Gouin, Maïté; Ducrozet, Guillaume; Ferrant, Pierre
2016-04-01
Scattering of regular and irregular surface gravity waves propagating over a region of arbitrary three-dimensional varying bathymetry is considered here. The three-dimensional High-Order Spectral method (HOS), with an extension to account for a variable bathymetry, is used. The efficiency of the model is shown to be preserved even with this extension. The method is first applied to a bathymetry consisting of an elliptical lens, as used in the Vincent and Briggs (1989) experiment. Incident waves passing across the lens are transformed, and a strong convergence region is observed after the elliptical mound. The wave amplification depends on the incident wave. Numerical results for regular and irregular waves are analysed and compared with other methods and experimental data, demonstrating the efficiency and practical applicability of the present approach. Then the method is used to model waves propagating over a real bathymetry: the canyons of Scripps/La Jolla in California. The implementation of this complex bathymetry in the model is presented, as well as the first results achieved; these will be compared with those obtained with another numerical model.

19. Finite Difference Elastic Wave Field Simulation On GPU
Hu, Y.; Zhang, W.
2011-12-01
Numerical modeling of seismic wave propagation is considered a basic and important aspect of the investigation of the Earth's structure and of earthquake phenomena. Among the various numerical methods, the finite-difference method is considered one of the most efficient tools for wave field simulation. However, as the scale of computations grows, computing power has become a bottleneck. With the development of hardware in recent years, GPUs have shown powerful computational ability and bright application prospects in scientific computing, and many works have demonstrated their power; nevertheless, GPUs have not yet been widely used in the simulation of wave fields. In this work, we present forward finite-difference simulation of acoustic and elastic seismic wave propagation in heterogeneous media on NVIDIA graphics cards with the CUDA programming language. We also implement perfectly matched layers on the graphics cards to efficiently absorb outgoing waves at the fictitious edges of the grid. Simulations compared with results on a CPU platform show reliable accuracy and remarkable efficiency. This work shows that the GPU can be an effective platform for wave field simulation and can also serve as a practical tool for real-time strong ground motion simulation.
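The time stepping at the heart of entry 19 reduces, in its simplest acoustic form, to a stencil update that ports naturally to a GPU kernel. Below is a minimal NumPy reference version of the 2D second-order scheme; the grid, velocity model and source are illustrative, and the perfectly matched layers mentioned in the abstract are omitted for brevity (the edges here are simply periodic).

    import numpy as np

    # 2D acoustic wave equation, second-order finite differences:
    # p_tt = c^2 (p_xx + p_yy). Illustrative parameters only.
    nx, nz, dx, dt, nt = 200, 200, 10.0, 1e-3, 500
    c = np.full((nz, nx), 2000.0)                  # homogeneous velocity model (m/s)
    p_old = np.zeros((nz, nx)); p = np.zeros((nz, nx))

    src = np.exp(-((np.arange(nt) * dt - 0.05) / 0.01)**2)   # Gaussian source wavelet

    for it in range(nt):
        # 5-point Laplacian; np.roll gives periodic edges (a PML would replace this)
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p) / dx**2
        p_new = 2 * p - p_old + (c * dt)**2 * lap
        p_new[nz // 2, nx // 2] += src[it]         # inject source at grid centre
        p_old, p = p, p_new                        # this update is the GPU kernel body

    print("peak amplitude:", np.abs(p).max())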
20. Conducting a 3D Converted Shear Wave Project to Reduce Exploration Risk at Wister, CA
SciTech Connect
Matlick, Skip; Walsh, Patrick; Rhodes, Greg; Fercho, Steven
2015-06-30
Ormat sited 2 full-size exploration wells based on 3D seismic interpretation of fractures, prior drilling results, and the temperature anomaly. The wells indicated commercial temperatures (>300 F) but almost no permeability, despite one of the wells being drilled within 820 ft of an older exploration well with reported indications of permeability. Following completion of the second well in 2012, Ormat undertook a lengthy program to 1) evaluate the lack of observed permeability, 2) estimate the likelihood of finding permeability with additional drilling, and 3) estimate resource size based on an anticipated extent of permeability.

1. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display
Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan
2016-07-01
Because aberration severely affects the display performance of the auto-stereoscopic 3D display, diffraction theory is used to analyze the diffraction field distribution and the display depth through aberration analysis. Based on the proposed method, the display depth of central and marginal reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and an LCD with two lens arrays are used to verify the conclusion.

2. Scattering of antiplane shear waves by layered circular elastic cylinder
PubMed
Cai, Liang-Wu
2004-02-01
An exact analytical solution for the scattering of antiplane elastic waves by a layered elastic circular cylinder is obtained. The solution and its degenerate cases are compared with other, simpler models of circular cylindrical scatterers. The effects of the geometrical and physical properties of the interphase are studied. Numerical results confirm the existence of a resonance mode in which the scatterer's core undergoes rigid-body motion when the outer layer of the scatterer is very compliant. This resonance mode has been attributed [Liu et al., Science 289, 1734 (2000)] to a new mechanism for the band gap formed in the extremely low frequency range for phononic crystals made of layered spherical scatterers. Numerical results also show the existence of a similar resonance mode when the outer layer of the scatterer has very high mass density.

3. Vibration and wave propagation characteristics of multisegmented elastic beams
NASA Technical Reports Server (NTRS)
1990-01-01
Closed-form analytical solutions are derived for the vibration and wave propagation of multisegmented elastic beams. Each segment is modeled as a Timoshenko beam, with possible inclusion of material viscosity, an elastic foundation and axial forces. Solutions are obtained by using transfer matrix methods. In these methods, formal solutions are first constructed which relate the deflection, slope, moment and shear force at one end of an individual segment to those at the other. By satisfying appropriate continuity conditions at the segment junctions, a global 4x4 matrix results which relates the deflection, slope, moment and shear force at one end of the beam to those at the other. If boundary conditions are subsequently invoked at the ends of the beam, one obtains the characteristic equation for the natural frequencies; furthermore, by invoking appropriate periodicity conditions, the dispersion relation for a periodic system is obtained. A variety of numerical examples are included.
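The transfer-matrix bookkeeping in entry 3 is compact enough to show directly: each segment contributes a 4x4 matrix acting on the state vector (deflection, slope, moment, shear), and the product over segments gives the global matrix. In the sketch below the per-segment matrices are random placeholders, since building the true Timoshenko matrices requires the closed-form solutions and beam properties from the paper; only the assembly and boundary-condition logic is real.

    import numpy as np

    # Transfer-matrix assembly for a multisegmented beam. The state vector is
    # s = (deflection, slope, moment, shear); each segment maps s_left -> s_right.
    rng = np.random.default_rng(0)

    def segment_matrix(length, frequency):
        # Placeholder 4x4 segment matrix; a real implementation would evaluate
        # the closed-form Timoshenko solution at this length and frequency.
        return np.eye(4) + 0.01 * length * frequency * rng.standard_normal((4, 4))

    def global_matrix(lengths, frequency):
        T = np.eye(4)
        for L in lengths:                    # chain the segments left to right
            T = segment_matrix(L, frequency) @ T
        return T

    # Clamped-free example: deflection = slope = 0 at the left end, and
    # moment = shear = 0 at the right end. Natural frequencies are roots of
    # the determinant of the 2x2 block coupling the unknown left-end
    # (moment, shear) to the right-end (moment, shear) residuals.
    def characteristic(lengths, frequency):
        T = global_matrix(lengths, frequency)
        return np.linalg.det(T[2:4, 2:4])

    print(characteristic([0.3, 0.5, 0.2], frequency=100.0))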
4. Near-field imaging of biperiodic surfaces for elastic waves
Li, Peijun; Wang, Yuliang; Zhao, Yue
2016-11-01
This paper is concerned with the direct and inverse scattering of elastic waves by biperiodic surfaces in three dimensions. The surface is assumed to be a small and smooth perturbation of a rigid plane. Given a time-harmonic plane incident wave, the direct problem is to determine the displacement field of the elastic wave for a given surface; the inverse problem is to reconstruct the surface from the measured displacement field. The direct problem is shown to have a unique weak solution by studying its variational formulation. Moreover, an analytic solution is deduced by using the transformed field expansion method, and convergence is established for the power series solution. A local uniqueness result is proved for the inverse problem. An explicit reconstruction formula is obtained and implemented using the fast Fourier transform. An error estimate is derived for the reconstructed surface function; it provides insight into the trade-off among resolution, accuracy, and stability of the solution of the inverse problem. Numerical results show that the method is effective at reconstructing biperiodic scattering surfaces with subwavelength resolution.

5. SAFE-3D analysis of a piezoelectric transducer to excite guided waves in a rail web
Ramatlo, Dineo A.; Long, Craig S.; Loveday, Philip W.; Wilke, Daniel N.
2016-02-01
Our existing Ultrasonic Broken Rail Detection system detects complete breaks and primarily uses a propagating mode with energy concentrated in the head of the rail. Previous experimental studies have demonstrated that a mode with energy concentrated in the head of the rail is capable of detecting weld reflections at long distances. Exploiting a mode with energy concentrated in the web of the rail would allow us to effectively detect defects in the web and could also help to distinguish between reflections from welds and cracks. In this paper, we demonstrate the analysis of a piezoelectric transducer attached to the rail web. The forced response at different frequencies is computed by the semi-analytical finite element (SAFE) method and compared to a full three-dimensional finite element model in ABAQUS. The SAFE method requires only the rail cross-section to be meshed, using two-dimensional elements, whereas the ABAQUS model requires a full three-dimensional discretisation of the rail track. The SAFE approach can yield poor predictions at cut-on frequencies associated with other modes in the rail; problematic frequencies are identified and a suitable frequency range is selected for transducer design. The forced-response results of the two methods were found to be in good agreement with each other. We then use a previously developed SAFE-3D method to analyse a practical transducer over the selected frequency range. The results obtained from the SAFE-3D method are in good agreement with experimental measurements.
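Entry 4's closing step, an explicit reconstruction formula implemented with the fast Fourier transform, has a generic shape worth sketching: in Fourier space the measurement is the surface spectrum multiplied by a known, decaying transfer function, so reconstruction is spectral division restricted to well-conditioned modes. The transfer function below is a toy stand-in, not the paper's elastic-wave operator.

    import numpy as np

    # Toy FFT-based reconstruction: measurement = surface spectrum times a
    # decaying transfer function t(k); invert by spectral division on the
    # well-conditioned modes only. t(k) is a stand-in, not the paper's operator.
    n = 64
    kx = 2 * np.pi * np.fft.fftfreq(n)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    K = np.hypot(KX, KY)
    t = np.exp(-0.5 * K)                       # toy transfer function, decays with |k|

    rng = np.random.default_rng(1)
    surface = rng.standard_normal((n, n))
    data = np.fft.ifft2(t * np.fft.fft2(surface)).real   # synthetic measurement

    keep = K < 2.0                             # retain only well-conditioned modes
    recon = np.fft.ifft2(np.where(keep, np.fft.fft2(data) / t, 0)).real
    ref = np.fft.ifft2(np.where(keep, np.fft.fft2(surface), 0)).real
    print("relative error on kept modes:",
          np.linalg.norm(recon - ref) / np.linalg.norm(ref))

The cutoff is where the paper's resolution/accuracy/stability trade-off lives: pushing it outward admits subwavelength detail but amplifies measurement noise through the small values of t(k).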
6. Self-consistent Synthetic Mantle Discontinuities From Joint Modeling of Geodynamics and Mineral Physics and Their Effects on the 3D Global Wave Field
Schuberth, B.; Piazzoni, A.; Bunge, H.; Igel, H.; Steinle-Neumann, G.; Moder, C.; Oeser, J.
2007-12-01
Our current understanding of mantle structure and dynamics is to a large part based on the inversion of seismic data resulting in tomographic images, and on the direct analysis of a wide range of seismic phases such as Pdiff, PcP, ScS, SdS, etc. For solving inverse problems, forward modeling is needed to obtain a synthetic dataset for a given set of model parameters. In this respect, great progress has been made over the last years in the development of sophisticated numerical full-waveform modeling tools. However, the main limitation in the application of this new class of techniques to the forward problem of seismology is the lack of accurate predictions of mantle heterogeneity that would allow us to test hypotheses about the Earth's mantle. Such predictive models should be based on geodynamic and mineralogical considerations and derived independently of seismological observations. Here, we demonstrate the feasibility of joining forward simulations from geodynamics, mineral physics and seismology to obtain earth-like seismograms. 3D global wave propagation is simulated for dynamically consistent thermal structures derived from 3D mantle circulation modeling (e.g. Bunge et al. 2002), for which the temperatures are converted to seismic velocities using a recently published, thermodynamically self-consistent mineral physics approach (Piazzoni et al. 2007). Assuming a certain fixed mantle composition (e.g. pyrolite), our mineralogical modeling algorithm computes the stable phases at mantle pressures for a wide range of temperatures by minimization of the system's Gibbs free energy. Through the same equations of state that model the Gibbs free energy, we compute elastic moduli and density for each stable phase assemblage at the same P-T conditions. One straightforward application of this approach is the study of the seismic signature of synthetic mantle discontinuities arising in such models, as the temperature-dependent phase transformations occurring at around 410 km and 660 km depth are

7. Verification of Long Period Surface Waves from Ambient Noise and Its Application in Constructing 3D Shear Wave Structure of Lithosphere in United States
Xie, J.; Yang, Y.; Ni, S.; Zhao, K.
2015-12-01
In the past decade, ambient noise tomography (ANT) has become an established method for constructing images of the earth's interior, thanks to its advantage of extracting surface waves from cross-correlations of ambient noise without using earthquake data. However, most previous ambient noise tomography studies concentrate on short and intermediate periods (<50 s), because the dominant energy of the microseism lies at these periods; studies of long-period surface waves from cross-correlations of ambient noise are limited. In this study, we verify the accuracy of long-period (50-250 s) surface waves (Rayleigh waves) from ambient noise by quantitatively comparing both dispersion curves and seismic waveforms from ambient noise with those from earthquake records. After that, we calculate vertical-vertical cross-correlation functions among more than 1800 USArray Transportable Array stations and extract high-quality interstation phase-velocity dispersion curves from them at periods of 10-200 s. Then, we adopt a finite-frequency ambient noise tomography method based on the Born approximation to obtain high-resolution phase-velocity maps from the dispersion measurements at periods of 10-150 s. Afterward, we extract local dispersion curves from these maps and invert them for 1D shear-wave velocity profiles at individual grid points using a Bayesian Monte Carlo method. Finally, a 3D shear-velocity model is constructed by assembling all the 1D Vs profiles. Our 3D model is overall similar to other models constructed using earthquake surface waves and body waves. In summary, we demonstrate that long-period surface waves can be extracted from ambient noise, and that the long-period dispersion measurements from ambient noise are as accurate as those from earthquake data and can be used to construct 3D lithospheric structure from the surface down to lithosphere/asthenosphere depths.
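The core operation behind entry 7, turning continuous noise records into an empirical Green's function, is a cross-correlation that fits in a few lines. The sketch below uses synthetic records in place of real data, and compresses the preprocessing that real workflows need (temporal normalization, band filtering) into a token spectral-whitening step.

    import numpy as np

    # Empirical Green's function from ambient noise: frequency-domain
    # cross-correlation of two station records, with token spectral whitening.
    fs, n, shift = 10.0, 100000, 50
    rng = np.random.default_rng(42)
    noise = rng.standard_normal(n + shift)
    sta_a = noise[:n]                      # station A
    sta_b = noise[shift:]                  # station B sees the same field 50 samples earlier

    def whiten(x):
        X = np.fft.rfft(x)
        return X / (np.abs(X) + 1e-10)     # crude spectral whitening

    ccf = np.fft.irfft(whiten(sta_a) * np.conj(whiten(sta_b)))
    lag = int(np.argmax(ccf))
    lag = lag - n if lag > n // 2 else lag # unwrap circular lags
    print("propagation delay estimate:", lag / fs, "s")   # expect 5 s: B leads A

Stacking many days of such correlation functions is what makes the faint coherent arrivals emerge above the incoherent noise floor.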
8. Real-time 3D millimeter wave imaging based on FMCW using GDD focal plane array as detectors
Levanon, Assaf; Rozban, Daniel; Kopeika, Natan S.; Yitzhaky, Yitzhak; Abramovich, Amir
2014-03-01
Millimeter wave (MMW) imaging systems are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is relatively low. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a focal plane array (FPA) of plasma-based Glow Discharge Detectors (GDD). Each point on the object corresponds to a point in the image and includes the distance information; this enables 3D MMW imaging. The radar system requires that the millimeter-wave detector (GDD) be able to operate as a heterodyne detector. Since the source of radiation is a frequency-modulated continuous wave (FMCW), the signal obtained by heterodyne detection yields the object's depth information, according to the value of the difference frequency, in addition to the reflectance of the image. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of GDD devices. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.

9. High-resolution 3-D P wave attenuation structure of the New Madrid Seismic Zone using local earthquake tomography
Bisrat, Shishay T.; DeShon, Heather R.; Pesicek, Jeremy; Thurber, Clifford
2014-01-01
A three-dimensional (3-D), high-resolution P wave seismic attenuation model for the New Madrid Seismic Zone (NMSZ) is determined using P wave path attenuation (t*) values of small-magnitude earthquakes (MD < 3.9). Events were recorded at 89 broadband and short-period seismometers of the Cooperative New Madrid Seismic Zone Network and 40 short-period seismometers of the Portable Array for Numerical Data Acquisition experiment. The amplitude spectra of all the earthquakes are simultaneously inverted for source, path (t*), and site parameters. The t* values are then inverted for QP using local earthquake tomography methods and a known 3-D P wave velocity model for the region. The four major seismicity arms of the NMSZ exhibit reduced QP (higher attenuation) than the surrounding crust. The highest attenuation anomalies coincide with areas of previously reported high swarm activity attributed to fluid-rich fractures along the southeast extension of the Reelfoot fault. The QP results are consistent with previous attenuation studies in the region, which showed that active fault zones and fractured crust in the NMSZ are highly attenuating.
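The path-attenuation parameter t* in entry 9 has a simple spectral signature: above the source corner frequency, attenuation multiplies the amplitude spectrum by exp(-pi f t*), so t* can be read off a log-linear fit. A toy estimator, with synthetic data standing in for a real spectrum and all values invented for illustration:

    import numpy as np

    # Estimate t* from the high-frequency decay of an amplitude spectrum:
    # A(f) = A0 * exp(-pi * f * t*), so ln A is linear in f with slope -pi * t*.
    f = np.linspace(2.0, 20.0, 60)              # Hz, above the source corner frequency
    t_star_true = 0.04                          # s, synthetic path attenuation
    rng = np.random.default_rng(3)
    amp = 5.0 * np.exp(-np.pi * f * t_star_true) \
              * np.exp(0.05 * rng.standard_normal(f.size))   # multiplicative noise

    slope, _ = np.polyfit(f, np.log(amp), 1)
    t_star = -slope / np.pi
    print(f"recovered t* = {t_star:.4f} s")     # close to 0.04

    # A whole-path average Q then follows from the travel time T: Q = T / t*.
    T = 12.0                                    # s, illustrative travel time
    print(f"path-average Q = {T / t_star:.0f}")

Tomography distributes many such path measurements over a 3-D QP model, exactly as the abstract describes.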
10. Guided wave-based J-integral estimation for dynamic stress intensity factors using 3D scanning laser Doppler vibrometry
Ayers, J.; Owens, C. T.; Liu, K. C.; Swenson, E.; Ghoshal, A.; Weiss, V.
2013-01-01
The application of guided waves to interrogate remote areas of structural components has been researched extensively for characterizing damage. However, there exists a sparsity of work on using piezoelectric transducer-generated guided waves as a method of assessing stress intensity factors (SIF). This quantitative information enables accurate estimation of the remaining life of metallic structures exhibiting cracks, such as military and commercial transport vehicles. The proposed full-wavefield approach, based on 3D laser vibrometry and piezoelectric transducer-generated guided waves, provides a practical means of estimating dynamic stress intensity factors (DSIF) through local strain-energy mapping via the J-integral. Strain energies and traction vectors can be conveniently estimated from wavefield data recorded using 3D laser vibrometry, through interpolation and subsequent spatial differentiation of the response field. Upon estimation of the J-integral, it is possible to obtain the corresponding DSIF terms. For this study, the experimental test matrix consists of aluminum plates with manufactured defects representing canonical elliptical crack geometries under uniaxial tension, excited by surface-mounted piezoelectric actuators. The defects' major-to-minor axis ratios vary from unity to approximately 133. Finite element simulations are compared to experimental results, and the relative magnitudes of the J-integrals are examined.

11. Investigating Global 3-D Shear-Wave Anisotropy in the Earth's Mantle from Free Oscillations, Body Waves, Surface Waves and Long-period Waveforms
Moulik, P.; Ekstrom, G.
2012-12-01
We have developed a framework that can be used to investigate anisotropic velocity, density and anelastic heterogeneity in the Earth's mantle using a wide spectrum (0.3-50 mHz) of seismological observables. We start with the extensive dataset of surface-wave phase anomalies, long-period waveforms, and body-wave travel times collected by Kustowski et al. (2008) for the development of the global model S362ANI. The additional data included in our analysis are splitting functions of spheroidal and toroidal modes, which are analogous to phase-velocity maps at low frequencies. We include in this set of observations a new dataset containing the splitting functions of 56 spheroidal fundamental modes and overtones, measured by Deuss et al. (2011, 2012) using data from large recent earthquakes. Apart from providing unique constraints on the long-wavelength elastic and density structure of the mantle, the overtone splitting data are especially sensitive to the velocity (and anisotropy) structure in the transition zone and the deeper mantle. The detection of anisotropy, a marker of flow, in the transition zone has implications for our understanding of mantle convection. Our forward modeling of the splitting functions, like that of the other types of data, includes the effects of radial anisotropy (Mochizuki, 1986). We show that the upper-mantle shear-wave anisotropy of S362ANI generates a clear contribution to the splitting functions of the modes that are sensitive to upper-mantle structure. We explore the tradeoffs between fitting the mode splitting functions and the travel times of body waves that turn in the transition zone or in the lower mantle (e.g. SS), while observing that the waveforms and the surface-wave phase anomalies provide complementary information about the mantle. Our experiments suggest that the splitting data are sufficiently sensitive to anisotropy in the mantle that their inclusion may provide a better depth resolution of the anisotropic shear
12. Propagation of unsteady waves in an elastic layer
Kuznetsova, E. L.; Tarlakovskii, D. V.; Fedotenkov, G. V.
2011-10-01
We consider a plane problem of the propagation of unsteady waves in a plane layer of constant thickness filled with a homogeneous, linearly elastic, isotropic medium, in the absence of mass forces and with zero initial conditions. We assume that, on one of the layer boundaries, the normal stresses are given in the form of the Dirac delta function, the tangential stresses are zero, and the second boundary is rigidly fixed. The problem is solved by using the Laplace transform with respect to time and the Fourier transform with respect to the longitudinal coordinate. The normal displacements at an arbitrary point are obtained in the form of finite sums.

13. Properties of elastic waves in quasiregular structures with planar defects
Aynaou, H.; Velasco, V. R.; Nougaoui, A.; El Boudouti, E. H.; Bria, D.
2002-07-01
We have studied elastic waves in quasiregular structures following the Fibonacci and Rudin-Shapiro sequences and having planar defects, that is, breaks of the quasiregular structure in different parts of the system. It is seen that the different kinds of defects produce effects in different ranges of the frequency spectrum, and can introduce more localized states in the gaps or modify the frequencies of the states in the gaps. We have also studied the phase time and transmission coefficients, thus seeing how these localized modes can be used as frequency filters.

14. A 3D MPI-Parallel GPU-accelerated framework for simulating ocean wave energy converters
Pathak, Ashish; Raessi, Mehdi
2015-11-01
We present an MPI-parallel, GPU-accelerated computational framework for studying the interaction between ocean waves and wave energy converters (WECs). The framework captures the viscous effects, nonlinear fluid-structure interaction (FSI), and breaking of waves around the structure, which cannot be captured by many potential-flow solvers commonly used for WEC simulations. The full Navier-Stokes equations are solved using the two-step projection method, which is accelerated by porting the pressure Poisson equation to GPUs. The FSI is captured using the numerically stable fictitious domain method. A novel three-phase interface reconstruction algorithm is used to resolve three phases in a VOF-PLIC context. A consistent mass and momentum transport approach enables simulations at high density ratios. The accuracy of the overall framework is demonstrated via an array of test cases. Numerical simulations of the interaction between ocean waves and WECs are presented. Funding from the National Science Foundation CBET-1236462 grant is gratefully acknowledged.

15. Summary of work on shock wave feature extraction in 3-D datasets
NASA Technical Reports Server (NTRS)
1996-01-01
A method for extracting and visualizing shock waves from three-dimensional datasets is discussed. Issues concerning computation time, robustness to numerical perturbations, and noise introduction are considered and compared with other methods. Finally, results using this method are discussed.
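The quasiregular stacks of entry 13 above (Fibonacci and Rudin-Shapiro sequences) are pleasant to prototype: generate the layer sequence by the substitution rule A -> AB, B -> A, then multiply 2x2 transfer matrices through the stack. The sketch below does this for scalar waves at normal incidence, in one common acoustic transfer-matrix convention; impedances and thicknesses are illustrative, not the paper's elastic parameters.

    import numpy as np

    # Transmission through a Fibonacci-sequence stack of two layer types.
    def fibonacci_word(generations):
        a, b = "A", "AB"
        for _ in range(generations):
            a, b = b, b + a                 # substitution rule: A -> AB, B -> A
        return b

    layers = {"A": (1.0, 0.010), "B": (2.5, 0.015)}  # (relative impedance, thickness m)
    c = 1500.0                                        # wave speed in both layers (toy)

    def transmission(freq, word):
        M = np.eye(2, dtype=complex)
        for sym in word:
            z, d = layers[sym]
            kd = 2 * np.pi * freq * d / c
            M = np.array([[np.cos(kd), 1j * z * np.sin(kd)],
                          [1j * np.sin(kd) / z, np.cos(kd)]]) @ M
        # Stack embedded between half-spaces of unit impedance:
        return abs(2 / (M[0, 0] + M[0, 1] + M[1, 0] + M[1, 1]))**2

    word = fibonacci_word(8)                # 89-layer quasiregular stack
    for f in (5e3, 2e4, 5e4, 1e5):
        print(f"{f:8.0f} Hz  T = {transmission(f, word):.3f}")

Introducing a planar defect amounts to editing the word (for example, repeating or deleting a letter) and rescanning in frequency, which is how defect states inside the gaps would show up as narrow transmission peaks.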
16. 3D numerical simulation of the long range propagation of acoustical shock waves through a heterogeneous and moving medium
SciTech Connect
Luquet, David; Marchiano, Régis; Coulouvrat, François
2015-10-28
Many situations involve the propagation of acoustical shock waves through flows. Natural sources such as lightning, volcano explosions, or meteoroid atmospheric entries emit loud, low-frequency, impulsive sound that is influenced by atmospheric wind and turbulence. The sonic boom produced by a supersonic aircraft and explosion noises are examples of intense anthropogenic sources in the atmosphere. The Buzz-Saw-Noise produced by turbo-engine fan blades rotating at supersonic speed also propagates in a fast flow within the engine nacelle. Simulating these situations is challenging, given the 3D nature of the problem, the long propagation distances relative to the central wavelength, the strongly nonlinear behavior of shocks associated with a wide-band spectrum, and, finally, the key role of the flow motion. With this in view, the so-called FLHOWARD (acronym for FLow and Heterogeneous One-Way Approximation for Resolution of Diffraction) method is presented with three-dimensional applications. A scalar nonlinear wave equation is established in the framework of atmospheric applications, assuming weak heterogeneities and a slow wind. It takes into account diffraction, absorption and relaxation properties of the atmosphere, quadratic nonlinearities including weak shock waves, heterogeneities of the medium in sound speed and density, and the presence of a flow (assuming a mean stratified wind and 3D turbulent flow fluctuations of smaller amplitude). This equation is solved in the framework of the one-way method. A split-step technique allows the splitting of the nonlinear wave equation into simpler equations, each corresponding to a physical effect. Each sub-equation is solved using an analytical method if possible, and finite differences otherwise. Nonlinear effects are solved in the time domain, and the others in the frequency domain. Homogeneous diffraction is handled by means of the angular spectrum method. The ground is assumed perfectly flat and rigid. Due to the 3D
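The split-step marching described at the end of entry 16 can be caricatured in one dimension: alternate a nonlinear step in the time domain with a linear absorption step in the frequency domain. The sketch below does exactly that for a single waveform; every coefficient is an illustrative placeholder, and a real FLHOWARD-style solver treats diffraction, heterogeneity and relaxation as further sub-steps.

    import numpy as np

    # One-dimensional caricature of split-step range marching.
    n, dt = 1024, 1e-4
    t = np.arange(n) * dt
    u = np.sin(2 * np.pi * 50 * t)             # initial waveform: 50 Hz tone

    beta, alpha0, dx, nsteps = 2e-6, 4e-11, 10.0, 100   # march 1 km in range
    f = np.fft.rfftfreq(n, dt)

    for _ in range(nsteps):
        # Nonlinear sub-step (time domain): du/dx = beta * u * du/dt
        u = u + dx * beta * u * np.gradient(u, dt)
        # Absorption sub-step (frequency domain): f^2-type damping
        u = np.fft.irfft(np.fft.rfft(u) * np.exp(-alpha0 * f**2 * dx), n)

    print("peak after 1 km:", u.max())

The appeal of the splitting is that each physical effect is advanced with the numerical tool best suited to it, exactly as the abstract says: nonlinearity in time, absorption and diffraction in frequency.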
18. Bubbles attenuate elastic waves at seismic frequencies: First experimental evidence
Tisato, Nicola; Quintal, Beatriz; Chapman, Samuel; Podladchikov, Yury; Burg, Jean-Pierre
2015-05-01
The migration of gases from deep to shallow reservoirs can cause damaging events: for instance, some gases can pollute the biosphere or trigger explosions and eruptions. Seismic tomography may be employed to map the accumulation of subsurface bubble-bearing fluids and so help mitigate such hazards. Nevertheless, how gas bubbles modify seismic waves is still unclear. We show that saturated rocks strongly attenuate seismic waves when gas bubbles occupy part of the pore space. Laboratory measurements of elastic wave attenuation at frequencies <100 Hz are modeled with a dynamic gas dissolution theory, demonstrating that the observed frequency-dependent attenuation is caused by wave-induced gas exsolution-dissolution (WIGED). This result is incorporated into a numerical model simulating the propagation of seismic waves in a subsurface domain containing CO2-gas bubbles. The simulation shows that WIGED can significantly modify the wavefield and illustrates how accounting for this physical mechanism can potentially improve the monitoring and surveying of gas bubble-bearing fluids in the subsurface.

19. Tectonic stress accumulation in Bohai-Zhangjiakou Seismotectonic Zone based on 3D visco-elastic modelling
Wei, Ju; Weifeng, Sun; Xiaojing, Ma; Hui, Jiang
2016-07-01
Future earthquake potential in the Bohai-Zhangjiakou Seismotectonic Zone (BZSZ) in North China deserves close attention. The tectonic stress accumulation state is an important indicator for earthquakes; this study therefore analyses the stress accumulation state in the BZSZ via three-dimensional visco-elastic numerical modelling. The results reveal that the maximum shear stress in the BZSZ increases gradually with depth, and that the stress range is wider in the lower layer. In the upper layer the maximum shear stress is high in the Zhangjiakou area, whereas in the lower layer relatively high values occur in the Penglai-Yantai area, which may be affected by the depth of the Moho surface. Moreover, weak fault zones will fracture easily, owing to their low strength, even when the maximum shear stress is not especially high, resulting in earthquakes. Therefore, based on the modelling results, the upper layer of the Zhangjiakou area and the lower layer of the Penglai-Yantai area in the BZSZ in North China are the more likely to experience earthquakes.
20. Moored Observations of Internal Waves in Luzon Strait: 3-D Structure, Dissipation, and Evolution
DTIC Science & Technology
2014-09-30
the performance of operational and climate models, as well as for understanding local problems such as pollutant dispersal and biological productivity...substantially improves both our understanding and predictive ability of linear internal tides and NLIWs in Luzon Strait and the South China Sea...westward into the northeastern South China Sea (SCS). • To better understand generation and propagation of internal waves in a strongly sheared

1. Well-posedness of linearized motion for 3-D water waves far from equilibrium
SciTech Connect
Hou, T.Y.; Zhen-huan Teng; Pingwen Zhang
1996-12-31
In this paper, we study the motion of a free surface separating two different layers of fluid in three dimensions. We assume the flow to be inviscid, irrotational, and incompressible, in which case one can reduce the entire motion to variables on the surface alone. In general, without additional regularizing effects such as surface tension or viscosity, the flow can be subject to Rayleigh-Taylor or Kelvin-Helmholtz instabilities which lead to unbounded growth in the high-frequency wave numbers; in this case, the problem is not well-posed in the Hadamard sense. The problem of water waves with no fluid above is a special case. It is well known that such motion is well-posed when the free surface is sufficiently close to equilibrium. Beale, Hou and Lowengrub derived a general condition which ensures well-posedness of the linearization about a presumed time-dependent motion in the two-dimensional case. The linearized equations, when formulated in a proper coordinate system, are found to have a qualitative structure surprisingly like that of the simple case of linear waves near equilibrium. Such an analysis is essential in analyzing the stability of boundary integral methods for computing free-interface problems. 19 refs.

2. Evolutions of elastic-plastic shock compression waves in different materials
Kanel, G. I.; Zaretsky, E. B.; Razorenov, S. V.; Savinykh, A. S.; Garkushin, G. V.
2017-01-01
In this paper we discuss unexpected features of wave evolution in solids, such as a departure from self-similar development of the wave process accompanied by apparently sub-sonic wave propagation, changes in the shape of the elastic precursor wave as a result of variations in material structure and temperature, unexpected peculiarities of the reflection of elastic-plastic waves from a free surface, effects of internal friction under shock compression of glasses, and some other effects.

3. Localization of metal targets by time reversal of electromagnetic waves: 3D numerical and experimental study
Benhamouche, Mehdi; Bernard, Laurent; Serhir, Mohammed; Pichon, Lionel; Lesselier, Dominique
2013-11-01
This paper proposes a criterion for locating obstacles by time reversal (TR) of electromagnetic (EM) waves, based on the analysis of the map of EM energy density in the time domain. In contrast to a monochromatic study of TR, the wide-band approach requires determining the instant of wave focus; this enables us to locate the focal spots that are indicative of the target positions. The proposed criterion is compared to the inverse of the minimum entropy criterion used in the literature [X. Xu, E.L. Miller, C.M. Rappaport, IEEE Trans. Geosci. Remote Sens. 41, 1804 (2003)]. An application to the localization of 3D metal targets is proposed, using the finite integration technique (FIT) as the computational tool at the modeling stage. An experimental validation is presented for canonical three-dimensional configurations with two kinds of metal objects. Contribution to the Topical Issue "Numelec 2012", Edited by Adel Razek.
4. OpenHVSR: imaging the subsurface 2D/3D elastic properties through multiple HVSR modeling and inversion
Bignardi, S.; Mantovani, A.; Abu Zeid, N.
2016-08-01
OpenHVSR is a computer program developed in the Matlab environment, designed for the simultaneous modeling and inversion of large Horizontal-to-Vertical Spectral Ratio (HVSR or H/V) datasets in order to construct 2D/3D subsurface models (topography included). The program is designed to provide a highly interactive experience to the user while remaining intuitive to use. It implements several effective and established tools already present in the code ModelHVSR by Herak (2008), and many novel features, such as:
- confidence evaluation of lateral heterogeneity;
- evaluation of the frequency-dependent impact of single parameters on the misfit function;
- relaxation of the Vp/Vs bounds to allow for water-table inclusion;
- a new cost-function formulation which includes a slope-dependent term for fast matching of peaks, which greatly enhances convergence when inverting low-quality HVSR curves;
- the capability for the user to edit the subsurface model at any time during the inversion, and to test the changes before accepting them.
In what follows, we present many features of the program and show its capabilities on both simulated and real data. We aim to supply to the scientific and professional community a powerful tool capable of handling large sets of HVSR curves, so as to retrieve the most from their microtremor data within a reduced amount of time, while allowing the experienced scientist the flexibility needed to integrate their own geological knowledge of the sites under investigation into the model. This is especially desirable now that microtremor testing has become routine. After testing the code on different datasets, both simulated and real, we decided to make it available in an open-source format. The program is available by contacting the authors.
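The H/V measurement that OpenHVSR inverts is itself simple to compute from a three-component recording: the averaged horizontal amplitude spectrum divided by the vertical one. A minimal version using SciPy's Welch estimator, with synthetic white noise standing in for a real microtremor record (on which the curve would show a site-resonance peak rather than staying flat):

    import numpy as np
    from scipy.signal import welch

    # Horizontal-to-Vertical Spectral Ratio from a 3-component record.
    fs = 100.0                                  # sampling rate, Hz
    rng = np.random.default_rng(7)
    n = 60 * int(fs)                            # one minute of synthetic "microtremor"
    z, ns, ew = (rng.standard_normal(n) for _ in range(3))

    f, pz = welch(z, fs, nperseg=1024)
    _, pn = welch(ns, fs, nperseg=1024)
    _, pe = welch(ew, fs, nperseg=1024)

    # Combine horizontals (geometric mean is one common convention), then H/V;
    # the sqrt converts power spectra to amplitude spectra.
    hv = np.sqrt(np.sqrt(pn * pe) / pz)
    print(f"H/V peak at {f[np.argmax(hv)]:.2f} Hz")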
5. 3-D shear wave radially and azimuthally anisotropic velocity model of the North American upper mantle
Yuan, Huaiyu; Romanowicz, Barbara; Fischer, Karen M.; Abt, David
2011-03-01
Using a combination of long-period seismic waveforms and SKS splitting measurements, we have developed a 3-D upper-mantle model (SAWum_NA2) of North America that includes isotropic shear velocity, with a lateral resolution of ~250 km, as well as radial and azimuthal anisotropy, with a lateral resolution of ~500 km. Combining these results, we infer several key features of lithosphere and asthenosphere structure. A rapid change from thin (~70-80 km) lithosphere in the western United States (WUS) to thick lithosphere (~200 km) in the central, cratonic part of the continent closely follows the Rocky Mountain Front (RMF). Changes with depth of the fast-axis direction of azimuthal anisotropy reveal the presence of two layers in the cratonic lithosphere, corresponding to the fast-to-slow discontinuity found in receiver functions. Below the lithosphere, azimuthal anisotropy reaches a maximum, stronger in the WUS than under the craton, and the fast axis of anisotropy aligns with the absolute plate motion, as described in the hotspot reference frame (HS3-NUVEL 1A). In the WUS, this zone is confined between 70 and 150 km depth, decreasing in strength with depth from the top, from the RMF to the San Andreas Fault system and the Juan de Fuca/Gorda ridges. This result suggests that shear associated with lithosphere-asthenosphere coupling dominates mantle deformation down to this depth in the western part of the continent. The depth extent of the zone of increased azimuthal anisotropy below the cratonic lithosphere is not well resolved in our study, although its peak around 270 km is a robust result. Radial anisotropy is such that, predominantly, ξ > 1, where ξ = (Vsh/Vsv)^2, under the continent and its borders down to ~200 km, with stronger ξ in the bordering oceanic regions. Across the continent and below 200 km, alternating zones of weaker and stronger radial anisotropy, with predominantly ξ < 1, correlate with zones of small lateral changes in the fast-axis direction of

6. Fast and accurate 3-D ray tracing using bilinear traveltime interpolation and the wave front group marching
Zhang, Jianzhong; Huang, Yueqin; Song, Lin-Ping; Liu, Qing-Huo
2011-03-01
We propose a new ray tracing technique for 3-D heterogeneous isotropic media, based on bilinear traveltime interpolation and wave front group marching. In this technique, the medium is discretized into a series of rectangular cells, and two steps are carried out: a forward step, in which the wave front expansion is evolved from the sources to the whole computational domain, and a subsequent backward step, in which ray paths are calculated for any source-receiver configuration desired. In the forward step, we derive a closed-form expression to calculate the traveltime at an arbitrary point in a cell using bilinear interpolation of the known traveltimes on the cell's surface. The group marching method (GMM), a fast wave front advancing method, is then applied to expand the wave front from the source to all grid points. In the backward step, ray paths starting from the receivers are traced by finding the intersection points of potential ray propagation vectors with the surfaces of the relevant cells. In this step, the same traveltime interpolation scheme is used to compute the candidate intersection points on all surfaces of each relevant cell, and the point with the minimum traveltime is selected as a ray point, from which the procedure is continued back to the source. A number of numerical experiments demonstrate that our 3-D ray tracing technique achieves very accurate computation of traveltimes and ray paths while taking much less computer time than existing popular methods such as the finite-difference-based GMM combined with maximum-gradient ray tracing, and the shortest path method.
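The closed-form ingredient of entry 6, bilinear interpolation of traveltimes across a cell face, is worth seeing explicitly, since the backward ray step repeatedly evaluates exactly this. A minimal version; the four corner traveltimes are made-up numbers.

    import numpy as np

    def bilinear_traveltime(t00, t10, t01, t11, u, v):
        """Traveltime at fractional position (u, v) in [0,1]^2 on a cell face,
        from the four corner traveltimes, by bilinear interpolation."""
        return ((1 - u) * (1 - v) * t00 + u * (1 - v) * t10 +
                (1 - u) * v * t01 + u * v * t11)

    # Corner traveltimes in seconds (illustrative values only).
    corners = (1.20, 1.26, 1.23, 1.31)

    # Scan the face for the minimum-traveltime point, as the backward
    # ray-tracing step does when choosing the next ray point.
    uu, vv = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
    tt = bilinear_traveltime(*corners, uu, vv)
    i = np.unravel_index(np.argmin(tt), tt.shape)
    print(f"min traveltime {tt[i]:.3f} s at (u, v) = ({uu[i]:.2f}, {vv[i]:.2f})")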
7. 3-D upper mantle shear wave speed structure beneath the South Pacific Superswell by a BBOBS array
Isse, T.; Suetsugu, D.; Shiobara, H.; Sugioka, H.; Yoshizawa, K.; Kanazawa, T.; Fukao, Y.
2005-12-01
Previous seismic tomography studies show a broad low-velocity anomaly in the lower mantle, the so-called superplume, beneath the South Pacific, and there are hotspot chains and a large-scale topographic high at the surface of this region. However, the resolution of seismic tomography is poor, especially in the upper mantle, because of the limited spatial distribution of seismic stations. To improve the station coverage, we deployed an array of long-term broadband ocean bottom seismometers (BBOBS) in this region. The quality of the vertical component of the seismograms recorded by the BBOBS array is comparable to that of island seismic stations. This observation has enabled us to obtain a more precise 3-D shear wave speed structure in the upper mantle of this region by analyzing Rayleigh waves. We employed a two-station method to determine the phase velocities of fundamental-mode Rayleigh waves recorded by the BBOBS array and island stations in the Pacific Ocean. We obtained 1025 path-average phase-velocity dispersion curves, including 188 dispersion curves from the BBOBS data, in a period range between 40 and 140 seconds. We then inverted them for a 3-D shear wave speed structure down to a depth of 200 km. At shallow depths, the eastern part of the French Polynesia region is in general slower than the western part, which indicates an age dependence of the seismic structure of the uppermost mantle. Slow-speed anomalies corresponding to the hotspots are superposed on this age dependence: slow anomalies can be seen from the surface down to a depth of 200 km beneath the Society, Pitcairn, and Macdonald hotspots, but they are limited to the deeper part beneath the Samoa hotspot. The slow anomalies beneath the Pitcairn and Society hotspots apparently coalesce at a depth of 100 km, where a single anomaly extending upward from below seems to branch in two directions. A resolution analysis indicates that the BBOBS array data have improved the spatial resolution substantially.

8. High-resolution 3-D S-wave Tomography of upper crust structures in Yilan Plain from Ambient Seismic Noise
Chen, Kai-Xun; Chen, Po-Fei; Liang, Wen-Tzong; Chen, Li-Wei; Gung, YuanCheng
2015-04-01
The Yilan Plain (YP) in NE Taiwan lies at the western end of the Okinawa Trough and displays high geothermal gradients with abundant hot springs, likely resulting from magmatism associated with back-arc spreading, as attested by the offshore volcanic island (Kueishantao). The YP features distinct north-south contrasts: the southern YP exhibits a thin top sedimentary layer, high on-land seismicity and significant SE movement relative to the northern counterpart. A dense network (~2.5 km station interval) of 89 Texan instruments was deployed in Aug. 2014, covering most of the YP and its vicinity. The ray-path coverage density in each 0.015-degree cell is greater than 150 km, which supports a robustness assessment of the tomographic results. We analyze ambient noise signals to invert a high-resolution 3D S-wave model for the shallow velocity structures in and around the YP, the aim being to investigate velocity anomalies corresponding to geothermal resources and the north-south geological distinctions mentioned above. We apply Welch's method to generate empirical Rayleigh-wave Green's functions between the continuous vertical-component records of station pairs. The group velocities of the functions thus derived are then obtained by the multiple-filter analysis technique, measured in the frequency range between 0.25 and 1 Hz. Finally, we implement a wavelet-based multi-scale parameterization technique to construct the 3D model of S-wave velocity. Our first-month results exhibit low velocities in the plain, corresponding to the existing sediments; those for the whole YP show low velocities offshore, and the high-resolution results for the southern YP reveal a stark velocity contrast across the Sanshin fault.
Key words: ambient seismic noise, Welch's method, S-wave, Yilan Plain
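Entry 8 measures group velocities with the multiple-filter analysis technique: narrow-band Gaussian filters are applied around a set of centre frequencies, and the envelope peak of each filtered trace gives the group arrival time. A compact sketch; the dispersive signal, the interstation distance, and the 10% relative filter width are all invented for illustration.

    import numpy as np

    # Multiple-filter analysis: group velocities from a single dispersive trace.
    fs, dist = 50.0, 30000.0                   # Hz; 30 km path (toy)
    t = np.arange(0, 120, 1 / fs)
    arrivals = [(f0, dist / (2000.0 - 1000.0 * f0))
                for f0 in np.arange(0.25, 1.01, 0.05)]
    sig = sum(np.cos(2 * np.pi * f0 * (t - tg)) * np.exp(-0.002 * (t - tg)**2)
              for f0, tg in arrivals)

    S = np.fft.fft(sig)
    freqs = np.fft.fftfreq(t.size, 1 / fs)
    for fc in (0.3, 0.5, 0.7, 0.9):
        g = np.exp(-((freqs - fc) / (0.1 * fc))**2)  # Gaussian filter at +fc only
        env = np.abs(np.fft.ifft(S * g))             # one-sided spectrum -> envelope
        tg = t[np.argmax(env)]
        print(f"f = {fc:.1f} Hz: U ~ {dist / tg:.0f} m/s")

Because the Gaussian sits only on positive frequencies, the inverse FFT returns an analytic-like complex signal whose magnitude is the envelope, so no separate Hilbert transform is needed.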
9. Imaging of 3D Ocean Turbulence Microstructure Using Low Frequency Acoustic Waves
Minakov, Alexander; Kolyukhin, Dmitriy; Keers, Henk
2015-04-01
In the past decade, the technique of imaging the ocean structure with the low-frequency signal (Hz range) produced by air-guns, typically employed during conventional multichannel seismic data acquisition, has emerged. The method is based on extracting and stacking the acoustic energy back-scattered by the ocean temperature and salinity micro- and meso-structure (1-100 meters). However, a good understanding of the link between the scattered wavefield utilized by seismic oceanography and physical processes in the ocean is still lacking. We describe the theory and numerical implementation of a 3D time-dependent stochastic model of ocean turbulence. The velocity and temperature are simulated as homogeneous Gaussian isotropic random fields with the Kolmogorov-Obukhov energy spectrum in the inertial subrange. A numerical modeling technique is employed for sampling realizations of random fields with a given spatial-temporal spectral tensor. The model used is shown to be representative for a wide range of scales. Using this model, we provide a framework for solving the forward and inverse acoustic scattering problems using marine seismic data. Our full-waveform inversion method is based on the ray-Born approximation, which is specifically suitable for modelling small velocity perturbations in the ocean. This is illustrated by a good match between synthetic seismograms computed using ray-Born and synthetic seismograms produced with a more computationally expensive finite-difference method.

10. Horizontal structure and propagation characteristics of mesospheric gravity waves observed by Antarctic Gravity Wave Imaging/Instrument Network (ANGWIN), using a 3-D spectral analysis technique
Matsuda, Takashi S.; Nakamura, Takuji; Murphy, Damian; Tsutsumi, Masaki; Moffat-Griffin, Tracy; Zhao, Yucheng; Pautet, Pierre-Dominique; Ejiri, Mitsumu K.; Taylor, Michael
2016-07-01
ANGWIN (Antarctic Gravity Wave Imaging/Instrument Network) is an international airglow imager/instrument network in the Antarctic, which commenced observations in 2011. It seeks to reveal the characteristics of mesospheric gravity waves, and to study the sources, propagation and breaking of gravity waves over the Antarctic, as well as their effects on the general circulation and upper atmosphere. In this study, we compared distributions of the horizontal phase velocities of gravity waves at around 90 km altitude, observed by mesospheric airglow imaging over different locations, using our new statistical analysis method based on a 3-D Fourier transform (Matsuda et al., 2014). Results from the airglow imagers at four of the ANGWIN stations, Syowa (69S, 40E), Halley (76S, 27W), Davis (69S, 78E) and McMurdo (78S, 156E), have been compared for the observation period between April 6 and May 21 in 2013. In addition to the horizontal distributions of propagation direction and phase speed, gravity wave energies have been quantitatively compared, indicating smaller gravity-wave activity at the higher-latitude stations. We further investigated the frequency dependence of the gravity-wave propagation direction, as well as the night-to-night variation of the propagation direction and its correlation with background wind variations. We found that the variation of propagation direction is partly due to the effect of the background wind in the middle atmosphere, but that variation of the wave sources could play an important role as well. Secondary wave generation is also needed to explain the observed results.
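The 3-D spectral analysis named in entry 10 amounts to a Fourier transform of the image sequence in (x, y, t) and a read-out of horizontal phase velocity from each spectral peak. A toy version for a single monochromatic wave in synthetic airglow images; grid spacing, cadence and the wave itself are stand-ins chosen to land exactly on FFT bins.

    import numpy as np

    # 3-D FFT of an airglow image sequence: read off horizontal phase velocity.
    nx = ny = 64; nt = 32
    dx, dt = 1000.0, 60.0                       # 1 km pixels, 60 s cadence
    x = (np.arange(nx) * dx)[:, None, None]
    y = (np.arange(ny) * dx)[None, :, None]
    t = (np.arange(nt) * dt)[None, None, :]

    kx0 = 2 * np.pi / 16000.0                   # 16 km wavelength, eastward
    om0 = kx0 * 50.0                            # 50 m/s phase speed
    img = np.cos(kx0 * x + 0.0 * y - om0 * t)   # synthetic wave in the image stack

    F = np.abs(np.fft.fftn(img))
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
    om = 2 * np.pi * np.fft.fftfreq(nt, dt)

    i, j, l = np.unravel_index(np.argmax(F), F.shape)
    print("phase speed:", abs(om[l]) / np.hypot(kx[i], ky[j]), "m/s")   # ~50

On real imager data the power is spread over many (kx, ky, omega) cells, and it is the statistics of that full spectrum, rather than a single peak, that yield the phase-velocity distributions compared across stations.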
11. 3D transient electromagnetic simulation using a modified correspondence principle for wave and diffusion fields
Hu, Y.; Ji, Y.; Egbert, G. D.
2015-12-01
The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems; the approach can reduce the computing time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where the sources are in the conductive sea water, we must model the EM fields in the air, and to allow for topography the air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows. (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite-difference time-stepping scheme, with CPML (convolutional PML) boundary conditions for the whole computational domain, including the air and the earth, and with an FTD-domain source corresponding to the actual transmitter geometry. The resistivity of the air layers is kept as low as possible, as a compromise between efficiency (a longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 to be sufficient. (3) A "modified" Fourier transform (MFT) allows us to recover the system's impulse response in the diffusion (frequency) domain from the fictitious time domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to obtain the final full-waveform, full-time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM

12. Electrolysis-induced bubbling in soft solids for elastic-wave generation
Montalescot, S.; Roger, B.; Zorgani, A.; Souchon, R.; Grasland-Mongrain, P.; Ben Haj Slama, R.; Bera, J.-C.; Catheline, S.
2016-02-01
Water electrolysis was discovered in 1800; the famous experiment is investigated here within soft tissue from an elastic-wave point of view. Indeed, we report that the rapid formation of hydrogen bubbles after transient (10 ms) electrolysis in water-based gels produces elastic waves. These bubbles are observed using an ultrafast optical camera. As the bubbles are trapped between the rigid electrode and the soft matter, they act as a source of elastic waves, which are measured in the bulk using an ultrafast ultrasound scanner. The elastic-wave amplitude is shown to be in good agreement with a simple bubble model.
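Stepping back to entry 11: the wave-diffusion correspondence it builds on has a concrete integral form. One standard statement (the paper's "modified" transform differs in details not reproduced here) maps a fictitious-time wavefield u(q) to the diffusive field d(t) via d(t) = 1/(2 sqrt(pi) t^(3/2)) * integral of q exp(-q^2/(4t)) u(q) dq. The sketch below verifies this numerically against an analytically known pair.

    import numpy as np
    from math import sqrt, pi, exp
    from scipy.special import erfc

    # Wave-to-diffusion mapping: d(t) = 1/(2 sqrt(pi) t^1.5) * int q e^{-q^2/4t} u(q) dq
    def wave_to_diffusion(u, q, t):
        w = q * np.exp(-q**2 / (4 * t)) / (2 * sqrt(pi) * t**1.5)
        return np.trapz(w * u, q)

    # Known pair: u(q) = exp(-a q) has diffusion-domain image
    # d(t) = 1/sqrt(pi t) - a exp(a^2 t) erfc(a sqrt(t)).
    a = 2.0
    q = np.linspace(0, 50, 200001)
    u = np.exp(-a * q)
    for t in (0.1, 0.5, 2.0):
        exact = 1 / sqrt(pi * t) - a * exp(a * a * t) * erfc(a * sqrt(t))
        print(f"t = {t}: numeric {wave_to_diffusion(u, q, t):.6f}, exact {exact:.6f}")

The practical payoff described in the abstract is that the wave equation can be marched with the large explicit time steps that hyperbolic problems allow, and the slow diffusive response is recovered afterwards by this (cheap) transform.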
The model envisions concentric shells of trapped electrons slowly drifting azimuthally while bouncing back and forth in the parallel direction. The electron dynamics is analysed in terms of three basic motions that occur on different time scales. These are defined by the cyclotron frequency Ωe, the bounce frequency ωb, and the azimuthal drift frequency ωγ, for which explicit analytical expressions are obtained. Subject to the ordering ωγ ≪ ωb ≪ Ωe, we calculate self-consistent distribution functions in terms of approximate constants of motion. Constraints on the parameters characterizing the amplitude and shape of the stretched solitary wave are discussed. 14. Elastic reverse-time migration based on amplitude-preserving P- and S-wave separation Yang, Jia-Jia; Luan, Xi-Wu; Fang, Gang; Liu, Xin-Xin; Pan, Jun; Wang, Xiao-Jie 2016-09-01 Imaging the PP- and PS-waves in elastic vector-wave reverse-time migration requires separating the P- and S-waves during wavefield extrapolation. The amplitude and phase of the P- and S-waves are distorted when divergence and curl operators are used to separate them. We present an amplitude-preserving P- and S-wave separation algorithm for elastic wavefield extrapolation. First, we add the P-wave pressure and P-wave vibration velocity equations to the conventional elastic wave equation to decompose the P- and S-wave vectors. Then, we synthesize scalar P- and S-waves from the vector P- and S-waves. The amplitude-preserved separated P- and S-waves are imaged based on vector-wave reverse-time migration (RTM). This method ensures that the amplitude and phase of the separated P- and S-waves remain undistorted, in contrast to separation with divergence and curl operators. In addition, after decomposition, the P-wave pressure and vibration velocity can be used to suppress interlayer reflection noise and to correct the S-wave polarity. This improves the image quality of the P- and S-waves in multicomponent seismic data and benefits the true-amplitude elastic reverse-time migration used in prestack inversion. 15. Understanding the core-halo relation of quantum wave dark matter from 3D simulations. PubMed Schive, Hsi-Yu; Liao, Ming-Hsuan; Woo, Tak-Pong; Wong, Shing-Kwong; Chiueh, Tzihong; Broadhurst, Tom; Hwang, W-Y Pauchy 2014-12-31 We examine the nonlinear structure of gravitationally collapsed objects that form in our simulations of wavelike cold dark matter, described by the Schrödinger-Poisson (SP) equation with a particle mass ∼10⁻²² eV. A distinct gravitationally self-bound solitonic core is found at the center of every halo, with a profile quite different from cores modeled in the warm or self-interacting dark matter scenarios. Furthermore, we show that each solitonic core is surrounded by an extended halo composed of large fluctuating dark matter granules which modulate the halo density on a scale comparable to the diameter of the solitonic core. The scaling symmetry of the SP equation and the uncertainty principle tightly relate the core mass to the halo specific energy, which, in the context of cosmological structure formation, leads to a simple scaling between core mass (Mc) and halo mass (Mh), Mc ∝ a^(-1/2) Mh^(1/3), where a is the cosmic scale factor. We verify this scaling relation by (i) examining the internal structure of a statistical sample of virialized halos that form in our 3D cosmological simulations and by (ii) merging multiple solitons to create individual virialized objects.
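The quoted scaling Mc ∝ a^(-1/2) Mh^(1/3) is easy to turn into a rough calculator. In the sketch below the normalization is fixed from the z = 8 example given in the entry (a 2×10¹² M⊙ halo hosting a ~2×10⁹ M⊙ core), so absolute numbers are approximate:

```python
import numpy as np

# Core-halo scaling for wavelike ("fuzzy") dark matter: Mc ~ a^(-1/2) Mh^(1/3),
# with a = 1/(1+z) the cosmic scale factor. Normalization calibrated to the
# numbers quoted in the abstract, hence only indicative.

def core_mass(m_halo, z, m_halo_ref=2e12, m_core_ref=2e9, z_ref=8.0):
    """Solitonic core mass [Msun] from halo mass [Msun] and redshift."""
    a, a_ref = 1.0 / (1.0 + z), 1.0 / (1.0 + z_ref)
    return m_core_ref * (a / a_ref) ** -0.5 * (m_halo / m_halo_ref) ** (1.0 / 3.0)

for mh, z in [(2e12, 8.0), (1e10, 0.0), (1e12, 0.0)]:
    print(f"Mh = {mh:.1e} Msun, z = {z:3.1f} -> Mc ~ {core_mass(mh, z):.2e} Msun")
```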
Sufficient simulation resolution is achieved by adaptive mesh refinement and graphics-processing-unit acceleration. From this scaling relation, present dwarf satellite galaxies are predicted to have kiloparsec-sized cores and a minimum mass of ∼10⁸ M⊙, capable of solving the small-scale controversies in the cold dark matter model. Moreover, galaxies of 2×10¹² M⊙ at z=8 should have massive solitonic cores of ∼2×10⁹ M⊙ within ∼60 pc. Such cores can provide a favorable local environment for funneling the gas that leads to the prompt formation of early stellar spheroids and quasars. 16. Skin-Friction Measurements in a 3-D, Supersonic Shock-Wave/Boundary-Layer Interaction NASA Technical Reports Server (NTRS) Wideman, J. K.; Brown, J. L.; Miles, J. B.; Ozcan, O. 1994-01-01 The experimental documentation of a three-dimensional shock-wave/boundary-layer interaction is presented for a nominal Mach 3 cylinder, aligned with the free-stream flow, with a 20° half-angle conical flare offset 1.27 cm from the cylinder centerline. Surface oil flow, laser light sheet illumination, and schlieren were used to document the flow topology. The data include surface-pressure and skin-friction measurements; the skin-friction data were acquired with a laser interferometric technique. Included in the skin-friction data are measurements within separated regions and three-dimensional measurements in highly-swept regions. The skin-friction data will be particularly valuable for turbulence modeling and computational fluid dynamics validation. 17. Pseudo 3-D P wave refraction seismic monitoring of permafrost in steep unstable bedrock Krautblatter, Michael; Draebing, Daniel 2014-02-01 Permafrost in steep rock walls can cause hazardous rock creep and rock slope failure. Spatial and temporal patterns of permafrost degradation that operate at the scale of instability are complex and poorly understood. For the first time, we used P wave seismic refraction tomography (SRT) to monitor the degradation of permafrost in steep rock walls. A 2.5-D survey with five 80 m long parallel transects was installed across an unstable steep NE-SW facing crestline in the Matter Valley, Switzerland. P wave velocity was calibrated in the laboratory for water-saturated low-porosity paragneiss samples between 20°C and -5°C and increases significantly along and perpendicular to the cleavage, by 0.55-0.66 km/s (10-13%) and 2.4-2.7 km/s (>100%), respectively, upon freezing. Seismic refraction is, thus, technically feasible for detecting permafrost in the low-porosity rocks that constitute steep rock walls. Ray densities up to 100 and more delimit the boundary between unfrozen and frozen bedrock and facilitate accurate active-layer positioning. SRT shows monthly (August and September 2006) and annual active-layer dynamics (August 2006 and 2007) and reveals a contiguous permafrost body below the NE face with annual changes of active-layer depth from 2 to 10 m. Large ice-filled fractures, lateral onfreezing of glacierets, and a persistent snow cornice cause previously unreported permafrost patterns close to the surface and along the crestline, which correspond to active seasonal rock displacements of up to several mm/a. SRT provides geometrically highly resolved subsurface monitoring of active-layer dynamics in steep permafrost rocks at the scale of instability. 18.
Applications of elastic full waveform inversion to shallow seismic surface waves Bohlen, Thomas; Forbriger, Thomas; Groos, Lisa; Schäfer, Martin; Metz, Tilman 2015-04-01 Shallow-seismic Rayleigh waves are attractive for geotechnical site investigations. They exhibit a high signal-to-noise ratio in field data recordings and have a high sensitivity to the S-wave velocity, an important lithological and geotechnical parameter for characterizing the very shallow subsurface. Established inversion methods assume (local) 1-D subsurface models and allow the reconstruction of the S-wave velocity as a function of depth by inverting the dispersion properties of the Rayleigh waves. These classical methods, however, fail if significant lateral variations of medium properties are present. Then the full waveform inversion (FWI) of the elastic wavefield seems to be the only solution. Moreover, FWI may have the potential to recover multi-parameter models of seismic wave velocities, attenuation and eventually mass density. Our 2-D elastic FWI is a conjugate-gradient method in which the gradient of the misfit function is calculated by the time-domain adjoint method. The viscoelastic forward modelling is performed with a classical staggered-grid 2-D finite-difference forward solver. Viscoelastic damping is implemented in the time domain by a generalized standard linear solid. We use a multi-scale inversion approach by applying frequency filtering in the inversion: we start with the lowest frequency of the field data and increase the upper corner frequency sequentially. Our modelling and FWI software is freely available under the terms of the GNU GPL on www.opentoast.de. In recent years we have studied the applicability of two-dimensional elastic FWI using numerous synthetic reconstruction tests and several field data examples. Important pre-processing steps for the application of 2-D elastic FWI to shallow-seismic field data are the 3D-to-2D correction of geometrical spreading and the estimation of a priori Q-values that must be used as a passive medium parameter during the FWI. Furthermore, a source-wavelet correction filter should be applied during the FWI 19. A combined dislocation fan-finite element (DF-FE) method for stress field simulation of dislocations emerging at the free surfaces of 3D elastically anisotropic crystals Balusu, K.; Huang, H. 2017-04-01 A combined dislocation fan-finite element (DF-FE) method is presented for efficient and accurate simulation of dislocation nodal forces in 3D elastically anisotropic crystals with dislocations intersecting the free surfaces. The finite-domain problem is decomposed into half-spaces with singular traction stresses, an infinite domain, and a finite domain with non-singular traction stresses. As such, the singular and non-singular parts of the traction stresses are addressed separately; the dislocation fan (DF) method is introduced to balance the singular traction stresses in the half-spaces, while the finite element method (FEM) is employed to enforce the non-singular boundary conditions. The accuracy and efficiency of the DF method are demonstrated using a simple isotropic test case, by comparing it with the analytical solution as well as the FEM solution. The DF-FE method is subsequently used for calculating the dislocation nodal forces in a finite elastically anisotropic crystal, producing dislocation nodal forces that converge rapidly with increasing mesh resolution.
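The multiscale frequency-continuation loop described in entry 18 above can be sketched in a few lines. `run_fwi_stage` is a hypothetical placeholder for one adjoint-based inversion stage; only the low-pass filtering logic is concrete, and all array sizes and corner frequencies are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Frequency continuation for FWI: start from the lowest usable frequency and
# raise the upper corner frequency stage by stage.

def lowpass(data, fc, fs, order=4):
    """Zero-phase Butterworth low-pass of shot gathers (time on axis -1)."""
    b, a = butter(order, fc / (0.5 * fs))
    return filtfilt(b, a, data, axis=-1)

def run_fwi_stage(model, data):            # placeholder (assumption)
    return model                           # would update model via adjoint FWI

fs = 1000.0                                # sampling rate [Hz]
data = np.random.randn(48, 2048)           # stand-in field gathers
model = np.full(301, 300.0)                # initial S-wave velocity [m/s]

for fc in [10.0, 15.0, 20.0, 30.0, 45.0]:  # increasing corner frequencies [Hz]
    model = run_fwi_stage(model, lowpass(data, fc, fs))
```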
In comparison, the FEM solution fails to converge, especially for nodes closer to the surfaces. 20. Conical refraction of elastic waves in absorbing crystals SciTech Connect Alshits, V. I.; Lyubimov, V. N. 2011-10-15 The absorption-induced acoustic-axis splitting in a viscoelastic crystal with an arbitrary anisotropy is considered. It is shown that after 'switching on' absorption, the linear vector polarization field in the vicinity of the initial degeneracy point, which has an orientation singularity with the Poincaré index n = ±1/2, transforms to a planar distribution of ellipses with two singularities n = ±1/4 corresponding to the new axes. The local geometry of the slowness surface of elastic waves is studied in the vicinity of the new degeneracy points and the self-intersection line connecting them. The absorption-induced transformation of the classical picture of conical refraction is studied. The ellipticity of waves at the edge of the self-intersection wedge in a narrow interval of propagation directions changes drastically, from circular at the wedge ends to linear in the middle of the wedge. For the wave normal directed to an arbitrary point of this wedge, during movement of the displacement vector over the corresponding polarization ellipse, the wave ray velocity s runs over the same cone describing refraction in a crystal without absorption. In this case, the end of the vector moves along a universal ellipse whose plane is orthogonal to the acoustic axis for zero absorption. The areal velocity of this movement differs from the angular velocity of the displacement vector on the polarization ellipse only by a constant factor, being delayed by π/2 in phase. When the wave normal is localized at the edge of the wedge in its central region, the movement of vector s along the universal ellipse becomes drastically nonuniform and the refraction transforms from conical to wedge-like. 1. Capturing atmospheric effects on 3D millimeter wave radar propagation patterns Cook, Richard D.; Fiorino, Steven T.; Keefer, Kevin J.; Stringer, Jeremy 2016-05-01 Traditional radar propagation modeling is done using a path transmittance with little to no input for weather and atmospheric conditions. As radar advances into the millimeter wave (MMW) regime, atmospheric effects such as attenuation and refraction become more pronounced than at traditional radar wavelengths. The DoD High Energy Laser Joint Technology Office's High Energy Laser End-to-End Operational Simulation (HELEEOS), in combination with the Laser Environmental Effects Definition and Reference (LEEDR) code, has shown great promise simulating atmospheric effects on laser propagation. Indeed, the LEEDR radiative transfer code has been validated from the UV through the RF. Our research attempts to apply these models to characterize the far-field radar pattern in three dimensions as a signal propagates from an antenna towards a point in space. Furthermore, we do so using realistic three-dimensional atmospheric profiles. The results from these simulations are compared to those from traditional radar propagation software packages. In summary, a fast-running method has been investigated which can be incorporated into computational models to enhance understanding and prediction of MMW propagation through various atmospheric and weather conditions. 2. Intensity images and statistics from numerical simulation of wave propagation in 3-D random media.
PubMed Martin, J M; Flatté, S M 1988-06-01 An extended random medium is modeled by a set of 2-D thin Gaussian phase-changing screens with phase power spectral densities appropriate to the natural medium being modeled. Details of the algorithm and limitations on its application to experimental conditions are discussed, concentrating on power-law spectra describing refractive-index fluctuations of the neutral atmosphere. Inner- and outer-scale effects on intensity scintillation spectra and intensity variance are also included. Images of single realizations of the intensity field at the observing plane are presented, showing that under weak scattering the small-scale Fresnel-length structure of the medium dominates the intensity scattering pattern. As the strength of scattering increases, caustics and interference fringes around focal regions begin to form. Finally, in still stronger scattering, the clustering of bright regions begins to reflect the large-scale structure of the medium. For plane waves incident on the medium, physically reasonable inner scales do not produce the large values of intensity variance observed in the focusing region during laser propagation experiments over kilometer paths in the atmosphere. Values as large as the experimental observations have been produced in the simulations, but they require inner scales of the order of 10 cm. Inclusion of an outer scale depresses the low-frequency end of the intensity spectrum and reduces the maximum of the intensity variance. Increasing the steepness of the power law also slightly increases the maximum value of intensity variance. 3. Topology optimization of two-dimensional elastic wave barriers Van hoorickx, C.; Sigmund, O.; Schevenels, M.; Lazarov, B. S.; Lombaert, G. 2016-08-01 Topology optimization is a method that optimally distributes material in a given design domain. In this paper, topology optimization is used to design two-dimensional wave barriers embedded in an elastic halfspace. First, harmonic vibration sources are considered, and stiffened material is inserted into a design domain situated between the source and the receiver to minimize wave transmission. At low frequencies, the stiffened material reflects and guides waves away from the surface. At high frequencies, destructive interference is obtained that leads to high values of the insertion loss. To handle harmonic sources at a frequency in a given range, a uniform reduction of the response over a frequency range is pursued: the minimal insertion loss over the frequency range of interest is maximized. The resulting design contains features at depth leading to a reduction of the insertion loss at the lowest frequencies, and features close to the surface leading to a reduction at the highest frequencies. For broadband sources, the average insertion loss in a frequency range is optimized. This leads to designs that especially reduce the response at high frequencies. The designs optimized for the frequency-averaged insertion loss are found to be sensitive to geometric imperfections. In order to obtain a robust design, a worst-case approach is followed. 4. Elastic Wave Transmission and Stop Band Characteristics in Unidirectional Composites Nakashima, Kazuhiro; Biwa, Shiro; Matsumoto, Eiji Elastic wave transmission characteristics in unidirectional fiber-reinforced composites are studied based on two-dimensional finite element analysis.
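The stop-band behaviour this entry goes on to describe can be caricatured in 1-D with a transfer-matrix product over the unit cell; the material values in the sketch are illustrative placeholders, not the SiC/Ti parameters of the study:

```python
import numpy as np

# 1-D transfer-matrix analogue of stop bands in a periodic two-layer stack
# between identical half-spaces (acoustic transmission-line ABCD form).

def layer(omega, rho, c, d):
    """ABCD matrix of one layer with density rho, speed c, thickness d."""
    k, z = omega / c, rho * c
    return np.array([[np.cos(k * d), 1j * z * np.sin(k * d)],
                     [1j * np.sin(k * d) / z, np.cos(k * d)]])

rho1, c1, d1 = 3200.0, 9000.0, 0.14e-3     # stiff layer (fiber-like placeholder)
rho2, c2, d2 = 4500.0, 6000.0, 0.11e-3     # compliant layer (matrix-like)
z0 = rho2 * c2                             # impedance of the half-spaces
n_cells = 8

freqs = np.linspace(1e5, 4e7, 2000)
trans = np.empty_like(freqs)
for i, f in enumerate(freqs):
    w = 2 * np.pi * f
    m = np.linalg.matrix_power(layer(w, rho1, c1, d1) @ layer(w, rho2, c2, d2),
                               n_cells)
    s21 = 2.0 / (m[0, 0] + m[0, 1] / z0 + m[1, 0] * z0 + m[1, 1])
    trans[i] = abs(s21)

print(f"minimum transmission: {trans.min():.3e}")   # deep dips mark stop bands
```

Sweeping the frequency shows transmission collapsing in Bragg-like bands, the 1-D counterpart of the stop bands reported for the layered composite.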
The composite is assumed to be a lay-up of a finite number of monolayers, each of which contains a single row of equally spaced fibers. Influences of the stacking number and misalignment of monolayers, as well as the presence of a coating layer around the fibers, on the wave transmission spectra are demonstrated for unidirectional SiC-fiber-reinforced Ti-alloy composites. It is shown that the transmission coefficients fall to low values in certain bands of frequency, i.e., stop bands in terminology analogous to perfectly periodic structures. This feature is found to appear more clearly for transverse wave incidence, irrespective of the misalignment of monolayers. The stiffness reduction of the coating layer is shown to shift the stop bands to lower frequencies, which can be a useful feature for the monitoring of fiber/matrix interfacial damage. 5. Double porosity modeling in elastic wave propagation for reservoir characterization SciTech Connect Berryman, J. G., LLNL 1998-06-01 Phenomenological equations for the poroelastic behavior of a double porosity medium have been formulated and the coefficients in these linear equations identified. The generalization from a single porosity model increases the number of independent coefficients from three to six for an isotropic applied stress. In a quasistatic analysis, the physical interpretations are based upon considerations of extremes in both spatial and temporal scales. The limit of very short times is the one most relevant for wave propagation, and in this case both matrix porosity and fractures behave in an undrained fashion. For the very long times more relevant for reservoir drawdown, the double porosity medium behaves as an equivalent single porosity medium. At the macroscopic spatial level, the pertinent parameters (such as the total compressibility) may be determined by appropriate field tests. At the mesoscopic scale, pertinent parameters of the rock matrix can be determined directly through laboratory measurements on core, and the compressibility can be measured for a single fracture. We show explicitly how to generalize the quasistatic results to incorporate wave propagation effects and how effects that are usually attributed to squirt flow under partially saturated conditions can be explained alternatively in terms of the double-porosity model. The result is therefore a theory that generalizes, but is completely consistent with, Biot's theory of poroelasticity and is valid for analysis of elastic wave data from highly fractured reservoirs. 6. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S. 2014-10-01 Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications, such as recognition of hidden objects, dangerous materials and aerosols, imaging through walls as in hostage situations, and imaging in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors.
Using them, we designed and constructed a focal plane array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves, it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images, since it allows the isolation of the concealed objects from the body and from environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and low-noise amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging objects from distances allowing standoff detection of suspicious objects and humans. 7. A coupled wave-3-D hydrodynamics model of the Taranto Sea (Italy): a multiple-nesting approach Gaeta, Maria Gabriella; Samaras, Achilleas G.; Federico, Ivan; Archetti, Renata; Maicu, Francesco; Lorenzetti, Giuliano 2016-09-01 The present work describes an operational strategy for the development of a multiscale modeling system, based on a multiple-nesting approach and open-source numerical models. The strategy was applied and validated for the Gulf of Taranto in southern Italy, scaling large-scale oceanographic model results to high-resolution coupled wave-3-D hydrodynamics simulations for the area of Mar Grande in the Taranto Sea. The spatially and temporally high-resolution simulations were performed using the open-source TELEMAC suite, forced by wind data from the COSMO-ME database, boundary wave spectra from the RON buoy at Crotone, and results from the Southern Adriatic Northern Ionian coastal Forecasting System (SANIFS) regarding sea levels and current fields. Model validation was carried out using data collected in the Mar Grande basin from a fixed monitoring station and during an oceanographic campaign in October 2014. The overall agreement between measurements and model results in terms of waves, sea levels, surface currents, circulation patterns and vertical velocity profiles is deemed satisfactory, and the methodology followed in the process can constitute a useful tool for both research and operational applications in the same field, as well as support for decision-making in the management and design of infrastructure. 8. Computational modeling of pitching cylinder-type ocean wave energy converters using 3D MPI-parallel simulations Freniere, Cole; Pathak, Ashish; Raessi, Mehdi 2016-11-01 Ocean Wave Energy Converters (WECs) are devices that convert energy from ocean waves into electricity. To aid in the design of WECs, an advanced computational framework has been developed which has advantages over conventional methods. The computational framework simulates the performance of WECs in a virtual wave tank by solving the full Navier-Stokes equations in 3D, capturing the fluid-structure interaction, nonlinear and viscous effects. In this work, we present simulations of the performance of pitching cylinder-type WECs and compare against experimental data. WECs are simulated at both model and full scales.
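The model-scale versus full-scale comparison turns on Froude similitude and the Keulegan-Carpenter (KC) number discussed next. A small bookkeeping sketch (all numbers are invented placeholders) showing that Froude scaling preserves KC but not the Reynolds number, which is why viscous-drag effects need separate scrutiny:

```python
# Froude-similitude factors and the two dimensionless numbers of interest.

NU = 1.0e-6                     # kinematic viscosity of water [m^2/s]

def kc(u, t, d):                # Keulegan-Carpenter number, KC = U*T/D
    return u * t / d

def reynolds(u, d):             # Reynolds number, Re = U*D/nu
    return u * d / NU

lam = 1.0 / 20.0                # model:full geometric scale (assumption)
u_f, t_f, d_f = 2.0, 8.0, 2.0   # full-scale velocity [m/s], period [s], diameter [m]
u_m, t_m, d_m = u_f * lam**0.5, t_f * lam**0.5, d_f * lam   # Froude-scaled model

print(f"KC: full {kc(u_f, t_f, d_f):.2f}, model {kc(u_m, t_m, d_m):.2f}")  # equal
print(f"Re: full {reynolds(u_f, d_f):.2e}, model {reynolds(u_m, d_m):.2e}")
```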
The results are used to determine the role of the Keulegan-Carpenter (KC) number. The KC number is representative of viscous drag behavior on a bluff body in an oscillating flow, and is considered an important indicator of the dynamics of a WEC. Studying the effects of the KC number is important for determining the validity of Froude scaling and inviscid potential flow theory, which are heavily relied on in conventional approaches to modeling WECs. Support from the National Science Foundation is gratefully acknowledged. 9. Elastic waves push organic fluids from reservoir rock Beresnev, Igor A.; Vigil, R. Dennis; Li, Wenqing; Pennington, Wayne D.; Turpening, Roger M.; Iassonov, Pavel P.; Ewing, Robert P. 2005-07-01 Elastic waves have been observed to increase the productivity of oil wells, although the reason for the vibratory mobilization of the residual organic fluids has remained unclear. Residual oil is entrapped as ganglia in pore constrictions because of resisting capillary forces. An external pressure gradient exceeding an "unplugging" threshold is needed to carry the ganglia through. The vibrations help overcome this resistance by adding an oscillatory inertial forcing to the external gradient; when the vibratory forcing acts along the gradient and the threshold is exceeded, instant "unplugging" occurs. The mobilization effect is proportional to the amplitude and inversely proportional to the frequency of vibrations. We observe this dependence in a laboratory experiment, in which residual saturation is created in a glass micromodel and mobilization of the dyed organic ganglia is monitored using digital photography. We also directly demonstrate the release of an entrapped ganglion by vibrations in a computational fluid-dynamics simulation. 10. Modeling and validation of a 3D velocity structure for the Santa Clara Valley, California, for seismic-wave simulations USGS Publications Warehouse Hartzell, S.; Harmsen, S.; Williams, R.A.; Carver, D.; Frankel, A.; Choy, G.; Liu, P.-C.; Jachens, R.C.; Brocher, T.M.; Wentworth, C.M. 2006-01-01 A 3D seismic velocity and attenuation model is developed for Santa Clara Valley, California, and its surrounding uplands to predict ground motions from scenario earthquakes. The model is developed using a variety of geologic and geophysical data. Our starting point is a 3D geologic model developed primarily from geologic mapping and gravity and magnetic surveys. An initial velocity model is constructed by using seismic velocities from boreholes, reflection/refraction lines, and spatial autocorrelation microtremor surveys. This model is further refined, and the seismic attenuation estimated, through waveform modeling of weak motions from small local events and strong ground motion from the 1989 Loma Prieta earthquake. Waveforms are calculated to an upper frequency of 1 Hz using a parallelized finite-difference code that utilizes two regions with a factor of 3 difference in grid spacing to reduce memory requirements. Cenozoic basins trap and strongly amplify ground motions. This effect is particularly strong in the Evergreen Basin on the northeastern side of the Santa Clara Valley, where the steeply dipping Silver Creek fault forms the southwestern boundary of the basin. In comparison, the Cupertino Basin on the southwestern side of the valley has a more moderate response, which is attributed to the greater age and velocity of the Cenozoic fill.
Surface waves play a major role in the ground motion of sedimentary basins, and they are seen to strongly develop along the western margins of the Santa Clara Valley for our simulation of the Loma Prieta earthquake. 11. Modeling ionospheric disturbance features in quasi-vertically incident ionograms using 3-D magnetoionic ray tracing and atmospheric gravity waves Cervera, M. A.; Harris, T. J. 2014-01-01 The Defence Science and Technology Organisation (DSTO) has initiated an experimental program, Spatial Ionospheric Correlation Experiment, utilizing state-of-the-art DSTO-designed high frequency digital receivers. This program seeks to understand ionospheric disturbances at scales < 150 km and temporal resolutions under 1 min through the simultaneous observation and recording of multiple quasi-vertical ionograms (QVI) with closely spaced ionospheric control points. A detailed description of and results from the first campaign conducted in February 2008 were presented by Harris et al. (2012). In this paper we employ a 3-D magnetoionic Hamiltonian ray tracing engine, developed by DSTO, to (1) model the various disturbance features observed on both the O and X polarization modes in our QVI data and (2) understand how they are produced. The ionospheric disturbances which produce the observed features were modeled by perturbing the ionosphere with atmospheric gravity waves. 12. Dynamic diffraction-limited light-coupling of 3D-maneuvered wave-guided optical waveguides. PubMed Villangca, Mark; Bañas, Andrew; Palima, Darwin; Glückstad, Jesper 2014-07-28 We have previously proposed and demonstrated the targeted-light delivery capability of wave-guided optical waveguides (WOWs). As the WOWs are maneuvered in 3D space, it is important to maintain efficient light coupling through the waveguides within their operating volume. We propose the use of dynamic diffractive techniques to create diffraction-limited spots that will track and couple to the WOWs during operation. This is done by using a spatial light modulator to encode the necessary diffractive phase patterns to generate the multiple and dynamic coupling spots. The method is initially tested for a single WOW and we have experimentally demonstrated dynamic tracking and coupling for both lateral and axial displacements. 13. A staggered-grid convolutional differentiator for elastic wave modelling Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun 2015-11-01 The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. 
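The effect of tapering a truncated band-limited differentiator, the core of the convolutional-differentiator idea in entry 13 above, can be reproduced in a few lines. The sketch uses the collocated (non-staggered) first-derivative stencil, whose exact untruncated coefficients are (-1)^(m+1)/(mh), rather than the staggered operator of the entry; the Hann (Hanning-type) taper suppresses the Gibbs oscillations of plain truncation at the cost of some bandwidth:

```python
import numpy as np

# Spectral accuracy of a truncated band-limited differentiator, raw vs tapered.
# Applying the stencil to exp(ikx) gives the effective wavenumber
#   k_eff(k) = 2 * sum_m c_m * w_m * sin(m*k*h),
# to be compared with the exact value k.

h, M = 1.0, 8                                # grid spacing, stencil half-width
m = np.arange(1, M + 1)
c = (-1.0) ** (m + 1) / (m * h)              # ideal band-limited coefficients
hann = 0.5 * (1.0 + np.cos(np.pi * m / (M + 1)))   # Hann-type taper window

k = np.linspace(0.01, 0.9 * np.pi, 400) / h  # wavenumbers up to 0.9 * Nyquist

def keff(weights):
    return 2.0 * np.sum(weights[:, None] * np.sin(np.outer(m, k) * h), axis=0)

err_raw = np.max(np.abs(keff(c) / k - 1.0))
err_win = np.max(np.abs(keff(c * hann) / k - 1.0))
print(f"max relative wavenumber error, truncated   : {err_raw:.3e}")
print(f"max relative wavenumber error, Hann-tapered: {err_win:.3e}")
```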
We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th-order staggered-grid CD operator can achieve the same accuracy as a 16th-order staggered-grid FD algorithm, but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagation. 14. Bulk elastic waves with unidirectional backscattering-immune topological states in a time-dependent superlattice Swinteck, N.; Matsuo, S.; Runge, K.; Vasseur, J. O.; Lucas, P.; Deymier, P. A. 2015-08-01 Recent progress in electronic and electromagnetic topological insulators has led to the demonstration of one-way propagation of electron and photon edge states and the possibility of immunity to backscattering by edge defects. Unfortunately, such topologically protected propagation of waves in the bulk of a material has not been observed. We show, in the case of sound/elastic waves, that bulk waves with unidirectional backscattering-immune topological states can be observed in a time-dependent elastic superlattice. The superlattice is realized via spatial and temporal modulation of the stiffness of an elastic material. Bulk elastic waves in this superlattice are supported by a manifold in momentum space with the topology of a single-twist Möbius strip. Our results demonstrate the possibility of attaining one-way transport and immunity to scattering of bulk elastic waves. 16. Nonlinear elastic wave tomography for the imaging of corrosion damage. PubMed Ciampa, Francesco; Scarselli, Gennaro; Pickering, Simon; Meo, M 2015-09-01 This paper presents a nonlinear elastic wave tomography method, based on ultrasonic guided waves, for the imaging of nonlinear signatures in the dynamic response of a damaged isotropic structure. The proposed technique relies on a combination of high-order statistics and a radial basis function approach.
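The radial-basis-function imaging step just mentioned can be sketched with SciPy's RBFInterpolator; the sensor coordinates and damage-index values below are invented placeholders (in the study the index derives from the bicoherence of measured waveforms):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Interpolate a sparse nonlinear damage index over a 1 m x 1 m panel.

sensors = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.5],
                    [0.1, 0.9], [0.9, 0.9], [0.3, 0.7]])   # receiver positions [m]
index = np.array([0.02, 0.05, 0.61, 0.04, 0.07, 0.18])     # placeholder indices

rbf = RBFInterpolator(sensors, index, kernel="thin_plate_spline")

gx, gy = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
grid = np.column_stack([gx.ravel(), gy.ravel()])
damage_map = rbf(grid).reshape(gx.shape)      # peaks localize the damage

print("estimated damage location:", grid[np.argmax(damage_map)])
```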
The bicoherence of ultrasonic waveforms generated by a harmonic excitation was used to characterise the second-order nonlinear signature contained in the measured signals due to the presence of surface corrosion. Then, a radial basis function interpolation was employed to achieve an effective visualisation of the damage over the panel using only a limited number of receiver sensors. The robustness of the proposed nonlinear imaging method was experimentally demonstrated on a damaged 2024 aluminium panel, and the nonlinear source location was detected with a high level of accuracy, even with few receiving elements. Compared to five standard ultrasonic imaging methods, this nonlinear tomography technique does not require any baseline with the undamaged structure for the evaluation of the corrosion damage, nor a priori knowledge of the mechanical properties of the specimen. 17. Longitudinal elastic wave propagation characteristics of inertant acoustic metamaterials Kulkarni, Prateek P.; Manimala, James M. 2016-06-01 Longitudinal elastic wave propagation characteristics of acoustic metamaterials with various inerter configurations are investigated using their representative one-dimensional discrete-element lattice models. Inerters are dynamic mass-amplifying mechanical elements that are activated by a difference in acceleration across them. They have a small device mass but can provide a relatively large dynamic mass presence, depending on accelerations in the systems that employ them. The effect of introducing inerters both in local attachments and in the lattice was examined vis-à-vis the propagation characteristics of locally resonant acoustic metamaterials. A simple effective model based on mass, stiffness, or their combined equivalent was used to establish dispersion behavior and quantify attenuation within bandgaps. Depending on inerter configurations in local attachments or in the lattice, both an up-shift and a down-shift in the bandgap frequency range and extent are shown to be possible while keeping the static mass added to the host structure to a minimum. Further, frequency-dependent negative and even extreme effective-stiffness regimes are encountered. The feasibility of employing tuned combinations of such mass-delimited inertant configurations to engineer acoustic metamaterials that act as high-pass filters, without the use of grounded elements, or even as complete longitudinal wave inhibitors is shown. Potential device implications and strategies for practical applications are also discussed. 18. Analogy between a 10D model for nonlinear wave-wave interaction in a plasma and the 3D Lorenz dynamics Letellier, C.; Aguirre, L. A.; Maquet, J.; Lefebvre, B. 2003-05-01 This paper investigates nonlinear wave-wave interactions in a system that describes a modified decay instability and consists of three Langmuir waves and one ion-sound wave. As a means to establish that the underlying dynamics exists in a 3D space and that it is of the Lorenz type, both continuous and discrete-time multivariable global models were obtained from data. These data were obtained from a 10D dynamical system that describes the modified decay instability, obtained from Zakharov's equations which characterise Langmuir turbulence. This 10D model is equivariant under a continuous rotation symmetry and a discrete order-2 rotation symmetry.
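The benchmark against which the authors compare is the classical 3-D Lorenz system; for reference, a minimal integration with the canonical parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# The Lorenz system with the classical parameters (sigma, rho, beta) =
# (10, 28, 8/3); entry 18 argues the reduced wave-interaction dynamics is
# topologically equivalent to the attractor this generates.

def lorenz(t, xyz, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = xyz
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0],
                dense_output=True, rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])   # a point near the Lorenz attractor
```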
When the continuous rotation symmetry is modded out, that is, when the dynamics are represented with the continuous rotation symmetry removed under a local diffeomorphism, it is shown that a 3D system may describe the underlying dynamics. For certain parameter values, the models, obtained using global modelling techniques from three time series from the 10D dynamics with the continuous rotation symmetry modded out, generate attractors which are topologically equivalent. These models can be simulated easily and, due to their simplicity, are amenable for analysis of the original dynamics after symmetries have been modded out. Moreover, it is shown that all of these attractors are topologically equivalent to an attractor generated by the well-known Lorenz system. 19. Elastic parabolic equation solutions for oceanic T-wave generation and propagation from deep seismic sources. PubMed Frank, Scott D; Collis, Jon M; Odom, Robert I 2015-06-01 Oceanic T-waves are earthquake signals that originate when elastic waves interact with the fluid-elastic interface at the ocean bottom and are converted to acoustic waves in the ocean. These waves propagate long distances in the Sound Fixing and Ranging (SOFAR) channel and tend to be the largest observed arrivals from seismic events. Thus, an understanding of their generation is important for event detection, localization, and source-type discrimination. Recently benchmarked seismic self-starting fields are used to generate elastic parabolic equation solutions that demonstrate generation and propagation of oceanic T-waves in range-dependent underwater acoustic environments. Both downward sloping and abyssal ocean range-dependent environments are considered, and results demonstrate conversion of elastic waves into water-borne oceanic T-waves. Examples demonstrating long-range broadband T-wave propagation in range-dependent environments are shown. These results confirm that elastic parabolic equation solutions are valuable for characterization of the relationships between T-wave propagation and variations in range-dependent bathymetry or elastic material parameters, as well as for modeling T-wave receptions at hydrophone arrays or coastal receiving stations. 20. 3-D finite-difference, finite-element, discontinuous-Galerkin and spectral-element schemes analysed for their accuracy with respect to P-wave to S-wave speed ratio Moczo, Peter; Kristek, Jozef; Galis, Martin; Chaljub, Emmanuel; Etienne, Vincent 2011-12-01 We analyse 13 3-D numerical time-domain explicit schemes for modelling seismic wave propagation and earthquake motion for their behaviour with a varying P-wave to S-wave speed ratio (VP/VS). The second-order schemes include three finite-difference, three finite-element and one discontinuous-Galerkin schemes. The fourth-order schemes include three finite-difference and two spectral-element schemes. All schemes are second-order in time. We assume a uniform cubic grid/mesh and present all schemes in a unified form. We assume plane S-wave propagation in an unbounded homogeneous isotropic elastic medium. We define relative local errors of the schemes in amplitude and the vector difference in one time step and normalize them for a unit time. We also define the equivalent spatial sampling ratio as a ratio at which the maximum relative error is equal to the reference maximum error. We present results of the extensive numerical analysis. 
We theoretically (i) show how a numerical scheme sees the P and S waves as the VP/VS ratio increases, (ii) show the structure of the errors in amplitude and the vector difference and (iii) compare the schemes in terms of the truncation errors of the discrete approximations to the second mixed and non-mixed spatial derivatives. We find that four of the tested schemes have errors in amplitude almost independent of the VP/VS ratio. The homogeneity of the approximations to the second mixed and non-mixed spatial derivatives, in terms of the coefficients of the leading terms of their truncation errors as well as the absolute values of the coefficients, is a key factor for the behaviour of the schemes with increasing VP/VS ratio. The dependence of the errors in the vector difference on the VP/VS ratio should be accounted for by a proper (sufficiently dense) spatial sampling. 1. Electromechanical wave imaging (EWI) validation in all four cardiac chambers with 3D electroanatomic mapping in canines in vivo Costet, Alexandre; Wan, Elaine; Bunting, Ethan; Grondin, Julien; Garan, Hasan; Konofagou, Elisa 2016-11-01 Characterization and mapping of arrhythmias is currently performed through invasive insertion and manipulation of cardiac catheters. Electromechanical wave imaging (EWI) is a non-invasive ultrasound-based imaging technique which tracks the electromechanical activation that immediately follows electrical activation. Electrical and electromechanical activations were previously found to be linearly correlated in the left ventricle, but the relationship had not yet been investigated in the three other chambers of the heart. The objective of this study was to investigate the relationship between electrical and electromechanical activations and to validate EWI in all four chambers of the heart against conventional 3D electroanatomical mapping. Six (n = 6) normal adult canines were used in this study. The electrical activation sequence was mapped in all four chambers of the heart, both endocardially and epicardially, using the St. Jude EnSite 3D mapping system (St. Jude Medical, Secaucus, NJ). EWI acquisitions were performed in all four chambers during normal sinus rhythm and during pacing in the left ventricle. Isochrones of the electromechanical activation were generated from standard echocardiographic imaging views. Electrical and electromechanical activation maps were co-registered and compared, electrical and electromechanical activation times were plotted against each other, and linear regression was performed for each pair of activation maps. Electromechanical and electrical activations were found to be directly correlated, with slopes of the correlation ranging from 0.77 to 1.83, electromechanical delays between 9 and 58 ms, and R² values from 0.71 to 0.92. The linear correlation between electrical and electromechanical activations and the agreement between the activation maps indicate that the electromechanical activation follows the pattern of propagation of the electrical activation. This suggests that EWI may be used as a novel non-invasive method 2. Self-Propagating Combustion Triggered Synthesis of 3D Lamellar Graphene/BaFe12O19 Composite and Its Electromagnetic Wave Absorption Properties PubMed Central Zhao, Tingkai; Ji, Xianglin; Jin, Wenbo; Yang, Wenbo; Peng, Xiarong; Duan, Shichang; Dang, Alei; Li, Hao; Li, Tiehu 2017-01-01 The synthesis of 3D lamellar graphene/BaFe12O19 composites was performed by oxidizing graphite followed by a self-propagating combustion-triggered process.
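For context on the reflection-loss figures quoted below: such values are conventionally computed from the measured complex permittivity and permeability with the metal-backed single-layer transmission-line model. A sketch with invented material parameters, not measurements of this composite:

```python
import numpy as np

# Normalized input impedance of a metal-backed absorber layer of thickness d:
#   Z_in = sqrt(mu_r/eps_r) * tanh(j * 2*pi*f*d/c * sqrt(mu_r*eps_r)),
# and reflection loss RL(dB) = 20*log10(|(Z_in - 1)/(Z_in + 1)|).

C = 3.0e8                                    # speed of light [m/s]

def reflection_loss_db(f, d, eps_r, mu_r):
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(1j * 2 * np.pi * f * d / C
                                           * np.sqrt(mu_r * eps_r))
    return 20.0 * np.log10(np.abs((z_in - 1.0) / (z_in + 1.0)))

f = np.linspace(2e9, 18e9, 801)              # 2-18 GHz sweep
eps_r = 7.5 - 2.1j                           # placeholder complex permittivity
mu_r = 1.1 - 0.35j                           # placeholder complex permeability
rl = reflection_loss_db(f, d=2.0e-3, eps_r=eps_r, mu_r=mu_r)

print(f"peak RL: {rl.min():.1f} dB at {f[np.argmin(rl)] / 1e9:.2f} GHz")
```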
The 3D lamellar graphene structures were formed through the synergistic effect of heat-induced gasification and huge volume expansion. The 3D lamellar graphene/BaFe12O19 composites bearing 30 wt % graphene present a reflection-loss peak of −27.23 dB and a frequency bandwidth of 2.28 GHz below −10 dB. The 3D lamellar graphene structures can consume the incident waves through multiple reflection and scattering within the layered structures, prolonging the propagation path of electromagnetic waves in the absorbers. PMID:28336889 3. Modeling elastic wave propagation in kidney stones with application to shock wave lithotripsy Cleveland, Robin O.; Sapozhnikov, Oleg A. 2005-10-01 A time-domain finite-difference solution to the equations of linear elasticity was used to model the propagation of lithotripsy waves in kidney stones. The model was used to determine the loading on the stone (principal stresses and strains and maximum shear stresses and strains) due to the impact of lithotripsy shock waves. The simulations show that the peak loading induced in kidney stones is generated by constructive interference from shear waves launched from the outer edge of the stone with other waves in the stone. Notably, the shear-wave-induced loads were significantly larger than the loads generated by the classic Hopkinson or spall effect. For simulations where the diameter of the focal spot of the lithotripter was smaller than that of the stone, the loading decreased by more than 50%. The constructive interference was also sensitive to the shock rise time: the peak tensile stress reduced by 30% as the rise time increased from 25 to 150 ns. These results demonstrate that shear waves likely play a critical role in stone comminution and that lithotripters with large focal widths and short rise times should be effective at generating high stresses inside kidney stones. 4. Nonlinear elastic wave NDE I: nonlinear resonant ultrasound spectroscopy (NRUS) and slow dynamics diagnostics (SDD) SciTech Connect Johnson, Paul; Sutin, A. 2004-01-01 The nonlinear elastic response of materials (e.g., wave mixing, harmonic generation) is much more sensitive to the presence of damage than the linear response (e.g., wavespeed, dissipation). An overview of the four primary Nonlinear Elastic Wave Spectroscopy (NEWS) methods used in nonlinear damage detection is presented in this and the following paper. Those presented in this paper are Nonlinear Resonant Ultrasound Spectroscopy (NRUS), based on measurement of the nonlinear response of one or more resonant modes in a test sample, and Slow Dynamics Diagnostics (SDD), manifested as an alteration in the material dissipation and elastic modulus after application of a relatively high-amplitude wave, which slowly recovers in time. 5. An Approximate Method for Analysis of Solitary Waves in Nonlinear Elastic Materials Rushchitsky, J. J.; Yurchuk, V. N. 2016-05-01 Two types of solitary elastic waves are considered: a longitudinal plane displacement wave (longitudinal displacements along the abscissa axis of a Cartesian coordinate system) and a radial cylindrical displacement wave (displacements in the radial direction of a cylindrical coordinate system). The basic innovation is the use of nonlinear wave equations similar in form to describe these waves and the use of the same approximate method to analyze these equations.
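Several entries above rest on time-domain velocity-stress finite differences (e.g., the lithotripsy model of entry 3). A minimal 1-D staggered-grid sketch with placeholder material values; production codes use the higher-order or optimized stencils discussed elsewhere in this list:

```python
import numpy as np

# 1-D velocity-stress staggered-grid leapfrog:
#   ds/dt = mu * dv/dx,   dv/dt = (1/rho) * ds/dx,
# with stress nodes offset half a cell from velocity nodes.

nx, nt = 400, 800
dx = 1.0e-3                        # grid spacing [m]
rho = np.full(nx, 1000.0)          # density [kg/m^3] (placeholder)
mu_ = np.full(nx, 4.0e9)           # elastic modulus [Pa] (placeholder)
c = np.sqrt(mu_ / rho)             # wave speed, 2000 m/s here
dt = 0.4 * dx / c.max()            # CFL-limited time step

v = np.zeros(nx)                   # particle velocity
s = np.zeros(nx)                   # stress, staggered half a cell from v
v[nx // 2] = 1.0                   # impulsive excitation at the center

for _ in range(nt):
    s[:-1] += dt * mu_[:-1] * (v[1:] - v[:-1]) / dx
    v[1:] += dt / rho[1:] * (s[1:] - s[:-1]) / dx

print(f"wave speed ~ {c[0]:.0f} m/s; max |v| = {np.abs(v).max():.3e}")
```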
The distortion of the wave profile, described by Whittaker (plane wave) or Macdonald (cylindrical wave) functions, is described theoretically. 6. Elastic-Anelastic properties beneath the Aegean inferred from long period Rayleigh Waves Kassaras, I.; Louis, F.; Makropoulos, K.; Kaviris, G. 2007-12-01 This work contributes to a better knowledge of the deep structure of the Aegean by introducing experimental elastic and anelastic parameters via the study of long-period Rayleigh waves. For this purpose, path-average phase velocities and attenuation coefficients of fundamental Rayleigh waves crossing the Aegean were extracted over the period range 10-100 s. It is worth noting that this is the first time that anelastic parameters of the long-period wavefield have been determined for the region. The wavetrains were recorded at the broadband stations installed some years ago in the Aegean region for the SEISFAULTGREECE project. A stochastic inversion algorithm was used to derive 36 path-average models of shear velocity and 19 path-average models of inverse shear Q down to 200 km. Averaged over the study region, shear Q values at depths from 0 to 200 km are 29±13. The observed low shear Q likely indicates that fluids reside at lower-crustal as well as upper-mantle depths. Furthermore, the elastic and anelastic 1-D path-average models were combined in a continuous regionalization tomographic scheme to obtain a 3-D model of shear velocity variation down to 200 km and a 3-D model of inverse shear Q variation down to 120 km. The most prominent features in the tomograms are: a) a low shear-velocity zone in the back-arc region, especially in the central and north Aegean; this region is located south of the North Aegean Trough (the western edge of the North Anatolian Fault) and correlates well with the derived anelastic tomograms, which present high attenuation in this area; b) a high-velocity/low-attenuation zone in the south Aegean indicating the subducted African lithosphere beneath the Aegean. The zone in the central and north Aegean characterized by low velocities/high attenuation is compatible with a region of high extensional strain rates, recent volcanism and high heat flow. These observations suggest a hot or perhaps partially molten upper mantle and 7. Optimal fourth-order staggered-grid finite-difference scheme for 3D frequency-domain viscoelastic wave modeling Li, Y.; Han, B.; Métivier, L.; Brossier, R. 2016-09-01 We investigate an optimal fourth-order staggered-grid finite-difference scheme for 3D frequency-domain viscoelastic wave modeling. An anti-lumped-mass strategy is incorporated to minimize the numerical dispersion. The optimal finite-difference coefficients and the mass-weighting coefficients are obtained by minimizing the misfit between the normalized phase velocities and unity. An iterative damped least-squares method, the Levenberg-Marquardt algorithm, is utilized for the optimization. Dispersion analysis shows that the optimal fourth-order scheme presents less grid dispersion and anisotropy than the conventional fourth-order scheme with respect to different Poisson's ratios. Moreover, only 3.7 grid points per minimum shear wavelength are required to keep the error of the group velocities below 1%. The memory cost is then greatly reduced due to the coarser sampling. A parallel iterative method named CARP-CG is used to solve the large ill-conditioned linear system for the frequency-domain modeling.
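The coefficient-optimization idea of entry 7 can be illustrated in 1-D: choose fourth-order staggered-grid coefficients so the normalized numerical phase velocity stays near unity over a target wavenumber band, via a Levenberg-Marquardt least-squares fit. The band limits below are assumptions, and the entry applies this to the full 3-D viscoelastic operator with mass weighting:

```python
import numpy as np
from scipy.optimize import least_squares

# Fourth-order staggered first derivative: D(k) = (2/h) * (a1*sin(kh/2)
# + a2*sin(3kh/2)). Fit (a1, a2) so D(k)/k ~ 1 over the band of interest.

h = 1.0
k = np.linspace(0.05, 0.8, 200) * np.pi / h      # target band (assumption)

def residuals(a):
    a1, a2 = a
    d = (2.0 / h) * (a1 * np.sin(k * h / 2) + a2 * np.sin(3 * k * h / 2))
    return d / k - 1.0                           # normalized velocity error

taylor = np.array([9.0 / 8.0, -1.0 / 24.0])      # conventional 4th-order values
fit = least_squares(residuals, taylor, method="lm")   # Levenberg-Marquardt

print("optimized coefficients:", fit.x)
print(f"max |error| conventional: {np.max(np.abs(residuals(taylor))):.2e}")
print(f"max |error| optimized   : {np.max(np.abs(residuals(fit.x))):.2e}")
```

The optimized pair trades a little low-wavenumber accuracy for a much flatter error across the band, which is what permits the coarse 3.7-points-per-wavelength sampling quoted above.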
Validations are conducted with respect to both analytic viscoacoustic and viscoelastic solutions. Compared with the conventional fourth-order scheme, the optimal scheme generates wavefields with smaller errors under the same discretization setups. Profiles of the wavefields are presented to confirm the better agreement between the optimal results and the analytic solutions. 8. Triboelectric nanogenerator built on suspended 3D spiral structure as vibration and positioning sensor and wave energy harvester. PubMed Hu, Youfan; Yang, Jin; Jing, Qingshen; Niu, Simiao; Wu, Wenzhuo; Wang, Zhong Lin 2013-11-26 An unstable mechanical structure that can self-balance when perturbed is a superior choice for vibration energy harvesting and vibration detection. In this work, a suspended 3D spiral structure is integrated with a triboelectric nanogenerator (TENG) for energy harvesting and sensor applications. The newly designed vertical contact-separation mode TENG has a wide working bandwidth of 30 Hz in the low-frequency range, with a maximum output power density of 2.76 W/m² on a load of 6 MΩ. The position of an in-plane vibration source was identified by placing TENGs at multiple positions as multichannel, self-powered active sensors, and the location of the vibration source was determined with an error of less than 6%. The magnitude of the vibration is also measured from the output voltage and current signals of the TENG. By integrating the TENG inside a buoy ball, wave energy harvesting at the water surface has been demonstrated and used to power illumination lighting, which shows great potential for applications in marine science and environmental/infrastructure monitoring. 9. Time-stepping stability of continuous and discontinuous finite-element methods for 3-D wave propagation Mulder, W. A.; Zhebel, E.; Minisini, S. 2014-02-01 We analyse the time-stepping stability for the 3-D acoustic wave equation, discretized on tetrahedral meshes. Two types of methods are considered: mass-lumped continuous finite elements and the symmetric interior-penalty discontinuous Galerkin method. Combining the spatial discretization with the leap-frog time-stepping scheme, which is second-order accurate and conditionally stable, leads to a fully explicit scheme. We provide estimates of its stability limit for simple cases, namely, the reference element with Neumann boundary conditions, its distorted version of arbitrary shape, the unit cube that can be partitioned into six tetrahedra with periodic boundary conditions, and its distortions. The Courant-Friedrichs-Lewy stability limit contains an element diameter, for which we considered different options. The one based on the sum of the eigenvalues of the spatial operator for the first-degree mass-lumped element gives the best results. It resembles the diameter of the inscribed sphere but is slightly easier to compute. The stability estimates show that the mass-lumped continuous and the discontinuous Galerkin finite elements of degree 2 have comparable stability conditions, whereas the mass-lumped elements of degree one and three allow for larger time steps. 10. Airborne & SAR Synergy Reveals the 3D Structure of Air Bubble Entrainment in Internal Waves and Frontal Zones da Silva, J. C. B.; Magalhaes, J. M.; Batista, M.; Gostiaux, L.; Gerkema, T.; New, A. L. 2013-03-01 spectral range 8-12 μm. With a nominal ground resolution of approximately 1.5 meters (at an altitude of 500 meters), it is capable of detecting fine structure associated with turbulence.
The LiDAR system that has been used is the Leica ALS50-II (1064 nm), with a hit rate greater than 1 hit per square meter and a vertical resolution of approximately 15 cm. Both systems were available simultaneously, together with the hyperspectral system and the RCD105 39 Mpx digital camera, integrated with the LiDAR navigation system. We analyse the airborne data together with a comprehensive dataset of satellite Synthetic Aperture Radar (SAR) imagery that includes ENVISAT and TerraSAR-X images. In addition, in situ observations in the near-shore zone were obtained in a previous experiment (Project SPOTIWAVE-II POCI/MAR/57836/2004, funded by the Portuguese FCT) during the summer period in 2006. These included thermistor-chain measurements along the water column that captured the vertical structure of shoaling internal (tidal) waves and ISWs close to the breaking point. The SAR and airborne images were obtained in light wind conditions, in the near-shore zone, and in the presence of ISWs. The LiDAR images revealed sub-surface structures (some 1-2 m below the sea surface) that were co-located with surface films. These film slicks were induced by the convergent fields of internal waves and upwelling fronts. Some of the sub-surface features were located over the front slopes of the internal waves, which coincides with the internal-wave slick band visible in the aerial photos and hyperspectral imagery. Our flight measurements revealed thermal features similar to "boils" of cold water within the wake of (admittedly breaking) internal waves. These features are consistent with the previous in situ measurements of breaking ISWs. In this paper we show coincident multi-sensor airborne and satellite SAR observations that reveal the 3D structure of air bubble entrainment in the internal wave field and frontal 11. Elastic-wave velocity in marine sediments with gas hydrates: Effective medium modeling USGS Publications Warehouse Helgerud, M.B.; Dvorkin, J.; Nur, A.; Sakai, A.; Collett, T. 1999-01-01 We offer a first-principle-based effective medium model for elastic-wave velocity in unconsolidated, high-porosity, ocean-bottom sediments containing gas hydrate. The dry sediment frame elastic constants depend on porosity, elastic moduli of the solid phase, and effective pressure. Elastic moduli of saturated sediment are calculated from those of the dry frame using Gassmann's equation. To model the effect of gas hydrate on sediment elastic moduli we use two separate assumptions: (a) hydrate modifies the pore fluid elastic properties without affecting the frame; (b) hydrate becomes a component of the solid phase, modifying the elasticity of the frame. The goal of the modeling is to predict the amount of hydrate in sediments from sonic or seismic velocity data. We apply the model to sonic and VSP data from ODP Hole 995 and obtain hydrate concentration estimates from assumption (b) consistent with estimates obtained from resistivity, chlorinity and evolved gas data. Copyright 1999 by the American Geophysical Union. 12. Evaluation of Compressive Strength and Stiffness of Grouted Soils by Using Elastic Waves PubMed Central Lee, In-Mo; Kim, Jong-Sun; Yoon, Hyung-Koo; Lee, Jong-Sub 2014-01-01 Cement-grouted soils, which consist of particulate soil media and cementation agents, have been widely used for the improvement of the strength and stiffness of weak ground and for the prevention of ground-water leakage.
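The nondestructive stiffness estimates discussed in this entry rest on standard isotropic-elasticity relations between small-strain (dynamic) moduli and measured wave velocities; a sketch with placeholder inputs for a grouted soil:

```python
# Dynamic elastic constants from measured P- and S-wave velocities and bulk
# density, using the standard isotropic relations:
#   nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2)),  G = rho * Vs^2,  E = 2G(1 + nu).

def dynamic_moduli(vp, vs, rho):
    """Return (Poisson ratio, shear modulus G [Pa], Young's modulus E [Pa])."""
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))
    g = rho * vs**2
    return nu, g, 2.0 * g * (1.0 + nu)

# Placeholder measurements: Vp, Vs in m/s, density in kg/m^3.
nu, g, e = dynamic_moduli(vp=1800.0, vs=900.0, rho=1900.0)
print(f"nu = {nu:.3f}, G = {g / 1e9:.2f} GPa, E = {e / 1e9:.2f} GPa")
```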
12. Evaluation of Compressive Strength and Stiffness of Grouted Soils by Using Elastic Waves PubMed Central Lee, In-Mo; Kim, Jong-Sun; Yoon, Hyung-Koo; Lee, Jong-Sub 2014-01-01 Cement grouted soils, which consist of particulate soil media and cementation agents, have been widely used for the improvement of the strength and stiffness of weak ground and for the prevention of the leakage of ground water. The strength, elastic modulus, and Poisson's ratio of grouted soils have been determined by classical destructive methods. However, the performance of grouted soils depends on several parameters, such as the particle size distribution of the particulate soil media, grouting pressure, curing time, curing method, and ground water flow. In this study, elastic wave velocities are used to estimate the strength and elastic modulus, which are generally obtained by classical strength tests. Nondestructive tests using elastic waves at small strain are conducted before and during classical strength tests at large strain. The test results are compared to identify correlations between the elastic wave velocity measured at small strain and the strength and stiffness measured at large strain. The test results show that the strength and stiffness have an exponential relationship with the elastic wave velocities. This study demonstrates that nondestructive methods using elastic waves may significantly improve the strength and stiffness evaluation processes for grouted soils. PMID:25025082 13. Elastic Waves Push Residual Organic Fluids From Saturated Rock Beresnev, I. A.; Vigil, R. D.; Li, W. 2004-12-01 With world oil reserves dwindling and production shifting to increasingly forbidding environments, the emphasis is greater than ever on the more efficient extraction of the existing oil. Yet typically up to two-thirds of the U.S. domestic oil is abandoned underground. Elastic waves have been observed to increase the productivity of oil wells, although the reason why vibratory motion mobilizes the residual organic fluids has remained unclear. Residual oil is entrapped as blobs or ganglia in narrow pore constrictions due to the resisting capillary forces that prevent free motion of non-wetting fluids driven by water. A finite external pressure gradient, exceeding an "unplugging" threshold, is needed to carry the residual ganglia through. We show that vibrations help overcome the resistance of capillary forces by adding an oscillatory inertial forcing to the external gradient; when the vibratory forcing acts along the gradient and the threshold is exceeded, instant "unplugging" occurs. This mechanism predicts the mobilization effect to be proportional to the amplitude and inversely proportional to the frequency of vibrations. We observe this dependence in a laboratory experiment, in which residual saturation of an organic fluid is created in a glass micromodel, and mobilization of the dyed ganglia is monitored using digital photography. We also directly demonstrate the release of an entrapped ganglion from a pore constriction by the application of vibrations in a computational fluid-dynamics simulation. The technologies that can utilize this phenomenon are not limited to enhanced oil recovery, but also apply to the remediation of groundwater contaminated by leaks from underground storage tanks and surface spills of organic fluids.
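One way to read the unplugging criterion in the Beresnev et al. abstract is as a threshold test on the sum of the static gradient and an oscillatory inertial forcing. The sketch below is an interpretation with made-up numbers, not the authors' experiment:

```python
import numpy as np

# Hedged sketch of the "unplugging" criterion: an oscillatory inertial forcing
# rho*a(t) is added to a static pressure gradient, and the ganglion is mobile
# whenever the total forcing exceeds a capillary threshold. All values are
# illustrative placeholders.

rho = 1000.0          # fluid density, kg/m^3
grad_p = 800.0        # static pressure gradient, Pa/m (below threshold alone)
threshold = 1000.0    # "unplugging" threshold, Pa/m
accel = 0.5           # vibration acceleration amplitude, m/s^2
freq = 30.0           # vibration frequency, Hz

t = np.linspace(0.0, 1.0 / freq, 2001)              # one vibration period
forcing = grad_p + rho * accel * np.sin(2 * np.pi * freq * t)
open_fraction = np.mean(forcing > threshold)
print(f"ganglion is mobilized {open_fraction:.1%} of each cycle")
```

At fixed acceleration amplitude the open fraction of a cycle is frequency-independent, so the open time per cycle scales as 1/f, which is one way to motivate the inverse-frequency dependence reported above.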
14. Boundary integral equation method for electromagnetic and elastic waves Chen, Kun In this thesis, the boundary integral equation method (BIEM) is studied and applied to electromagnetic and elastic wave problems. First of all, a spectral domain BIEM called the spectral domain approach is employed for full wave analysis of metal strip grating on grounded dielectric slab (MSG-GDS) and microstrips shielded with either perfect electric conductor (PEC) or perfect magnetic conductor (PMC) walls. The modal relations between these structures are revealed by exploring their symmetries. It is derived analytically and validated numerically that all the even and odd modes of the latter two (when they are mirror symmetric) find their correspondence in the modes of metal strip grating on grounded dielectric slab when the phase shift between two adjacent unit cells is 0 or pi. Extension to the non-symmetric case is also made. Several factors, including frequency, grating period, slab thickness and strip width, are further investigated for their impacts on the effective permittivity of the dominant mode of PEC/PMC shielded microstrips. It is found that the PMC shielded microstrip generally has a larger wave number than the PEC shielded microstrip. Secondly, computational aspects of the layered medium doubly periodic Green's function (LMDPGF) in matrix-friendly formulation (MFF) are investigated. The MFF for doubly periodic structures in layered medium is derived, and the singularity of the periodic Green's function when the transverse wave number equals zero in this formulation is analytically extracted. A novel approach is proposed to calculate the LMDPGF, which makes delicate use of several techniques including factorization of the Green's function, the generalized pencil of function (GPOF) method and high-order Taylor expansion to derive the high-order asymptotic expressions, which are then evaluated by newly derived fast convergent series. This approach exhibits robustness, high accuracy, and fast, high-order convergence; it also allows fast frequency sweep for 15. The character of elastic deformations on the interface by the passing of longitudinal wave 2016-11-01 The problem of a longitudinal wave passing through the interface of two elastic media is considered. The reflection and refraction coefficients obtained by solving this problem can be used to study the character of dynamic deformation at the interface. Expressions for various deformation modes and for rotation at the interface, revealing their dependences on the angle of incidence of the longitudinal wave and on the elastic properties of the contacting media, have been analyzed. 16. H-He elastic scattering at low energies: Contribution of nonzero partial waves SciTech Connect Sinha, Prabal K.; Ghosh, A.S. 2005-01-01 The present study reports the nonzero partial wave elastic cross sections together with s-wave results for the scattering of an antihydrogen atom off a gaseous helium target at thermal energies (up to 10⁻² a.u.). We have used a nonadiabatic atomic orbital method having different basis sets to investigate the system. The consideration of all the significant partial waves (up to J=24) reduces the oscillatory nature present in the individual partial wave cross sections. The summed elastic cross section is almost constant up to 10⁻⁷ a.u. and then decreases steadily and very slowly with increasing energy.
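The partial-wave summation used in the entry above follows the textbook form σ = (4π/k²) Σ_J (2J+1) sin²δ_J. The phase shifts below are synthetic placeholders, not the antihydrogen-helium results:

```python
import numpy as np

# Hedged sketch of a partial-wave summation: total elastic cross section from
# phase shifts delta_J up to J = 24, as in the abstract above. The phase
# shifts here are made up and chosen only to decrease rapidly with J.

k = 0.05                           # wavenumber in atomic units (assumed)
J = np.arange(0, 25)               # partial waves J = 0..24
delta = 0.8 * np.exp(-0.4 * J)     # synthetic, rapidly decreasing phase shifts

sigma = (4.0 * np.pi / k**2) * np.sum((2 * J + 1) * np.sin(delta) ** 2)
print(f"summed elastic cross section ~ {sigma:.1f} a.u.")
```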
17. Verification of elastic-wave static displacement in solids [using ultrasonic techniques on Ge single crystals] NASA Technical Reports Server (NTRS) Cantrell, J. H., Jr.; Winfree, W. P. 1980-01-01 The solution of the nonlinear differential equation which describes an initially sinusoidal finite-amplitude elastic wave propagating in a solid contains a static-displacement term in addition to the harmonic terms. The static-displacement amplitude is theoretically predicted to be proportional to the product of the squares of the driving-wave amplitude and the driving-wave frequency. The first experimental verification of the elastic-wave static displacement in a solid (the 111 direction of single-crystal germanium) is reported, and agreement is found with the theoretical predictions. 18. Highly Nonlinear Wave Propagation in Elastic Woodpile Periodic Structures DTIC Science & Technology 2016-08-03 attenuated over time (again, we briefly discuss the relevant features in Supplemental Material [41]). We now explore this nanopteronic waveform more... formation of genuinely traveling waves composed of a strongly-localized solitary wave on top of a small-amplitude oscillatory tail. This type of wave... manipulating highly nonlinear stress waves at will, including high wave attenuation and spontaneous formation of novel traveling waves after an impact 19. High-resolution 3-D P-wave tomographic imaging of the shallow magmatic system of Erebus volcano, Antarctica Zandomeneghi, D.; Aster, R. C.; Barclay, A. H.; Chaput, J. A.; Kyle, P. R. 2011-12-01 Erebus volcano (Ross Island), the most active volcano in Antarctica, is characterized by a persistent phonolitic lava lake at its summit and a wide range of seismic signals associated with its underlying long-lived magmatic system. The magmatic structure in a 3 by 3 km area around the summit has been imaged using high-quality data from a seismic tomographic experiment carried out during the 2008-2009 austral field season (Zandomeneghi et al., 2010). An array of 78 short-period, 14 broadband, and 4 permanent Mount Erebus Volcano Observatory seismic stations and a program of 12 shots were used to model the velocity structure in the uppermost kilometer over the volcano conduit. P-wave travel times were inverted for the 3-D velocity structure using the shortest-time ray tracing (50-m grid spacing) and LSQR inversion (100-m node spacing) of a tomography code (Toomey et al., 1994) that allows for the inclusion of topography. Regularization is controlled by damping and smoothing weights and smoothing lengths, and addresses complications that are inherent in a strongly heterogeneous medium featuring rough topography and a dense parameterization and distribution of receivers/sources. The tomography reveals a composite distribution of very high and low P-wave velocity anomalies (i.e., exceeding 20% in some regions), indicating a complex sub-lava-lake magmatic geometry immediately beneath the summit region and in surrounding areas, as well as the presence of significant high-velocity shallow regions. The strongest and broadest low-velocity zone is located W-NW of the crater rim, indicating the presence of an off-axis shallow magma body. This feature spatially corresponds to the inferred centroid source of VLP signals associated with Strombolian eruptions and lava lake refill (Aster et al., 2008). Other resolved structures correlate with the Side Crater and with lineaments of ice cave thermal anomalies extending NE and SW of the rim. High velocities in the summit area possibly
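The travel-time inversion step named above (t = G·s with damped LSQR) can be illustrated on a toy system. This assumes SciPy is available; the 2-cell, 3-ray geometry is invented and stands in for the real 3-D grid:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Hedged toy version of a travel-time inversion: t = G s, where G holds ray
# path lengths through cells and s is slowness. The damping term plays the
# role of the regularization weights discussed in the abstract above.

G = csr_matrix(np.array([[100.0,   0.0],
                         [  0.0, 100.0],
                         [ 70.0,  70.0]]))       # path lengths, metres
s_true = np.array([1 / 3000.0, 1 / 4000.0])      # true slownesses, s/m
t_obs = G @ s_true                               # synthetic travel times

s_est = lsqr(G, t_obs, damp=1e-6)[0]             # damped least squares
print("recovered velocities:", 1.0 / s_est)
```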
20. Constructing a 3D Crustal Model Across the Entire Contiguous US Using Broadband Rayleigh Wave Phase Velocity and Ellipticity Measurements Lin, F. C.; Schmandt, B. 2015-12-01 Imaging the crust and lithosphere structure beneath North America is one of the primary targets of the NSF-funded EarthScope project. In this study, we apply the recently developed ambient noise and surface wave tomography methods to construct a detailed 3D crustal model across the entire contiguous US using USArray data between January 2007 and May 2015. By using both Rayleigh wave phase velocity and ellipticity measurements between 8 and 100 sec period, the shear velocity structure can be well resolved within the five crustal layers we modeled: three upper crust, one middle crust, and one lower crust. Clear correlations are observed between the resolved velocity anomalies and known geological features at all depths. In the uppermost crust, slow Vs anomalies are observed within major sedimentary environments such as the Williston Basin, Denver Basin, and Mississippi embayment, and fast Vs anomalies are observed in environments with deeply exhumed bedrock outcrops at the surface, including the Laurentian Highlands, Ouachita-Ozark Interior Highlands, and Appalachian Highlands. In the deeper upper crust, slow anomalies are observed in deep sedimentary basins such as the Green River Basin, Appalachian Basin, Southern Oklahoma Aulacogen, and areas surrounding the Gulf of Mexico. Fast anomalies, on the other hand, are observed in the Colorado Plateau, within the Great Plains between the Front Ranges and Midcontinental Rift, and east of the Appalachian Mountains. At this depth, the Midcontinental Rift and Grenville Front clearly correlate with various velocity structure boundaries. In the middle crust, slow anomalies are mostly observed in the tectonically active areas in the western US, but relatively slow anomalies are also observed southeast of the Precambrian Rift Margins. At this depth, fast anomalies are observed beneath various deep sedimentary basins such as the Southern Oklahoma Aulacogen, Appalachian Basin, and Central Valley. In the lower crust, a clear 1. The effective second-order elastic constants of a strained crystal using the elastic wave propagation in a homogeneously deformed material 1988-06-01 The equation for elastic wave propagation in a homogeneously deformed crystal has been used to obtain the expressions for the effective second-order elastic constants of the seven crystal systems in terms of their natural second- and third-order elastic constants. These expressions are employed to obtain the pressure derivatives of the effective second-order elastic constants of some cubic crystals for which experimental data are available. 2. Observations of Plasma Waves in the Colliding Jet Region of a 3D Magnetic Flux Rope Flanked by Two Active Reconnection X Lines at the Subsolar Magnetopause Oieroset, M.; Sundkvist, D. J.; Chaston, C. C.; Phan, T. D.; Mozer, F.; McFadden, J. P.; Angelopoulos, V.; Andersson, L.; Eastwood, J. P. 2014-12-01 We have performed a detailed analysis of plasma and wave observations in a 3D magnetic flux rope encountered by the THEMIS spacecraft at the subsolar magnetopause. The extent of the flux rope was ~270 ion skin depths in the outflow direction, and it was flanked by two active reconnection X lines producing colliding plasma jets in the flux rope core, where ion heating and suprathermal electrons were observed. The colliding jet region was highly dynamic and characterized by the presence of high-frequency waves such as ion acoustic-like waves, electron holes, and whistler mode waves near the flux rope center, and low-frequency kinetic Alfvén waves over a larger region. We will discuss possible links between these waves and particle heating. 3. AE3D SciTech Connect Spong, Donald A 2016-06-20 AE3D solves for the shear Alfvén eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g.
stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model, and sound wave coupling effects are not currently included. 4. Theoretical and numerical comparison of 3D numerical schemes for their accuracy with respect to P-wave to S-wave speed ratio Moczo, P.; Kristek, J.; Galis, M.; Chaljub, E.; Chen, X.; Zhang, Z. 2012-04-01 Numerical modeling of earthquake ground motion in sedimentary basins and valleys often has to account for P-wave to S-wave speed ratios (VP/VS) as large as five and even larger, mainly in sediments below the groundwater level. The ratio can attain values larger than 10; the unconsolidated lake sediments in Ciudad de México are a good example. At the same time, the accuracy of numerical schemes with respect to VP/VS has not been sufficiently analyzed. The numerical schemes are often applied without an adequate check of accuracy. We present a theoretical analysis and numerical comparison of 18 3D numerical time-domain explicit schemes for modeling seismic motion, assessing their accuracy as VP/VS varies. The schemes are based on the finite-difference, spectral-element, finite-element and discontinuous-Galerkin methods. All schemes are presented in a unified form. The theoretical analysis compares the accuracy of the schemes in terms of local errors in amplitude and vector difference. In addition to the analysis, we compare numerically simulated seismograms with exact solutions for canonical configurations. We compare the accuracy of the schemes in terms of the local errors, grid dispersion, and full wavefield simulations with respect to the structure of the numerical schemes. 5. Combinatorial 3D Mechanical Metamaterials Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin 2015-03-01 We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability. 6. Nonlocal thermo-elastic wave propagation in temperature-dependent embedded small-scaled nonhomogeneous beams 2016-11-01 In this paper, the thermo-elastic wave propagation analysis of a temperature-dependent functionally graded (FG) nanobeam supported by a Winkler-Pasternak elastic foundation is studied using nonlocal elasticity theory. The nanobeam is modeled via a higher-order shear deformable refined beam theory which has a trigonometric shear stress function. The temperature field has a nonlinear distribution, governed by heat conduction, across the nanobeam thickness. Temperature-dependent material properties change gradually in the spatial coordinate according to the Mori-Tanaka model. The governing equations of wave propagation of the refined FG nanobeam are derived by using Hamilton's principle. The analytic dispersion relation of the embedded nonlocal functionally graded nanobeam is obtained by solving an eigenvalue problem. Numerical examples show that the wave characteristics of the functionally graded nanobeam are related to the temperature distribution, elastic foundation parameters, nonlocality and material composition.
7. Elastic anisotropy and pore space geometry of schlieren granite: direct 3-D measurements at high confining pressure combined with microfabric analysis Staněk, Martin; Géraud, Yves; Lexa, Ondrej; Špaček, Petr; Ulrich, Stanislav; Diraison, Marc 2013-07-01 The pore space geometry of granitic rocks and its evolution with depth are key factors in large-scale seismics and in projects of enhanced geothermal systems or of deep hazardous waste repositories. In this study, we investigated a macroscopically anisotropic schlieren-bearing granite by experimental P-wave velocity (VP) measurements on a spherical sample in 132 directions at seven different confining pressures in the range 0.1-400 MPa. In order to discriminate the phenomena affecting the rock's elastic properties, we analysed the orientation of microcracks and of grain boundaries, and we measured the anisotropy of magnetic susceptibility of the rock. Three sets of microcracks were defined, two of them linked to the massif exfoliation process and one to cooling contraction of the massif. During pressurization, the measured mean VP and VP anisotropy degree at ambient pressure and at the highest confinement (400 MPa) yielded 3.3 km s⁻¹ and 24 per cent, and 6.2 km s⁻¹ and 3 per cent, respectively. The associated VP anisotropy pattern was transversely isotropic and governed by the schlieren, with a minimum VP direction perpendicular to them and a girdle of high VP directions parallel to them. The highest change in VP was observed between 0.1 and 10 MPa, suggesting a significant closure of porosity below depths of 500 m. The change of the VP anisotropy pattern to orthorhombic, together with the increase of mean VP and VP anisotropy degree during depressurization, was attributed to the inelastic response of one of the sets of microcracks to the loading-unloading cycle. 8. 3D Buckligami: Digital Matter van Hecke, Martin; de Reus, Koen; Florijn, Bastiaan; Coulais, Corentin 2014-03-01 We present a class of elastic structures which exhibit collective buckling in 3D, and create these by a 3D printing/moulding technique. Our structures consist of a cubic lattice of anisotropic unit cells, and we show that their mechanical properties are programmable via the orientation of these unit cells. 9. Time reversal of continuous-wave, monochromatic signals in elastic media SciTech Connect Anderson, Brian E; Guyer, Robert A; Ulrich, Timothy J; Johnson, Paul A 2009-01-01 Experimental observations of spatial focusing of continuous-wave, steady-state elastic waves in a reverberant elastic cavity using time reversal are reported here. Spatially localized focusing is achieved when multiple channels are employed, while a single channel does not yield such focusing. The amplitude of the energy at the focal location increases as the square of the number of channels used, while the amplitude elsewhere in the medium increases proportionally with the number of channels used. The observation is important in the context of imaging in solid laboratory samples as well as problems involving continuous-wave signals in the Earth. 10. Using a time-domain higher-order boundary element method to simulate wave and current diffraction from a 3-D body Liu, Zhen; Teng, Bin; Ning, De-Zhi; Sun, Liang 2010-06-01 To study wave-current actions on 3-D bodies, a time-domain numerical model was established using a higher-order boundary element method (HOBEM). By assuming small flow velocities, the velocity potential could be expressed for linear and higher-order components by perturbation expansion.
A 4th-order Runge-Kutta method was applied for time marching. An artificial damping layer was adopted at the outer zone of the free-surface mesh to dissipate scattered waves. Validation of the numerical method was carried out on run-up, wave exciting forces, and mean drift forces for wave-currents acting on a bottom-mounted vertical cylinder. The results were in close agreement with the results of a frequency-domain method and a published time-domain method. The model was then applied to compute wave-current forces and run-up on a Seastar mini tension-leg platform.
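The classical 4th-order Runge-Kutta time marching named above is standard and easy to sketch. The right-hand side below is a simple linear oscillator standing in for the boundary-element update, not the HOBEM system:

```python
import numpy as np

# Hedged sketch of classical RK4 time marching of the kind used in the
# time-domain HOBEM entry above. The oscillator frequency is an assumed
# placeholder; the integrator itself is the textbook scheme.

def rk4_step(f, y, t, dt):
    """One classical RK4 step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

omega = 2.0 * np.pi * 0.1                         # assumed wave frequency, Hz
f = lambda t, y: np.array([y[1], -omega**2 * y[0]])
y, dt = np.array([1.0, 0.0]), 0.05
for n in range(200):                              # march 10 s forward
    y = rk4_step(f, y, n * dt, dt)
print("elevation after 10 s:", y[0])
```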
11. Lamb-type waves generated by a cylindrical bubble oscillating between two planar elastic walls Doinikov, A. A.; Mekki-Berrada, F.; Thibault, P.; Marmottant, P. 2016-04-01 The volume oscillation of a cylindrical bubble in a microfluidic channel with planar elastic walls is studied. Analytical solutions are found for the bulk scattered wave propagating in the fluid gap and the surface waves of Lamb type propagating at the fluid-solid interfaces. This type of surface wave has not yet been described theoretically. A dispersion equation for the Lamb-type waves is derived, which allows one to evaluate the wave speed for different values of the channel height h. It is shown that for h<λt, where λt is the wavelength of the transverse wave in the walls, the speed of the Lamb-type waves decreases with decreasing h, while for h on the order of or greater than λt, their speed tends to the Scholte wave speed. The solutions for the wave fields in the elastic walls and in the fluid are derived using Hankel transforms. Numerical simulations are carried out to study the effect of the surface waves on the dynamics of a bubble confined between two elastic walls. It is shown that its resonance frequency can be up to 50% higher than the resonance frequency of a similar bubble confined between two rigid walls. 14. Edge waves in plates with resonators: an elastic analogue of the quantum valley Hall effect Pal, Raj Kumar; Ruzzene, Massimo 2017-02-01 We investigate elastic periodic structures characterized by topologically nontrivial bandgaps supporting backscattering-suppressed edge waves. These edge waves are topologically protected and are obtained by breaking inversion symmetry within the unit cell. Examples for discrete one- and two-dimensional lattices elucidate the concept and illustrate parallels with the quantum valley Hall effect. The concept is implemented on an elastic plate featuring an array of resonators arranged according to a hexagonal topology. The resulting continuous structures have non-trivial bandgaps supporting edge waves at the interface between two media with different topological invariants. The topological properties of the considered configurations are predicted by unit cell and finite strip dispersion analyses. Numerical simulations demonstrate edge wave propagation for excitation at frequencies belonging to the bulk bandgaps. The considered plate configurations define a framework for the implementation of topological concepts on continuous elastic structures of potential engineering relevance. 15. Reflection and transmission of elastic waves at five types of possible interfaces between two dipolar gradient elastic half-spaces Li, Yueqiu; Wei, Peijun 2017-02-01 Reflection and transmission of an incident plane wave at five types of possible interfaces between two dipolar gradient elastic solids are studied in this paper. First, the explicit expressions of monopolar tractions and dipolar tractions are derived from the postulated function of strain energy density. Then, the displacements, the normal derivative of displacements, monopolar tractions, and dipolar tractions are used to create the nontraditional interface conditions. There are five types of possible interfaces based on all possible combinations of the displacements and the normal derivative of displacements. These interfacial conditions, which take microstructure effects into consideration, are used to determine the amplitude ratios of the reflected and transmitted waves with respect to the incident wave. Further, the energy ratios of the reflected and transmitted waves with respect to the incident wave are calculated. Some numerical results for the reflection and transmission coefficients are given in terms of the energy flux ratio for the five types of possible interfaces.
The influences of the five types of possible interfaces on the energy partition between the reflected waves and the transmitted waves are discussed, and the concept of double channels of energy transfer is proposed for the first time to explain the different influences of the five types of interfaces.
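The energy-flux bookkeeping used in the entry above can be illustrated in its simplest textbook form: a normally incident wave at a welded interface between two classical (non-gradient) half-spaces, where the reflected and transmitted energy fractions must sum to one. This is only an illustration of the energy-partition check, not the five dipolar-gradient interface models:

```python
# Hedged sketch of energy partition at a single interface, normal incidence,
# classical elastic half-spaces characterized by their acoustic impedances.
# Material values are generic rock/sediment placeholders.

def energy_partition(rho1, c1, rho2, c2):
    z1, z2 = rho1 * c1, rho2 * c2          # acoustic impedances
    re = ((z2 - z1) / (z1 + z2)) ** 2      # reflected energy fraction
    te = 4.0 * z1 * z2 / (z1 + z2) ** 2    # transmitted energy fraction
    return re, te

re, te = energy_partition(rho1=2700.0, c1=6000.0, rho2=2000.0, c2=3000.0)
print(f"R_E = {re:.3f}, T_E = {te:.3f}, sum = {re + te:.3f}")  # sums to 1
```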
16. Elastic metamaterials for tuning circular polarization of electromagnetic waves PubMed Central Zárate, Yair; Babaee, Sahab; Kang, Sung H.; Neshev, Dragomir N.; Shadrivov, Ilya V.; Bertoldi, Katia; Powell, David A. 2016-01-01 Electromagnetic resonators are integrated with an advanced elastic material to develop a new type of tunable metamaterial. An electromagnetic-elastic metamaterial able to switch on and off its electromagnetic chiral response is experimentally demonstrated. Such tunability is attained by harnessing the unique buckling properties of auxetic elastic materials (buckliballs) with embedded electromagnetic resonators. In these structures, simple uniaxial compression results in a complex but controlled pattern of deformation, resulting in a shift of the electromagnetic resonance and in the structure transforming to a chiral state. The concept can be extended to the tuning of three-dimensional materials constructed from the meta-molecules, since all the components twist and deform into the same chiral configuration when compressed. PMID:27320212 19. Negative refraction of elastic waves at the deep-subwavelength scale in a single-phase metamaterial. PubMed Zhu, R; Liu, X N; Hu, G K; Sun, C T; Huang, G L 2014-11-24 Negative refraction of elastic waves has been studied and experimentally demonstrated in three- and two-dimensional phononic crystals, but Bragg scattering is impractical for low-frequency wave control because of the need to scale the structures to manageable sizes. Here we present an elastic metamaterial with chiral microstructure made of a single-phase solid material that aims to achieve subwavelength negative refraction of elastic waves. Both negative effective mass density and negative modulus are observed, owing to simultaneous translational and rotational resonances. We experimentally demonstrate negative refraction of the longitudinal elastic wave at the deep-subwavelength scale in the metamaterial fabricated in a stainless steel plate. The experimental measurements are in good agreement with numerical simulations. Moreover, wave mode conversion related to negative refraction is revealed and discussed. The proposed elastic metamaterial may thus be used as a flat lens for elastic wave focusing. 20. Simulating wave-turbulence on thin elastic plates with arbitrary boundary conditions van Rees, Wim M.; Mahadevan, L. 2016-11-01 The statistical characteristics of interacting waves are described by the theory of wave turbulence, with the study of deep water gravity wave turbulence serving as a paradigmatic physical example. Here we consider the elastic analog of this problem in the context of flexural waves arising from vibrations of a thin elastic plate. Such flexural waves generate the unique sounds of so-called thunder machines used in orchestras: thin metal plates that make a thunder-like sound when forcefully shaken. Wave turbulence in elastic plates is typically investigated numerically using spectral simulations with periodic boundary conditions, which are not very realistic. We will present the results of numerical simulations of the dynamics of thin elastic plates in physical space, with arbitrary shapes, boundary conditions, anisotropy and inhomogeneity, and show first results on wave turbulence beyond the conventionally studied rectangular plates. Finally, motivated by a possible method to measure ice-sheet thicknesses in the open ocean, we will further discuss the behavior of a vibrating plate when floating on an inviscid fluid. 1. Pressure wave propagation in fluid-filled co-axial elastic tubes. Part 1: Basic theory. PubMed Berkouk, K; Carpenter, P W; Lucey, A D 2003-12-01 Our work is motivated by ideas about the pathogenesis of syringomyelia. This is a serious disease characterized by the appearance of longitudinal cavities within the spinal cord. Its causes are unknown, but pressure propagation is probably implicated. We have developed an inviscid theory for the propagation of pressure waves in co-axial, fluid-filled, elastic tubes. This is intended as a simple model of the intraspinal cerebrospinal-fluid system. Our approach is based on the classic theory for the propagation of longitudinal waves in single, fluid-filled, elastic tubes. We show that for small-amplitude waves the governing equations reduce to the classic wave equation. The wave speed is found to be a strong function of the ratio of the tubes' cross-sectional areas. It is found that the leading edge of a transmural pressure pulse tends to generate compressive waves with converging wave fronts.
Consequently, the leading edge of the pressure pulse steepens to form a shock-like elastic jump. A weakly nonlinear theory is developed for such an elastic jump. 2. Structure, Elasticity, and Wave-Velocities of MgSiO3-Perovskite at Lower Mantle Conditions Wentzcovitch, R. M.; Karki, B. B.; Coccocioni, M. 2002-12-01 The crystal structure, elastic constants, and wave velocities of MgSiO3-perovskite (Mg-pv) have been determined throughout the lower mantle's (LM) pressure/temperature (P,T) regime by means of first-principles computations of its vibrational density of states at various strained configurations and free-energy calculations within the quasi-harmonic approximation (QHA). The latter is tested "a posteriori" and shown to be valid at the expected conditions. This completes the series of calculations on the thermoelastic properties of Mg-pv that are necessary to 1) narrow down constraints on the LM's composition and thermal state, 2) shed light on the relative role of temperature in 3D velocity structures, and 3) constrain the anisotropy of this phase. 3. On Waves in a Linear Elastic Half-Space with Free Boundary Rushchitsky, J. J. 2016-11-01 The problem of linear elasticity for free harmonic (periodic) and solitary bell-shaped (nonperiodic) waves in an isotropic half-space with a stress-free plane boundary is considered. The half-space is made of either a conventional (classical structural) or a nonconventional (nonclassical auxetic) material. Two cases of wave damping are studied: rapid (surface wave) and periodic (nonsurface wave). The following conclusions on a free harmonic wave are drawn: a surface wave exists in materials of both classes, but the ratio of the wave velocity to the velocity of a transverse plane wave in auxetic materials is somewhat lower than in conventional materials; a nonsurface wave cannot be described by the approach applied to conventional materials, but can theoretically exist in auxetic materials where there are two wave velocities. For a solitary (bell-shaped) wave, the assumption that the wave velocity depends on the wave phase is substantiated, and some constraint is imposed on the time of travel of the wave and on the way the wave velocity varies with time. The following conclusions are drawn: a rapidly damped bell-shaped wave cannot be described by the approach for both classes of materials, whereas a periodically damped bell-shaped wave can be described 4. Numerical study of interfacial solitary waves propagating under an elastic sheet. PubMed Wang, Zhan; Părău, Emilian I; Milewski, Paul A; Vanden-Broeck, Jean-Marc 2014-08-08 Steady solitary and generalized solitary waves of a two-fluid problem where the upper layer is under a flexible elastic sheet are considered as a model for internal waves under an ice-covered ocean. The fluid consists of two layers of constant densities, separated by an interface. The elastic sheet resists bending forces and is mathematically described by a fully nonlinear thin shell model. Fully localized solitary waves are computed via a boundary integral method. Progression along the various branches of solutions shows that barotropic (i.e. surface mode) wave-packet solitary wave branches end with the free surface approaching the interface. On the other hand, the limiting configurations of long baroclinic (i.e. internal) solitary waves are characterized by an infinite broadening in the horizontal direction.
Baroclinic wave-packet modes also exist for a large range of amplitudes, and generalized solitary waves are computed in a case of a long internal mode in resonance with surface modes. In contrast to the pure gravity case (i.e. without an elastic cover), these generalized solitary waves exhibit new Wilton-ripple-like periodic trains in the far field. 5. Analytical modeling of elastic-plastic wave behavior near grain boundaries in crystalline materials SciTech Connect Loomis, Eric; Greenfield, Scott; Luo, Shengnian; Swift, Damian; Peralta, Pedro 2009-01-01 It is well known that changes in material properties across an interface will produce differences in the behavior of reflected and transmitted waves. This is seen frequently in planar impact experiments and, to a lesser extent, in oblique impacts. In anisotropic elastic materials, wave behavior as a function of direction is usually analyzed with the aid of velocity surfaces, a graphical method for predicting wave scattering configurations. They have expanded this method to account for inelastic deformation due to crystal plasticity. The set of derived equations could not be put into a characteristic form, but instead led to an implicit problem. To overcome this difficulty, an algorithm was developed to search the parameter space defined by a wave normal vector, a particle velocity vector, and a wave speed. A solution was said to exist when a set from this parameter space satisfied the governing vector equation. Using this technique they can predict the anisotropic elastic-plastic velocity surfaces and grain-boundary scattering configurations for crystalline materials undergoing deformation by slip. Specifically, they have calculated the configuration of scattered elastic-plastic waves in anisotropic NiAl for an incident compressional wave propagating along a <111> direction and contacting a 45-degree inclined grain boundary, and found that large-amplitude transmitted waves exist owing to the fact that the wave surface geometry forces them to propagate near the zero-Schmid-factor direction <100>.
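In the purely elastic limit, the velocity surfaces mentioned above come from the Christoffel eigenproblem (Γ − ρv²I)u = 0. The sketch below solves it for a cubic crystal along <111>; the stiffness values are illustrative cubic constants, not measured NiAl data, and plasticity is not included:

```python
import numpy as np

# Hedged sketch of the velocity-surface machinery: build the cubic stiffness
# tensor, form the Christoffel matrix for a wave normal n, and take its
# eigenvalues rho*v^2 to get the three wave-branch speeds.

def christoffel_speeds(C11, C12, C44, rho, n):
    """Phase speeds (m/s) of the three branches along unit normal n."""
    C = np.zeros((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            C[i, i, j, j] = C12          # off-diagonal normal couplings
        C[i, i, i, i] = C11              # diagonal normal stiffness
    for i in range(3):
        for j in range(3):
            if i != j:
                C[i, j, i, j] = C[i, j, j, i] = C44   # shear stiffness
    gamma = np.einsum("ijkl,j,l->ik", C, n, n)        # Christoffel matrix
    return np.sqrt(np.linalg.eigvalsh(gamma) / rho)

n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)          # <111> propagation
print(christoffel_speeds(C11=200e9, C12=140e9, C44=115e9, rho=5900.0, n=n))
```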
7. Focusing, refraction, and asymmetric transmission of elastic waves in solid metamaterials with aligned parallel gaps. PubMed Su, Xiaoshi; Norris, Andrew N 2016-06-01 Gradient index (GRIN), refractive, and asymmetric transmission devices for elastic waves are designed using a solid with aligned parallel gaps. The gaps are assumed to be thin so that they can be considered as parallel cracks separating elastic plate waveguides. The plates do not interact with one another directly, only at their ends where they connect to the exterior solid. To formulate the transmission and reflection coefficients for SV- and P-waves, an analytical model is established using thin plate theory that couples the waveguide modes with the waves in the exterior body. The GRIN lens is designed by varying the thickness of the plates to achieve different flexural wave speeds. The refractive effect for SV-waves is achieved by designing the slope of the edge of the plate array, keeping the ratio between plate length and flexural wavelength fixed. The asymmetric transmission of P-waves is achieved by sending an incident P-wave at a critical angle, at which total conversion to an SV-wave occurs. An array of parallel gaps perpendicular to the propagation direction of the reflected waves stops the SV-wave but lets P-waves travel through. Examples of focusing, steering, and asymmetric transmission devices are discussed. 8. Characterising fatigue crack in an aluminium plate using guided elastic waves Zhou, Chao; Su, Zhongqing; Cheng, Li 2011-04-01 The integrity of in-service engineering structures is prone to fatigue damage over their lifespan. The majority of existing elastic-wave-based damage identification techniques have been developed and validated for damage at macroscopic levels, by canvassing linear properties of elastic waves such as attenuation, transmission, reflection and mode conversion. However, real damage in engineering structures often initiates from a fatigue crack, presenting highly nonlinear characteristics under cyclic loads. It is of great significance, yet a vast challenge, to detect fatigue damage of small dimension at its initial stage. In this study, traditional elastic-wave-based damage identification techniques were first employed in an attempt to detect a fatigue crack initiated from a notch in an aluminium plate, with the assistance of a signal correlation analysis, to observe the deficiency of the approach. Then higher-order harmonic wave generation was used to exploit the nonlinear characteristics of acousto-ultrasonic waves (Lamb waves), whereby the fatigue damage was characterised. Results show that nonlinear characteristics of acousto-ultrasonic waves can facilitate more effective detection of fatigue damage than linear signal features such as wave reflection, transmission or mode conversion.
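A common way to quantify the higher-harmonic generation just described is to extract the fundamental (A1) and second-harmonic (A2) spectral amplitudes and form the relative nonlinearity ratio A2/A1². The signal below is synthetic, not measured Lamb-wave data:

```python
import numpy as np

# Hedged sketch of a second-harmonic damage index: FFT a received signal,
# pick the amplitudes at f0 and 2*f0, and form A2/A1^2, which tends to grow
# as a fatigue crack develops. Drive frequency and harmonic level are made up.

fs, f0 = 10e6, 200e3                      # sample rate and drive frequency, Hz
t = np.arange(0, 5e-4, 1 / fs)
signal = np.sin(2 * np.pi * f0 * t) + 0.02 * np.sin(2 * np.pi * 2 * f0 * t)

spec = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
A1 = spec[np.argmin(np.abs(freqs - f0))]
A2 = spec[np.argmin(np.abs(freqs - 2 * f0))]
print(f"A1={A1:.4f}, A2={A2:.4f}, damage index A2/A1^2 = {A2 / A1**2:.3f}")
```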
9. Shear Wave Speed Measurements Using Crawling Wave Sonoelastography and Single Tracking Location Shear Wave Elasticity Imaging for Tissue Characterization. PubMed Ormachea, Juvenal; Lavarello, Roberto J; McAleavey, Stephen A; Parker, Kevin J; Castaneda, Benjamin 2016-09-01 Elastography provides tissue stiffness information that attempts to characterize the elastic properties of tissue. However, there is still limited literature comparing elastographic modalities for tissue characterization. This study focuses on two quantitative techniques using different vibration sources that have not been compared to date: crawling wave sonoelastography (CWS) and single tracking location shear wave elasticity imaging (STL-SWEI). To understand each technique's performance, the shear wave speed (SWS) was measured in homogeneous phantoms and ex vivo beef liver tissue. Then, the contrast, contrast-to-noise ratio (CNR), and lateral resolution were measured in inclusion and two-layer phantoms. The SWS values obtained with both modalities were validated against mechanical measurements (MM), which serve as ground truth. The SWS results for the three different homogeneous phantoms (10%, 13%, and 16% gelatin concentrations) and ex vivo beef liver tissue showed good agreement between CWS, STL-SWEI, and MM as a function of frequency. For all gelatin phantoms, the maximum accuracy errors were 2.52% and 2.35% using CWS and STL-SWEI, respectively. For the ex vivo beef liver, the maximum accuracy errors were 9.40% and 7.93% using CWS and STL-SWEI, respectively. For lateral resolution, contrast, and CNR, both techniques obtained comparable measurements for vibration frequencies below 300 Hz (CWS) and distances between the push beams (Δx) between 3 mm and 5.31 mm (STL-SWEI). The results obtained in this study agree over an SWS range of 1-6 m/s. They are expected to agree in perfectly linear, homogeneous, and isotropic materials, but the SWS overlap is not guaranteed in all materials because each of the three methods has unique features. 10. Prediction of crack density in porous-cracked rocks from elastic wave velocities Byun, Ji-Hwan; Lee, Jong-Sub; Park, Keunbo; Yoon, Hyung-Koo 2015-04-01 The stability of structures that are built over rock is affected by cracks in the rock that result from weathering, thawing and freezing processes. This study investigates a new method for determining rock crack densities using elastic wave velocities. The Biot-Gassmann model, which involves several elastic moduli and Poisson's ratio, was used to determine a theoretical equation to predict the crack density of rocks. Ten representative specimens were extracted from ten boreholes to capture the spatial variability. Each specimen was characterized using X-Ray Diffraction (XRD) analysis. The specimens were carved into cylinders measuring 50 mm in diameter and 30 mm in height using an abrasion process. A laboratory test was performed to obtain the elastic wave velocities using transducers that can transmit and receive compressional and shear waves. The measured compressional wave and shear wave velocities were approximately 2955 m/s-5209 m/s and 1652 m/s-2845 m/s, respectively. From the measured elastic wave velocities, the analyzed crack density and crack porosity were approximately 0.051-0.185 and 0.03%-0.14%, respectively. The calculated values were compared with the results of previous studies, and they exhibit similar values and trends. The sensitivity of the suggested theoretical equation was analyzed using the error norm technique. The results show that the compressional wave velocity and the shear modulus of a particle are the most influential factors in this equation. The study demonstrates that rock crack density can be estimated using the elastic wave velocities, which may be useful for investigating the stability of structures that are built over rock.
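The link exploited above between wave speeds and the elastic moduli entering such models is standard isotropic elasticity. The velocities below are taken from the middle of the reported ranges; the density is an assumed granite-like value, not from the paper:

```python
# Hedged sketch of recovering elastic moduli from measured wave speeds:
# G = rho*Vs^2, K = rho*(Vp^2 - 4/3*Vs^2), and Poisson's ratio from Vp/Vs.

def moduli_from_velocities(vp, vs, rho):
    """Shear modulus G, bulk modulus K (Pa), and Poisson's ratio from Vp, Vs."""
    g = rho * vs**2
    k = rho * (vp**2 - 4.0 * vs**2 / 3.0)
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))
    return g, k, nu

g, k, nu = moduli_from_velocities(vp=4000.0, vs=2200.0, rho=2650.0)
print(f"G = {g/1e9:.1f} GPa, K = {k/1e9:.1f} GPa, nu = {nu:.3f}")
```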
11. Orbital-type trapping of elastic Lamb waves. PubMed Lomonosov, Alexey M; Yan, Shi-Ling; Han, Bing; Zhang, Hong-Chao; Shen, Zhong-Hua 2016-01-01 The interaction of laser-generated Lamb waves propagating in a plate with a sharp-angle conical hole was studied experimentally and numerically. Part of the energy of the incident wave is trapped within the conic area in two ways: the antisymmetric Lamb wave orbiting the center of the hole, and the wave localized at the acute edge. Parameters and conditions for optimal conversion of the incident wave into the trapped modes were studied in this work. Experiments were performed using the laser stroboscopic shearography technique, which delivers the time evolution of the acoustic field over the whole area of interest. The effect of trapping can be used for efficient damping, similar to the one-dimensional acoustical black hole effect. 12. The impact of intraocular pressure on elastic wave velocity estimates in the crystalline lens. PubMed Park, Suhyun; Yoon, Heechul; Larin, Kirill V; Emelianov, Stanislav Y; Aglyamov, Salavat R 2016-12-20 Intraocular pressure (IOP) is believed to influence the mechanical properties of ocular tissues, including the cornea and sclera. The elastic properties of the crystalline lens have been mainly investigated with regard to presbyopia, the age-related loss of accommodation power of the eye. However, the relationship between the elastic properties of the lens and IOP remains to be established. The objective of this study is to measure the elastic wave velocity, which represents the mechanical properties of tissue, in the crystalline lens ex vivo in response to changes in IOP. The elastic wave velocities in the cornea and lens from seven enucleated bovine globe samples were estimated using ultrasound shear wave elasticity imaging. To generate and then image the elastic wave propagation, an ultrasound imaging system was used to transmit a 600 µs pushing pulse at 4.5 MHz center frequency and to acquire ultrasound tracking frames at a 6 kHz frame rate. The pushing beams were separately applied to the cornea and lens. IOP in the eyeballs was varied from 5 to 50 mmHg. The results indicate that while the elastic wave velocity in the cornea increased from 0.96 ± 0.30 m s⁻¹ to 6.27 ± 0.75 m s⁻¹ as IOP was elevated from 5 to 50 mmHg, there were insignificant changes in the elastic wave velocity in the crystalline lens, with minimum and maximum speeds of 1.44 ± 0.27 m s⁻¹ and 2.03 ± 0.46 m s⁻¹, respectively. This study shows that ultrasound shear wave elasticity imaging can be used to assess the biomechanical properties of the crystalline lens noninvasively. Also, it was observed that the dependency of the crystalline lens stiffness on the IOP was significantly lower in comparison with that of the cornea.
14. Regional seismic wavefield computation on a 3-D heterogeneous Earth model by means of coupled traveling wave synthesis USGS Publications Warehouse Pollitz, F.F. 2002-01-01 I present a new algorithm for calculating seismic wave propagation through a three-dimensional heterogeneous medium using the framework of mode coupling theory originally developed to perform very low frequency (f < ~0.01-0.05 Hz) seismic wavefield computation. It is a Green's function approach for multiple scattering within a defined volume and employs a truncated traveling wave basis set using the locked mode approximation. Interactions between incident and scattered wavefields are prescribed by mode coupling theory and account for the coupling among surface waves, body waves, and evanescent waves. The described algorithm is, in principle, applicable to global and regional wave propagation problems, but I focus on higher frequency (typically f up to ~0.25 Hz) applications at regional and local distances where the locked mode approximation is best utilized, and which involve wavefields strongly shaped by propagation through a highly heterogeneous crust. Synthetic examples are shown for P-SV-wave propagation through a semi-ellipsoidal basin and SH-wave propagation through a fault zone. 15. Guiding of elastic waves in a two-dimensional graded phononic crystal plate Guo, Yuning; Hettich, Mike; Dekorsy, Thomas 2017-01-01 The guiding of elastic waves in a two-dimensional graded phononic crystal plate is investigated. This effect is induced by the resonance coupling of attachments and matrix in a silicon pillar-substrate system, and the resonance frequencies of the guided surface modes can be tuned by tailoring the geometry and material properties of the pillars. The resonance frequencies increase with the radius and Young's modulus, and decrease with the height and density of the pillars, which provides several possibilities for the guiding of elastic waves. These devices show the capability of spatially selecting different frequencies into designed channels, thus acting as a phononic multi-channel filter.
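The trends reported above can be reproduced with the textbook Euler-Bernoulli fundamental of a cylindrical cantilever, used here only as a stand-in for the pillar resonance; the dimensions and silicon-like properties are illustrative, not the device's:

```python
import numpy as np

# Hedged sketch: the clamped-free beam fundamental scales as
# f ~ (r / h^2) * sqrt(E / rho), so it rises with radius and Young's modulus
# and falls with height and density, matching the abstract's trends.

def pillar_frequency(radius, height, E, rho):
    """Fundamental bending frequency (Hz) of a clamped-free cylinder."""
    lam1 = 1.875104   # first clamped-free eigenvalue of the beam equation
    return (lam1**2 / (2.0 * np.pi)) * (radius / (2.0 * height**2)) * np.sqrt(E / rho)

f_ref = pillar_frequency(radius=1e-6, height=5e-6, E=169e9, rho=2330.0)
f_tall = pillar_frequency(radius=1e-6, height=10e-6, E=169e9, rho=2330.0)
print(f"reference pillar: {f_ref/1e6:.1f} MHz; doubled height: {f_tall/1e6:.1f} MHz")
```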
16. Elastic waves at periodically-structured surfaces and interfaces of solids SciTech Connect Every, A. G.; Maznev, A. A. 2014-12-15 This paper presents a simple treatment of elastic wave scattering at periodically structured surfaces and interfaces of solids, and of the existence and nature of surface acoustic waves (SAW) and interfacial waves (IW) at such structures. Our treatment is embodied in phenomenological models in which the periodicity resides in the boundary conditions. These yield zone folding and band gaps at the boundary of, and within, the Brillouin zone. Above the transverse bulk wave threshold there occur leaky or pseudo-SAW and pseudo-IW, which are attenuated via radiation into the bulk wave continuum. These have a pronounced effect on the transmission and reflection of bulk waves. We provide examples of pseudo-SAW and pseudo-IW for which the coupling to the bulk wave continuum vanishes at isolated points in the dispersion relation. These supersonic guided waves correspond to embedded discrete eigenvalues within a radiation continuum. We stress the generality of the phenomena that are exhibited at widely different scales of length and frequency, and their relevance to situations as diverse as the guiding of seismic waves in mine stopes, the metrology of periodic metal interconnect structures in the semiconductor industry, and elastic wave scattering by an array of coplanar cracks in a solid. 17. Elastic Waves: Mental Models and Teaching/Learning Sequences Tarantino, Giovanni In recent years, many research studies have pointed out relevant student difficulties in understanding the physics of mechanical waves. Moreover, it has been reported that these difficulties concern some fundamental concepts, such as the role of the medium in wave propagation, the superposition principle, and the mathematical description of waves involving the use of functions of two variables. In the context of pre-service courses for teacher preparation, a teaching/learning (T/L) sequence based on simple RTL experiments and interactive simulation environments, aimed at showing the effect of medium properties on the propagation speed of a wave pulse, has been trialled. Here, preliminary results of investigations carried out with a group of 120 trainee teachers (TT) are reported and discussed. 18. Highly Nonlinear Wave Propagation in Elastic Woodpile Periodic Structures Kim, E.; Li, F.; Chong, C.; Theocharis, G.; Yang, J.; Kevrekidis, P. G. 2015-03-01 In the present work, we experimentally implement, numerically compute with, and theoretically analyze a configuration in the form of a single-column woodpile periodic structure. Our main finding is that a Hertzian, locally resonant, woodpile lattice offers a test bed for the formation of genuinely traveling waves composed of a strongly localized solitary wave on top of a small-amplitude oscillatory tail. This type of wave, called a nanopteron, is not only motivated theoretically and numerically, but is also visualized experimentally by means of a laser Doppler vibrometer. This system can also be useful for manipulating stress waves at will, for example, to achieve strong attenuation and modulation of high-amplitude impacts without relying on damping in the system. 19. Interaction of acoustic-gravity waves with an elastic shelf-break 2016-04-01 In contrast to surface gravity waves, which induce a flow field that decays exponentially with depth, acoustic-gravity waves oscillate throughout the water column. Their oscillatory profile exerts stresses on the ground, which provides a natural explanation for the earth's microseism (Longuet-Higgins, 1950).
This work is an extension of the shelf-break problem by Kadri and Stiassnie (2012), who considered the sea floor and the shelf-break to be rigid, and the elastic problem by Eyov et al. (2013), who illustrated the importance of the sea-floor elasticity. In this study we formulate and solve the two-dimensional problem of an incident acoustic-gravity wave mode propagating over an elastic wall and interacting with a shelf-break in a weakly compressible fluid. As the modes approach the shelf-break, part of the energy is reflected whereas the other part is transmitted. A mathematical model is formulated by matching particular solutions for each subregion of constant depth along vertical boundaries; the resulting matrix equation is then solved numerically. The physical properties of these waves are studied, and compared with those for waves over a rigid bottom. The present work broadens our knowledge of acoustic-gravity wave propagation in realistic environments and can potentially benefit the early detection of tsunamis generated by landslides or submarine earthquakes. References Eyov E., Klar A., Kadri U., Stiassnie M. 2013 Progressive waves in a compressible-ocean with an elastic bottom. Wave Motion 50, 929-939. Kadri, U., and M. Stiassnie, 2012 Acoustic-gravity waves interacting with the shelf break. J. Geophys. Res. 117, C03035. Longuet-Higgins, M.S. 1950 A theory of the origin of microseisms. Philos. Trans. R. Soc. Lond. A 243, 1-35. 20. Study of S-wave ray elastic impedance for identifying lithology and fluid Gong, Xue-Ping; Zhang, Feng; Li, Xiang-Yang; Chen, Shuang-Quan 2013-06-01 In this paper, we derive an approximation of the SS-wave reflection coefficient and the expression of the S-wave ray elastic impedance (SREI) in terms of the ray parameter. The SREI can be expressed by the S-wave incidence angle or the P-wave reflection angle, referred to as SREIS and SREIP, respectively. Our study using elastic models derived from real log measurements shows that SREIP has better capability for lithology and fluid discrimination than SREIS and conventional S-wave elastic impedance (SEI). We evaluate the SREIP feasibility using 25 groups of samples from Castagna and Smith (1994). Each sample group is constructed by using shale, brine-sand, and gas-sand. Theoretical evaluation also indicates that SREIP at large incident angles is more sensitive to fluid than conventional fluid indicators. Real seismic data application also shows that SREIP at large angles calculated using P-wave and S-wave impedance can efficiently characterize tight gas-sand. 1. Scattering of waves by three-dimensional obstacles in elastic metamaterials with zero index Liu, Fengming; Zhang, Feng; Wei, Wei; Hu, Ni; Deng, Gang; Wang, Ziyu 2016-12-01 The scattering of elastic waves by three-dimensional obstacles in isotropic elastic zero-index metamaterials (ZIM) is theoretically investigated. We show that the zero values of each single effective parameter, and their various combinations, of the elastic ZIM can produce different types of wave propagation. In particular, there is no mode conversion when either a longitudinal (P) wave or a transverse (S) wave is scattered by the obstacles in a specific type of double-ZIM (DZIM) possessing a near-zero reciprocal of the shear modulus and near-zero mass density. When the obstacle is off resonance, elastic waves are scarcely scattered; nevertheless, the scattering cross section of the obstacle can be drastically enhanced by orders of magnitude when it is on resonance.
In another type of DZIM, possessing a near-zero reciprocal of the bulk modulus and near-zero mass density, mode conversion occurs during the scattering process and many other transmission characteristics also differ from the former case. Moreover, enhanced transmission can be realized for various types of single-ZIM (SZIM) by introducing obstacles, and numerical analysis shows that the enhanced transmission is due to resonant modes arising in the embedded obstacles. We expect that our findings could have potential practical applications, such as seismic protection and on-chip phononic devices. 2. Detailed 3-D S-wave velocity beneath the High Lava Plains, Oregon, from 2-plane-wave Rayleigh wave inversions Wagner, L. S.; Forsyth, D. W.; Fouch, M. J.; James, D. E. 2009-12-01 The High Lava Plains (HLP) of eastern Oregon represent an unusual track of bimodal volcanism extending from the southeastern-most corner of the state to its current position beneath the Newberry Volcano on the eastern margin of the Cascades. The silicic volcanism is time progressive along this track, beginning some 15 Ma near the Owyhee plateau and then trending to the northeast. The timing and location of the start of the HLP coincide with those of the initial volcanism associated with the Yellowstone/Snake River Plain track (YSRP). While the YSRP has often been interpreted as the classic intra-continental hot spot track, the HLP, which trends almost normal to absolute plate motion, is harder to explain. This study uses the 100+ stations associated with the HLP seismic deployment together with another ~100 Earthscope Transportable Array (TA) stations to perform a high-resolution inversion for Rayleigh wave phase velocities using the 2-plane-wave methodology of Forsyth and Li (2004). Because of the comparatively small grid spacing of this study, we are able to discern much finer-scale structures than studies looking at the entire western U.S. with only TA stations. Preliminary results indicate very low velocities across the study area, especially at upper mantle depths. Especially low velocities are seen beneath the Owyhee plateau and along both the HLP and YSRP tracks. Final details about the exact geometries of these features will help constrain possible scenarios for the formation of the HLP volcanic sequence. 3. A 3-D crustal and uppermost mantle model of the western US from receiver functions and surface wave dispersion derived from ambient noise and teleseismic earthquakes Shen, W.; Schulte-Pelkum, V.; Ritzwoller, M. H. 2011-12-01 The joint inversion of surface wave dispersion and receiver functions was proven feasible on a station-by-station basis more than a decade ago. Joint application to a large number of stations across a broad region such as the western US is more challenging, however, because of the different resolutions of the two methods. Improvements in resolution in surface wave studies derived from ambient noise and array-based methods applied to earthquake data now allow surface wave dispersion and receiver functions to be inverted simultaneously across much of the Earthscope/USArray Transportable Array (TA), and we have developed a Monte-Carlo procedure for this purpose. As a proof of concept we applied this procedure to a region containing 186 TA stations in the intermountain west, including a variety of tectonic settings such as the Colorado Plateau, the Basin and Range, the Rocky Mountains, and the Great Plains. This work has now been expanded to encompass all TA stations in the western US.
Our approach includes three main components. (1) We enlarge the Earthscope Automated Receiver Survey (EARS) receiver function database by adding more events within a quality control procedure. A back-azimuth-independent receiver function and its associated uncertainties are constructed using a harmonic stripping algorithm. (2) Rayleigh wave dispersion curves are generated from eikonal tomography applied to ambient noise cross-correlation data and Helmholtz tomography applied to teleseismic surface wave data to yield dispersion maps from 8 sec to 80 sec period. (3) We apply a Metropolis Monte Carlo algorithm to invert for the average velocity structure beneath each station. Simple kriging is applied to interpolate the discrete results into a continuous 3-D model. This method has now been applied to over 1,000 TA stations in the western US. We show that the receiver functions and surface wave dispersion data can be reconciled beneath more than 80% of the stations using a smooth 4. Elastic Waves in Binary Solid Liquid Mixtures, Similarities at Macro and Nano Scales Tavossi, Hasson M. 2007-04-01 Stress wave propagation in solid-liquid mixtures at ultrasonic frequencies, in some cases, resembles wave propagation behaviors of materials at nanometer or atomic scales. For instance, it can be shown that wave dispersion, attenuation, and cutoff-frequency effects depend on the same structural parameters as those observed at nano or atomic levels and can have similar interpretations at both scales. It follows that, to investigate theoretical models of wave and matter interactions at the nano scale, it is more convenient to use, as experimental tools, the readily analyzable models of propagation at macro scales. Experimental findings on elastic wave propagation in mixtures of liquid and solid particles will be presented and discussed. Results of wave dispersion, attenuation, band-pass, and cutoff frequency measured for ultrasonic waves in inhomogeneous mixtures of solid and liquid will be presented, showing these similarities at the radically different scales. 5. Study of Ocean Bottom Interactions with Acoustic Waves by a New Elastic Wave Propagation Algorithm and an Energy Flow Analysis Technique DTIC Science & Technology 2016-06-07 Ru-Shan Wu...imaging to study the wave/sea-bottom interaction, energy partitioning, scattering mechanism and other problems that are crucial for many ocean bottom...6. Effective-medium theory of elastic waves in random networks of rods. PubMed Katz, J I; Hoffman, J J; Conradi, M S; Miller, J G 2012-06-01 We formulate an effective medium (mean field) theory of a material consisting of randomly distributed nodes connected by straight slender rods, hinged at the nodes. Defining wavelength-dependent effective elastic moduli, we calculate both the static moduli and the dispersion relations of ultrasonic longitudinal and transverse elastic waves. At finite wave vector k the waves are dispersive, with phase and group velocities decreasing with increasing wave vector. These results are directly applicable to networks with empty pore space. They also describe the solid matrix in two-component (Biot) theories of fluid-filled porous media.
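The dispersion behavior just described, with phase and group velocities both falling as the wave vector grows, is easy to probe numerically. Below is a minimal Python sketch using an assumed toy dispersion relation, not the paper's effective-medium result; the constants c and a are illustrative placeholders.

```python
import numpy as np

def phase_and_group_velocity(omega, k):
    """Phase velocity w/k and group velocity dw/dk from a sampled dispersion curve."""
    v_phase = omega / k
    v_group = np.gradient(omega, k)  # centered finite differences
    return v_phase, v_group

# Toy dispersion relation (an assumption for illustration only):
# omega(k) = c*k / sqrt(1 + (k*a)^2), which softens at large k.
c, a = 3000.0, 1e-3              # long-wavelength speed [m/s], microstructural length [m]
k = np.linspace(1e2, 2e4, 400)   # wave vector [1/m]
omega = c * k / np.sqrt(1.0 + (k * a) ** 2)

v_p, v_g = phase_and_group_velocity(omega, k)
print(v_p[0], v_g[0])    # both near c at small k
print(v_p[-1], v_g[-1])  # both reduced at large k, with v_g < v_p
```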
We suggest the possibility of low-density materials with higher ratios of stiffness and strength to density than those of foams, aerogels, or trabecular bone. 7. Wave propagation analysis of a size-dependent magneto-electro-elastic heterogeneous nanoplate 2016-12-01 The analysis of the wave propagation behavior of a magneto-electro-elastic functionally graded (MEE-FG) nanoplate is carried out in the framework of a refined higher-order plate theory. In order to take into account the small-scale influence, the nonlocal elasticity theory of Eringen is employed. Furthermore, the material properties of the nanoplate are considered to be variable through the thickness based on the power-law form. Nonlocal governing equations of the MEE-FG nanoplate have been derived using Hamilton's principle. The results of the present study have been validated by comparing them with previous research. An analytical solution of the governing equations is performed to obtain wave frequencies, phase velocities and escape frequencies. The effect of different parameters, such as wave number, nonlocal parameter, gradient index, magnetic potential and electric voltage, on the wave dispersion characteristics of MEE-FG nanoscale plates is studied in detail. 8. Propagation of elastic pressure waves in a beam window Davenne, T. R.; Loveridge, P. 2016-09-01 As particle accelerator beam power increases, stress on beam windows and targets increases. Many simulations are carried out to model the dynamic stresses that are induced in these critical components by near-instantaneous beam heating. However, while it is often easy to obtain simulation results, there are few analytical solutions available to check the accuracy of simulation techniques. We follow the strand of several authors over the years who have offered analytical solutions to the classic problem of radial stress waves in a beam window. Many of these significant contributions have still had niggling issues with regard to resolving peak stress and limitations on the applied initial heating condition. We formulate an analytical expression for the radial pressure waves based on a Green's function solution of Feynman's wave equation. A complete analysis of the problem demonstrates that the hypothesis that beam-induced pressure waves are composed of a static and a transient component is indeed correct. The analytical expression is shown to give stable bounded solutions with easily determined peak stress levels. Finally, a comparison between the analytical expression and finite element analysis of the problem yields some general guidelines that should be adhered to for achieving accurate stress wave simulations. 9. Modelling of nonlinear wave scattering in a delaminated elastic bar PubMed Central Khusnutdinova, K. R.; Tranter, M. R. 2015-01-01 The integrity of layered structures, extensively used in modern industry, strongly depends on the quality of their interfaces; poor adhesion or delamination can lead to a failure of the structure. Can nonlinear waves help us to control the quality of layered structures? In this paper, we numerically model the dynamics of a long longitudinal strain solitary wave in a split, symmetric layered bar. The recently developed analytical approach, based on matching two asymptotic multiple-scales expansions and the integrability theory of the Korteweg–de Vries equation by the inverse scattering transform, is used to develop an effective semi-analytical numerical approach for these types of problems.
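Since the entry above is built around the Korteweg–de Vries equation, a minimal sketch of a direct finite-difference (Zabusky–Kruskal-type leapfrog) KdV integrator may help fix ideas. The normalization, domain, and parameters below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

# Leapfrog scheme for the KdV equation  u_t + 6 u u_x + u_xxx = 0
# on a periodic domain, initialized with the exact one-soliton solution.
N, L = 512, 100.0
dx = L / N
x = np.linspace(0, L, N, endpoint=False)
dt = 1e-4           # small enough for the dispersive stability limit ~ dx^3
steps = 20000       # integrate to t = 2

c = 4.0                                                          # soliton speed
u_prev = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 25.0)) ** 2   # u = (c/2) sech^2

def rhs(u):
    """-(6 u u_x + u_xxx) with centered differences and periodic wrap-around."""
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    up2, um2 = np.roll(u, -2), np.roll(u, 2)
    ux = (up1 - um1) / (2 * dx)
    uxxx = (up2 - 2 * up1 + 2 * um1 - um2) / (2 * dx ** 3)
    return -(6.0 * u * ux + uxxx)

u = u_prev + dt * rhs(u_prev)        # first step: forward Euler
for _ in range(steps):               # then leapfrog in time
    u_next = u_prev + 2 * dt * rhs(u)
    u_prev, u = u, u_next

print("peak amplitude ~", u.max())   # should stay near c/2 = 2.0 as the soliton travels
```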
We also employ a direct finite-difference method and compare the numerical results with each other, and with the analytical predictions. The numerical modelling confirms that delamination causes fission of an incident solitary wave and, thus, can be used to detect the defect. PMID:26730218 10. Elastic guided waves in plates with surface roughness. II. Experiments SciTech Connect Lobkis, O.I.; Chimenti, D.E. 1997-07-01 In this article are reported fundamental experimental measurements on guided waves in plates with surface roughness; the experimental data are critically compared to theoretical calculations presented in Part I. All experiments, in either immersion or contact coupling mode, are modeled by the theory developed in I that exploits the phase-screen approximation. In this theory the effect of the rough surface on the received signal, on a local scale, is assumed to be restricted to the signal phase. The comparisons between experiment and predictions show good agreement in most regimes, despite the rather simplifying approximations contained in the calculation. The model is shown to fail only when the guided wave vector is close to a branch point, that is, when the guided wave phase velocity approaches the compressional or shear wavespeeds of the plate. Near these values the internal partial waves comprising the guided wave strike the surfaces at grazing incidence or are evanescent, and a simple phase-screen model cannot account for this behavior. Elsewhere in the guided wave spectrum, agreement is quite good. Of practical significance is the finding that the rough-surface damping contrast can be maximized by configuring the experimental conditions to measure just below and well above the compressional critical angle. Aluminum samples, prepared by indenting or sandblasting and independently profiled to determine rms roughness, are measured in immersion and in contact transduction, the latter with wedge couplers and line sources. The influence of the roughness in immersion experiments is strongly affected by whether the upper or lower plate surface is rough, but only in the interaction zone between specular and nonspecular reflection components. © 1997 Acoustical Society of America. 11. Elastic wave propagation through a material with voids Wright, Thomas W. 1998-10-01 An exact mathematical analogy exists between plane wave propagation through a material with voids and axial wave propagation along a circular cylindrical rod with radial shear and inertia. In both cases the internal energy can be regarded as a function of a displacement gradient, an internal variable, and the gradient of the internal variable. In the rod the internal variable represents radial strain, and in the material with voids it is related to changes in void volume fraction. In both cases kinetic energy is associated not only with particle translation, but also with the internal variable. In the rod this microkinetic energy represents radial inertia; in the material with voids it represents dilatational inertia around the voids. Thus, the basis for the analogy is that in both cases there are two kinematic degrees of freedom, the Lagrangians are identical in form, and therefore, the Euler-Lagrange equations are also identical in form. Of course, the constitutive details and the internal length scales for the two cases are very different, but insight into the behavior of rods can be transferred directly to interpreting the effects of wave propagation in a material with voids.
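A schematic form of the shared Lagrangian structure described in the preceding entry, in our notation rather than the paper's: with axial displacement $u(x,t)$, internal variable $\phi(x,t)$, translational inertia $\rho$ and micro-inertia $J$,

$$\mathcal{L} = \tfrac{1}{2}\rho\,\dot{u}^{2} + \tfrac{1}{2}J\,\dot{\phi}^{2} - W(u_x,\ \phi,\ \phi_x),$$

and the Euler-Lagrange equations, identical in form for the rod and for the material with voids, are

$$\rho\,\ddot{u} = \partial_x\!\left(\frac{\partial W}{\partial u_x}\right), \qquad J\,\ddot{\phi} = \partial_x\!\left(\frac{\partial W}{\partial \phi_x}\right) - \frac{\partial W}{\partial \phi}.$$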
The main result is that just as impact on the end of a rod produces a pulse that first travels with the longitudinal wave speed and then transfers the bulk of its energy into a dispersive wave that travels with the bar speed (calculated using Young's modulus), so impact on the material with voids produces a pulse that also begins with the longitudinal speed but then transfers to a slower dispersive wave whose speed is determined by an effective longitudinal modulus. The rate of transfer and the strength of the dispersive effect depend on the details in the two cases. 12. Implications of elastic wave velocities for Apollo 17 rock powders NASA Technical Reports Server (NTRS) Talwani, P.; Nur, A.; Kovach, R. L. 1974-01-01 Ultrasonic P- and S-wave velocities of lunar rock powders 172701, 172161, 170051, and 175081 were measured at room temperature and to 2.5 kb confining pressure. The results compare well with those of terrestrial volcanic ash and powdered basalt. P-wave velocity values up to pressures corresponding to a lunar depth of 1.4 km preclude cold compaction alone as an explanation for the observed seismic velocity structure at the Apollo 17 site. Application of small amounts of heat with simultaneous application of pressure causes rock powders to achieve seismic velocities equivalent to those of competent rocks. 13. 3D Numerical Simulation of the Wave and Current Loads on a Truss Foundation of the Offshore Wind Turbine During the Extreme Typhoon Event Lin, C. W.; Wu, T. R.; Chuang, M. H.; Tsai, Y. L. 2015-12-01 The wind in the Taiwan Strait is strong and stable, which offers an opportunity to build offshore wind farms. However, frequent typhoons and strong ocean currents require more attention to the wave force and local scour around the foundation of the turbine piles. In this paper, we introduce an in-house, multi-phase CFD model, Splash3D, for solving the flow field with breaking waves, strong turbulence, and scour phenomena. Splash3D solves the Navier-Stokes equations with Large-Eddy Simulation (LES) for the fluid domain, and uses the volume of fluid (VOF) method with piecewise linear interface reconstruction (PLIC) to describe the breaking free surface. The waves were generated inside the computational domain by an internal wave maker with a mass-source function. This function is designed to adequately simulate the wave condition under observed extreme events based on the JONSWAP spectrum and the dispersion relationship. A Dirichlet velocity boundary condition is assigned at the upstream boundary to induce the ocean current. At the downstream face, the sponge-layer method combined with a pressure Dirichlet boundary condition is specified for dissipating waves and conducting current out of the domain. Numerical pressure gauges are uniformly set on the structure surface to obtain the force distribution on the structure. As for the local scour around the foundation, we developed a Discontinuous Bi-viscous Model (DBM) for the development of the scour hole. Model validations are presented as well. The force distribution under the observed irregular wave condition was extracted by the irregular-surface force extraction (ISFE) method, which provides a fast and elegant way to integrate the force acting on the surface of an irregular structure. From the simulation results, we found that the total force is mainly induced by the impinging waves, and the force from the ocean current is about 2 orders of magnitude smaller than the wave force. We also found the dynamic pressure, wave height, and the 14.
Topology optimization for wave propagation and vibration phenomena in elastic and piezoelectric solids Rupp, Cory J. Topology optimization is a versatile design tool for the synthesis of heterogeneous engineering systems where the optimal distribution of constituent materials is sought such that a prescribed measure of performance is optimized. In this dissertation, topology optimization methodologies are developed for solving problems associated with wave propagation and vibration in elastic and piezoelectric media. These methodologies utilize the finite element method in conjunction with gradient-based optimization algorithms to create functional materials, structures, and devices. The methodologies are demonstrated in a number of examples and illustrative studies that progress the state-of-the-art in the fields of topology optimization, elastic waveguides, phononic band-gap materials, and piezoelectric energy harvesting systems. These include the design of bulk and surface wave elastic waveguides in two and three dimensions that guide various forms of wave energy as desired, band-gap structures that provide tailored frequency transmission spectrums for bulk waves and surface waves, band-gap materials that prevent wave propagation within certain frequencies, and piezoelectric energy harvesting systems designed to optimize power output. Also addressed are previously unreported issues with the application of topology optimization to these types of problems including the role of physical phenomena in the solutions, mesh dependency effects, non-uniqueness, and the impact of small feature sizes. 15. Impact of event-specific chorus wave realization for modeling the October 8-9, 2012, event using the LANL DREAM3D diffusion code Cunningham, G.; Tu, W.; Chen, Y.; Reeves, G. D.; Henderson, M. G.; Baker, D. N.; Blake, J. B.; Spence, H. 2013-12-01 During the interval October 8-9, 2012, the phase-space density (PSD) of high-energy electrons exhibited a dropout preceding an intense enhancement observed by the MagEIS and REPT instruments aboard the Van Allen Probes. The evolution of the PSD suggests heating by chorus waves, which were observed to have high intensities at the time of the enhancement [1]. Although intense chorus waves were also observed during the first Dst dip on October 8, no PSD enhancement was observed at this time. We demonstrate a quantitative reproduction of the entire event that makes use of three recent modifications to the LANL DREAM3D diffusion code: 1) incorporation of a time-dependent, low-energy, boundary condition from the MagEIS instrument, 2) use of a time-dependent estimate of the chorus wave intensity derived from observations of POES low-energy electron precipitation, and 3) use of an estimate of the last closed drift shell, beyond which electrons are assumed to have a lifetime that is proportional to their drift period around earth. The key features of the event are quantitatively reproduced by the simulation, including the dropout on October 8, and a rapid increase in PSD early on October 9, with a peak near L*=4.2. The DREAM3D code predicts the dropout on October 8 because this feature is dominated by magnetospheric compression and outward radial diffusion-the L* of the last closed drift-shell reaches a minimum value of 5.33 at 1026 UT on October 8. 
We find that a 'statistical' wave model based on historical CRRES measurements binned in AE* does not reproduce the enhancement, because the peak wave amplitudes are only a few tens of pT, whereas an 'event-specific' model reproduces both the magnitude and timing of the enhancement very well, as shown in the figure, because the peak wave amplitudes are 10x higher. [1] 'Electron Acceleration in the Heart of the Van Allen Radiation Belts', G. D. Reeves et al., Science 1237743, Published online 25 July 2013 [DOI:10.1126/science 16. Broadband sub-millimeter wave amplifier module with 38 dB gain and 8.3 dB noise figure Sarkozy, S.; Leong, K.; Lai, R.; Leakey, R.; Yoshida, W.; Mei, X.; Lee, J.; Liu, P.-H.; Gorospe, B.; Deal, W. R. 2011-05-01 Broadband sub-millimeter wave technology has received significant attention for potential applications in security, medical, and military imaging. Despite theoretical advantages of reduced size, weight, and power compared to current millimeter-wave systems, sub-millimeter-wave systems are hampered by a fundamental lack of amplification with sufficient gain and noise figure properties. We report on the development of a sub-millimeter wave amplifier module as part of a broadband pixel operating from 300-350 GHz, biased off a single 2 V power supply. Over this frequency range, > 38 dB gain and < 8.3 dB noise figure are obtained and represent the current state-of-the-art performance capabilities. The prototype pixel chain consists of two WR3 waveguide amplifier blocks, a horn antenna, and a diode detector. The low noise amplifier Sub-Millimeter-wave Monolithic Integrated Circuit (SMMIC) was originally developed under the DARPA SWIFT and THz Electronics programs and is based on sub-50 nm Indium Arsenide Composite Channel (IACC) transistor technology with a projected maximum oscillation frequency fmax > 1.0 THz. This development and demonstration may bring to life future sub-millimeter-wave and THz applications such as solutions to brown-out problems, ultra-high bandwidth satellite communication cross-links, and future planetary exploration missions. 17. Determination of elastic properties of a MnO2 coating by surface acoustic wave velocity dispersion analysis Sermeus, J.; Sinha, R.; Vanstreels, K.; Vereecken, P. M.; Glorieux, C. 2014-07-01 MnO2 is a material of interest in the development of high energy-density batteries, specifically as a coating material for internal 3D structures, thus ensuring rapid energy deployment. Its electrochemical properties have been mapped extensively, but there are, to the best of the authors' knowledge, no records of the elastic properties of thin-film MnO2. Impulsive stimulated thermal scattering (ISTS), also known as the heterodyne diffraction or transient grating technique, was used to determine the Young's modulus (E) and porosity (ψ) of a 500 nm thick MnO2 coating on a Si(001) substrate. ISTS is an all-optical method that is able to excite and detect surface acoustic waves (SAWs) on opaque samples. From the measured SAW velocity dispersion, the Young's modulus and porosity were determined to be E = 25 ± 1 GPa and ψ = 42 ± 1%, respectively. These values were confirmed by independent techniques and determined by a most-squares analysis of the carefully fitted SAW velocity dispersion. This study demonstrates the ability of the presented technique to determine the elastic parameters of a thin, porous film on an anisotropic substrate. 18.
Full elastic characterization of absorptive rubber using laser excited guided ultrasonic waves Verstraeten, Bert; Xu, Xiadong; Martinez, Loïc; Glorieux, Christ 2012-05-01 Because of the highly damping nature of rubber, it is difficult to characterize its dynamic elastic properties using classical methods. In this paper, an experimental approach employing laser excited guided acoustic waves is proposed to accurately determine the real and imaginary part of the longitudinal and shear elastic modulus of a rubber layer. From the spatiotemporal evolution of a propagating laser excited Lamb wave measured by a laser Doppler vibrometer, which is scanning along a line perpendicular to a line of excitation, the phase velocity dispersion curves in the wave number - frequency domain are obtained. The results are interpreted in the framework of a detailed semianalytical study, analyzing the influence of elastic damping on the Lamb dispersion curves. This analysis is exploited to adequately fit the experimental dispersion curves and thus extract information about the elastic moduli and absorption coefficients of the rubber plate. The results are validated by a pulse-echo measurement, and by guided wave propagation results with the rubber layer connected in a bi-layer plate configuration to non-damping plates. 19. Global effects of transmitted shock wave propagation through the Earth's inner magnetosphere: First results from 3-D hybrid kinetic modeling Lipatov, A. S.; Sibeck, D. G. 2016-09-01 We use a new hybrid kinetic model to simulate the response of ring current, outer radiation belt, and plasmaspheric particle populations to impulsive interplanetary shocks. Since particle distributions attending the interplanetary shock waves and in the ring current and radiation belts are non-Maxwellian, wave-particle interactions play a crucial role in energy transport within the inner magnetosphere. Finite gyroradius effects become important in mass loading the shock waves with the background plasma in the presence of higher energy ring current and radiation belt ions and electrons. Initial results show that shocks cause strong deformations in the global structure of the ring current, radiation belt, and plasmasphere. The ion velocity distribution functions at the shock front, in the ring current, and in the radiation belt help us determine energy transport through the Earth's inner magnetosphere. 20. Standing-wave-excited multiplanar fluorescence in a laser scanning microscope reveals 3D information on red blood cells. PubMed Amor, Rumelo; Mahajan, Sumeet; Amos, William Bradshaw; McConnell, Gail 2014-12-08 Standing-wave excitation of fluorescence is highly desirable in optical microscopy because it improves the axial resolution. We demonstrate here that multiplanar excitation of fluorescence by a standing wave can be produced in a single-spot laser scanning microscope by placing a plane reflector close to the specimen. We report here a variation in the intensity of fluorescence of successive planes related to the Stokes shift of the dye. We show by the use of dyes specific for the cell membrane how standing-wave excitation can be exploited to generate precise contour maps of the surface membrane of red blood cells, with an axial resolution of ≈90 nm. The method, which requires only the addition of a plane mirror to an existing confocal laser scanning microscope, may well prove useful in studying diseases which involve the red cell membrane, such as malaria. 1. 
Standing-wave-excited multiplanar fluorescence in a laser scanning microscope reveals 3D information on red blood cells Amor, Rumelo; Mahajan, Sumeet; Amos, William Bradshaw; McConnell, Gail 2014-12-01 Standing-wave excitation of fluorescence is highly desirable in optical microscopy because it improves the axial resolution. We demonstrate here that multiplanar excitation of fluorescence by a standing wave can be produced in a single-spot laser scanning microscope by placing a plane reflector close to the specimen. We report here a variation in the intensity of fluorescence of successive planes related to the Stokes shift of the dye. We show by the use of dyes specific for the cell membrane how standing-wave excitation can be exploited to generate precise contour maps of the surface membrane of red blood cells, with an axial resolution of ~90 nm. The method, which requires only the addition of a plane mirror to an existing confocal laser scanning microscope, may well prove useful in studying diseases which involve the red cell membrane, such as malaria. 2. Theoretical and numerical investigation of HF elastic wave propagation in two-dimensional periodic beam lattices Tie, B.; Tian, B. Y.; Aubry, D. 2013-12-01 The elastic wave propagation phenomena in two-dimensional periodic beam lattices are studied by using the Bloch wave transform. The numerical modeling is applied to the hexagonal and the rectangular beam lattices, in which, both the in-plane (with respect to the lattice plane) and out-of-plane waves are considered. The dispersion relations are obtained by calculating the Bloch eigenfrequencies and eigenmodes. The frequency bandgaps are observed and the influence of the elastic and geometric properties of the primitive cell on the bandgaps is studied. By analyzing the phase and the group velocities of the Bloch wave modes, the anisotropic behaviors and the dispersive characteristics of the hexagonal beam lattice with respect to the wave propagation are highlighted in high frequency domains. One important result presented herein is the comparison between the first Bloch wave modes to the membrane and bending/transverse shear wave modes of the classical equivalent homogenized orthotropic plate model of the hexagonal beam lattice. It is shown that, in low frequency ranges, the homogenized plate model can correctly represent both the in-plane and out-of-plane dynamic behaviors of the beam lattice, its frequency validity domain can be precisely evaluated thanks to the Bloch modal analysis. As another important and original result, we have highlighted the existence of the retropropagating Bloch wave modes with a negative group velocity, and of the corresponding "retro-propagating" frequency bands. 3. Anomalous incident-angle and elliptical-polarization rotation of an elastically refracted P-wave. PubMed Fa, Lin; Fa, Yuxiao; Zhang, Yandong; Ding, Pengfei; Gong, Jiamin; Li, Guohui; Li, Lijun; Tang, Shaojie; Zhao, Meishan 2015-08-05 We report a newly discovered anomalous incident-angle of an elastically refracted P-wave, arising from a P-wave impinging on an interface between two VTI media with strong anisotropy. This anomalous incident-angle is found to be located in the post-critical incident-angle region corresponding to a refracted P-wave. Invoking Snell's law for a refracted P-wave provides two distinctive solutions before and after the anomalous incident-angle. 
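The entry above invokes Snell's law for the refracted P-wave. A minimal numerical sketch of that step, assuming simple isotropic velocities rather than the VTI media of the paper: past the critical angle the refraction angle returned by the complex arcsine acquires an imaginary part, signalling an inhomogeneous (evanescent) refracted wave.

```python
import cmath
import numpy as np

def refraction_angle(theta_i_deg, v1, v2):
    """Snell's law sin(t2)/v2 = sin(t1)/v1; a complex angle means an evanescent wave."""
    s = (v2 / v1) * np.sin(np.radians(theta_i_deg))
    return cmath.asin(s)  # complex arcsine handles post-critical incidence

v1, v2 = 2500.0, 4000.0  # assumed P-wave speeds [m/s], illustration only
theta_c = np.degrees(np.arcsin(v1 / v2))
print(f"critical angle: {theta_c:.1f} deg")

for th in (20.0, 38.7, 60.0):        # below, near, and above critical incidence
    t2 = refraction_angle(th, v1, v2)
    print(th, "->", t2)              # nonzero imaginary part above critical
```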
For an inhomogeneously refracted and elliptically polarized P-wave at the anomalous incident-angle, its rotational direction experiences an acute variation, from left-hand elliptical to right-hand elliptical polarization. The new findings provide us an enhanced understanding of acoustical-wave scattering and lead potentially to widespread and novel applications. 4. Anomalous incident-angle and elliptical-polarization rotation of an elastically refracted P-wave Fa, Lin; Fa, Yuxiao; Zhang, Yandong; Ding, Pengfei; Gong, Jiamin; Li, Guohui; Li, Lijun; Tang, Shaojie; Zhao, Meishan 2015-08-01 We report a newly discovered anomalous incident-angle of an elastically refracted P-wave, arising from a P-wave impinging on an interface between two VTI media with strong anisotropy. This anomalous incident-angle is found to be located in the post-critical incident-angle region corresponding to a refracted P-wave. Invoking Snell’s law for a refracted P-wave provides two distinctive solutions before and after the anomalous incident-angle. For an inhomogeneously refracted and elliptically polarized P-wave at the anomalous incident-angle, its rotational direction experiences an acute variation, from left-hand elliptical to right-hand elliptical polarization. The new findings provide us an enhanced understanding of acoustical-wave scattering and lead potentially to widespread and novel applications. 5. New constraints on the 3D shear wave velocity structure of the upper mantle underneath Southern Scandinavia revealed from non-linear tomography Wawerzinek, B.; Ritter, J. R. R.; Roy, C. 2013-08-01 We analyse travel times of shear waves, which were recorded at the MAGNUS network, to determine the 3D shear wave velocity (vS) structure underneath Southern Scandinavia. The travel time residuals are corrected for the known crustal structure of Southern Norway and weighted to account for data quality and pick uncertainties. The resulting residual pattern of subvertically incident waves is very uniform and simple. It shows delayed arrivals underneath Southern Norway compared to fast arrivals underneath the Oslo Graben and the Baltic Shield. The 3D upper mantle vS structure underneath the station network is determined by performing non-linear travel time tomography. As expected from the residual pattern the resulting tomographic model shows a simple and continuous vS perturbation pattern: a negative vS anomaly is visible underneath Southern Norway relative to the Baltic Shield in the east with a contrast of up to 4% vS and a sharp W-E dipping transition zone. Reconstruction tests reveal besides vertical smearing a good lateral reconstruction of the dipping vS transition zone and suggest that a deep-seated anomaly at 330-410 km depth is real and not an inversion artefact. The upper part of the reduced vS anomaly underneath Southern Norway (down to 250 km depth) might be due to an increase in lithospheric thickness from the Caledonian Southern Scandes in the west towards the Proterozoic Baltic Shield in Sweden in the east. The deeper-seated negative vS anomaly (330-410 km depth) could be caused by a temperature anomaly possibly combined with effects due to fluids or hydrous minerals. The determined simple 3D vS structure underneath Southern Scandinavia indicates that mantle processes might influence and contribute to a Neogene uplift of Southern Norway. 6. Full-Wave Tomographic and Moment Tensor Inversion Based on 3D Multigrid Strain Green’s Tensor Databases DTIC Science & Technology 2014-04-30 105. 
Shen, Y., et al., 2013, Construction of a nested, global empirical Green’s tensor database, Seismological Society of America meeting, Salt...W. Zhang, 2010, Full-wave ambient noise tomography of the northern Cascadia, SSA meeting (abstract), Seismological Research Letters, 81, 300. Shen 7. Comprehensive 3D Model of Shock Wave-Brain Interactions in Blast-Induced Traumatic Brain Injuries DTIC Science & Technology 2009-10-01 waves can cause brain damage by other mechanisms including excess pressure (leading to contusions), excess strain (leading to subdural hematomas and/or diffuse axonal injuries), and, in particular, cavitation effects (leading to subcellular damage). This project aims at the development of a 8. Low-frequency elastic waves alter pore-scale colloid mobilization SciTech Connect Beckham, Richard Edward; Abdel-fattah, Amr I; Roberts, Peter M; Ibrahim, Reem; Tarimala, Sownitri 2009-01-01 Naturally occurring seismic events and artificially generated low-frequency elastic waves have been observed to alter the production rates of oil and water wells, sometimes increasing and sometimes decreasing production, and to influence the turbidity of water wells. The decreases in production are of particular concern - especially when artificially generated elastic waves are applied as a method for enhanced oil recovery. The exact conditions that result in a decrease in production remain unknown. While the underlying environment is certainly complex, the observed increase in water well turbidity after seismic events suggests the existence of a mechanism that can affect both the subsurface flow paths and the mobilization of in-situ colloidal particles. This paper explores the macroscopic and microscopic effects of elastic wave stimulations on the release of colloidal particles and investigates the microscopic mechanism of particle release during stimulation. Experiments on a column packed with 1-mm borosilicate beads loaded with polystyrene microspheres demonstrate that low-frequency elastic wave stimulations enhance the mobilization of captured microspheres. Increasing the intensity of the stimulations increases the number of microspheres released and can also result in cyclical variations in effluent microsphere concentration during and after stimulations. Under a prolonged period of stimulation, the cyclical effluent variations coincided with fluctuations in the column pressure data. This behavior can be attributed to flow pathway fouling and/or rearrangements of the beads in the column. Optical microscopy observations of the beads during low-frequency oscillations reveal that the individual beads rotate, thereby rubbing against each other and scraping off portions of the adsorbed microspheres. These results support the theory that mechanical interactions between soil grains are important mechanisms in flow path alteration and the mobilization of naturally 9. Acoustic microscope based on magneto-elastic wave phase conjugator Brysev, A.; Krutyansky, L.; Pernod, P.; Preobrazhensky, V. 2000-05-01 Acoustic C-scan imaging (acoustic microscopy) by means of supercritical parametric wave phase conjugation (WPC) is studied experimentally. A phase conjugator based on a magneto-acoustic active material is used for compensating phase distortions introduced by solid and polymer aberration layers covering objects (electronic integrated circuits as examples). Improvement of images is demonstrated on an acoustic microscope operating at a frequency of 10 MHz. 10.
Extracting Earth's Elastic Wave Response from Noise Measurements Snieder, Roel; Larose, Eric 2013-05-01 Recent research has shown that noise can be turned from a nuisance into a useful seismic source. In seismology and other fields in science and engineering, the estimation of the system response from noise measurements has proven to be a powerful technique. To convey the essence of the method, we first treat the simplest case of a homogeneous medium to show how noise measurements can be used to estimate waves that propagate between sensors. We provide an overview of physics research—dating back more than 100 years—showing that random field fluctuations contain information about the system response. This principle has found extensive use in surface-wave seismology but can also be applied to the estimation of body waves. Because noise provides continuous illumination of the subsurface, the extracted response is ideally suited for time-lapse monitoring. We present examples of time-lapse monitoring as applied to the softening of soil after the 2011 Tohoku-oki earthquake, the detection of a precursor to a landslide, and temporal changes in the lunar soil. 11. Lamb waves propagation in elastic plane layers with a joint strip. PubMed Predoi, Mihai Valentin; Rousseau, Martine 2005-06-01 The Lamb waves are used for the ultrasonic characterization of welds because of their relative long-range propagation. In this paper, a simplified model of a weld-strip between two identical semi-infinite elastic layers is investigated. The reflected and transmitted ultrasonic fields are expressed by modal series whose coefficients are obtained by application of orthogonality relation. Comparisons with solutions obtained by finite elements wave propagation simulations are made. The energy balance between the incident and the scattered waves is also used to verify the accuracy of the obtained modal amplitudes. 12. Evolutions of elastic-plastic shock compression waves in different materials Kanel, G. I.; Zaretsky, E. B.; Razorenov, S. V.; Savinykh, A. S.; Garkushin, G. V. 2015-06-01 Measurements of decay of the elastic precursor wave are used to determine the initial plastic strain rate as a function of the stress. Last years we performed large series of such kind experiments with metals and alloys at various temperatures, ceramics and glasses. In course of these measurements we observed several unexpected effects which have not got exhaustive explanations yet. In the presentation, we'll discuss a departure from self-similar development of the wave process which is accompanied with apparent sub-sonic wave propagation, changes of shape of elastic precursor wave as a result of variations in the material structure and the temperature, unexpected peculiarities of reflection of elastic-plastic waves from free surface, effects of internal friction at shock compression of glasses and some other effects. It seems the experimental data contain more information about kinetics of the time-dependent phenomena than we are able to get from their analysis now. Financial support from the Russian Science Foundation via Grant No 14-12-01127 is gratefully acknowledged. 13. Low velocity crustal flow and crust-mantle coupling mechanism in Yunnan, SE Tibet, revealed by 3D S-wave velocity and azimuthal anisotropy Chen, Haopeng; Zhu, Liangbao; Su, Youjin 2016-08-01 We used teleseismic data recorded by a permanent seismic network in Yunnan, SE Tibet, and measured the interstation Rayleigh wave phase velocity between 10 and 60 s. 
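The noise-interferometry principle described in the "Extracting Earth's Elastic Wave Response from Noise Measurements" entry above, that cross-correlating long diffuse-noise records at two sensors recovers the inter-sensor travel time, can be illustrated with a fully synthetic toy example; every parameter below is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noise interferometry: a common noise field arrives at sensor B
# with a 0.5 s delay relative to sensor A; the cross-correlation of the two
# records peaks at that travel time.
fs = 100.0                 # sampling rate [Hz]
n = 10_000                 # 100 s of noise
delay = int(0.5 * fs)      # inter-sensor travel time in samples

noise = rng.standard_normal(n)                                 # common noise field
a = noise + 0.1 * rng.standard_normal(n)                       # sensor A record
b = np.roll(noise, delay) + 0.1 * rng.standard_normal(n)       # sensor B, delayed

xcorr = np.correlate(b, a, mode="full")
lags = np.arange(-n + 1, n) / fs
print("correlation peak at lag [s]:", lags[np.argmax(xcorr)])  # ~ +0.5
```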
A two-step inversion scheme was used to invert for the 3D S-wave velocity and azimuthal anisotropy structure of 10-110 km. The results show that there are two low velocity channels between depths of 20-30 km in Yunnan and that the fast axes are sub-parallel to the strikes of the low velocity channels, which supports the crustal flow model. The azimuthal anisotropy pattern is quite complicated and reveals a complex crust-mantle coupling mechanism in Yunnan. The N-S trending Lüzhijiang Fault separates the Dianzhong Block into two parts. In the western Dianzhong Block, the fast axis of the S-wave changes with depth, which indicates that the crust and the lithospheric mantle are decoupled. In the eastern Dianzhong Block and the western Yangtze Craton, the crust and the lithospheric mantle may be decoupled because of crustal flow, despite a coherent S-wave fast axis at depths of 10-110 km. In addition, the difference between the S-wave fast axis in the lithosphere and the SKS splitting measurement suggests that the lithosphere and the upper mantle are decoupled there. In the Baoshan Block, the stratified anisotropic pattern suggests that the crust and the upper mantle are decoupled. 14. An energy absorbing far-field boundary condition for the elastic wave equation SciTech Connect 2008-07-15 The authors present an energy absorbing non-reflecting boundary condition of Clayton-Engquist type for the elastic wave equation together with a discretization which is stable for any ratio of compressional to shear wave speed. They prove stability for a second order accurate finite-difference discretization of the elastic wave equation in three space dimensions together with a discretization of the proposed non-reflecting boundary condition. The stability proof is based on a discrete energy estimate and is valid for heterogeneous materials. The proof includes all six boundaries of the computational domain where special discretizations are needed at the edges and corners. The stability proof holds also when a free surface boundary condition is imposed on some sides of the computational domain. 15. Loop heating by D.C. electric current and electromagnetic wave emissions simulated by 3-D EM particle zone NASA Technical Reports Server (NTRS) Sakai, J. I.; Zhao, J.; Nishikawa, K.-I. 1994-01-01 We have shown that a current-carrying plasma loop can be heated by magnetic pinch driven by the pressure imbalance between inside and outside the loop, using a 3-dimensional electromagnetic (EM) particle code. Both electrons and ions in the loop can be heated in the direction perpendicular to the ambient magnetic field, therefore the perpendicular temperature can be increased about 10 times compared with the parallel temperature. This temperature anisotropy produced by the magnetic pinch heating can induce a plasma instability, by which high-frequency electromagnetic waves can be excited. The plasma current which is enhanced by the magnetic pinch can also excite a kinetic kink instability, which can heat ions perpendicular to the magnetic field. The heating mechanism of ions as well as the electromagnetic emission could be important for an understanding of the coronal loop heating and the electromagnetic wave emissions from active coronal regions. 16. Global Effects of Transmitted Shock Wave Propagation Through the Earth's Inner Magnetosphere: First Results from 3-D Hybrid Kinetic Modeling NASA Technical Reports Server (NTRS) Lipatov, A. S.; Sibeck, D. G. 
2016-01-01 We use a new hybrid kinetic model to simulate the response of ring current, outer radiation belt, and plasmaspheric particle populations to impulsive interplanetary shocks. Since particle distributions attending the interplanetary shock waves and in the ring current and radiation belts are non-Maxwellian, wave-particle interactions play a crucial role in energy transport within the inner magnetosphere. Finite gyroradius effects become important in mass loading the shock waves with the background plasma in the presence of higher energy ring current and radiation belt ions and electrons. Initial results show that shocks cause strong deformations in the global structure of the ring current, radiation belt, and plasmasphere. The ion velocity distribution functions at the shock front, in the ring current, and in the radiation belt help us determine energy transport through the Earth's inner magnetosphere. 17. Data Communications Using Guided Elastic Waves by Time Reversal Pulse Position Modulation: Experimental Study PubMed Central Jin, Yuanwei; Ying, Yujie; Zhao, Deshuang 2013-01-01 In this paper, we present and demonstrate a low-complexity elastic wave signaling and reception method to achieve high-data-rate communication on dispersive solid elastic media, such as metal pipes, using piezoelectric transducers of PZT (lead zirconate titanate). Data communication is realized using pulse position modulation (PPM) as the signaling method and the elastic medium as the communication channel. The communication system first transmits a small number of training pulses to probe the dispersive medium. The time-reversed probe signals are then utilized as the information-carrying waveforms. Rapid timing acquisition of transmitted waveforms for demodulation over the elastic medium is made possible by exploiting the reciprocity property of guided elastic waves. The experimental tests were conducted using a National Instruments PXI system for waveform excitation and data acquisition. Data telemetry bit rates of 10 kbps, 20 kbps, 50 kbps and 100 kbps with average bit error rates of 0, 5.75 × 10⁻⁴, 1.09 × 10⁻² and 5.01 × 10⁻², respectively, out of a total of 40,000 transmitted bits were obtained when transmitting at a center frequency of 250 kHz and a 500 kHz bandwidth on steel pipe specimens. To emphasize the influence of time reversal, no complex processing techniques, such as adaptive channel equalization or error correction coding, were employed. PMID:23881122 18. 3D ambient noise Rayleigh wave tomography of Snæfellsjökull volcano, Iceland Obermann, Anne; Lupi, Matteo; Mordret, Aurélien; Jakobsdóttir, Steinunn S.; Miller, Stephen A. 2016-05-01 From May to September 2013, 21 seismic stations were deployed around the Snæfellsjökull volcano, Iceland. We cross-correlate the five months of seismic noise and measure the Rayleigh wave group velocity dispersion curves to gain more information about the geological structure of the Snæfellsjökull volcano. In particular, we investigate the occurrence of seismic wave anomalies in the first 6 km of crust. We regionalize the group velocity dispersion curves into 2-D velocity maps between 0.9 and 4.8 s. With a neighborhood algorithm we then locally invert the velocity maps to obtain accurate shear-velocity models down to 6 km depth. Our study highlights three seismic wave anomalies. The deepest, located between approximately 3.3 and 5.5 km depth, is a high velocity anomaly, possibly representing a solidified magma chamber.
The second anomaly is also a high velocity anomaly east of the central volcano that starts at the surface and reaches approximately 2.5 km depth. It may represent a gabbroic intrusion or a dense swarm of inclined magmatic sheets (similar to the dike swarms found in the ophiolites), typical of Icelandic volcanic systems. The third anomaly is a low velocity anomaly extending up to 1.5 km depth. This anomaly, located directly below the volcanic edifice, may be interpreted either as a shallow magmatic reservoir (typical of Icelandic central volcanoes), or alternatively as a shallow hydrothermal system developed above the cooling magmatic reservoir. 19. Two-dimensional numerical simulation of acoustic wave phase conjugation in magnetostrictive elastic media Voinovich, Peter; Merlen, Alain 2005-12-01 The effect of parametric wave phase conjugation (WPC) in application to ultrasound or acoustic waves in magnetostrictive solids has been addressed numerically by Ben Khelil et al. [J. Acoust. Soc. Am. 109, 75-83 (2001)] using 1-D unsteady formulation. Here the numerical method presented by Voinovich et al. [Shock waves 13(3), 221-230 (2003)] extends the analysis to the 2-D effects. The employed model describes universally elastic solids and liquids. A source term similar to Ben Khelil et al.'s accounts for the coupling between deformation and magnetostriction due to external periodic magnetic field. The compatibility between the isotropic constitutive law of the medium and the model of magnetostriction has been considered. Supplementary to the 1-D simulations, the present model involves longitudinal/transversal mode conversion at the sample boundaries and separate magnetic field coupling with dilatation and shear stress. The influence of those factors in a 2-D geometry on the potential output of a magneto-elastic wave phase conjugator is analyzed in this paper. The process under study includes propagation of a wave burst of a given frequency from a point source in a liquid into the active solid, amplification of the waves due to parametric resonance, and formation of time-reversed waves, their radiation into liquid, and focusing. The considered subject is particularly important for ultrasonic applications in acoustic imaging, nondestructive testing, or medical diagnostics and therapy. 20. Reduced Patellar Tendon Elasticity with Aging: In Vivo Assessment by Shear Wave Elastography. PubMed Hsiao, Ming-Yen; Chen, Yi-Ching; Lin, Che-Yu; Chen, Wen-Shian; Wang, Tyng-Guey 2015-11-01 How aging affects the elasticity of tendons has long been debated, partly because of the limited methods for in vivo evaluation, which differ vastly from those for in vitro animal studies. In this study, we tested the reliability of shear wave elastography (SWE) in the evaluation of patellar tendons and their change in elasticity with age. We recruited 62 healthy participants in three age groups: 20-30 years (group 1), 40-50 years (group 2) and 60-70 years (group 3). Shear wave velocity and elastic modulus were measured at the proximal, middle and distal areas of the patellar tendon. Reliability was excellent at the middle area and fair to good at both ends. Compared with the other groups, group 3 had significantly decreased elastic modulus and shear wave velocity values (p ≤ 0.001 vs. group 1 or 2), with significant increased side-to-side differences. SWE may be valuable in detecting aging tendons before visible abnormalities are observed on B-mode ultrasonography. 1. 
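The shear wave elastography entries above (the bovine cornea/lens study and the patellar tendon study) rest on the same conversion from measured wave speed to stiffness. A minimal sketch of that conversion under the usual assumptions of a locally homogeneous, isotropic, nearly incompressible medium; the density value is an assumption, and guided-wave effects in thin layers such as the cornea would require a plate model instead.

```python
def shear_modulus(rho, c_s):
    """mu = rho * c_s^2 [Pa] for bulk shear waves in a homogeneous, isotropic medium."""
    return rho * c_s ** 2

def youngs_modulus(rho, c_s):
    """E ~ 3*mu for nearly incompressible soft tissue (Poisson ratio ~ 0.5)."""
    return 3.0 * shear_modulus(rho, c_s)

rho = 1000.0                 # assumed tissue density [kg/m^3]
for c in (0.96, 6.27):       # cornea wave speeds at 5 and 50 mmHg from the abstract
    print(f"c = {c} m/s -> E ~ {youngs_modulus(rho, c) / 1e3:.1f} kPa")
```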
Prediction of rocks thermal conductivity from elastic wave velocities, mineralogy and microstructure Pimienta, Lucas; Sarout, Joel; Esteban, Lionel; Piane, Claudio Delle 2014-05-01 While knowledge of the Thermal Conductivity (TC) of rocks is of interest in many fields, determining this property remains challenging. In this paper, a modelling approach for TC prediction from Elastic Wave Velocity (EWV) measurements is reported. To this end, a new effective TC model for a typical sedimentary rock is introduced that explicitly accounts for the presence of pores, pressure-sensitive microcracks (or grain contacts) and formation fluids. A model of effective elasticity is also devised for this same rock that links its microstructural characteristics to the velocity of elastic waves. The two models are based on the same effective medium approach and involve the same microstructural parameters. A workflow based on this explicit modelling approach is devised that allows for the prediction of the TC of a reservoir rock using (i) the elastic wave velocities, (ii) the dominant mineral content and (iii) the bulk porosity. This workflow is validated using experimental data reported in the literature for dry and water-saturated Fontainebleau and Berea sandstones. The datasets include measurements of TC and EWV as a function of effective pressure. In addition, it is shown that the dependence of TC on the rock microstructure is formally and practically similar to that of EWV. It is also demonstrated that the accuracy of TC predictions from EWV increases with effective pressure (burial depth). The underlying assumptions and limitations of the present approach together with the effect of burial are discussed. 2. Test of high-resolution 3D P-wave velocity model of Poland by back-azimuthal sections of teleseismic receiver function Wilde-Piorko, Monika; Polkowski, Marcin; Grad, Marek 2015-04-01 The geological and seismic structure under the area of Poland is well studied by over one hundred thousand boreholes, over thirty deep seismic refraction and wide angle reflection profiles, and by vertical seismic profiling, magnetic, gravity, magnetotelluric and thermal methods. Compilation of these studies allowed the creation of a high-resolution 3D P-wave velocity model down to 60 km depth in the area of Poland (Polkowski et al. 2014). The model also provides details about the geometry of the main layers of sediments (Tertiary and Quaternary, Cretaceous, Jurassic, Triassic, Permian, old Paleozoic), consolidated/crystalline crust (upper, middle and lower) and uppermost mantle. This model gives a unique opportunity for calculating synthetic receiver functions and comparing them with observed receiver functions calculated for permanent and temporary seismic stations. A modified ray-tracing method (Langston, 1977) can be used directly to calculate the response of the structure with dipping interfaces to the incoming plane wave with fixed slowness and back-azimuth. So, the 3D P-wave velocity model has been interpolated to a 2.5D P-wave velocity model beneath each seismic station, and back-azimuthal sections of the components of the receiver function have been calculated. The Vp/Vs ratio is assumed to be 1.8, 1.67, 1.73, 1.77 and 1.8 in the sediments, upper/middle/lower consolidated/crystalline crust and uppermost mantle, respectively. Densities were calculated with combined formulas of Berteussen (1977) and Gardner et al. (1974).
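Gardner's rule, cited just above, is a simple empirical velocity-density relation; a minimal sketch follows. The Gardner et al. (1974) coefficients are standard; the Berteussen (1977) form is the commonly quoted linear fit and should be checked against the original before relying on it.

```python
def gardner_density(vp_ms):
    """Gardner et al. (1974): rho [g/cm^3] = 0.31 * Vp**0.25 with Vp in m/s
    (equivalently 1.74 * Vp**0.25 with Vp in km/s)."""
    return 0.31 * vp_ms ** 0.25

def berteussen_density(vp_kms):
    """Berteussen (1977), commonly quoted as rho [g/cm^3] = 0.32*Vp + 0.77,
    Vp in km/s (assumed form, verify against the original)."""
    return 0.32 * vp_kms + 0.77

for vp in (2000.0, 4000.0, 6000.0):  # P-wave velocities in m/s
    print(vp, gardner_density(vp), berteussen_density(vp / 1000.0))
```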
Additionally, to test the visibility of lithosphere-asthenosphere boundary phases in the receiver function sections, the models have been extended to 250 km depth based on the P4 mantle model (Wilde-Piórko et al., 2010). The National Science Centre Poland provided financial support for this work by NCN grant DEC-2011/02/A/ST10/00284 and by NCN grant UMO-2011/01/B/ST10/06653.

3. A 3D algorithm based on the combined inversion of Rayleigh and Love waves for imaging and monitoring of shallow structures
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8042214512825012, "perplexity": 2253.7828033200803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825497.18/warc/CC-MAIN-20171023001732-20171023021732-00527.warc.gz"}
https://www.physicsforums.com/threads/equilibrium-points-of-de.719074/
# Equilibrium Points of DE

1. Oct 27, 2013

### FeDeX_LaTeX

The problem statement, all variables and given/known data

Find the equilibrium points of the system, determine their type and sketch the phase portrait.

$\frac{dx}{dt} = -3y + xy - 10, \qquad \frac{dy}{dt} = y^2 - x^2$

The attempt at a solution

Putting it together:

$\frac{dy}{dx} = \frac{y^2 - x^2}{-3y + xy - 10} \equiv \frac{Q(x,y)}{P(x,y)}$

Here, we see that the horizontal nullclines lie along the lines $y = \pm x$ and the vertical nullclines along the curve $y = \frac{10}{x - 3}$.

We form the Jacobian, i.e.

$J = \left( \begin{array}{cc} P_x & P_y \\ Q_x & Q_y \end{array} \right) = \left( \begin{array}{cc} y & x - 3 \\ -2x & 2y \end{array} \right)$

(note $Q_y = 2y$, since $Q = y^2 - x^2$), so $\mathrm{tr}(J) = 3y$ and $\det(J) = 2y^2 + 2x^2 - 6x$.

My question is, where do I go from here? Through using a differential equation plotter, I can see that the equilibrium points are a spiral source and a spiral sink at (5,5) and (-2,-2) respectively. How does one deduce this from the Jacobian?

2. Oct 27, 2013

### FeDeX_LaTeX

Never mind, I've overcomplicated it -- all I needed to do was solve that system of DEs for x and y (substituting x = y). The magic of the Homework board strikes again!
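For completeness, the classification the thread arrives at can be checked mechanically: find the equilibria on y = x, then apply the standard trace/determinant/discriminant test to the Jacobian. A minimal sketch (not part of the original thread):

```python
import numpy as np

# dx/dt = P = -3y + x*y - 10,  dy/dt = Q = y^2 - x^2.
# On y = x: x^2 - 3x - 10 = 0 -> x = 5 or x = -2 (y = -x gives no real roots).
equilibria = [(5.0, 5.0), (-2.0, -2.0)]

def jacobian(x, y):
    # Rows are (P_x, P_y) and (Q_x, Q_y).
    return np.array([[y, x - 3.0],
                     [-2.0 * x, 2.0 * y]])

for x, y in equilibria:
    J = jacobian(x, y)
    tr, det = np.trace(J), np.linalg.det(J)
    disc = tr ** 2 - 4.0 * det      # negative -> complex eigenvalues -> spiral
    if det < 0:
        kind = "saddle"
    elif disc < 0:
        kind = "spiral source" if tr > 0 else "spiral sink"
    else:
        kind = "node source" if tr > 0 else "node sink"
    print(f"({x:g}, {y:g}): tr={tr:g}, det={det:g} -> {kind}")
# (5, 5):  tr=15, det=70 -> spiral source
# (-2,-2): tr=-6, det=28 -> spiral sink
```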
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9522140622138977, "perplexity": 767.4608836315384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807660.32/warc/CC-MAIN-20180217185905-20180217205905-00553.warc.gz"}
https://www.physicsforums.com/threads/close-tube-with-string-oscillation.617143/
# Closed tube with string oscillation

1. Jun 28, 2012

### IIK*JII

1. The problem statement, all variables and given/known data

In the attached figure, a closed tube is placed near a string that is fixed at one end and has a weight attached to its other end. When bridges A and B are positioned at the points shown, plucking the string between A and B causes the tube to resonate at its fundamental frequency. Points a-e divide the length between A and B into 6 equal segments.

Next, A is fixed in place, and B is gradually moved toward A while the closed tube is shifted so that it stays at the center of A and B. During this process, the string is repeatedly plucked between A and B. When B is at a certain point, the tube resonates at the next overtone above the fundamental frequency. Which of a-e represents that point? Here, the string's oscillation is only the fundamental oscillation.

2. Relevant equations

Closed tube: $L = (2n-1)\frac{\lambda}{4}$, $n = 1, 2, 3, \ldots$

String fixed at both ends: $\lambda = \frac{2L}{n}$

3. The attempt at a solution

For the next overtone the tube oscillates at $f_3$:

$L = \frac{3}{4}\lambda$, so $\lambda_{tube} = \frac{4}{3}L$ ....(1)

String: $\lambda_{string} = 2L$ (oscillating at $f_1$) ....(2)

Dividing (1) by (2) I got $\frac{\lambda_{tube}}{\lambda_{string}} = \frac{2}{3}$

From that I guess the point is the second point among the first 3 points, so I don't know whether I should choose point b or d as my answer. Also, I don't know whether my method is correct or not...

Help is appreciated :) Thanks

(Attached: SoundEJU.JPG)

2. Jun 28, 2012

### Simon Bridge

The question is not how much you have to reduce the entire wavelength on the string but how far to move the blocks ...

3. Jun 28, 2012

### IIK*JII

Thank you very much Simon Bridge :) Did you mean the block should move up, and I should find the height of the block when it moves up?

4. Jun 28, 2012

### Simon Bridge

What I am saying is that you have calculated a ratio in whole wavelengths, but the distance you need to find (in order to know where to put the block) is that for a half-wavelength. You need to check to see what sort of difference, if any, that makes.

Presumably, the ratio of the fundamental to the first harmonic in the tube is the ratio of string wavelengths needed, right? You already know the half-wavelength needed to make the string oscillate at the tube's fundamental frequency, and you are keeping the tension, and so the wave speed, fixed.

I hope I'm not confusing you - it is really hard to write about without actually telling you the answer. Basically the numbers you got look good - I'm trying to get you to work out if the numbers you got are the ones you need ... what you really need is a relationship along the lines of $x_2=ax_1$ where x1 is the distance |AB| that got you the fundamental in the tube and x2 is the distance between the blocks that gets you the second fundamental and a is the ratio between them.[1]

What you have is that $\lambda_{tube} = \frac{2}{3}\lambda_{string}$

----------------------

[1] actually you can finesse it by looking for the relation $x_2=\frac{n}{6}x_1$ since n will tell you which of the lettered points to move the block to :)

5. Jun 30, 2012

### IIK*JII

Thank you Simon Bridge, your explanation is a good help for imagining what this problem wants. I think, for example, x2 in your meaning is the length of string that I can find from the wavelength, right?

6. Jun 30, 2012

### Simon Bridge

Bear in mind that I think you are very close and I have not actually done the problem myself. It's intriguing - I'll have to set it up as an experiment sometime.
7. Jul 1, 2012

### IIK*JII

Thank you Simon Bridge I got it now :)

8. Jul 3, 2012

### Simon Bridge

Cool: well done :)
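For the record, the arithmetic the thread circles around can be done in a few lines. A sketch, not part of the original thread, assuming the points a-e are labeled in order from bridge A (so b sits at 2/6 of |AB|), the wave speed on the string is fixed, and the string is always plucked at its fundamental:

```python
from fractions import Fraction

# String fundamental for bridge separation x: f = v / (2x).
# Closed tube: fundamental f1 = c / (4L); next overtone is 3*f1.
# Initial resonance: v / (2*x1) = f1.  For the overtone we need
# v / (2*x2) = 3*f1, hence x2 = x1 / 3.
x2_over_x1 = Fraction(1, 3)

# Points a-e sit at 1/6, 2/6, ..., 5/6 of |AB| from bridge A (assumed labeling).
points = {Fraction(k, 6): label for k, label in zip(range(1, 6), "abcde")}
print(points[x2_over_x1])  # -> 'b', since 1/3 == 2/6
```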
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8555641174316406, "perplexity": 1011.3514640333118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109157.57/warc/CC-MAIN-20170821152953-20170821172953-00415.warc.gz"}
http://skepticalsports.com/topics/sports-2/nba/
## LeBron’s High-Usage Shooting Efficiency (Featuring Adrian Dantley)

As anyone (statistically-inclined or not) can tell you, LeBron James is having a pretty good year. His 26.8 points, 8 rebounds and 7.3 assists per game (through 81) make for another entry in his already stunning portfolio of versatile seasons: this will be his 6th time hitting 25/7/7+, a feat that has only been accomplished 8 times since the merger:

| Rk | Player | Season | Age | Tm | G | FGA | FG% | 3P% | FT% | PTS | TRB | AST | TS% |
|----|--------|--------|-----|----|---|-----|-----|-----|-----|-----|-----|-----|-----|
| 1 | LeBron James | 2012-13 | 28 | MIA | 76 | 1354 | .565 | .406 | .753 | 26.8 | 8.0 | 7.3 | .640 |
| 2 | Michael Jordan* | 1988-89 | 25 | CHI | 81 | 1795 | .538 | .276 | .850 | 32.5 | 8.0 | 8.0 | .614 |
| 3 | Larry Bird* | 1986-87 | 30 | BOS | 74 | 1497 | .525 | .400 | .910 | 28.1 | 9.2 | 7.6 | .612 |
| 4 | LeBron James | 2009-10 | 25 | CLE | 76 | 1528 | .503 | .333 | .767 | 29.7 | 7.3 | 8.6 | .604 |
| 5 | LeBron James | 2010-11 | 26 | MIA | 79 | 1485 | .510 | .330 | .759 | 26.7 | 7.5 | 7.0 | .594 |
| 6 | LeBron James | 2008-09 | 24 | CLE | 81 | 1613 | .489 | .344 | .780 | 28.4 | 7.6 | 7.2 | .591 |
| 7 | LeBron James | 2007-08 | 23 | CLE | 75 | 1642 | .484 | .315 | .712 | 30.0 | 7.9 | 7.2 | .568 |
| 8 | LeBron James | 2004-05 | 20 | CLE | 80 | 1684 | .472 | .351 | .750 | 27.2 | 7.4 | 7.2 | .554 |

Provided by Basketball-Reference.com: View Original Table (Generated 4/17/2013.)

But the thing that sticks out (which stat-heads have been going berserk about) is his shooting, which has been by far the most efficient of his career. Indeed, it may be one of the greatest shooting-efficiency seasons of all time. While his raw shooting % wouldn’t break the top 100 seasons, and his “true” shooting % (adjusted for free throws and 3-point shots made) would still only rank about 60th, the key here is that James’s shooting efficiency is remarkable for someone with his role as both a primary option and a shooter of last resort. Generally, when you increase a player’s shot-taking responsibilities, it comes at the cost of marginal shot efficiency. This doesn’t mean this is a bad decision or that the player is doing anything wrong—what may be a bad shot “for them” may be a great shot under the circumstances in which they are asked to take it (like when the shot clock is running down, etc.).

While there’s no simple stat that describes the degree to which someone is a “shot creator,” we can use usage rate as a decent (though obviously imperfect) proxy. There have been around 150 seasons in which one player “used” >=30% of their team’s possessions:

*All player seasons with USG% >= 30. LeBron’s in red.*

As we would expect, the best shooting percentages decline as the players’ usage rates get larger and larger. The red points are LeBron’s seasons (which are pretty excellent across the board), and as we can see from this scatter, his 2012-13 campaign is about to set the record for this group (though we should note that it’s NOT a Rodman-esque outlier).

Amazingly, the previous record-holder was Adrian Dantley! Dantley is a Hall of Famer who I had practically never heard of until his name kept popping up in my historical research as possibly one of the most underrated players ever. Dantley never made an All-NBA first team or won an NBA championship, but he does extremely well in a variety of plus-minus and statistical plus-minus style metrics. While he didn’t have the all-around game of a LeBron James (though he did average a respectable 6-7 rebounds and 3-4 assists in his prime), Dantley was an extremely efficient high-usage shooter.
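For reference, the “true” shooting figure used throughout is the standard TS% formula, which folds free throws and 3-pointers into a points-per-shooting-possession measure. A quick sketch (the 0.44 free-throw coefficient is the usual convention; LeBron’s free-throw attempts below are approximate, backfilled from the table’s FT%):

```python
def true_shooting(pts, fga, fta):
    """True shooting %: points per two 'shooting possessions', where
    0.44 * FTA approximates the possessions that end at the line."""
    return pts / (2.0 * (fga + 0.44 * fta))

# LeBron, 2012-13 (through 76 games): 2036 PTS on 1354 FGA and ~535 FTA
print(f"{true_shooting(2036, 1354, 535):.3f}")  # ~0.640, matching the table
```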
For example, if we look at the top True Shooting seasons among players with a Usage Rate of greater than 27.5%, guess who occupies fully 5 of the top 10 spots:

| Rk | Player | Season | Age | Tm | G | FG | FGA | PTS | FG% | TS% | USG% |
|----|--------|--------|-----|----|---|----|-----|-----|-----|-----|------|
| 1 | Amare Stoudemire | 2007-08 | 25 | PHO | 79 | 714 | 1211 | 1989 | .590 | .656 | 28.2 |
| 2 | Adrian Dantley* | 1983-84 | 27 | UTA | 79 | 802 | 1438 | 2418 | .558 | .652 | 28.2 |
| 3 | Kevin Durant | 2012-13 | 24 | OKC | 81 | 731 | 1433 | 2280 | .510 | .647 | 29.8 |
| 4 | LeBron James | 2012-13 | 28 | MIA | 76 | 765 | 1354 | 2036 | .565 | .640 | 30.1 |
| 5 | Charles Barkley* | 1990-91 | 27 | PHI | 67 | 665 | 1167 | 1849 | .570 | .635 | 29.1 |
| 6 | Adrian Dantley* | 1979-80 | 23 | UTA | 68 | 730 | 1267 | 1903 | .576 | .635 | 27.8 |
| 7 | Adrian Dantley* | 1981-82 | 25 | UTA | 81 | 904 | 1586 | 2457 | .570 | .631 | 27.9 |
| 8 | Adrian Dantley* | 1985-86 | 29 | UTA | 76 | 818 | 1453 | 2267 | .563 | .629 | 30.0 |
| 9 | Karl Malone* | 1989-90 | 26 | UTA | 82 | 914 | 1627 | 2540 | .562 | .626 | 32.6 |
| 10 | Adrian Dantley* | 1980-81 | 24 | UTA | 80 | 909 | 1627 | 2452 | .559 | .622 | 28.4 |

Provided by Basketball-Reference.com: View Original Table (Generated 4/17/2013.)

Dantley was also in the news a bit last month for working part-time as a crossing guard. Key quotes from that story:

“It’s not a big thing to me … I just do it. I have a routine. I exercise, I go to work, I go home. I have a spring break next week. I have a summer off, just like when I was a basketball player.”

“I just did it for the kids … I just didn’t want to sit around the house all day.”

“I’ve definitely saved two lives. I’ve almost gotten hit by a car twice. And I would say 70 percent of the people who go across my route are on their telephone or on their BlackBerry, text-messaging. I never would have seen that if I had not been on the post.”

What a character!

## Graph of the Day: Second Look at Stan Van?

Granted, “of the Day” isn’t really accurate considering how often I post, but I found it amusing enough to share:

*Win % in games played by Dwight Howard. Red years were with Stan Van Gundy coaching.*

This came up in a discussion about the possibility that Dwight Howard might not be leveraged optimally on teams that aren’t comprised mostly of small 3-point shooters. That would have interesting implications.

## The Clock: A Graph and Some Thoughts

If you’re a hardcore follower of this blog, you know that one of the things I have frequently complained about is the failure of NBA play-by-play data to include the shot clock. It’s so obviously important and—relative to other play-by-play data—so easy to track, that it’s a complete mystery to me why doing so isn’t completely standard. OTOH, I see stats broken down by “early” and “late” in the shot clock all the time, so someone must have this information.

In the meantime, I went through the 2010 play-by-play dataset and kluged a proxy stat from the actual clock, reflecting the number of seconds passed since a team took possession. Here’s a chart summarizing the number and outcomes of possessions of various lengths:

The orange X’s represent the number of league-wide possessions in which the first shot took place at the indicated time. The red diamonds represent the average number of points scored on those possessions (including from any subsequent shots following an offensive rebound, etc.).

We should expect there to be a constant trade-off at any given time between taking a shot “now” and waiting for a better one to open up: the deeper you get into a possession, the more your shot standards should drop. And, indeed, this is reflected in the graph by the downward-sloping curve.

For now, I’m just throwing this out there. Though it represents a very basic idea, it is difficult to overstate its importance:
1. Accounting for the clock can help evaluate players where standard efficiency ratings break down. Most simply, you can take the results of each shot and compare them to the expected value of a shot taken under the same amount of time-pressure. E.g., if someone averages .9 points per attempt with only a couple of seconds left, you can spot value where normal efficiency calculations wouldn’t.
2. Actually, I’ve calculated just such preliminary “value-added” shooting for the entire league (with pretty interesting results), but I’d like to see more accurate data before posting or basing any substantial analysis on it. Among other problems, I think the right side of the curve is overly generous, as it includes possessions where it took a while to get the clock started (a process that is, unfortunately, highly variable), or where time was added and the cause wasn’t scored (also disappointingly common).
3. Examining this information can tell you some things about the league generally: For example, it’s interesting to me that there’s a noticeable dip right around where the most shots actually take place (14 to 16 seconds in). Though speculative, I suspect that this is when players are most likely to settle for mediocre 2-point jumpers. Similarly, though a bit more difficult, you can compare the actual curve with a derived curve to examine whether NBA players, on the whole, seem to wait too long (or not long enough) to pull the trigger.

With better data, the possibilities would open up further (even more so when combined with other play-by-play information, like shot type, position, defense, etc.). For example, you could look at the curve for individual players and impute whether they should be more or less aggressive with their shot selection.

So, yeah, if any of you can direct me to a dataset that has what I want, please let me know.

## Sports Geek Mecca: Recap and Thoughts, Part 2

This is part 2 of my “recap” of the Sloan Sports Analytics Conference that I attended in March (part 1 is here), mostly covering Day 2 of the event, but also featuring my petty way-too-long rant about Bill James (which I’ve moved to the end).

### Day Two

First I attended the Football Analytics panel, despite finding it disappointing last year, and, alas, it wasn’t any better. Eric Mangini must be the only former NFL coach willing to attend, b/c they keep bringing him back:

Overall, I spent more of day 2 going to niche panels and research paper presentations and talking to people. The last, in particular, was great. For example, I had a fun conversation with Henry Abbott about Kobe Bryant’s lack of “clutch.” This is one of Abbott’s pet issues, and I admit he makes a good case, particularly that the Lakers are net losers in “clutch” situations (yes, relative to other teams), even over the periods where they have been dominant otherwise.

Kobe is kind of a pivotal case in analytics, I think. First, I’m a big believer in “Count the Rings, Son” analysis: That is, leading a team to multiple championships is really hard, and only really great players do it. I also think he stands at a kind of nexus, in that stats like PER give spray shooters like him an unfair advantage, but more finely tuned advanced metrics probably over-punish the same. Part of the burden of Kobe’s role is that he has to take a lot of bad shots—the relevant question is how good he is at his job.
Abbott also mentioned that he liked one of my tweets, but didn’t know if he could retweet the non-family-friendly “WTF”:

I also had a fun conversation with Neil Paine of Basketball Reference. He seemed like a very smart guy, but this may be attributable to the fact that we seemed to be on the same page about so many things. Additionally, we discussed a very fun hypo: How far back in time would you have to go for the Charlotte Bobcats to be the odds-on favorites to win the NBA Championship?

As for the “sideshow” panels, they’re generally more fruitful and interesting than the ESPN-moderated super-panels, but they offer fewer easy targets for easy blog-griping. If you’re really interested in what went down, there is a ton of info at the SSAC website. The agenda can be found here. Information on the speakers is here. And, most importantly, videos of the various panels can be found here.

### Box Score Rebooted

Featuring Dean Oliver, Bill James, and others.

This was a somewhat interesting, though I think slightly off-target, panel. They spent a lot of time talking about new data and metrics, pooh-poohing things like RBI (and even OPS), and discussing the brave new world of play-by-play and video tracking, etc. But too much of this concerned a different granularity of data, rather than what can be improved at the current level of granularity. Or, in other words:

James acquitted himself a bit on this subject, arguing that boatloads of new data aren’t useful if they aren’t boiled down into useful metrics. But a more general way of looking at this is: If we were starting over from scratch, with a box-score-sized space to report a statistical game summary, and a similar degree of game-scoring resources, what kinds of things would we want to include (or not) that are different from what we have now? I can think of a few:

1. In basketball, it’s archaic that free throws aren’t broken down into bonus free throws and shot-replacing free throws.
2. In football, I’d like to see passing stats by down and distance, or at least in a few key categories like 3rd and long.
3. In baseball, I’d like to see “runs relative to par” for pitchers (though this can be computed easily enough from existing box scores).

In this panel, Dean Oliver took the opportunity to plug ESPN’s bizarre proprietary Total Quarterback Rating. They actually had another panel devoted just to this topic, but I didn’t go, so I’ll put a couple of thoughts here.

First, I don’t understand why ESPN is pushing this as a proprietary stat. Sure, no one knows how to calculate regular old-fashioned quarterback ratings, but there’s a certain comfort in at least knowing it’s a real thing. It’s a bit like Terms of Service agreements, which people regularly sign without reading: at least you know the terms are out there, so someone actually cares enough to read them, and presumably they would raise a stink if you had to sign away your soul.

As for what we do know, I may write more on this come football season, but I have a couple of problems: One, I hate the “clutch effect.” TQBR makes a special adjustment to value clutch performance even more than its generic contribution to winning. If anything, clutch situations in football are so bizarre that they should count less. In fact, when I’ve done NFL analysis, I’ve often just cut the 4th quarter entirely, and I’ve found I get better results.
That may sound crazy, but it’s a bit like how some very advanced soccer analysts have cut goal-scoring from their models, instead just focusing on how well a player advances the ball toward his goal: even if the former matters more, its unreliability may make it less useful.

Two, I’m disappointed in the way they “assign credit” for play outcomes:

> Division of credit is the next step. Dividing credit among teammates is one of the most difficult but important aspects of sports. Teammates rely upon each other and, as the cliché goes, a team might not be the sum of its parts. By dividing credit, we are forcing the parts to sum up to the team, understanding the limitations but knowing that it is the best way statistically for the rating.

I’m personally very interested in this topic (and have discussed it with various ESPN analytics guys since long before TQBR was released). This is basically an attempt to address the entanglement problem that permeates football statistics. ESPN’s published explanation is pretty cryptic, and it didn’t seem clear to me whether they were profiling individual players and situations or had created credit-distribution algorithms league-wide.

At the conference, I had a chance to talk with their analytics guy who designed this part of the metric (his name escapes me), and I confirmed that they modeled credit distribution for the entire league and are applying it in a blanket way. Technically, I guess this is a step in the right direction, but it’s purely a reduction of noise and doesn’t address the real issue. What I’d really like to see is a recursive model that imputes how much credit various players deserve broadly, then uses those numbers to re-assign credit for particular outcomes (rinse and repeat).

### Deconstructing the Rebound With Optical Tracking Data

Rajiv Maheswaran, and other nerds.

This presentation was so awesome that I offered them a hedge bet for the “Best Research Paper” award. That is, I would bet on them at even money, so that if they lost, at least they would receive a consolation prize. They declined. And won.

Their findings are too numerous and interesting to list, so you should really check it out for yourself. Obviously my work on the Dennis Rodman mystery makes me particularly interested in their theories of why certain players get more rebounds than others, as I tweeted in this insta-hypothesis:

Following the presentation, I got the chance to talk with Rajiv for quite a while, which was amazing. Obviously they don’t have any data on Dennis Rodman directly, but Rajiv was also interested in him and had watched a lot of Rodman video. Though anecdotal, he did say that his observations somewhat confirmed the theory that a big part of Rodman’s rebounding advantage seemed to come from handling space very well:

1. Even when away from the basket, Rodman typically moved to the open space immediately following a shot. This is a bit different from how people often think about rebounding, as aggressively attacking the ball (or as being able to near-psychically predict where the ball is going to come down).
2. Also, rather than simply attacking the board directly, Rodman’s first inclination was to insert himself between the nearest opponent and the basket. In theory, this might slightly decrease the chances of getting the ball when it heads in toward his previous position, but would make up for it by dramatically increasing his chances of getting the ball when it went toward the other guy.
3. Though a little less purely strategic, Rajiv also thought that Rodman was just incredibly good at #2. That is, he was just exceptionally good at jockeying for position.

To some extent, I guess this is just rebounding fundamentals, but I still think it’s very interesting to think about the indirect probabilistic side of the rebounding game.

### Live B.S. Report with Bill James

Quick tangent: At one point, I thought Neil Paine summed me up pretty well as a “contrarian to the contrarians.” Of course, I don’t think I’m contrary for the sake of contrariness, or that I’m a negative person (I don’t know how many times I’ve explained to my wife that just because I hated a movie doesn’t mean I didn’t enjoy it!), it’s just that my mind is naturally inclined toward considering the limitations of whatever is put in front of it. Sometimes that means criticizing the status quo, and sometimes that means criticizing its critics.

So, with that in mind, I thought Bill James’s showing at the conference was pretty disappointing, particularly his interview with Bill Simmons.

I have a lot of respect for James. I read his Historical Baseball Abstract and enjoyed it considerably more than Moneyball. He has a very intuitive and logical mind. He doesn’t say a bunch of shit that’s not true, and he sees beyond the obvious. In Saturday’s “Rebooting the Box-score” panel, he made an observation that having 3 of 5 people on the panel named John implied that the panel was [likely] older than the rest of the room. This got a nice laugh from the attendees, but I don’t think he was kidding. And whether he was or not, he still gets 10 kudos from me for making the closest thing to a Bayesian argument I heard all weekend. And I dutifully snuck in for a pic with him:

James was somewhat ahead of his time, and perhaps he’s still one of the better sports analytic minds out there, but in this interview we didn’t really get to hear him analyze anything, you know, sportsy. This interview was all about Bill James and his bio and how awesome he was and how great he is and how hard it was for him to get recognized and how much he has changed the game and how, without him, the world would be a cold, dark place where ignorance reigned and nobody had ever heard of “win maximization.”

Bill Simmons going this route in a podcast interview doesn’t surprise me: his audience is obviously much broader than the geeks in the room, and Simmons knows his audience’s expectations better than anyone. What got to me was James’s willingness to play along, and everyone else’s willingness to eat it up. Here’s an example of both, from the conference’s official Twitter account:

Perhaps it’s because I never really liked baseball, and I didn’t really know anyone did any of this stuff until recently, but I’m pretty certain that Bill James had virtually zero impact on my own development as a sports data-cruncher. When I made my first PRABS-style basketball formula in the early 1990s (which was absolutely terrible, but is still more predictive than PER), I had no idea that any sports stats other than the box score even existed. By the time I first heard the word “sabermetrics,” I was deep into my own research, and didn’t bother really looking into it deeply until maybe a few months ago.

Which is not to say I had no guidance or inspiration. For me, a big epiphanous turning point in my approach to the analysis of games did take place—after I read David Sklansky’s Theory of Poker.
While ToP itself was published in 1994, Sklansky’s similar offerings date back to the 70s, so I don’t think any broader causal pictures are possible.

More broadly, I think the claim that sports analytics wouldn’t have developed without Bill James is preposterous. Especially if, as I assume we do, we firmly believe we’re right. This isn’t like L. Ron Hubbard and Incident II: being for sports analytics isn’t like having faith in a person or his religion. It simply means trying to think more rigorously about sports, and using all of the available analytical techniques we can to gain an advantage. Eventually, those who embrace the right will win out, as we’ve seen begin to happen in sports, and as has already happened in nearly every other discipline.

Indeed, by his own admission, James liked to stir controversy, piss people off, and talk down to the old guard whenever possible. As far as we know, he may have set the cause of sports analytics back, either by alienating the people who could have helped it gain acceptance, or by setting an arrogant and confrontational tone for his disciples (e.g., the uplifting “don’t feel the need to explain yourself” message in Moneyball). I’m not saying that this is the case or even a likely possibility, I’m just trying to illustrate that giving someone credit for all that follows—even a pioneer like James—is a dicey game that I’d rather not participate in, and that he definitely shouldn’t.

On a more technical note, one of his oft-quoted and re-tweeted pearls of wisdom goes as follows:

Sounds great, right? I mean, not really, I don’t get the metaphor: if the sea is full of ignorance, why are you collecting water from it with a bucket rather than some kind of filtration system? But more importantly, his argument in defense of this claim is amazingly weak. When Simmons asked what kinds of things he’s talking about, he repeatedly emphasized that we have no idea whether a college sophomore will turn out to be a great Major League pitcher. True, but, um, we never will. There are too many variables, the inputs and outputs are too far apart in time, and the contexts are too different. This isn’t the sea of ignorance, it’s a sea of unknowns.

Which gets at one of my big complaints about stats-types generally. A lot of people seem to think that stats are all about making exciting discoveries and answering questions that were previously unanswerable. Yes, sometimes you get lucky and uncover some relationship that leads to a killer new strategy or to some game-altering new dynamic. But most of the time, you’ll find static. A good statistical thinker doesn’t try to reject the static, but tries to understand it: Figuring out what you can’t know is just as important as figuring out what you can know. On Twitter I used this analogy:

Success comes with knowing more true things and fewer false things than the other guy.

## Graphs of the Day: Bird vs. Bron

One of my favorite stat-nuggets ever is that “Larry Bird never had a losing month.” So, yesterday, I figured it was about time to check whether or not it’s, you know, true.

To do this, I first had to figure out which Celtics games Bird actually played in. The problem there is that his career began well before 1986, meaning the box score data aren’t in Basketball Reference’s database. But they do have images of the actual box scores, like so:

Fortunately, Bird played in every game in his first two seasons, so figuring this out was just a matter of poring through 4 years of these pics: Easy peasy!
(I’ve done more grueling work for even more trivial questions, to be sure.)

But results on that later. Independently, I was trying to come up with a fun way to illustrate the fact that LeBron James won a lot more games in his last two seasons on the lowly Cleveland Cavaliers than he has so far on the perma-hyped Miami Heat:

So that graph reflects every game of LeBron’s career, including the regular season and playoffs (through last night). It’s pretty straightforward: With LeBron an 18-year-old rookie, the Cavs (though much improved) were still pretty shaky, and they pretty much got better and better each year. After a slight decline from their soaring 2008 performance, LeBron left to join the latest Big 3—which is a solid contender, but no threat to the greatest Big 3. (BTW, I would like to thank the Heat for becoming Exhibit A for my long-time contention that having multiple “primary” options is less valuable than having a well-designed supporting cast—even one with considerably less talent.)

But with Mr. Trifecta on my mind (not to mention overloading my browser history), I thought it might be fun to compare the two leading contenders for the small forward spot on any NBA GOAT team. So here’s Larry:

Wow, pretty crazy consistent, yes? Keep in mind that, despite the Celtics’ long winning tradition, they only won 29 games the year before Bird’s arrival. Note the practically opposite gradient from LeBron’s: Bird started out hot, and basically stayed hot until injuries cooled him down.

As for the results of the original inquiry: It turns out Bird’s Celtics started the season 2-4 in November 1988, just before Bird had season-ending ankle surgery (of course, Bird’s 1988 games ARE in my database, so this was a bit of a “Doh!” finding). And, of course, he also had losing months in the playoffs. His worst full month in the regular season, however, was indeed exactly .500: He went 8-8 in March of 1982.

So, properly qualified (like, “In the regular season, Bird never had a losing month in which he played more than 6 games”), the claim holds up. If I were a political fact-checker, I would deem it “Mostly True.” In case you’re interested, here is the complete list of months in Larry Bird’s career:

## The Case Against the Case for Dennis Rodman: Initial Volleys

When I began writing about Dennis Rodman, I was so terrified that I would miss something and the whole argument would come crashing down that I kept pushing it further and further and further, until a piece I initially planned to be about 10 pages of material ended up being more like 150. [BTW, this whole post may be a bit too inside-baseball if you haven’t actually read—or at least skimmed—my original “Case for Dennis Rodman.” If so, that link has a helpful guide.]

The downside of this, I assumed, is that the extra material should open up many angles of attack. It was a conscious trade-off, knowing that individual parts in the argument would be more vulnerable, but the Case as a whole would be thorough and redundant enough to survive any battles I might end up losing.

Ultimately, however, I’ve been a bit disappointed in the critical response. Most reactions I’ve seen have been either extremely complimentary or extremely dismissive. So a while ago, I decided that if no one really wanted to take on the task, I would do it myself. In one of the Rodman posts, I wrote:

> Give me an academic who creates an interesting and meaningful model, and then immediately devotes their best efforts to tearing it apart!
And thus The Case Against the Case for Dennis Rodman is born. Before starting, here are a few qualifying points:

1. I’m not a lawyer, so I have no intention of arguing things I don’t believe. I’m calling this “The Case Against the Case For Dennis Rodman,” because I cannot in good faith (barring some new evidence or argument I am as yet unfamiliar with) write The Case Against Dennis Rodman.
2. Similarly, where I think an argument is worth being raised and discussed but ultimately fails, I will make the defense immediately (much like “Objections and Replies”).
3. I don’t have an over-arching anti-Case hypothesis to prove, so don’t expect this series to be a systematic takedown of the entire enterprise. Rather, I will point out weaknesses as I consider them, so they may not come in any kind of predictable order.
4. If you were paying attention, of course you noticed that The Case For Dennis Rodman was really (or at least concurrently) about demonstrating how player valuation is much more dynamic and complicated than either conventional or unconventional wisdom gives it credit for. But, for now, The Case Against the Case will focus mainly on the Dennis Rodman part.

Ok, so with this mission in mind, let me start with a bit of what’s out there already:

### A Not-Completely-Stupid Forum Discussion

I admit, I spend a fair amount of time following back links to my blog. Some of that is just ego-surfing, but I’m also desperate to find worthy counter-arguments. As I said above, that search is sometimes more fruitless than I would like. Even the more intelligent discussions usually include a lot of uninspired drivel. For example, let’s look at a recent thread on RealGM. After one person lays out a decent (though imperfect) summary of my argument, there are several responses along the lines of poster “SVictor”s:

> I won’t pay attention to any study that states that [Rodman might be more valuable than Michael Jordan].

Actually, I’m pretty sympathetic to this kind of objection. There can be a bayesian ring of truth to “that is just absurd on its face” arguments (I once made a similar argument against an advanced NFL stat after it claimed Neil O’Donnell was the best QB in football). However, it’s not really a counter-argument, it’s more a meta-argument, and I think I’ve considered most of those to death. Besides, I don’t actually make the claim in question, I merely suggest it as something worth considering.

A much more detailed and interesting response comes from poster “mysticbb.” Now, he starts out pretty insultingly:

> The argumentation is biased, it is pretty obvious, which makes it really sad, because I know how much effort someone has to put into such analysis.

I cannot say affirmatively that I have no biases, or that bias never affects my work. Study after study shows that this is virtually impossible. But I can say that I am completely and fundamentally committed to identifying it and stamping it out wherever I can. So, please—as I asked in my conclusion—please point out where the bias is evident and I will do everything in my power to fix it.

Oddly, though, mysticbb seems to endorse (almost verbatim) the proposition that I set out to prove:

> Let me start with saying that Dennis Rodman seems to be underrated by a lot of people. He was a great player and deserved to be in the HOF, I have no doubt about that. He had great impact on the game and really improved his team while playing.
(People get so easily distracted: You write one article about a role-player maybe being better than Michael Jordan, and they forget that your overall claim is more modest.)

Of course, my analysis could just be way off, particularly in ways that favor Rodman. To that end, mysticbb raises several valid points, though with various degrees of significance. Here he is on Rodman’s rebounding:

> Let me start with the rebounding aspect. From 1991 to 1998 Rodman was leading the league in TRB% in each season. He had 17.7 ORB%, 33 DRB% and overall 25.4 TRB%. Those are AWESOME numbers, if we ignore context. Let us take a look at the numbers for the playoffs during the same timespan: 15.9 ORB%, 27.6 DRB% and 21.6 TRB%. Still great numbers, but obviously clearly worse than his regular season numbers. Why? Well, Rodman had the tendency to pad his rebounding stats in the regular season against weaker teams, while ignoring defensive assignments and fighting his teammates for rebounds. All that was eliminated during the playoffs and his numbers took a hit.

Now, I don’t know how much I talked about the playoffs per se, but I definitely discussed—and even argued myself—that Rodman’s rebounding numbers are likely inflated. But I also argued that if that IS the case, it probably means Rodman was even more valuable overall (see that same link for more detail). He continues:

> Especially when we look at the defensive rebounding part, during the regular season he is clearly ahead of Duncan or Garnett, but in the playoffs they are all basically tied. Now imagine, Rodman brings his value via rebounding, what does that say about him, if that value is matched by players like Duncan or Garnett who both are also great defenders and obviously clearly better offensive players?

Now, as I noted at the outset, Rodman’s career offensive rebounding percentage is approximately equal to Kevin Garnett’s career overall rebounding percentage, so I think Mystic is making a false equivalency based on a few cherry-picked stats. But, for a moment, let’s assume it were true that Garnett/Duncan had similar rebounding numbers to Rodman—so what? Rodman’s crazy rebounding numbers cohere nicely with the rest of the puzzle as an explanation of why he was so valuable—his absurd rebounding stats make his absurd impact stats more plausible and vice versa—but they’re technically incidental. Indeed, they’re even incidental to his rebounding contribution: The number (or even percent) of rebounds a player gets does not correlate very strongly with the number of rebounds he has actually added to his team (nor does a player’s offensive “production” correlate very strongly with improvement in a team’s offense), and it does so the most on the extremes.

But I give the objection credit in this regard: The playoff/regular season disparity in Rodman’s rebounding numbers (though let’s not overstate the case, Rodman has 3 of the top 4 TRB%’s in playoff history) does serve to highlight how dynamic basketball statistics are. The original Case For Dennis Rodman is perhaps too willing to draw straight causal lines, and that may be worth looking into. Also, a more thorough examination of Rodman’s playoff performance may be in order as well.

On the indirect side of The Case, mysticbb has this to say:

> [T]he high difference between the team performance in games with Rodman and without Rodman is also caused by a difference in terms of strength of schedule, HCA and other injured players.
I definitely agree that my crude calculation of Win % differentials does not control for a number of things that could be giving Rodman, or any other player, a boost. Controlling for some of these things is probably possible, if more difficult than you might think. This is certainly an area where I would like to implement some more robust comparison methods (and I’m slowly working on it).

But, ultimately, all of the factors mysticbb mentions are noise. Circumstances vary and lots of things happen when players miss games, and there are a lot of players and a lot of circumstances in the sample that Rodman is compared to: everyone has a chance to get lucky. That chance is reflected in my statistical significance calculations.

Mysticbb makes some assertions about Rodman having a particularly favorable schedule, but cites only the 1997 Bulls, and it’s pretty thin gruel:

> If we look at the 12 games with Kukoc instead of Rodman we are getting 11.0 SRS. So, Rodman over Kukoc made about 0.5 points.

Of course, if there is evidence that Rodman was especially lucky over his career, I would like to see it. But, hmm, since I’m working on the Case Against myself, I guess that’s my responsibility as well. Fair enough, I’ll look into it.

Finally, mysticbb argues:

> The last point which needs to be considered is the offcourt issues Rodman caused, which effected the outcome of games. Take the 1995 Spurs for example, when Rodman refused to guard Horry on the perimeter leading to multiple open 3pt shots for Horry including the later neck-breaker in game 6. The Spurs one year later without Rodman played as good as in 1995 with him.

I don’t really have much to say on the first part of this. As I noted at the outset, there’s some chance that Rodman caused problems on his team, but I feel completely incompetent to judge that sort of thing. But the other part is interesting: It’s true that the Spurs were only 5% worse in 95-96 than they were in 94-95 (OFC, they would be worse measuring only against games Rodman played in), but cross-season comparisons are obviously tricky, for a number of reasons. And if they did exist, I’m not sure they would break the way suggested. For example, the 2nd Bulls 3-peat teams were about as much better than the first Bulls 3-peat as the first Bulls 3-peat was better than the 93-95 teams that were sans Michael Jordan.

That said, I actually do find multi-season comparisons to be a valid area for exploration. So, e.g., I’ve spent some time looking at rookie impact and how predictive it is of future success (answer: probably more than you think).

Finally, a poster named “parapooper” makes some points that he credits to me, including:

> He also admits that Rodman actually has a big advantage in this calculation because he missed probably more games than any other player due to reasons other than health and age.

I don’t actually remember making this point, at least this explicitly, but it is a valid concern IMO. A lot of the In/Out numbers my system generated include seasons where players were old or infirm, which disadvantages them. In fact, I initially tried to excise these seasons, and tried accounting for them in a variety of ways, such as comparing “best periods” to “best periods”, etc. But I found such attempts to be pretty unwieldy and arbitrary, and they shrunk the sample size more than I thought they were worth, without affecting the bottom line: Rodman just comes out on top of a smaller pile.
That said, some advantage to Rodman relative to others must exist, and quantifying that advantage is a worthy goal. A similar problem that “para” didn’t mention specifically is that a number of the in/out periods for players include spots where the player was traded. In subsequent analysis, I’ve confirmed what common sense would probably indicate: A player’s differential stats in trade scenarios are much less reliable. Future versions of the differential comparison should account for this, one way or another.

The differential analysis in the series does seem to be the area that most needs upgrading, though the constant trade-off between more information and higher quality information means it will never be as conclusive as we might want it to be. Not mentioned in this thread (that I saw), but what I will certainly deal with myself, are broader objections to the differential comparisons as an enterprise.

So, you know. Stay tuned.

## Championship Experience Matters! (Un-Sexy Version)

So in Monday’s post, I included my “5-by-5” method (I probably shouldn’t call it a “model”) for picking NBA champions. In case you missed it, here it is again:

1. If there are any teams within 5 games of the best record that have won a title within the past 5 years, pick the most recent winner.
2. Otherwise, pick the team with the best record.

In the 28 seasons since the NBA moved to a 16-team playoff format, this method correctly picked the eventual champion 18 times (64%), comparing favorably to the 10/28 (36%) success rate of the team with the league’s best record.

Henry Abbott blogged about it on ESPN yesterday, raising the obvious follow-up:

> The question is, why? Why are teams that have won before so much better at winning again?
> I’ll kick off the brainstorming:
> - Maybe most teams fall short of their potential because of team dynamics of selfishness — and maybe champions are the teams that know how to move past that.
> - Maybe there are only a few really special coaches, and these teams have them.
> - Maybe there are only a few really special teams, and these teams are them.
> - Maybe there are special strategies to the playoffs that only some teams know. Not even sure what I’m talking about here — Sleep schedules? Nutrition? Injury prevention?
> - Maybe champions get better treatment from referees.
> Anyway, it’s certainly fascinating.
> UPDATE: John Hollinger with a good point that fits this and other data: Maybe title-winning teams don’t value the regular season much.

Though I think some of these ideas are more on point than others, I won’t try to parse every possibility. On balance, I’m sympathetic to the idea that “winning in the playoffs” has its own skillset independent of just being good at winning basketball games. Conceptually, it’s not too big a leap from the well-documented idea that winning games has its own skillset independent of scoring and allowing points (though the evidence is a lot more indirect).

That said, I think the biggest factor behind this result may be a bit less sexy: It may simply be a matter of information reliability.

### Winning Championships is Harder than Winning Games

In stark contrast to other team sports, the NBA Playoffs are extremely deterministic. The best team usually wins (and, conversely, the winner is usually the best team). I’ve made this analogy many times before, but I’ll make it again: The NBA playoffs are a lot more like a Major tournament in men’s tennis than any other crowning competition in popular sports.
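How deterministic? Here is a minimal sketch of how a modest per-game edge compounds over a best-of-7, using a simple binomial model that ignores home court and game-to-game correlation (my own illustration, not the blog’s model):

```python
from math import comb

def series_win_prob(p, wins_needed=4):
    """P(winning a best-of-7) for a team with per-game win probability p,
    assuming independent games. Win in game (wins_needed + k) means taking
    wins_needed - 1 of the first wins_needed - 1 + k games, then the last."""
    q = 1.0 - p
    return sum(comb(wins_needed - 1 + k, k) * p ** wins_needed * q ** k
               for k in range(wins_needed))

for p in (0.55, 0.60, 0.65, 0.70):
    print(f"per-game {p:.2f} -> series {series_win_prob(p):.3f}")
# 0.55 -> ~0.61, 0.60 -> ~0.71, 0.65 -> ~0.80, 0.70 -> ~0.87
```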
This is pretty much a function of design: A moderately better team becomes a huge favorite in a 7-game series. So even if the best team is only moderately better than the 2nd-best team, they can be in a dominant position. Combine this with an uneven distribution of talent (which, incidentally, is probably a function of salary structure), and mix in the empirical reality that the best teams normally don’t change very much from year to year, and it’s unsurprising that “dynasties” are so common.

On the other side of the equation, regular season standings and leaderboards—whether of wins or its most stable proxies—are highly variable. Note that a 95% confidence interval on an 82-game sample (aka, the “margin of error”) is +/- roughly 10 games. If you think of the NBA regular season as a lengthy 30-team competition for the #1 seed, its structure is much, much less favorable to the best teams than the playoffs: It’s more like a golf tournament than a tennis tournament.

### The Rest is Bayes

Obviously better teams win more often and vice-versa. It’s just that these results have to be interpreted in a context where all results were not equally likely ex ante. For example, the teams who post top records who also have recent championships are far more likely than others to actually be as good as their records indicate. This is pure bayesian inference.

Quick tangent: In my writing, I often reach a point where I say something along the lines of: “From there, it’s all bayesian inference.” I recognize that, for a lot of readers, this is barely a step up from an Underpants Gnomes argument. When I go there, it’s pretty much shorthand for “this is where results inform our beliefs about how likely various causes are to be true” (and all that entails).

There was an interesting comment on Abbott’s ESPN post, pointing out that the 5-by-5 method only picked 5/14 (35.7%) of champions correctly between 1967 and 1980. While there may be unrelated empirical reasons for this, I think this stat may actually confirm the underlying concept. Structurally, having fewer teams in the playoffs, shorter series lengths, a smaller number of teams in the league—basically any of the structural differences between the two eras I can think of—all undermine the combined informational value of [having a championship + having a top record].

To be fair, there may be any number of things in a particular season that undermine our confidence in this inference (I can think of some issues with this season’s inputs, obv). That’s the tricky part of bayesian reasoning: It turns on how plausible you thought things were already.

## Stat Geek Smackdown 2012, Round 1: Odds and Ends

So in case any of you haven’t been following, the 2012 edition of the ESPN True Hoop Stat Geek Smackdown is underway. Now, obviously this competition shouldn’t be taken too seriously, as it’s roughly the equivalent of picking a weekend’s worth of NFL games, and last year I won only after picking against my actual opinion in the Finals (with good reason, of course). That said, it’s still a lot of fun to track, and basketball is a deterministic-enough sport that I do think skill is relevant. At least enough that I will talk shit if I win again.

To that end, the first round is going pretty well for me so far. Like last year, the experts are mostly in agreement.
While there is a fair amount of variation in the series length predictions, there are only two matchups that had any dissent as to the likely winner: the 6 actual stat geeks split 4-2 in favor of the Lakers over the Nuggets, and 3-3 between the Clippers and the Grizzlies. As it happens, I have both Los Angeles teams (yes, I am a homer), as does Matthew Stahlhut (though my having the Lakers in 5 instead of 7 gives me a slight edge for the moment). No one has gained any points on anyone else yet, but here is my rough account of possible scenarios:

On to some odds and ends:

## The Particular Challenges of Predicting 2012

Making picks this year was a bit harder than in years past. At one point I seriously considered picking Dallas against OKC (in part for strategic purposes), before reason got the better of me. Abbott only published part of my comment on the series, so here’s the full version I sent him:

> Throughout NBA history, defending champions have massively over-performed in the playoffs relative to their regular season records, so I wouldn’t count Dallas out. In fact, the spot Dallas finds itself in is quite similar to Houston’s in 1995, and this season’s short lead-time and compressed schedule should make us particularly wary of the usual battery of predictive models.
> Thus, if I had to pick which of these teams is more likely to win the championship, I might take Dallas (or at least it would be a closer call). But that’s a far different question from who is most likely to win this particular series: Oklahoma City is simply too solid and Dallas too shaky to justify an upset pick. E.g., my generic model makes OKC a >90% favorite, so even a 50:50 chance that Dallas really is the sleeping giant Mark Cuban dreams about probably wouldn’t put them over the top.
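A quick sanity check of that last expected-value point, with illustrative numbers only (assuming Dallas would be roughly 50/50 in the series if they really were the sleeping giant):

```python
# If Dallas is "secretly great", call the series ~50/50; otherwise the
# generic model's <10% applies. Numbers are illustrative, not the model's.
p_sleeping_giant = 0.5
p_series_if_great = 0.50
p_series_if_not = 0.10

p_upset = (p_sleeping_giant * p_series_if_great
           + (1 - p_sleeping_giant) * p_series_if_not)
print(f"blended upset probability: {p_upset:.0%}")  # ~30%, well short of 50%
```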
In other words, a very good team might be hurt more by a role-player being injured than usual.
3. What is the most reliable data? Two things I discussed last year were that (contra unconventional wisdom) Win% is more reliable for post-season predictions than MOV-type stats, and that (contra conventional wisdom) early season performance is typically more predictive than late season performance. But both of these are undermined by the short season. The fundamental value of MOV is as a proxy for W% that is more accurate for smaller sample sizes. And the predictive power of early-season performance most likely stems from its being more representative of playoff basketball: e.g., players are more rested and everyone tries their hardest. However, not only are these playoffs not your normal playoffs, but this season was thrown together so quickly that a lot of teams had barely figured out their lineups by the quarter-pole. While late-season records have the same problems as usual, they may be more predictive just from being more similar to years past.
4. Finally, it's not just the nature of the data, but the nature of the underlying game as well. For example, in a lockout year, teams concerned with injury may be quicker to pull starting players in less lopsided scenarios than usual, making MOV less useful, etc.

I won't go into every possible difference, but here's a related Twitter exchange: Which brings us to the next topic:

## The Simplest Playoff Model You'll Never Beat

The thing that Henry Abbott most highlighted from my Smackdown picks (which he quoted at least 3 times in 3 different places) was my little piece of dicta about the Spurs:

I have a 'big pot' playoff model (no matchups, no simulations, just stats and history for each playoff team as input) that produces some quirky results that have historically out-predicted my more conventional models. It currently puts San Antonio above 50 percent. Not just against Utah, but against the field. Not saying I believe it, but there you go.

I really didn't mean for this to be taken so seriously: it's just one model. And no, I'm not going to post it. It's experimental, and it's old and needs updating (e.g., I haven't adjusted it to account for last season yet). But I can explain why it loves the Spurs so much: it weights championship pedigree very strongly, and the Spurs this year are the only team near the top that has any. Now some stats-loving people argue that the "has won a championship" variable is unreliable, but I think they are precisely wrong. Perhaps this will change going forward, but, historically, there are no two ways to cut it: No matter how awesomely designed and complicated your models/simulations are, if you don't account for championship experience, you will lose to even the most rudimentary model that does. So case in point, I came up with this 2-step method for picking NBA Champions (sketched in code below):

1. If there are any teams within 5 games of the best record that have won a title within the past 5 years, pick the most recent.
2. Otherwise, pick the team with the best record.

Following this method, you would correctly pick the eventual NBA Champion in 64.3% of years since the league moved to a 16-team playoff in 1984 (with due respect to the slayer, I call this my "5-by-5" model). Of course, thinking back, it seems like picking the winner is sometimes easy, as the league often has an obvious "best team" that is extremely unlikely to ever lose a 7 game series.
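For concreteness, here's a minimal sketch of that 2-step rule in Python (this is not my actual model; the `teams` data structure and the sample numbers are hypothetical placeholders):

```python
# A minimal sketch of the "5-by-5" pick rule described above.
# `teams` is a hypothetical list of one season's playoff teams.
def five_by_five_pick(teams, season):
    best_wins = max(t["wins"] for t in teams)
    # Step 1: any recent champs within 5 games of the best record?
    recent_champs = [t for t in teams
                     if t["wins"] >= best_wins - 5
                     and season - t["last_title"] <= 5]
    if recent_champs:
        # Tie-break by most recent title
        return max(recent_champs, key=lambda t: t["last_title"])
    # Step 2: otherwise, just take the best record
    return max(teams, key=lambda t: t["wins"])

# Illustrative 2012 field (numbers for illustration only)
teams = [{"name": "Bulls",   "wins": 50, "last_title": 1998},
         {"name": "Spurs",   "wins": 50, "last_title": 2007},
         {"name": "Thunder", "wins": 47, "last_title": 1979}]
print(five_by_five_pick(teams, 2012)["name"])  # -> Spurs
```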
So perhaps the better question to ask is: How much do you gain by including the championship test in step 1? The answer is: a lot. Over the same period, the team with the league's best record has won only 10/28 championships, or ~35%. So the 5-by-5 model almost doubles your hit rate. And in case you're wondering, using Margin of Victory, SRS, or any other advanced stat instead of W-L record doesn't help: other methods vary from doing slightly worse to slightly better. While there may still be room to beef up the complexity of your predictive model (such as advanced stats, situational simulations, etc.), your gains will be (comparatively) marginal at best. Moreover, there is also room for improvement on the other side: by setting up a more formal and balanced tradeoff between regular season performance and championship history, the macro-model can get up to 70+% without danger of significant over-fitting. In fairness, I should note that the 5-by-5 model has had a bit of a rough patch recently—but, in its defense, so has every other model. The NBA has had some wacky results recently, but there is no indication that stats have supplanted history. Indeed, if you break the historical record into groups of more-predictable and less-predictable seasons, the 5-by-5 model trumps pure statistical models in all of them.

## Uncertainty and Series Lengths

Finally, I'd like to quickly address the complete botching of series-length analysis that I put forward last year. Not only did I make a really elementary mistake in my explanation (that an emailer thankfully pointed out), but I've come to reject my ultimate conclusion as well. Aside from strategic considerations, I'm now fairly certain that picking the home team in 5 or the away team in 6 is always right, no matter how close you think the series is. I first found this result when running playoff simulations that included margin for error (in other words, accounting for the fact that teams may be better or worse than their stats would indicate, or that they may match up more or less favorably than the underlying records would suggest), but I had some difficulty getting this result to comport with the empirical data, which still showed "home team in 6" as the most common outcome. But now I think I've figured this problem out, and it has to do with the fact that a lot of those outcomes came in spots where you should have picked the other team, etc. But despite the extremely simple-sounding outcome, it's a rich and interesting topic, so I'll save the bulk of it for another day.

## Sports Geek Mecca: Recap and Thoughts, Part 1

So, over the weekend, I attended my second MIT Sloan Sports Analytics Conference. My experience was much different than in 2011: Last year, I went into this thing barely knowing that other people were into the same things I was. An anecdote: In late 2010, I was telling my dad how I was about to have a 6th or 7th round interview for a pretty sweet job in sports analysis, when he speculated, "How many people can there even be in that business? 10? 20?" A couple of months later, of course, I would learn. A lot has happened in my life since then: I finished my Rodman series, won the ESPN Stat Geek Smackdown (which, though I am obviously happy to have won, is not really that big a deal—all told, the scope of the competition is about the same as picking a week's worth of NFL games), my wife and I had a baby, and, oh yeah, I learned a ton about the breadth, depth, and nature of the sports analytics community.
For the most part, I used Twitter as sort of my de facto notebook for the conference. Thus, I'm sorry if I'm missing a bunch of lengthier quotes and/or if I repeat a bunch of things you already saw in my live coverage, but I will try to explain a few things in a bit more detail. For the most part, I'll keep the recap chronological. I've split this into two parts: Part 1 covers Friday, up to but not including the Bill Simmons/Bill James interview. Part 2 covers that interview and all of Saturday.

## Opening Remarks:

From the pregame tweets, John Hollinger observed that 28 NBA teams sent representatives (that we know of) this year. I also noticed that the New England Revolution sent 2 people, while the New England Patriots sent none, so I'm not sure that number of official representatives reliably indicates much. The conference started with some bland opening remarks by Dean David Schmittlein. Tangent: I feel like political-speak (thank everybody and say nothing) seems to get more and more widespread every year. I blame it on fear of the internet. E.g., in this intro segment, somebody made yet another boring joke about how there were no women present (personally, I thought there were significantly more than last year), and was followed shortly thereafter by a female speaker, understandably creating a tiny bit of awkwardness. If that person had been more important (like, if I could remember his name to slam him), I doubt he would have made that joke, or any other joke. He would have just thanked everyone and said nothing.

## The Evolution of Sports Leagues

Featuring Gary Bettman (NHL), Rob Manfred (MLB), Adam Silver (NBA), Steve Tisch (NYG) and Michael Wilbon moderating. This panel really didn't have much of a theme; it was mostly Wilbon creatively folding a bunch of predictable questions into arbitrary league issues. E.g.: "What do you think about Jeremy Lin?!? And, you know, overseas expansion blah blah." I don't get the massive cultural significance of Jeremy Lin, personally. I mean, he's not the first ethnically Chinese player to have NBA success (though he is perhaps the first short one). The discussion of China, however, was interesting for other reasons. Adam Silver claimed that basketball is already more popular in China than soccer, with over 300 million Chinese people playing it. Those numbers, if true, are pretty mind-boggling. Finally, there was a whole part about labor negotiations that was pretty well summed up by this tweet:

## Hockey Analytics

Featuring Brian Burke, Peter Chiarelli, Mike Milbury and others. The panel started with Peter Chiarelli being asked how the world champion Boston Bruins use analytics, and in an ominous sign, he rambled on for a while about how, when it comes to scouting, they've learned that weight is probably more important than height. Overall, it was a bit like any scene from the Moneyball war room, with Michael Schuckers (the only pro-stats guy) playing the part of Jonah Hill, but without Brad Pitt to protect him. When I think of Brian Burke, I usually think of Advanced NFL Stats, but apparently there's one in Hockey as well. Burke is GM/President of the Toronto Maple Leafs. At one point he was railing about how teams that use analytics have never won anything, which confused me since I haven't seen Toronto hoisting any Stanley Cups recently, but apparently he did win a championship with the Mighty Ducks in 2007, so he clearly speaks with absolute authority. This guy was a walking talking quote machine for the old school.
I didn't take note of all the hilarious and/or non-sensical things he said, but for some examples, try searching Twitter for "#SSAC Brian Burke." To give a sense of how extreme, someone tweeted this quote at me, and I have no idea if he actually said it or if this guy was kidding. In other words, Burke was literally too over the top to effectively parody. On the other hand, in the discussion of concussions, I thought Burke had sort of a folksy realism that seemed pretty accurate to me. I think his general point is right, if a bit insensitive: If we really changed hockey so much as to eliminate concussions entirely, it would be a whole different sport (which he also claimed no one would watch, an assertion which is more debatable imo). At the end of the day, I think professional sports mess people up, including in the head. But, of course, we can't ignore the problem, so we have to keep proceeding toward some nebulous goal. Mike Milbury, presently a card-carrying member of the media, seemed to mostly embrace the alarmist media narrative, though he did raise at least one decent point about how the increase in concussions—which most people are attributing to an increase in diagnoses—may relate to recent rules changes that have sped up the game. But for all that, the part that frustrated me the most was when Michael Schuckers, the legitimate hockey statistician at the table, was finally given the opportunity to talk. 90% of the things that came out of his mouth were various snarky ways of asserting that face-offs don't matter. I mean, I assume he's 100% right, but he just had no clue how to talk to these guys. Find common ground: you both care about scoring goals, defending goals, and winning. Good face-off skill gets you the puck more often in the right situations. The questions are how many extra possessions you get and how valuable those possessions are. And finally, what's the actual decision in question?

## Baseball Analytics

Featuring Scott Boras, Scott Boras, Scott Boras, some other guys, Scott Boras, and, oh yeah, Bill James. In stark contrast to the Hockey panel, the Baseball guys pretty much bent over backwards to embrace analytics as much as possible. As I tweeted at the time:

Scott Boras seems to like hearing Scott Boras talk. Which is not so bad, because Scott Boras actually did seem pretty smart and well informed: Among other things, Scott Boras apparently has a secret internal analytics team. To what end, I'm not entirely sure, since Scott Boras also seemed to say that most GM's overvalue players relative to what Scott Boras's people tell Scott Boras. At this point, my mind wandered: How awesome would that be, right? Anyway, in between Scott Boras's insights, someone asked this Bill James guy about his vision for the future of baseball analytics, and he gave two answers:

1. Evaluating players from a variety of contexts other than the minor leagues (like college ball, overseas, Cubans, etc).
2. Analytics will expand to look at the needs of the entire enterprise, not just individual players or teams.

Meh, I'm a bit underwhelmed. He talked a bit about #1 in his one-on-one with Bill Simmons, so I'll look at that a bit more in my review of that discussion. As for #2, I think he's just way way off: The business side of sports is already doing tons of sophisticated analytics—almost certainly way more than the competition side—because, you know, it's business.
E.g., in the first panel, there was a fair amount of discussion of how the NBA used "sophisticated modeling" for many different lockout-related analyses (I didn't catch the Ticketing Analytics panel, but from its reputation, and from related discussions on other panels, it sounds like that discipline has some of the nerdiest analysis of all). Scott Boras let Bill James talk about a few other things as well: E.g., James is not a fan of new draft regulations, analogizing them to government regulations that "any economist would agree" inevitably lead to market distortions and bursting bubbles. While I can't say I entirely disagree, I'm going to go out on a limb and guess that his political leanings are probably a bit Libertarian?

## Basketball Analytics

Featuring Jeff Van Gundy, Mike Zarren, John Hollinger, and Mark Cuban (replaced by Dean Oliver). If every one of these panels was Mark Cuban + foil, it would be just about the most awesome weekend ever (though you might not learn the most about analytics). So I was excited about this one, which, unfortunately, Cuban missed. Filling in on zero/short notice was Dean Oliver. Overall, here's Nathan Walker's take:

This panel actually had some pretty interesting discussions, but they flew by pretty fast and often followed predictable patterns, something like this:

1. Hollinger says something pro-stats, though likely way out of his depth.
2. Zarren brags about how they're already doing that and more on the Celtics.
3. Oliver says something smart and nuanced that attempts to get at the underlying issues and difficulties.
4. Jeff Van Gundy uses forceful pronouncements and "common sense" to dismiss his strawman version of what the others have been saying.

E.g.: Zarren talked about how there is practically more data these days than they know what to do with. This seems true and I think it has interesting implications. I'll discuss it a little more in Part 2 re: the "Rebooting the Box Score" talk. There was also an interesting discussion of trades, and whether they're more a result of information asymmetry (in other words, teams trying to fleece each other), or more a result of efficient trade opportunities (in other words, teams trying to help each other). Though it really shouldn't matter—you trade when you think it will help you; whether it helps your trade partner is mostly irrelevant—Oliver endorsed the latter. He makes the point that, with such a broad universe of trade possibilities, looking for mutually beneficial situations is the easiest way to find actionable deals. Fair enough.

## Coaching Analytics

Featuring coaching superstars Jeff Van Gundy, Eric Mangini, and Bill Simmons. Moderated by Daryl Morey. OK, can I make the obvious point that Simmons and Morey apparently accidentally switched role cards? As a result, this talk featured a lot of Simmons attacking coaches and Van Gundy defending them. I honestly didn't remember Mangini was on this panel until looking back at the book (which is saying something, b/c Mangini usually makes my blood boil). There was almost nothing on, say, how to evaluate coaches by analyzing how well their various decisions comported with the tenets of win maximization. There was a lengthy (and almost entirely non-analytical) discussion of that all-important question of whether an NBA coach should foul or not up by 3 with little time left.
Fouling probably has a tiny edge, but I think it's too close and too infrequent to be very interesting (though obviously not as rare, it reminds me a bit of the impassioned debates you used to see on Poker forums about whether you should fast-play or slow-play flopped quads in limit hold'em). There was what I thought was a funny moment when Bill Simmons was complaining about how teams seem to recycle mediocre older coaches rather than try out young, fresh talent. But when challenged by Van Gundy, Simmons drew a blank and couldn't think of anyone. So, Bill, this is for you. Here's a table of NBA coaches who have coached at least 1000 games for at least 3 different teams, while winning fewer than 60% of their games and without winning any championships:

Note that I'm not necessarily agreeing with Simmons: Winning championships in the NBA is hard, especially if your team lacks uber-stars (you know, Michael Jordan, Magic Johnson, Dennis Rodman, et al).

## Part 2 coming soon!

Honestly, I got a little carried away with my detailed analysis/screed on Bill James, and I may have to do a little revising. So due to some other pressing writing commitments, you can probably expect Part 2 to come out this Saturday (Friday at the earliest).

## Bayes' Theorem, Small Samples, and WTF is Up With NBA Finals Markets?

Seriously, I am dying to post about something non-NBA related, and I should have my Open-era tennis ELO ratings by surface out in the next day or so. But last night I finally got around to checking the betting markets to see how the NBA Finals—and thus my chances of winning the Smackdown—were shaping up, and I was shocked by what I found. Anyway, I tossed a few numbers around, and thought you all might find them interesting. Plus, there's a nice little object-lesson about the usefulness of small sample size information for making Bayesian inferences. This is actually one area where I think the normal stat geek vs. public dichotomy gets turned on its head: Most statistically-oriented people reflexively dismiss any empirical evidence without a giant data-set. But in certain cases—particularly those with a wide range of coherent possibilities—I think the general public may even be a little too conservative about the implications of seemingly minor statistical anomalies.

# Freaky Finals Odds:

First, I found that most books seem to see the series as a tossup at this point. Here's an example from a European sports-betting market:

Intuitively, this seemed off to me. Dallas needs to win 1 out of the 2 remaining games in Miami. Assuming the odds for both games are identical (admittedly, this could be a dubious assumption), here's a plot of Dallas's chances of winning the series relative to Miami's expected winrate per home game:

So for the series to be a tossup, Miami needs to be about a 71% favorite per game (Dallas loses the series only if Miami wins both games, so we need 1 − p² = 0.5, i.e., p = 1/√2 ≈ 71%). Even at home in the playoffs, this is extremely high. Depending on what dataset you use, the home team wins around 60-65% of the time in the NBA regular season and about 65%-70% of the time in the postseason. But that latter number is a bit deceptive, since the playoffs are structured so that more games are played in the homes of the better teams: aside from the 2-3-2 Finals, any series that ends in an odd number of games gives the higher-seeded team (who is often much better) an extra game at home.
In fact, while I haven't looked into the issue, that extra 5% could theoretically be less than the typical skill-disparity between home and away teams in the playoffs, which would actually make home court less advantageous than in the regular season. Now, Miami has won only 73% of their home games this season, and it was against below-average competition (overall, they had one of the weakest schedules in the league). Counting the playoffs, at this point Dallas actually has a better record than Miami (by one game), and they played an above-average schedule. More importantly, the Mavs won 68% of their games on the road (compare to the league average of 35-40%). Not to mention, Dallas is 5-2 against the Heat overall, and 2-1 against them in Miami (more on that later). So how does the market tilt so heavily to this side? Honestly, I have no idea. Many people are much more willing to dismiss seemingly incongruent market outcomes than I am. While I obviously think the market can be beaten, when my analytical results diverge wildly from what the money says, my first inclination is to wonder what I'm doing wrong, as the odds of a massive market failure are probably lower than the odds that I made a mistake. But, in this case, with comparatively few variables, I don't really get it. It is a well-known phenomenon in sports-betting that huge games often have the juiciest (i.e., least efficient) lines. This is because the smart money that normally keeps the market somewhat efficient can literally start to run out. But why on earth would there be a massive, irrational rush to bet on the Heat? I thought everyone hated them!

# Fun With Meta-Analysis:

So, for amusement's sake, let's imagine a few different lines of reasoning (I'll call them "scenarios") that might lead us to a range of different conclusions about the present state of the series:

1. Miami won at home ~73% of the time while Dallas won on the road a (fairly stunning) 68% of the time. If these values are taken at face value, a generic Miami home team would be roughly 5% better than a generic Dallas road team, making Miami a 52.5% favorite in each game.
2. The average home team in the NBA wins about 63% of the time. Miami and Dallas seem pretty evenly matched, so Miami should win each game ~63% of the time as well.
3. Let's go with the very generous end of broader statistical models (discounting early-season performance, giving Miami credit for championship experience, best player, and other factors), and assume that Miami is about 5-10% better than Dallas on a neutral site. The exact math on this is complicated (since winning is a logistic function), but, ballpark, this would translate into about a 65.5% chance at home.
4. Markets rule! Approximate market price for a Miami series win is ~50%, translating into the 71% per-game chance mentioned above.

Here's a scatter-plot of the chances of Dallas winning the series based on those per-game estimates:

Ignore the red dots for now—we'll get back to those. The blue dots are the probability of Dallas winning at least one of the next two games (using the same binomial formula as the function above). Now, hypothetically, let's assume you thought each of these analyses were equally plausible; your overall probability for Dallas winning the title would simply be the average of the four scenarios' results, or right around 60%. Note: I am NOT endorsing any of these lines of reasoning or any actual conclusions about this series here—it's just a thought experiment.
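For the record, here's a quick sketch of the arithmetic behind that ~60% figure (the per-game numbers are the four scenarios above; Dallas wins the series unless Miami takes both remaining home games):

```python
# Equal-weight average of the four scenarios' implied series odds for Dallas.
miami_home = {"head-to-head": 0.525,
              "generic home": 0.63,
              "stat models":  0.655,
              "market":       0.71}

series = {name: 1 - p**2 for name, p in miami_home.items()}
for name, prob in series.items():
    print(f"{name:>13}: Dallas {prob:.1%}")

print("equal-weight average:", f"{sum(series.values()) / 4:.1%}")  # ~59.9%
```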
# A Little Bayesian Inference:

As I mentioned above, the Mavericks are 5-2 against the Heat this season, including 2-1 against them in Miami. Let's focus on the second stat: Sticking with the assumption that you found each of these 4 lines of reasoning equally plausible prior to knowing Dallas's record in Miami, how should your newly-acquired knowledge that they were 2-1 affect your assessment? Well, wow! 3 games is such a minuscule sample, it can't possibly be relevant, right? I think most people—stat geek and layperson alike—would find this statistical event pretty unremarkable. In the abstract, they're right: certainly you wouldn't let such a thing invalidate a method or process built on an entire season's worth of data. Yet, sometimes these little details can be more important than they seem. Which brings us to perhaps the most ubiquitously useful tool discovered by man since the wheel: Bayes' Theorem. Bayes' Theorem, at its heart, is a fairly simple conceptual tool that allows you to do probability backwards: Garden-variety probability involves taking a number of probabilistic variables and using them to calculate the likelihood of a particular result. But sometimes you have the result, and would like to know how it affects the probabilities of your conditions: Bayesian analysis makes this possible. So, in this case, instead of looking at the games or series directly, we're going to look at the odds of Dallas pulling off their 2-1 record in Miami under each of our scenarios above, and then use that information to adjust the probabilities of each. I'll go into the detail in a moment, but the relevant Bayesian concept is that, given a result, the new probability of each precondition will be adjusted proportionally to its prior probability of producing that result. Looking at the red dots above (which are technically the cumulative binomial probability of Miami winning 0 or 1 out of 3 games), you should see that Dallas is far more likely to go 2-1 or better on Miami's turf if they are an even match than if Miami is a huge favorite—over twice as likely, in fact. Thus, we should expect that scenarios suggesting the former will become much more likely, and scenarios suggesting the latter will become much less so. In its simplest form, Bayes' Theorem states that the probability of A given B is equal to the probability of B given A times the prior probability of A (probability before our new information), divided by the prior probability of B:

$P(A|B)= \frac{P(B|A)\,P(A)}{P(B)}$

Though our case looks a little different from this, it is actually a very simple example. First, I'll treat the belief that the four analyses are equally likely to be correct as a "discrete uniform distribution" of a single variable. That sounds complicated, but it simply means that there are 4 separate options, one of which is actually correct, and each of which is equally likely. Thus, the odds of any given scenario are expressed exactly as above (B is the 2-1 outcome):

$P(S_x)= \frac{P(B|S_x)\,P(S_x)}{P(B)}$

The prior probability $P(S_x)$ is 0.25. The prior probability of our result (the denominator) is simply the sum of the probabilities of each scenario producing that result, weighted by each scenario's original probability.
But since these are our only options and they are all equal, that element will factor out, as follows:

$P(B)= P(S_x)\left(P(B|S_1)+P(B|S_2)+P(B|S_3)+P(B|S_4)\right)$

Since $P(S_x)$ appears in both the numerator and the denominator, it cancels out, leaving our probability for each scenario as follows:

$P(S_x)= \frac{P(B|S_x)}{P(B|S_1)+P(B|S_2)+P(B|S_3)+P(B|S_4)}$

The calculations of $P(B|S_x)$ are the binomial probability of Dallas winning exactly 2 out of 3 games in each case (note this is slightly different from above, so that Dallas is sufficiently punished for not winning all 3), and Excel's binom.dist() function makes this easy. Plugging those calculations in with everything else, we get the following adjusted probabilities for each scenario:

Note that the most dramatic changes are in our most extreme scenarios, which should make sense both mathematically and intuitively: going 2-1 is much more meaningful if you're a big dog. Our new weighted average is about 62%, meaning the 2-1 record improves our estimate of Dallas's chances by 2%, making the gap between the two 4%: 62-38 (24% difference) instead of 60-40. That may not sound like much, but a few percentage points of edge aren't that easy to come by. For example, to a gambler, that 4% could be pretty huge: you normally need a 5% edge to beat the house (i.e., you have to win 52.5% of the time), so imagine you were the only person in the world who knew of Dallas's miniature triumph—in this case, that info alone could get you 80% of the way to profit-land.

# Making Use:

I should note that, yes, this analysis makes some massively oversimplifying assumptions—in reality, there can be gradients of truths between the various scenarios, with a variety of interactions and hidden variables, etc.—but you'd probably be surprised by how similar the results are whether you do it the more complicated way or not. One of the things that makes Bayesian inference so powerful is that it often reveals trends and effects that are relatively insulated from incidental design decisions. I.e., the results of extremely simplified models are fairly good approximations of those produced by arbitrarily more robust calculations. Consequently, once you get used to it, you will find that you can make quick, accurate, and incredibly useful inferences and estimates in a broad range of practical contexts. The only downside is that, once you get started on this path, it's a bit like getting Tetrisized: you start seeing Bayesian implications everywhere you look, and you can't turn it off. Of course, you also have to be careful: despite the flexibility Bayesian analysis provides, using it in abstract situations—like a meta-analysis of nebulous hypotheses based on very little new information—is very tricky business, requiring good logical instincts, a fair capacity for introspection, and much practice. And I can't stress enough that this is a very different beast from the typical talking head that uses small samples to invalidate massive amounts of data in support of some bold, eye-catching and usually preposterous pronouncement. Finally, while I'm not explicitly endorsing any of the actual results of the hypo I presented above, I definitely think there are real-life equivalents where even stronger conclusions can be drawn from similarly thin data.
E.g., one situation that I’ve tested both analytically and empirically is when one team pulls off a freakishly unlikely upset in the playoffs: it can significantly improve the chances that they are better than even our most accurate models (all of which have significant error margins) would indicate.
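Postscript: for anyone who wants to replicate the 2-1 update from the Finals-markets discussion above, here's a compact sketch. The scenario probabilities are the four from the thought experiment, and scipy's binom.pmf plays the role of Excel's binom.dist:

```python
# Bayesian update of the four scenarios on Dallas going exactly 2-1 in Miami.
from scipy.stats import binom

miami_home = [0.525, 0.63, 0.655, 0.71]   # Miami's per-game home win prob
prior = [0.25] * 4                        # scenarios equally plausible ex ante

# Likelihood of the evidence under each scenario: Dallas wins exactly 2 of 3
like = [binom.pmf(2, 3, 1 - p) for p in miami_home]

total = sum(pr * l for pr, l in zip(prior, like))
post = [pr * l / total for pr, l in zip(prior, like)]

# Dallas needs 1 win in Miami's 2 remaining home games
dallas_series = [1 - p**2 for p in miami_home]
print(sum(w * s for w, s in zip(post, dallas_series)))  # ~0.62
```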
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34761783480644226, "perplexity": 1958.8683320569837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049272349.32/warc/CC-MAIN-20160524002112-00040-ip-10-185-217-139.ec2.internal.warc.gz"}
http://en.wikisource.org/wiki/Page:The_European_Concert_in_the_Eastern_Question.djvu/22
# Page:The European Concert in the Eastern Question.djvu/22

the ulterior negotiations with the Ottoman Porte, which may be the consequence of that mediation, should be determined hereafter by the common consent of the Governments of His Britannic Majesty and His Imperial Majesty.

3. If the mediation offered by His Britannic Majesty should not have been accepted by the Porte, and whatever may be the nature of the relations between His Imperial Majesty and the Turkish Government, His Britannic Majesty and His Imperial Majesty will still consider the terms of the arrangement specified in No. 1 of this Protocol, as the basis of any reconciliation to be effected by their intervention, whether in concert or separately, between the Porte and the Greeks; and they will avail themselves of every favourable opportunity to exert their influence with both parties, in order to effect this reconciliation on the above-mentioned basis.

4. That His Britannic Majesty and His Imperial Majesty should reserve to themselves to adopt hereafter the measures necessary for the settlement of the details of the arrangement in question, as well as the limits of the territory, and the names of the islands of the Archipelago to which it shall be applicable, and which it shall be proposed to the Porte to comprise under the denomination of 'Greece.'

5. [Self-denying clause] That, moreover, His Britannic Majesty and His Imperial Majesty will not seek in this arrangement any increase of territory, nor any exclusive influence nor advantage in commerce for their subjects, which shall not be equally attainable by all other nations.

6. That His Britannic Majesty and His Imperial Majesty being desirous that their allies should become parties to the definitive arrangements of which this Protocol contains the outline, will communicate this instrument confidentially to the Courts of Vienna, Paris, and Berlin, and will propose to them that they should, in concert with the Emperor of Russia, guarantee the Treaty by which the reconciliation of Turks and Greeks shall be effected, as His Britannic Majesty cannot guarantee such a Treaty.

Done at St. Petersburgh, March 23/April 4, 1826. (Signed) WELLINGTON. NESSELRODE. LIEVEN.

The mediation thus offered was refused by the Porte, in a manifesto of 9th June, 1827[1]. The Governments of Austria

1. Brit. and For. State Papers, xiv. p. 1042.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.53008633852005, "perplexity": 5746.394443847209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010527022/warc/CC-MAIN-20140305090847-00084-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/going-from-cylindrical-to-cartesian-coordinates.645327/
# Going from cylindrical to cartesian coordinates

1. Oct 19, 2012

### Niles

1. The problem statement, all variables and given/known data

Hi

The expression for the magnetic field from an infinite wire is $$\boldsymbol B(r) = \frac{\mu_0I}{2\pi}\frac{1}{r} \hat\phi$$ which points along $\phi$. I am trying to convert this into cartesian coordinates, and what I get is $$\boldsymbol B(x, y) = \frac{\mu_0I}{2\pi}\frac{1}{\sqrt{x^2+y^2}} \hat\phi$$ where $$\hat\phi = -\sin\phi \hat x + \cos\phi \hat y$$ I am trying to make a phase plot of this expression, so what I have done is to say that $\phi = \arctan(y/x)$, so $\hat\phi = -\sin(\arctan(y/x))\hat x + \cos(\arctan(y/x))\hat y$. However I don't get the desired result. Have I missed something in my approach?

2. Oct 19, 2012

### voko

sin (arctan a) and cos (arctan a) can be simplified. Let z = arctan a, that means tan z = a, and tan z = sin z/cos z = a, so you can express sin z and cos z in terms of a.

3. Oct 19, 2012

### Niles

Thanks, so I know that $$\frac{y}{x} = \frac{\sin(\phi)}{\cos(\phi)}$$ I can't see how this enables me to rewrite e.g. $\sin(\arctan(y/x))$.

4. Oct 19, 2012

### voko

$\sin (\arctan a) = \sin z$. Since $\tan z = \sin z/\cos z = a$, $\sin^2 z = a^2 \cos^2 z = a^2(1 - \sin^2 z)$. So you can find $\sin z$ as a function of $a$; ditto for $\cos z$. Then substitute $a = y/x$.

5. Oct 19, 2012

### Niles

Ah, I see. So I get $$\sin z = \frac{a}{\sqrt{1+a^2}} \\ \cos z = \frac{1}{\sqrt{1+a^2}}$$ But I still have my original problem: That when I plot B using these for negative x, then I don't see the correct magnetic field. I thought that I was perhaps missing a term $\pi/2$, but that didn't solve it either.

6. Oct 19, 2012

### voko

What do you get and what is your expectation?

7. Oct 19, 2012

### Niles

I have attached a plot of what I see (the axes are (x, y), the current 1A and the units on the axis in meters), it is called "negative_x". If I only plot for positive x-values I get "positive_x", and there I see what I expect (as shown here, on the top: http://www.netdenizen.com/emagnet/solenoids/frommaxwellonly.htm). My code in Mathematica for plotting is:

VectorPlot[(mu0/2 pi)* current*(1/(x^2 + y^2)^(1/2))*{-(y/x)/(1 + y^2/x^2)^(1/2), 1/(1 + y^2/x^2)^(1/2)}, {x, -0.01, 0.01}, {y, -0.01, 0.01}]

[attached plots: positive_x.jpeg, negative_x.jpeg]

8. Oct 19, 2012

### voko

You should be able to simplify the formula very significantly. Note that $\frac 1 {\sqrt {1 + y^2/x^2}} = \frac x {\sqrt {x^2 + y^2}}$, and the radical in the denominator nicely couples with that in the common factor. But even then your formula is not wrong, I am not sure why Mathematica does not plot it correctly.

9. Oct 19, 2012

### Niles

I don't know either. Strange, but nice to know that I have the correct expression. Thanks!

10. Oct 19, 2012

### Niles

OK, I just plotted it in MatLAB, and it *isn't* correct. For x<0 the y-coordinates all have to change sign. So the expression is not correct.

EDIT: I have attached the plot. [attached plot: untitled.jpg]

Last edited: Oct 19, 2012

11. Oct 19, 2012

### voko

Have you tried the simplified formula as I suggested?

12. Oct 19, 2012

### Niles

Yes, it didn't change anything. It shouldn't either, since it is just a different way of expressing it.

13.
Oct 20, 2012

### voko

Your formula is $(\mu_0/2 \pi) I \frac 1 {(x^2 + y^2)^{1/2}} \left(\frac {-y/x } {(1 + y^2/x^2)^{1/2}}, \frac 1 {(1 + y^2/x^2)^{1/2}} \right)$ Observe that the y-component is always positive, which is incorrect. If you transform it the way I suggested, you will get $(\mu_0/2 \pi) I \frac 1 {x^2 + y^2} \left(-y, x\right)$, which restores the correct sign.

14. Oct 21, 2012

### Niles

Thanks! I must have made an error somewhere then when I tried.
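For readers following along at home, here's a minimal sketch (not from the thread) that plots voko's simplified formula, B ∝ (−y, x)/(x² + y²), to confirm the field circulates the same way on both sides of the y-axis:

```python
# Quiver plot of the corrected Cartesian field of an infinite wire.
# The constant k stands in for mu0*I/(2*pi); only the direction matters here.
import numpy as np
import matplotlib.pyplot as plt

k = 1.0
x, y = np.meshgrid(np.linspace(-0.01, 0.01, 20), np.linspace(-0.01, 0.01, 20))
r2 = x**2 + y**2                   # this grid never hits the r=0 singularity
Bx, By = -k * y / r2, k * x / r2

plt.quiver(x, y, Bx, By)           # circulates counter-clockwise everywhere
plt.gca().set_aspect("equal")
plt.show()
```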
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9387807250022888, "perplexity": 1206.3451502577168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823350.23/warc/CC-MAIN-20171019160040-20171019180040-00507.warc.gz"}
http://mathhelpforum.com/differential-equations/72677-solved-initial-value-differential-equation.html
# Thread: [SOLVED] initial value differential equation

1. ## [SOLVED] initial value differential equation

(x)dy/dx + 2y = 4x^2, y(1) = 2

Solve the initial value problem. Once I get the variables separated I know how to solve the problem. I just can't figure out how to get the x out of the left side. Please help!!

2. Hello, mathprincess24!

$x\frac{dy}{dx} + 2y \:=\: 4x^2,\;\;y(1) = 2$ Solve the initial value problem.

We can't separate the variables . . . is that the only method you know?

Divide by $x\!:\;\;\frac{dy}{dx} + \frac{2}{x}\,y \;=\;4x$

Integrating factor: . $I \;=\;e^{\int\frac{2}{x}dx} \;=\;e^{2\ln x} \:=\:e^{\ln x^2} \:=\:x^2$

Multiply by $I\!:\;\;x^2\frac{dy}{dx} + 2xy \:=\:4x^3$

And we have: . $\frac{d}{dx}\left(x^2y\right) \;=\;4x^3$

Integrate: . $x^2y \;=\;x^4 + C\quad\Rightarrow\quad y \;=\;x^2 + \frac{C}{x^2}$

Since $y(1) = 2$, we have: . $2 \:=\:1^2 + \frac{C}{1^2} \quad\Rightarrow\quad C \,=\,1$

Therefore: . $y \;=\;x^2 + \frac{1}{x^2}$

3. Originally Posted by mathprincess24
(x)dy/dx + 2y = 4x^2, y(1) = 2. Solve the initial value problem. Once I get the variables separated I know how to solve the problem. I just can't figure out how to get the x out of the left side. Please help!!

You can't. That's not a "separable" equation. It is, however, a linear equation. If you divide by x, you have dy/dx + 2y/x = 4x. Now you can find an "integrating factor", a function m(x), such that multiplying by it makes the left side a "complete" derivative: d(my)/dx = m dy/dx + (dm/dx)y, which must be equal to m dy/dx + m(2/x)y. That means we must have dm/dx = m(2/x), which IS a separable equation: dm/m = 2dx/x, which integrates to ln(m) = 2 ln(x) = ln(x^2), or m = x^2 (you can ignore the "constant of integration" because we just want one possible solution). If you multiply the entire equation by x^2 you get x^2 dy/dx + 2xy = d(x^2y)/dx = 4x^3, which we can now write as d(x^2y) = 4x^3 dx and integrate to get x^2y = x^4 + C. Set x = 1, y = 2 to find C. Once again Soroban beat me! And now I have been beaten by AIR by a nose!

4. thank you so much. that really helps. we learned the integrating factor. i just have a hard time sometimes recognizing when to use it if the problem isnt in the exact general form. thanks again!!
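As a quick independent check (not part of the thread), SymPy's dsolve reproduces the worked answer, initial condition included:

```python
# Verifying y = x^2 + 1/x^2 solves x*y' + 2y = 4x^2 with y(1) = 2.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

ode = sp.Eq(x * y(x).diff(x) + 2 * y(x), 4 * x**2)
sol = sp.dsolve(ode, y(x), ics={y(1): 2})
print(sol)  # Eq(y(x), x**2 + x**(-2))
```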
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9580679535865784, "perplexity": 1121.6696323020144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544678.42/warc/CC-MAIN-20161202170904-00337-ip-10-31-129-80.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/58841-sinusoidal-functions-word-problem.html
# Math Help - Sinusoidal Functions - word problem

1. ## Sinusoidal Functions - word problem

I really need help on this problem: A rodeo performer spins a lasso in a circle perpendicular to the ground. The height of the knot from the ground is modeled by h = -3 cos (5 pi/3 t) + 3.5, where t is the time measured in seconds.
a. What's the highest point reached by the knot
b. Lowest point reached by the knot
c. the period of the model
d. According to the model, find the height of the knot after 25 seconds.

2. $h=-3\cos(\frac{5\pi}{3}t)+3.5$

a) the highest point reached by the knot is the maximum of this function: $\cos(\frac{5\pi}{3}t) \in [-1,1]$, so $-3\cos(\frac{5\pi}{3}t) \in [-3,3]$ and $-3\cos(\frac{5\pi}{3}t)+3.5 \in [0.5, 6.5]$; the highest value we can get is $h=6.5$

b) and the lowest $h=0.5$

c) the cosine function $\cos t$ is periodic with period $2\pi$, so $\cos(\frac{5\pi}{3}t)$ is periodic with period $\frac{2\pi}{\frac{5\pi}{3}}=\frac{6}{5}$ seconds

d) $t=25$: $h=-3\cos(\frac{5\pi}{3}\cdot 25)+3.5=-3\cos(\frac{125\pi}{3})+3.5=-3\cos(\frac{5\pi}{3})+3.5$ (since $\frac{125\pi}{3} = 40\pi + \frac{5\pi}{3}$); converting into degrees, $\frac{5\cdot 180^o}{3}=300^o$, so $h=-3\cos(270^o+30^o)+3.5=-3\cdot\frac{1}{2}+3.5=2$

3. Thanks
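A quick numerical spot-check of the worked answers above (a sketch, not part of the thread):

```python
# Max/min over one period and the height at t = 25 s.
import math

def h(t):
    return -3 * math.cos(5 * math.pi / 3 * t) + 3.5

period = 2 * math.pi / (5 * math.pi / 3)               # = 6/5 = 1.2 s
samples = [h(period * i / 1000) for i in range(1001)]
print(round(max(samples), 2), round(min(samples), 2))  # 6.5 0.5
print(round(h(25), 2))                                 # 2.0
```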
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9432385563850403, "perplexity": 1242.4827491981348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678704362/warc/CC-MAIN-20140313024504-00075-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/102869-complex-numbers-equation-print.html
# complex numbers equation

• September 17th 2009, 03:30 PM
Jones

complex numbers equation

Hi, I have $(z+4i)^3 + 2(z+4i)^2 - 16 = 0$ I also know that z = -2-2i is one root, and since this is an equation with real coefficients another root is -2+2i. So i need to find the third root. Is there an easier way of solving this, other than polynomial division?

• September 17th 2009, 03:49 PM
Plato

Quote: Originally Posted by Jones
I have $(z+4i)^3 + 2(z+4i)^2 - 16 = 0$ I also know that z = -2-2i is one root, and since this is an equation with real coefficients another root is -2+2i. So i need to find the third root.

Look at this. [attached image]

• September 18th 2009, 01:10 AM
Jones

So just expand the brackets and replace z with a+bi ?
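For completeness (this step isn't spelled out in the thread): the substitution $w = z+4i$ turns the equation into a cubic with real coefficients, $w^3 + 2w^2 - 16 = 0$, which is where the conjugate-pair argument actually applies. Inspection gives the real root $w = 2$, and factoring yields $w^3 + 2w^2 - 16 = (w-2)(w^2+4w+8)$, so $w = 2$ or $w = -2 \pm 2i$. Translating back via $z = w - 4i$, the three roots are $z = 2-4i$, $z = -2-2i$ and $z = -2-6i$; note the conjugate pairing holds for $w$, not for $z$ itself.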
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8901001214981079, "perplexity": 1123.3322483074721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535925433.20/warc/CC-MAIN-20140901014525-00262-ip-10-180-136-8.ec2.internal.warc.gz"}
http://www.freemathhelp.com/forum/threads/75717-f(x)-is-defined-to-be-ODD-and-g(x)-is-defined-to-be-EVEN-complete-table
# Thread: f(x) is defined to be ODD and g(x) is defined to be EVEN, complete table

1. ## f(x) is defined to be ODD and g(x) is defined to be EVEN, complete table

I need some help here. Given two functions f(x) and g(x), which are only defined at x = (-3, -2, -1, 0, 1, 2, 3) but not at any intermediate values. And f(x) is defined to be an ODD function and g(x) is defined to be an EVEN function. I need to complete the following table:

     x | f(x) | g(x) | f+g | g-f | fg | f/g | g/f
    -3 |   1  |   6  |
    -2 |   0  |  -2  |
    -1 |  -1  |   1  |
     0 |   4  |   0  |
     1 |      |      |
     2 |      |      |
     3 |      |      |

I've been pulling my hair out here working to figure this out. It's probably simple but after working a problem for days, I'm shot. Can somebody provide some guidance/help here? It would be much appreciated. Thanks. Hunter

2. You should have definitions of Odd and Even? Do you have them? Please write them and let's have a look.

3. The data in the table is the only information provided. ODD and EVEN functions are already known and defined outside of the problem. The issue is what are the functions that provide the values for f(x) and g(x) with the given values of X (-3,-2,-1,0,1,2,3).

4. Hello, hunter!

Given two functions $f(x)$ and $g(x)$, which are defined at $x=\{\text{-}3,\text{-}2,\text{-}1,0,1,2,3\}$ only. And f(x) is defined to be an ODD function and g(x) is defined to be an EVEN function. I need to complete the following table:

$\begin{array}{c|c|c||c|c|c|c|c|}x & f(x) & g(x) & f+g & g-f & fg & f/g & g/f \\ \hline \text{-}3 & 1 & 6 \\ \text{-}2 & 0 & \text{-}2\\ \text{-}1 & \text{-}1 & 1 \\ 0 & 4 & 0 \\ 1 \\ 2 \\ 3\\ \hline \end{array}$

An ODD function is defined as: for all $x,\:f(\text{-}x) \,=\,-f(x).$ . . Baby talk: If you change the sign of $x$, you change the sign of $f(x).$

An EVEN function is defined as: for all $x,\:f(\text{-}x) \,=\,f(x)$ . . Baby talk: If you change the sign of $x,\:f(x)$ is unchanged.

Knowing this, you can complete the next two columns:

$\begin{array}{c|c|c ||c|c|c|c|c|}x & f(x) & g(x) & f+g & g-f & fg & f/g & g/f \\ \hline \text{-}3 & 1 & 6 \\ \text{-}2 & 0 & \text{-}2\\ \text{-}1 & \text{-}1 & 1 \\ 0 & 4 & 0 \\ 1 & {\color{red}1} & {\color{red}1} \\ 2 & {\color{red}0} & {\color{red}{\text{-}2}} \\ 3 & {\color{red}{\text{-}1}} & {\color{red}{6}} \\ \hline \end{array}$

Now you can complete the table:

$\begin{array}{c|c|c|c|c|c|c|c|}x & f(x) & g(x) & f+g & g-f & fg & f/g & g/f \\ \hline \text{-}3 & 1 & 6 & {\color{blue}7} & {\color{blue}5} & {\color{blue}6} & {\color{blue}{\frac{1}{6}}} & {\color{blue}6} \\ \text{-}2 & 0 & \text{-}2\\ \text{-}1 & \text{-}1 & 1 \\ 0 & 4 & 0 \\ 1 & 1 & 1 \\ 2 & 0 & \text{-}2 \\ 3 & \text{-}1 & 6 \\ \hline \end{array}$

5. Thanks for your help and explanation. I was able to figure this out after stepping back and understanding the tests for ODD functions. It makes sense now. Don't know why I couldn't see it earlier.
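Since the fill-in step is purely mechanical, here's a small sketch (not from the thread) that completes the table from the two definitions; x = 0 is skipped because, as given, f(0) = 4, even though a strictly odd function would force f(0) = 0:

```python
# Completing the table via f(-x) = -f(x) (odd) and g(-x) = g(x) (even).
f = {-3: 1, -2: 0, -1: -1}
g = {-3: 6, -2: -2, -1: 1}

for x in (1, 2, 3):
    f[x] = -f[-x]   # odd symmetry
    g[x] = g[-x]    # even symmetry

def ratio(a, b):
    return a / b if b != 0 else "undef"   # f/g, g/f undefined at zeros

for x in sorted(f):
    print(x, f[x], g[x], f[x] + g[x], g[x] - f[x], f[x] * g[x],
          ratio(f[x], g[x]), ratio(g[x], f[x]))
```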
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8746545314788818, "perplexity": 392.97701784075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654440/warc/CC-MAIN-20140305060734-00099-ip-10-183-142-35.ec2.internal.warc.gz"}
https://cms.math.ca/cmb/kw/Schroeder-Bernstein%20problem
1. CMB 2009 (vol 53 pp. 278)

Galego, Elói M.
Cantor-Bernstein Sextuples for Banach Spaces

Let $X$ and $Y$ be Banach spaces isomorphic to complemented subspaces of each other with supplements $A$ and $B$. In 1996, W. T. Gowers solved the Schroeder--Bernstein (or Cantor--Bernstein) problem for Banach spaces by showing that $X$ is not necessarily isomorphic to $Y$. In this paper, we obtain a necessary and sufficient condition on the sextuples $(p, q, r, s, u, v)$ in $\mathbb N$ with $p+q \geq 1$, $r+s \geq 1$ and $u, v \in \mathbb N^*$, to provide that $X$ is isomorphic to $Y$, whenever these spaces satisfy the following decomposition scheme $$A^u \sim X^p \oplus Y^q, \quad B^v \sim X^r \oplus Y^s.$$ Namely, $\Phi=(p-u)(s-v)-(q+u)(r+v)$ is different from zero and $\Phi$ divides $p+q$ and $r+s$. These sextuples are called Cantor--Bernstein sextuples for Banach spaces. The simplest case $(1, 0, 0, 1, 1, 1)$ indicates the well-known Pełczyński's decomposition method in Banach space. On the other hand, by interchanging some Banach spaces in the above decomposition scheme, refinements of the Schroeder--Bernstein problem become evident.

Keywords: Pełczyński's decomposition method, Schroeder--Bernstein problem
Categories: 46B03, 46B20
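As a quick sanity check (not part of the abstract): for the simplest sextuple $(p,q,r,s,u,v) = (1,0,0,1,1,1)$, one gets $\Phi = (1-1)(1-1) - (0+1)(0+1) = -1$, which is nonzero and divides both $p+q = 1$ and $r+s = 1$, consistent with Pełczyński's decomposition method being the special case the authors mention.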
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8506214022636414, "perplexity": 1069.3388431060093}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321497.77/warc/CC-MAIN-20170627170831-20170627190831-00494.warc.gz"}
http://asme-orc2013.fyper.com/program/show_slot/14
A NOVEL AUTO-CASCADE RANKINE CYCLE (ARC) FOR IMPROVING THE PERFORMANCE OF ORGANIC RANKINE CYCLE

Bao Junjiang, Zhao Li

Abstract: Organic Rankine cycles (ORC) have received increasing attention for power generation purposes due to their potential for utilizing heat from low-temperature sources and their favourable characteristics for integration into future distributed energy systems. Due to the relatively low efficiency of ORC, many researchers are currently working on the design and development of new thermodynamic cycles and the improvement of existing ones. A novel auto-cascade Rankine cycle (ARC) is proposed to reduce thermodynamic irreversibility and improve energy utilization. Like the Kalina cycle, the working fluid for the ARC is a zeotropic mixture, which can improve the system efficiency due to the temperature slip that zeotropic mixtures exhibit during phase change. Unlike the Kalina cycle, two expanders are included in the ARC rather than an expander and a throttling valve as in the Kalina cycle, which means more work can be obtained. The main advantage of the ARC system is that heat from the exhaust stream of the expanders is reclaimed twice, once using an IHE (internal heat exchanger) and another time using a regenerator. Using the exhaust gas as the heat source and water as the heat sink, a program was written in Matlab 2010a to carry out exergy analysis and a parameter study on the ARC. Results show that there exists an optimum value of the R245fa mass fraction in the primary circuit with respect to the minimum total cycle irreversibility. The largest exergy loss occurs in the evaporator, followed by the superheater, condenser, regenerator and IHE. As the R245fa mass fraction increases, the exergy losses of different components vary diversely. As the evaporation pressure rises, the total cycle irreversibility decreases and work output increases. Separator temperature has a greater influence on the system performance than superheating temperature. Compared with the ORC (organic Rankine cycle) and Kalina cycle in the literature, the ARC proves to be thermodynamically better.

DEVELOPMENT AND EXPERIMENTAL STUDY ON SINGLE SCREW EXPANDERS FOR SMALL CAPACITY ORC POWER SYSTEM

Yu-Ting Wu, Wei Wang, Ye-Qiang Zhang, Jing-Fu Wang, Chong-Fang Ma

Abstract: Screw machines include single screw machines and twin screw machines. Screw compressors have been used worldwide in refrigeration, air conditioning and industrial gas compression. The twin screw machine as an expander has been developed for application in energy conservation and renewable energy. However, the use of a single screw machine as an expander is a new concept, and no prototype of a single screw expander other than our team's has been reported in the world. A single-screw expander can be used as the expansion power machine in a small capacity ORC power system with output power from 1 to 500 kW. It has many advantages, such as long working life, balanced loading of the main screw, high volumetric efficiency, low noise, low leakage, low vibration and simple configuration, etc. It is suited to superheated steam, saturated steam or wet steam. A single screw expander prototype with 117 mm screw diameter was developed. Compressed air and steam experiment systems were built to test the performance of the single screw expander. By adjusting the clearances from the screw and gate rotor to the shell and improving manufacturing precision, the total efficiency increased from 30% for the first prototype to above 60% for the modified prototype.
Three other single screw expander prototypes, with screw diameters of 155 mm, 175 mm and 195 mm, have also been developed, and their performance has been tested over ranges of rotating speed and pressure. From the results, the maximum output power of the four prototypes is 5 kW, 9.9 kW, 22.4 kW and 51.8 kW, and the maximum total efficiency of each is 66%, 58.3%, 70% and 63%, respectively. An organic Rankine cycle (ORC) experimental system was also built and preliminary experimental results were obtained. These results indicate that the maximum efficiency of the expander reached 80% and that of the ORC reached 6%.

AN ORC BASED DISTRIBUTED CCHP SYSTEM IN CHINA
Xingyang Yang, Li Zhao
Abstract: In this paper, a distributed combined cooling, heating and power (CCHP) generation system based on an organic Rankine cycle (ORC) and parabolic trough solar collectors (PTSC) is introduced. This project is the first distributed CCHP system partially driven by solar energy in China and will be finished by 2014. The CCHP system mainly consists of parabolic trough solar collectors, a natural gas boiler, an organic Rankine cycle, a heating process heat exchanger and a LiBr absorption chiller. In this project, 1000 m2 of parabolic trough solar collectors are used to collect solar energy. The heat transfer fluid (HTF) in the solar subsystem is heated from 245 °C to 330 °C by the PTSC and the natural gas boiler and is then cooled back to 245 °C after heat exchange with the evaporator of the ORC. The system is designed to produce 200 kW of electricity, and the electrical efficiency of the ORC is more than 10%, using an organic turbine whose isentropic efficiency is more than 70%. The evaporating temperature of the ORC is 290 °C and the superheat is 10 °C. The exhaust gas is used to meet the heat load in winter and the cooling load in summer for a 1500 m2 residential building, which are about 75 kW and 120 kW, respectively. A LiBr absorption chiller is used to produce the cooling energy. Finally, software developed by our group is introduced, which aims to provide users with an optimum design by optimizing the system, whether a single system or an integrated system totally or partially driven by solar energy, both economically and technically.

OPTIMIZATION OF ORGANIC RANKINE CYCLE TO RECOVER WASTE HEAT OF MARINE DIESEL ENGINE
Dongkil Lee, Ho Ki Lee
Abstract: Following the 62nd session of the IMO MEPC (International Maritime Organization Marine Environment Protection Committee), emission control and efficiency improvement of ships have become more important issues for the marine business. One way to improve ship efficiency is to recover unused sources of energy on board. Typically, current marine engines use only 50% of the fuel energy for shaft power and dump 30% to 40% as waste heat. For low grade waste heat, the Organic Rankine Cycle (ORC) is one of the promising heat recovery power generation cycles. The ORC is a Rankine cycle that uses an organic fluid (a high molecular mass fluid) with a liquid-vapor phase change occurring at a lower temperature than the water-steam phase change. The present work focuses on the heat transfer loop of an Organic Rankine Cycle Waste Heat Recovery System (ORC-WHRS) for a vessel. The considered ship type and engine type were a Suez-Max tanker and a MAN Diesel & Turbo 6S70ME-C8.1-TII. The heat transfer loops were evaluated based on the power output of the thermal cycle.
The performance of the ORC was calculated at different temperature conditions of the thermal loop, and the calculated results were compared in terms of cycle and system efficiency. The results show that a maximum of 660 kWe of additional electricity can be produced by using the ORC-WHRS, and that the system has a cycle efficiency of 9~13% depending on the heat transfer loop design and the pinch condition of the heat exchangers. This work shows that the ORC-WHRS can produce 60~73% of the required electricity of a Suez-Max crude oil tanker at the normal operating condition, which leads to a fuel saving effect. The addition of an evaporator and a pre-heater was also studied to maximize the output power of the ORC-WHRS. Exhaust gas, scavenge air and jacket cooling water were considered as possible heat sources to be recovered. A dual loop system, which has a separate heat transfer loop for each waste heat source, shows better performance than a single loop system with only one heat transfer loop. By changing the ORC evaporator and preheater, the output of the ORC increased by 6~27%.

OPERATION OPTIMIZATION OF 10 KW ORGANIC RANKINE CYCLE IN CHINA STEEL CORPORATION
Tiao Yuan Wu, Pai Hsiang Wang, Chun Da Chen
Abstract: An Organic Rankine Cycle (ORC) generation system can convert heat to electricity and has been widely applied to various heat sources with different temperature ranges. This technology helps to save energy, reduce cost and generate additional benefit; in particular, it is one of the few technologies that can efficiently recover low-temperature waste heat. In China Steel Corporation (CSC), technologies such as co-generation have been employed to recover about 40% of the waste heat, but no suitable systems could be used in the low-temperature region. Therefore, CSC developed a 10 kW ORC pilot plant and investigated the optimal operation to obtain higher efficiencies of power generation and net output power. The operation of the ORC system involves controllable and uncontrollable parameters, and there are constraints on the operation, for instance the maximum power generation capacity and the maximum temperature of the working fluid. Besides, the power consumption of the working fluid pump, hot water pump, cooling pump and cooling tower fan should be considered to obtain the maximum net output efficiency; this makes the problem much more complex. To obtain the maximum efficiencies of power generation and net output power, CSC employed a set of optimization methodologies, including Design of Experiments (DOE), Response Surface Methodology (RSM) and Sequential Quadratic Programming (SQP), on the 10 kW ORC pilot plant. DOE helps to plan the experimental parameters which are then used to build the response surface effectively. The response surface is a model that can predict the performance accurately and is usually used for parametric or optimization analysis. SQP is an optimization algorithm that can be employed to search for optimal values of the response surface model subject to specified constraints. In the present study, the controllable parameters and their boundaries are the working fluid flow rate (20~40 kg/min), hot water temperature (100~120 °C), hot water flow rate (70~150 LPM) and cooling water flow rate (130~310 LPM); a minimal sketch of the RSM/SQP step appears below.
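The Python sketch below illustrates the response-surface-plus-SQP pattern just described, under stated assumptions: the DOE data are made up, the power model is a stand-in for a second fitted surface, and scipy's SLSQP solver plays the role of the SQP step.

# Minimal sketch of RSM + SQP: fit a quadratic response surface to hypothetical
# DOE data, then maximize efficiency under the power-capacity constraint.
import numpy as np
from scipy.optimize import minimize

# Hypothetical DOE results: [fluid flow kg/min, hot water T degC] -> efficiency %
X = np.array([[20, 100], [30, 100], [40, 100],
              [20, 110], [30, 110], [40, 110],
              [20, 120], [30, 120], [40, 120]], dtype=float)
y = np.array([6.8, 7.1, 7.0, 7.2, 7.6, 7.5, 7.4, 7.9, 7.8])

def features(x):
    f, t = x
    return np.array([1.0, f, t, f * f, t * t, f * t])  # quadratic RSM basis

beta, *_ = np.linalg.lstsq(np.array([features(r) for r in X]), y, rcond=None)

def efficiency(x):          # fitted response surface
    return features(x) @ beta

def power_watts(x):         # stand-in for a second fitted surface (illustrative)
    return 220.0 * x[0] + 30.0 * x[1]

res = minimize(lambda x: -efficiency(x), x0=[30.0, 110.0], method="SLSQP",
               bounds=[(20.0, 40.0), (100.0, 120.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: 10700.0 - power_watts(x)}])
print("optimum point:", res.x, "predicted efficiency [%]:", -res.fun)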
By using this set of optimization methodologies under the constraints of the parametric boundaries and a power generation below 10700 W (the maximum power generation capacity), the maximum efficiency of power generation increases from 7.48% to 8.03%, about 0.5 percentage points better, and the maximum efficiency of net output power increases from 3.88% to 4.48%, about 0.6 percentage points better. These improvements can be achieved easily by changing only the operating parameters and without any additional cost.

PROCESS INTEGRATION AND ECONOMIC OPTIMIZATION OF ORGANIC RANKINE CYCLES BY USING PINCH TECHNOLOGY
Seyed Masoud Haji Seyedi, Seyed Majid Hashemian, Seyed Mohammad Reza Abolhassani, Fiete Dubberke, Amir Mohammad Haddad Momeni
Abstract: In this paper a new methodology is proposed for the appropriate integration and optimization of an ORC, as a cogeneration process generating shaft work, with a background process. The hot source and cold sink of the ORC have been used as a hot and a cold stream, respectively, in the Pinch Design Method (PDM) for both retrofit and grass-roots projects. The considered working fluids are R245fa, Solkatherm SES36, 1234ze and HDR-14. First, a pre-design model of the ORC and a process flow diagram were built, and simulations were run with the different working fluids. In the second step, component and system cost models were built and simulations were carried out to evaluate the cost effectiveness of the systems associated with the different fluids. It is illustrated that the choice of cycle configuration for appropriate integration with the background process depends on the heat rejection profile of the background process (i.e., the shape of the below-pinch portion of the process grand composite curve). The results also indicate that, for the same fluid, the points of highest performance and highest cost-effectiveness do not match: the operating point for maximum power does not correspond to that of maximum total specific income. The benefits of integrating an ORC with the background process have been demonstrated through illustrative examples. Keywords: Process integration, Pinch Technology, Organic Rankine Cycle, Economic Optimization.

ANALYSIS OF SOME ALKANES FOR HIGH-TEMPERATURE ORC BY THERMAL STABILITY
Xiaoye Dai, Qingsong An, Lin Shi
Abstract: The Organic Rankine Cycle (ORC) is a suitable and promising technology for mid- and high-grade (180~350 °C) heat, especially industrial waste heat, and high-temperature ORCs are currently gaining interest. In recent studies, alkanes have been considered suitable working fluids for high-temperature ORCs on the basis of their thermodynamic properties. In fact, there are more aspects to the choice of working fluid, and thermal stability is an important one. The thermal stability of some alkanes was studied experimentally, so that basic thermal stability data could be obtained at different temperatures. However, thermal decomposition is not necessarily unacceptable: there are different mechanisms by which decomposition causes harm, including non-condensable gases, carbon deposits and others. The effect of thermal decomposition of some alkanes was analysed concretely on the basis of their thermal stability data, and suggested use temperatures are finally given.

MODELING PLATFORM FOR ORC-PROCESSES BASED ON MODELICA
Adrian Rettig, Ulf Christian Müller
Abstract: Generating electricity in an economically reasonable way by utilising waste heat at lower temperatures is one of the major challenges in the ecological and efficient use of energy. One supporting key technology is the Organic Rankine Cycle (ORC).
The integration of ORCs into complex systems such as geothermal plants, biomass combustion or industrial processes, in order to reuse the waste heat, needs a sophisticated analysis of the whole process. To ensure an optimized performance of the combined technologies, accurate models of the coupled thermodynamic behaviour are crucial. The ORC performance including all components is of special interest since it mainly influences the investment decision. Thus, a modular simulation platform for ORC processes based on Modelica and freely available libraries such as the Modelica Standard Library and ThermoPower has been devised. Two ORC applications in Switzerland will be investigated using the modeling platform: a large scale application in the cement industry (MW scale) and a small biogas CHP application (double-digit kW scale). Currently, the focus is on characterizing the steady state behavior of these plants and on validating the simulation results by on-site measurements. The tool will mainly be used to confirm the design and operation of the ORC processes, including all components, and to analyze any unexpected deviations. In a next step, transient models will be implemented that allow, e.g., the analysis of control systems as well as start-up and shut-down procedures. It is planned to extend the tool to assess more upcoming ORC applications. Thus, libraries for components like different expanders or different working fluids will extend and boost the prediction capabilities of the platform. This offers the opportunity of supporting the evaluation of new ORC processes and core components to contribute to an ecological and efficient use of energy.

INITIAL RESULTS AND EXPERIENCE FROM OPERATION OF LABORATORY SCALE CO2 RANKINE CYCLE
Maria Justo Alonso, Yves Ladam, Trond Andresen
Abstract: This work describes the small-scale setup (ROMA) installed at the SINTEF Energy Research/NTNU (Norwegian University of Science and Technology) laboratory, designed to generate electricity from low temperature heat (120 °C). The system operates a CO2 power cycle from a hot gas heat source with a temperature similar to that of an aluminium production cell. The ROMA setup was capable of producing up to 0.5 kW of electrical power with a maximum turbine efficiency of 40%. The construction of the laboratory test rig and results from prototype operation are presented in this paper.

HEAT USE IN CONCENTRATED PHOTOVOLTAIC THERMAL SYSTEMS
Stephan Paredes, Patrick Ruch, Chin Lee Ong, Brian Burg, Bruno Michel
Abstract: Waste heat is recovered from high concentration photovoltaic thermal (HCPVT) systems with the aim of enabling multi-generation of electricity, cooling and fresh water. This concept involves 80–90 °C waste heat recovery from a low thermal resistance multi PV chip receiver package, 120 °C heat recovery from the optics, and thermal energy storage. The system recovers ~80% of the solar irradiation, comprising ~30% as electrical energy and ~50% as heat. HCPVT systems provide a higher exergetic output than concentrated solar power installations due to the good conversion efficiency of triple junction photovoltaic cells (up to 44% in laboratory demonstrations) and their low thermal coefficient. A >25% system-level electrical efficiency can be reached while still providing 50% medium grade heat. Conversion of the heat into cooling and desalinated water has been demonstrated using adsorption chillers and multi-effect vacuum membrane distillation systems, respectively [1].
We have estimated the economic value of heat with regard to its consumer and observed that this may differ markedly from its thermodynamic value depending on the system location. Using the generated heat in addition to the electricity boosts the economic value of the overall generated output by more than 20% [2]. Conversion of the heat into additional electrical output, however, lacks an efficient low grade heat conversion process, e.g. an organic Rankine process. Exergetic yields are compared between photovoltaic systems, concentrated solar power (CSP) systems, and HCPVT systems with medium grade heat output. From an exergy point of view, direct heat utilization from HCPVT systems for cooling and desalination is beneficial for key locations. Overall exergetic yields, flexibility, optimal plant size and cost are optimal in neither photovoltaic nor CSP systems, but HCPVT systems can compensate for the disadvantages of both pure systems. For a successful power station application, HCPVT systems require the conversion of low grade heat to electricity with an efficiency of >10%. With this combination an overall electrical system efficiency of >35% becomes possible, more than with any other solar installation. Combinations of HCPVT systems with Rankine processes using different working fluids are modelled. Since electrical power and cooling are in high demand in areas with high direct normal irradiance, a combination of power generation and cooling has also been studied (Kalina and Goswami cycles). Finally, economic and technical modelling is carried out to determine the optimal size for HCPVT plants and to match them to available heat conversion devices for high-efficiency multi-generation.
REFERENCES
[1] C.L. Ong, W. Escher, S. Paredes, A.S.G. Khalil and B. Michel, "A novel concept of energy reuse from high concentration photovoltaic thermal (HCPVT) system for desalination", Desalination 295, 70-81 (2012).
[2] W. Escher, S. Paredes, S. Zimmermann, C.L. Ong, P. Ruch and B. Michel, "Thermal management and overall performance of a high concentration PV", Proc. 8th Intl. Conference on Concentrating Photovoltaic Systems CPV8, 11477 (2012) 239-243.

MODULAR ORC DESIGN FOR WASTE HEAT RECOVERY WITH REGARD TO THE CHEMICAL CLASS OF THE WORKING FLUID
Markus Preißinger, Theresa Weith, Florian Heberle, Dieter Brüggemann
Abstract: The Organic Rankine Cycle (ORC) is a widespread technology for geothermal applications and biomass fired power plants [1,2]. Due to challenging boundary conditions, like fluctuating heat transfer rates and a broad heat source temperature range, ORC units for waste heat recovery are still rare. The adjustment of the ORC unit to the heat source is therefore realized by choosing different working fluids and/or adapting the working pressure of the process. However, by changing the fluid, safety issues, plant specific aspects and thermodynamic conditions can change dramatically, especially when the new working fluid belongs to a different chemical class [3]. From that point of view, the behavior of chemical classes rather than single working fluids is of great interest. In this study, homologous series of alkanes, alkylbenzenes and siloxanes are investigated for heat source temperatures of 300 °C to 600 °C. Firstly, the heat source temperature is varied and the influence of the working pressure on the exergetic efficiency is examined for each fluid and temperature step.
Secondly, the maximum exergetic efficiency and the corresponding fluid are determined from the obtained results for each chemical class and temperature step. From these data a correlation for the maximum exergetic efficiency as a function of the heat source temperature can be deduced. For the homologous series n-pentane (C5) to n-undecane (C11), a polynomial dependency is found which predicts the maximum exergetic efficiency with a relative deviation of less than 2% over the whole temperature range. Due to the location of the pinch point at the beginning of the preheater, the net power output depends linearly on the heat source temperature. The pressure ratio in the turbine, however, shows a polynomial dependency on the number of C atoms. Additionally, main correlations for further thermodynamic and constructional parameters are deduced from the simulation results and expressed by known physico-chemical input parameters (e.g. critical temperature, pressure and volume) or boundary conditions (e.g. heat source temperature). The prediction accuracy is better than 5% for all investigated parameters. In summary, the above mentioned results are a first step towards a fluid-to-fluid modeling technique and, therefore, towards modularly designed ORC power plants, reducing the simulation effort for further scientific and industrial investigations.

PERFORMANCE ANALYSIS OF DISTRIBUTED GENERATION SYSTEM BASED ON ORGANIC RANKINE CYCLE
Wei Wang, Yu-Ting Wu, Chong-Fang Ma, Jing-Fu Wang, Guo-Dong Xia
Abstract: Nowadays, the contradiction between continued growth in energy demand and the gradual exhaustion of fossil energy is becoming increasingly sharp, so energy saving has become a most urgent task. Among the various waste energy resources, low temperature waste heat accounts for a large proportion and is not effectively utilized, so distributed generation systems (DGS) need to be developed to recover that energy. How to improve the performance of the DGS is the key issue because of many technical bottlenecks. In this paper, different performance indexes of a DGS based on the Organic Rankine Cycle are analyzed using the relevant thermodynamic principles. The thermodynamic model of the Organic Rankine Cycle is described first, and within it the thermodynamic performances of R134a, R245fa, R123, R600, R600a and R290 are compared; then the impact of expanders on the ORC system is discussed, and finally the potential improvement of the ORC system using single screw expanders is evaluated. From the calculation results, it was found that there exist a maximum net generation and a highest thermal efficiency for given heat sources and working fluids, and that the optimized evaporation temperature in the former case is lower than in the latter case. This indicates that different choices exist for different types of heat sources. Ignoring the limitations of expanders, R245fa had a better thermal efficiency but a worse net generation than the dry fluid R600a for the same temperature difference between evaporation and condensation. However, for the same expansion ratio, both the net generation and the thermal efficiency of R245fa were worse than those of R600a. It was found that the adiabatic efficiency significantly influences the thermodynamic performance of such power systems. For existing ORC experimental systems, the corresponding efficiency indexes still fall visibly short of those of refrigeration systems; however, this also indicates that there is still large room for improvement.
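To make the kind of fluid comparison described in the previous abstract concrete, here is a minimal Python sketch of a saturated, internally reversible subcritical ORC model evaluated for the same six fluids. The evaporating and condensing temperatures are illustrative choices rather than the paper's heat source conditions, and the CoolProp property library is an assumed dependency.

# Minimal sketch: thermal efficiency of a saturated subcritical ORC with ideal
# pump and expander, compared across several working fluids.
from CoolProp.CoolProp import PropsSI

def orc_thermal_efficiency(fluid, T_evap=353.15, T_cond=303.15):
    p_ev = PropsSI("P", "T", T_evap, "Q", 1, fluid)   # evaporating pressure [Pa]
    p_cd = PropsSI("P", "T", T_cond, "Q", 0, fluid)   # condensing pressure [Pa]
    h1 = PropsSI("H", "T", T_cond, "Q", 0, fluid)     # 1: sat. liquid at condenser
    s1 = PropsSI("S", "T", T_cond, "Q", 0, fluid)
    h2 = PropsSI("H", "P", p_ev, "S", s1, fluid)      # 2: after isentropic pump
    h3 = PropsSI("H", "T", T_evap, "Q", 1, fluid)     # 3: sat. vapor at evaporator
    s3 = PropsSI("S", "T", T_evap, "Q", 1, fluid)
    h4 = PropsSI("H", "P", p_cd, "S", s3, fluid)      # 4: after isentropic expansion
    w_net = (h3 - h4) - (h2 - h1)                     # net specific work [J/kg]
    return w_net / (h3 - h2)                          # divided by evaporator heat

for fluid in ["R134a", "R245fa", "R123", "R600", "R600a", "R290"]:
    print(fluid, f"{orc_thermal_efficiency(fluid):.3f}")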
THERMAL PERFORMANCE AND ECONOMIC EVALUATION OF MEDIUM AND HIGH TEMPERATURE WASTE HEAT RESOURCES RECOVERED BY ORC SYSTEM
Huixing Zhai, Qingsong An, Lin Shi
Abstract: The rational utilization of waste heat resources has great significance for energy saving and environmental protection. At present, medium and high temperature (150-350 °C) heat from geothermal water and biomass fuel cannot be used efficiently, and waste heat from industrial processes has generally been discarded, although the most effective utilization is to generate power. The Organic Rankine cycle (ORC) is an effective way to convert low-grade waste heat into power. This work studies the type, capacity and utilization situation of 150-350 °C waste heat in China, mainly covering geothermal water, biomass fuel and waste heat from industrial processes. The influence of the heat source characteristics on the ORC system's payback time is studied on the basis of the present technical status. Industrial waste heat recovery systems have the lowest cost, while geothermal systems need an electricity price subsidy from the government. Finally, an approach to constructing suitable heat sources for ORC systems from thermodynamic and economic perspectives is given, thus providing a reference for subsequent ORC system research.

WASTE HEAT RECOVERY VIA ORGANIC RANKINE CYCLE: RESULTS OF AN ERA-SME TECHNOLOGY TRANSFER PROJECT
Bruno Vanslambrouck, Sergei Gusev, Tobias Erhart, Michel De Paepe, Martijn van den Broek
Abstract: The main goal of the EraSME project "Waste heat recovery via an Organic Rankine Cycle", completed by the partners Howest (Belgium), Ghent University (Belgium) and the University of Applied Sciences Stuttgart (Germany) between 1 January 2010 and 31 December 2012, was to find an entrance in Flanders for Organic Rankine Cycle (ORC) technology in applications with sufficient amounts of waste heat at high enough temperatures. The project was preceded by a similar study that focused on renewable energy sources. Several tools were developed to aid in the viability assessment, the selection and the sizing of ORC installations. With these methods, a fast determination of feasibility is possible (a minimal sketch of such a screening calculation follows below). The outcome is based on the size, nature and temperature of the waste heat stream as well as the electricity price; an estimate can be given of the net power output, the investment costs and the economic feasibility. The tool is linked to a database of ORC manufacturer specifications. Another objective of the project was to keep track of the evolution of the ORC market supply, both commercial and pre-commercial, looking beyond the product lines of the main manufacturers, since some ORCs are developed for specific applications. ORC technology was benchmarked against alternatives for waste heat recovery, such as steam turbines, heat pumps and absorption cooling. The ORC in or as a combined heat and power (CHP) system was also examined. A laboratory test unit of 10 kWe nominal power was installed during the project, which is now used in further research on dynamic behavior and control. It is still the only ORC demonstration unit in Flanders and has been very instructive in introducing representatives from industry, researchers and students to the technology. A considerable part of the project execution consisted of case studies in response to industrial requests from several sectors. Detailed and concrete feasibility studies allowed us to define the current application area of waste heat recovery ORCs in a better way.
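The Python sketch below shows the shape of such a first-pass feasibility screen under stated assumptions; the efficiency model (a fixed fraction of the Carnot efficiency) and all cost figures are illustrative guesses, not values from the project's manufacturer database.

# Minimal sketch: rough waste-heat ORC feasibility screen. Every number below
# is an illustrative assumption.
waste_heat_kw = 1000.0        # size of the waste heat stream [kW_th] (assumed)
t_source_c = 250.0            # waste heat temperature [degC] (assumed)
t_sink_c = 30.0               # cooling temperature [degC] (assumed)

# Crude net-efficiency estimate: a fixed fraction of the Carnot efficiency.
carnot = 1.0 - (t_sink_c + 273.15) / (t_source_c + 273.15)
eta_net = 0.40 * carnot                      # 40% of Carnot: rough ORC figure
p_net_kw = eta_net * waste_heat_kw

specific_cost_eur_per_kw = 2500.0            # assumed installed cost
electricity_price_eur_per_kwh = 0.10         # assumed
hours_per_year = 6000.0                      # assumed availability

invest = specific_cost_eur_per_kw * p_net_kw
revenue = p_net_kw * hours_per_year * electricity_price_eur_per_kwh
print(f"net power ~ {p_net_kw:.0f} kW, simple payback ~ {invest / revenue:.1f} years")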
A knowledge center for waste heat recovery (www.wasteheat.eu) was initiated to consolidate the know-how and to advise potential users.

SENSITIVITY ANALYSIS AND ECONOMIC OPTIMIZATION OF THE "PETERSON" CYCLE AS ORGANIC RANKINE CYCLE
Stefano Briola, Seyed Majid Hashemian, Seyed Sajad Mousavi, Amir Mohammad Haddad Momeni, Seyed Masoud Haji Seyedi
Abstract: In this paper, an economic optimization of the "Peterson" thermodynamic cycle with a two-phase fluid expander, employed in lieu of a traditional Joule-Thomson (J-T) valve, is performed. A two-phase fluid expander is able to work with chemical species in the wet vapor phase, converting their thermodynamic energy into mechanical energy by means of the simultaneous expansion of the two phases. The "Peterson" cycle can produce electrical and cooling power using a low temperature heat source. In this paper, several working fluid types (R245fa, Solkatherm SES36, 1234ze and HDR-14) and several two-phase fluid expander types (scroll, screw and radial) are considered. First, a pre-design model of the "Peterson" cycle and a process flow diagram were built, and simulations were run with the different working fluids and two-phase fluid expander types. In the second step, component and system cost models were built and simulations were carried out to evaluate the cost effectiveness of the systems associated with the different working fluids and expander types. The operating point for maximum power does not correspond to that of minimum specific investment cost. This mismatch is due to the thermodynamic properties, which significantly influence the system performance and component sizes. Finally, seeking profitable environmental solutions, an economic optimization has been performed. Keywords: Peterson Cycle, Economic Optimization, Joule-Thomson (J-T) valve

DEVELOPMENT OF ORGANIC RANKINE CYCLE POWER SYSTEM WITH 2-STAGE TURBO-EXPANDER FOR WASTE HEAT RECOVERY
Hyun Dong Kim, Eun Koo Yoon, Kui Soon Kim, Jang Mok Kim, Sang Youl Yoon, Bum Suk Choi, Sangjo Han, Yang Bum Jeong, Kyung Chun Kim
Abstract: This study demonstrates the realization and performance testing of an organic Rankine cycle (ORC) power generation system for waste heat recovery. The ORC system consists of two shell and tube heat exchangers for the evaporator and the condenser, a multi-stage centrifugal pump to feed the R-245fa refrigerant into the evaporator, a turbo-expander module expected to deliver about 250 kW of power, and an electric generator. For the turbo-expander, a back-to-back two-stage expansion concept was adopted to increase the expansion ratio up to 9.5 and thus achieve a high thermal efficiency of the ORC system. The design-point rotational speed of the expander was 15,000 rpm. The principal performance parameters of the whole system and of each component have been investigated on an experimental test bench comprising two heat transfer loops. Thermal energy was provided to the evaporator by pressurized hot water circulating between a 2 MW electrical heater and the evaporator. The influence of the heat source temperature on the net power output, thermal efficiency, power consumption, mass flow rate and expander outlet temperature at a given pinch point temperature has been analyzed (a sketch of the pinch point calculation involved appears below).
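As a hedged illustration of the pinch point constraint just mentioned, the following Python sketch estimates, for an assumed evaporating temperature and pinch temperature difference, how much working fluid a given hot water stream can evaporate. The hot water figures echo the design condition reported below, but the evaporating temperature and pinch value are assumptions, and CoolProp is an assumed dependency.

# Minimal sketch: evaporator pinch-point calculation for an R245fa ORC.
from CoolProp.CoolProp import PropsSI

m_hw, cp_hw = 16.5, 4186.0        # hot water flow [kg/s] and cp [J/kg/K]
T_hw_in = 140.0 + 273.15          # hot water inlet [K]
T_evap = 110.0 + 273.15           # evaporating temperature [K] (assumed)
dT_pinch = 5.0                    # pinch temperature difference [K] (assumed)

# Pinch taken at the bubble point of the working fluid, where boiling starts.
T_hw_pinch = T_evap + dT_pinch
q_above_pinch = m_hw * cp_hw * (T_hw_in - T_hw_pinch)    # heat above pinch [W]

h_liq = PropsSI("H", "T", T_evap, "Q", 0, "R245fa")      # sat. liquid enthalpy
h_vap = PropsSI("H", "T", T_evap, "Q", 1, "R245fa")      # sat. vapor enthalpy
m_wf = q_above_pinch / (h_vap - h_liq)                   # supportable flow [kg/s]
print(f"heat above pinch ~ {q_above_pinch/1e3:.0f} kWth, R245fa flow ~ {m_wf:.1f} kg/s")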
From the results of the heat exchanger performance tests, it is confirmed that the absolute pressure reaches 20 bar at the evaporator exit and that the heat exchanged between the hot water and the refrigerant is about 1,700 kWth at the cycle design condition of 140 °C hot water temperature and 16.5 kg/s water mass flow rate. Moreover, an electric power output of 110 kWe from the generator is achieved at a refrigerant mass flow rate of 5 kg/s; the isentropic efficiency of the turbo-expander and the thermal efficiency of the ORC system are about 70% and 8%, respectively. The acquired performance parameters and efficiencies were compared to those expected from the thermodynamic cycle analysis. Based on these results, a numerical simulation of the ORC system was conducted using Matlab Simulink, capable of calculating thermodynamic cycles in steady and transient conditions. The simulation results could be used to predict the main working parameters and system performance and to choose a suitable operation strategy for the entire system.

EXPERIMENTAL STUDY ON AN ORGANIC RANKINE CYCLE SYSTEM APPLYING MULTI-EXPANDER IN PARALLEL
Eunkoo Yun, Hyun Dong Kim, Sang Youl Yoon, Kyung Chun Kim
Abstract: The organic Rankine cycle (ORC) system has good potential for heat recovery from low-temperature heat sources. However, in some applications, including industrial facilities, marine engines, solar thermal heat and other sources, the heat fluctuation of the source is very large. Due to this large fluctuation, the efficiency of a single-expander system can be severely reduced, and the system might become non-operational. To overcome this limitation, this study proposes an ORC system with multiple expanders in parallel which can actively respond to large heat fluctuations, and aims to evaluate the performance of such a system by simulation and experiment. The ORC system consists of two scroll expanders installed in parallel, a hydraulic diaphragm pump to feed and pressurize the working fluid, and two plate heat exchangers for the evaporator and the condenser. The two scroll expanders were modified from two oil-free air scroll compressors (Kyungwon Co., Ltd., Korea) with the same specifications and were tested in the ORC loop with R245fa. Hot water was used as the heat source, and its temperature was controlled up to 150 °C by a 150 kW-class electric heater. To determine the static performance of the system, efficiencies and shaft powers for both single and dual operation modes were measured under heat source temperatures ranging from 110 °C to 140 °C. The maximum isentropic efficiency of each expander was measured at about 70%, and the shaft power reached about 3 kW at a turbine inlet temperature of 140 °C. In addition, dynamic performance tests were conducted under oscillating heat flux conditions. The characteristics and overall efficiencies of the dual parallel expander ORC system with regard to various heat source conditions and operation modes will be addressed.

MODELING AND SIMULATION OF SOLAR ORC SYSTEM FOR REGIONAL FEASIBILITY STUDY
Taehong Sung, Hyun Dong Kim, Sang Youl Yoon, Kyung Chun Kim
Abstract: This study aims to develop a hydro-thermal model and simulation code for a solar ORC (Organic Rankine Cycle) system and to predict the achievable efficiency of energy conversion from solar to electrical and thermal energy for a regional feasibility study.
The solar ORC system is one of the solar power systems, alongside photovoltaic power generation. The hydro-thermal performance of a solar ORC system should be addressed for operating conditions covering environmental variables, working fluids and mechanical components, including the solar collector, expander, generator, pump, condenser, ducting, refrigerant storage tank, etc. Recent research has focused on the system configuration, design point simulation and the suitability of various working fluids. However, the system performance strongly depends on the operating conditions, especially the ambient temperature and the amount of solar radiation. According to our literature review, the available energy efficiency under region-specific operating scenarios has not been fully studied. We are developing a model of the solar ORC system and an in-house simulation code for various classes of solar ORC systems, various working fluids and different mechanical components. In the seminar, the hydro-thermal model of the solar ORC system and simulation results under various regional daily and annual environmental conditions will be presented. The simulation results include the available electrical and thermal energy efficiencies.

ANALYSIS OF ORGANIC RANKINE CYCLE WITH SINGLE SCREW EXPANDER SYSTEM IN LOW-MEDIUM TEMPERATURE GEOTHERMAL POWER GENERATION
Jing-Fu Wang, Yongzhi Zhang, Yong Zhang, Jifen Liu, Wei Wang
Abstract: The rapid development of the social economy is restricted by energy shortages and environmental deterioration, and the development and utilization of renewable energy has been listed among the priority areas for energy development. As an alternative energy source, geothermal energy is receiving more and more attention. Usually, high temperature geothermal energy is the most suitable for power generation, but most geothermal resources are low-medium temperature heat sources, and the key issue limiting the development of low-medium temperature geothermal power generation is its low efficiency. In this paper, a new low-medium temperature geothermal power generation system using an Organic Rankine cycle (ORC) with a single screw expander as the power engine was developed, which aims to improve the efficiency of low-medium temperature geothermal power generation. Firstly, based on the principles of thermodynamics, the basic operating principle of this power system was analyzed; then a method for determining the main parameters of the system was proposed. On the basis of this theoretical analysis, two circulation types of the system, namely the saturated ORC and the vapor-liquid two-phase ORC, were analyzed. Two refrigerants, R601 and R134a, were used as working fluids. The results show that the vapor-liquid two-phase ORC is better than the saturated ORC and that the performance of R601 is better than that of R134a, on the basis of comprehensive comparisons, at geothermal fluid temperatures between 80 °C and 120 °C. However, R134a is better at geothermal fluid temperatures between 120 °C and 150 °C. In these temperature ranges, the evaporation temperature should be below the critical temperature of the working fluid. A regenerative organic Rankine cycle system for low-medium temperature geothermal power generation was also studied in this paper. The influences of the evaporation temperature, condensing temperature and extraction pressure on the regenerative organic Rankine cycle system were analyzed, and the properties of the ORC system with and without a regenerator were compared (a minimal sketch of such a comparison appears below).
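The following minimal Python sketch contrasts the thermal efficiency of a saturated ORC on R601 (n-pentane) with and without an exhaust-to-feed regenerator, in the spirit of the comparison described above; the cycle temperatures and the regenerator effectiveness are illustrative assumptions, and CoolProp is an assumed dependency.

# Minimal sketch: saturated ORC on n-pentane, with and without a regenerator.
from CoolProp.CoolProp import PropsSI

fluid, T_evap, T_cond, eff_reg = "n-Pentane", 373.15, 308.15, 0.8  # assumptions

p_ev = PropsSI("P", "T", T_evap, "Q", 1, fluid)
p_cd = PropsSI("P", "T", T_cond, "Q", 0, fluid)
h1 = PropsSI("H", "T", T_cond, "Q", 0, fluid)          # condenser outlet
s1 = PropsSI("S", "T", T_cond, "Q", 0, fluid)
h2 = PropsSI("H", "P", p_ev, "S", s1, fluid)           # pump outlet
T2 = PropsSI("T", "P", p_ev, "H", h2, fluid)
h3 = PropsSI("H", "T", T_evap, "Q", 1, fluid)          # expander inlet (sat. vapor)
s3 = PropsSI("S", "T", T_evap, "Q", 1, fluid)
h4 = PropsSI("H", "P", p_cd, "S", s3, fluid)           # expander outlet (superheated)

w_net = (h3 - h4) - (h2 - h1)
# Regenerator: the exhaust vapor can at best be cooled to the pump outlet temperature.
h4_cold = PropsSI("H", "P", p_cd, "T", T2, fluid)
q_reg = eff_reg * (h4 - h4_cold)

print(f"without regenerator: {w_net / (h3 - h2):.3f}")
print(f"with regenerator:    {w_net / (h3 - h2 - q_reg):.3f}")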
It is found that the thermal efficiency of the ORC system with a regenerator is higher than that of the system without one.

PARAMETERS OPTIMIZATION OF SUPERCRITICAL GEOTHERMAL POWER SYSTEM
Yuanwei Lu, Guanglin Liu, Yu-Ting Wu, Chong-Fang Ma
Abstract: The organic Rankine cycle for power generation can make effective use of geothermal heat. Organic working fluids with low boiling points can take advantage of low-to-medium temperature geothermal fluid for power generation, which pollutes the environment less than other forms of power generation and is therefore receiving more attention. A supercritical organic Rankine cycle system can theoretically form a "triangular" cycle shape, in which the working fluid changes directly from the subcritical to the supercritical state in the evaporator and its temperature changes continuously with no phase change. Research has shown that the exergy efficiency of a supercritical organic Rankine system can reach 50%. However, there is little research on the effect of the expander inlet temperature and pressure on the net power at different geothermal temperatures, and the working fluids for supercritical geothermal power generation systems also need to be studied. In this paper, a supercritical organic Rankine cycle with a heat recovery process (as shown in Fig. 1), using medium-temperature geothermal fluid (150 °C - 180 °C) as the heat source, was built to study the effect of different parameters on the net power. To form a supercritical organic Rankine cycle, the critical temperature of the working fluid should be lower than the heat source temperature. Since fluorocarbon working fluids can decompose in contact with oil, steel or iron at temperatures above 122 °C, R290 (propane) was chosen as the working fluid in this paper. The experimental results showed that at each geothermal temperature there exists an optimal expander inlet temperature, and that at this optimal temperature there exists an optimal evaporating pressure.

THE ORC-BASED UNIT MODELING AND SIMULATION USING BOND GRAPH APPROACH
Mohammad Kordi, Vahid Esfahanian
Abstract: The invention of the bond graph was driven by the need for a common language to model complex systems involving different energetic domains. The bond graph is a graphical representation of a physical and energetic system model, based on representing the flows of the different types of energy involved. An important advantage of this modelling approach is that its simplicity lends itself to wide variations of the system parameters. In this paper, an ORC-based unit has been modelled in the unsteady state in order to simulate its performance. Our model derives the state-space differential equations of the system, which contains different subsystems such as the burner, pumps, heat exchangers, turbine, generator, fuel injection system, nozzle and shaft dynamics. We use the bond graph method for modelling this system particularly because of the complexity of the components and stages, in addition to the nonlinear behaviour of the subsystems, since the whole method is based on the energy distribution among the elements. First we draw the system bond graph; then, from the bond graph, we derive the state equations, which are simulated using initial conditions (a minimal sketch of this final simulation step appears below). Finally, the time histories of the pressure, temperature and rotational speed in each stage are shown.
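To illustrate that final step under stated assumptions, the Python sketch below integrates a toy two-state model (shaft speed and evaporator pressure) of the kind a bond graph analysis would produce; the equations and every parameter value are purely illustrative stand-ins for the paper's derived state equations.

# Minimal sketch: integrating state equations from initial conditions.
from scipy.integrate import solve_ivp

J = 0.5        # shaft inertia [kg m^2] (assumed)
b = 0.05       # viscous friction coefficient (assumed)
k_t = 1e-4     # torque produced per unit pressure (assumed)
C = 1e-5       # capacitance of the evaporator volume (assumed)
u_in = 0.05    # feed term from the pump (assumed)
k_v = 5e-4     # outflow term proportional to shaft speed (assumed)

def rhs(t, x):
    omega, p = x                            # speed [rad/s], pressure [Pa]
    domega = (k_t * p - b * omega) / J      # torque balance on the shaft
    dp = (u_in - k_v * omega) / C           # flow balance on the evaporator
    return [domega, dp]

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 2.0e5])   # initial conditions assumed
print("final speed [rad/s]:", sol.y[0, -1])
print("final pressure [Pa]:", sol.y[1, -1])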
The effects of variations of some significant parameters, including the main components' pressures, temperatures and mass flows as well as the main shaft inertia and velocity, are presented, and the results are validated against the published literature.

EXERGY, EXERGOECONOMIC AND EXERGOENVIRONMENTAL ANALYSES OF A BIOMASS-ORC UNIT
Vahid Esfahanian, Mohammad Kordi, Kamran Mahootchi Saeed
Abstract: A comprehensive thermodynamic simulation and model of a biomass system for heating, electricity generation and hot water production are needed to design and evaluate such a system. This biomass system consists of a gas turbine cycle, an Organic Rankine Cycle (ORC) and a domestic water heater. Energy, exergy and exergoeconomic analyses, environmental impact assessments and related parametric studies of each thermodynamic unit help designers to find ways to improve the performance of the system in a cost effective way, and to identify the parameters that affect environmental impact and sustainability. The objective of this paper is to evaluate the irreversibility of a biomass-ORC unit and to present its thermodynamic modelling. The effect of changes in the main parameters on the exergetic efficiency and exergy destruction in the biomass-ORC unit has been evaluated, and the exergy destruction and exergy loss of each component of the ORC unit have been estimated. Moreover, the effects of load variations and ambient temperature have been calculated in order to obtain a good insight into the analysis. The exergy efficiencies of the burner, ORC-turbine, pump, evaporator and condenser have been estimated at different ambient temperatures. Additionally, exergoeconomic and exergoenvironmental analyses have been performed for each component of the ORC unit in order to calculate the cost of exergy destruction. The biomass-ORC unit consists of a burner, heat exchanger, evaporator, ORC-turbine and condenser. The design parameters of the unit were chosen as the ORC-turbine pressure ratio, the ORC-turbine isentropic efficiency, the combustion chamber inlet temperature and the turbine inlet temperature. The main program has been developed in MATLAB. In order to find the optimal system design parameters, an exergoeconomic and exergoenvironmental approach has also been followed. In this article, the effects of some thermodynamic and system parameters on a biomass-ORC unit have been studied, as a case study, for a region of Iran with hot-dry climate conditions.

HEAT TRANSFER CHARACTERISTICS OF PRIMARY SURFACE TYPE HEAT EXCHANGER FOR GEOTHERMAL ORC
Joon Ahn, Min J. Sung, Byung-Sik Park
Abstract: A series of numerical simulations has been carried out to study the thermo-hydraulic characteristics of a primary surface type heat exchanger designed for the evaporator and condenser of a geothermal ORC. The working fluid is geothermal water on the hot side and R-245fa, a refrigerant suited to ORCs, on the cold side. The effects of aspect ratio and amplitude are considered as design parameters. The Nusselt number is presented for Reynolds numbers ranging from 50 to 150 and compared to existing correlations (a minimal sketch of such a correlation comparison follows below).
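As a hedged aside, the Python sketch below shows one common way such comparisons are condensed: fitting a power-law correlation Nu = a * Re^b to simulation output by log-linear least squares. The data points here are made up, not the paper's results.

# Minimal sketch: power-law Nusselt correlation fit over Re = 50..150.
import numpy as np

Re = np.array([50.0, 75.0, 100.0, 125.0, 150.0])
Nu = np.array([4.1, 5.0, 5.8, 6.4, 7.0])        # illustrative "simulation" output

# Log-linear least squares: log Nu = log a + b log Re
A = np.vstack([np.ones_like(Re), np.log(Re)]).T
(log_a, b), *_ = np.linalg.lstsq(A, np.log(Nu), rcond=None)
a = np.exp(log_a)
print(f"fit: Nu = {a:.3f} * Re^{b:.3f}")
print("relative error:", (a * Re**b - Nu) / Nu)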
The results show that a higher aspect ratio channel gives better heat transfer performance within the investigated range.

PERFORMANCE STUDY ON DISTRIBUTED TROUGH SOLAR POWER SYSTEM BASED ON SINGLE SCREW EXPANDER AND ORGANIC RANKINE CYCLE
Ye-Qiang Zhang, Hang Guo, Yu-Ting Wu, Guo-Dong Xia, Chong-Fang Ma
Abstract: To meet the demand for power in islands and remote regions without electricity, a new distributed trough solar power system based on a single screw expander and an organic Rankine cycle (ORC) is proposed. In contrast with conventional trough solar power, this type of trough solar power is characterized by a small output power, from 1 to 500 kW, and a low working fluid temperature. Traditional turbines are mainly designed for larger-scale applications and cannot be used in such small power plants. The single screw expander can deliver output power in the 1-200 kW range and has many advantages, such as balanced loading of the main screw, long working life, high volumetric efficiency, good part-load performance, low leakage, low noise, low vibration and a simple configuration. R245fa is used as the working fluid. A thermodynamic model was set up to calculate the output power and efficiency of the ORC system on the basis of the adiabatic and mechanical efficiencies of the single screw expander measured in previous experiments. The results indicate that the efficiency improves with increasing expansion ratio at an evaporating pressure of 3.4 MPa, and the efficiency with two-stage expansion is a little higher than with single-stage expansion. The ORC efficiency with a regenerator increases markedly with the superheat temperature and is significantly higher than that without a regenerator. The efficiency of the concentrating collector measured previously is 72% at best and usually around 60%. The ORC efficiency is 19.5%, so the peak efficiency of the trough solar power system is 14.04% and the average efficiency is 11.7%.

MICRO CHP ORC SYSTEMS FUELLED BY GEOTHERMAL AND SOLAR ENERGY WITH PRELIMINARY DESIGN OF TURBO-EXPANDER
Giampaolo Manfrida, Daniele Fiaschi, Francesco Maraschiello, Duccio Tempesti
Abstract: Organic Rankine power cycles (ORC) are a well proven and reliable technology for energy conversion, particularly for exploiting low-temperature heat sources. Nowadays, ORCs are increasing in popularity, with several manufacturers of equipment available on the market. A lot of research has been dedicated to this subject, whether on the heat source, on system design and analysis, on the criteria for selecting the optimal working fluid, or on the design of the expander (scroll, screw, micro-turbine, etc.) [1-3]. In this paper, a micro combined heat and power (CHP) plant operating through an Organic Rankine Cycle (ORC) using low temperature geothermal energy (80-100 °C) and solar energy is presented. The system is designed to produce 50 kWe with a single turbine, while the solar field is composed only of evacuated solar collectors. The CHP plant is designed using meteorological data for a city in the centre of Italy and is optimized in terms of cycle efficiency by varying the upper cycle pressure. Starting from the results of the energy analysis of the CHP-ORC system, a preliminary step-by-step design of a radial micro-turbine is carried out. The main innovative features of the proposed design are the use of real fluid properties instead of ideal gases and the estimation of turbine losses [4-7] (one classic preliminary sizing step is sketched below).
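The Python sketch below works through one textbook step of such a preliminary radial inflow turbine design: the spouting velocity from the real-fluid isentropic enthalpy drop, then a rotor tip diameter from an assumed velocity ratio and shaft speed. The fluid, operating state and shaft speed are illustrative assumptions, and CoolProp stands in for the real-fluid property evaluation.

# Minimal sketch: one preliminary sizing step for a radial inflow turbine.
import math
from CoolProp.CoolProp import PropsSI

fluid = "R245fa"                 # illustrative choice
T_in, p_in = 368.15, 10e5        # turbine inlet state (assumed)
p_out = 2e5                      # turbine outlet pressure (assumed)

h_in = PropsSI("H", "T", T_in, "P", p_in, fluid)
s_in = PropsSI("S", "T", T_in, "P", p_in, fluid)
h_out_s = PropsSI("H", "P", p_out, "S", s_in, fluid)
dh_s = h_in - h_out_s                       # isentropic enthalpy drop [J/kg]

c0 = math.sqrt(2.0 * dh_s)                  # spouting velocity [m/s]
nu = 0.7                                    # velocity ratio u/c0, a typical optimum
u_tip = nu * c0                             # rotor tip speed [m/s]
rpm = 30000.0                               # chosen shaft speed (assumed)
d_tip = 2.0 * u_tip / (rpm * 2.0 * math.pi / 60.0)
print(f"dh_s = {dh_s/1e3:.1f} kJ/kg, c0 = {c0:.0f} m/s, tip diameter = {d_tip*1e3:.0f} mm")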
Six different fluids suitable for low-temperature energy conversion are investigated: R134a, R236fa, R245fa, R1234yf, n-pentane and cyclohexane. All the calculations are carried out with Engineering Equation Solver (EES®). The results show that the system can reach an interesting first law efficiency (17% with cyclohexane and 15% with R245fa). Concerning the design of the turbine, turbine efficiency values between 0.78 and 0.85 are obtained for all the working fluids, with R134a showing the maximum value (0.85). In addition, all the fluids present the same distribution of turbine losses, with friction and secondary flow losses accounting for approximately 70% of all the losses.
REFERENCES
[1] Schuster A, Karellas S, Kakaras E, Spliethoff H. Energetic and economic investigation of Organic Rankine Cycle applications. Applied Thermal Engineering 2009;29:1809-1817.
[2] Heberle F, Brüggemann D. Exergy based fluid selection for a geothermal Organic Rankine Cycle for combined heat and power generation. Applied Thermal Engineering 2006;30:1326-1332.
[3] Tchanche BF, Lambrinos Gr, Frangoudakis A, Papadakis G. Low-grade heat conversion into power using organic Rankine cycles - A review of various applications. 2011;15:3963-3979.
[4] Dixon SL. Fluid Mechanics and Thermodynamics of Turbomachinery. 1998; Butterworth, New York.
[5] Whitfield A, Baines NC. Design of Radial Turbomachines. 1990; Longman, New York.
[6] Whitfield A. The Preliminary Design of Radial Inflow Turbines. ASME J. of Turbomachinery, 1990;112:51-57.
[7] Benson RS. A Review of Methods for Assessing Loss Coefficients in Radial Gas Turbines. Int. J. Mech. Sci., Pergamon Press, 1970;12:905-932.
ACKNOWLEDGMENT The results presented here have been obtained within the framework of the project BT GEO H&P, funded by Regione Toscana, using European Social Fund (FSE) resources.

NOVEL CONFIGURATION OF ORGANIC RANKINE CYCLE FOR WASTE HEAT USE
Omar Al-Ani, Patrick Linke, Mirko Stijepovic, Athanasios Papadopoulos
Abstract: Organic Rankine Cycles (ORC) have received a lot of attention in recent years as a promising technology for converting low grade heat to power. Numerous studies have shown that the ORC offers substantial advantages over the conventional Rankine cycle, and it may be employed to produce power from a variety of low grade heat sources. ORC performance depends on the heat source and the employed working fluid. Multiple studies have been conducted on different techniques to increase the efficiency of the ORC; most are devoted to finding optimal working fluids with favorable thermodynamic and thermo-physical properties, and the use of working fluids composed of two or more components seems a promising path for enhancing ORC performance. On the other hand, only a few studies have explored possible configuration modifications of the ORC. In this study, we examine a novel configuration of an Organic Rankine Cycle (ORC). The novel configuration allows for a higher thermal efficiency by decreasing the pinch point between the heat source and the working fluid. An exergy analysis is performed to examine the various thermodynamic performance measures of this configuration, compared to a simple ORC configuration. The exergy destruction, thermal efficiency and second law efficiency are compared for each configuration, and each component is examined to locate where significant irreversibilities occur. A case study will be used to illustrate the benefits of using the novel configuration.
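To make the component-level exergy bookkeeping mentioned above concrete, the following Python sketch applies the Gouy-Stodola relation (exergy destruction = T0 x entropy generation) to a single evaporator. All stream states are illustrative placeholders rather than the paper's case study, and CoolProp is an assumed dependency.

# Minimal sketch: exergy destruction of an evaporator via Gouy-Stodola.
import math
from CoolProp.CoolProp import PropsSI

T0 = 298.15                     # dead state temperature [K]
fluid = "R245fa"
m_wf = 5.0                      # working fluid flow [kg/s] (assumed)

# Working fluid heated from a subcooled state to saturated vapor at 20 bar (assumed):
h_in = PropsSI("H", "T", 313.15, "P", 20e5, fluid)
s_in = PropsSI("S", "T", 313.15, "P", 20e5, fluid)
h_out = PropsSI("H", "P", 20e5, "Q", 1, fluid)
s_out = PropsSI("S", "P", 20e5, "Q", 1, fluid)
Q = m_wf * (h_out - h_in)       # evaporator duty [W]

# Hot side: pressurized water cooled from 443 K to 413 K (assumed); its flow
# follows from the energy balance, keeping the example self-consistent.
cp_w, T_hot_in, T_hot_out = 4250.0, 443.15, 413.15
m_w = Q / (cp_w * (T_hot_in - T_hot_out))

S_gen = m_wf * (s_out - s_in) + m_w * cp_w * math.log(T_hot_out / T_hot_in)
print(f"evaporator exergy destruction ~ {T0 * S_gen / 1e3:.0f} kW")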
DEVELOPMENT OF A SMALL-SCALE ORC FOR WASTE HEAT RECOVERY
Theresa Weith, Dieter Brüggemann, Andreas P. Weiß, Gerd Zinn
Abstract: Waste heat from biogas cogeneration units, as well as many kinds of industrial waste heat, provides high potential for power generation by applying Organic Rankine Cycles (ORCs). In this field of application, where high heat source temperatures can occur, conventional ORCs normally comprise an additional thermal oil loop in order to prevent the ORC working fluid from decomposing, as well as to avoid self-ignition of the fluid should it come into contact with hot exhaust gas due to leakage. Moreover, in small-scale plants with a power output of less than 30 kWel, scroll or screw expanders are mostly used as the expansion device. The described state-of-the-art systems suffer from several disadvantages, such as the high investment costs and complex plant design associated with the thermal oil loop, together with the low efficiencies of common expansion devices. Therefore, the present work deals with the development of a 15 kWel ORC plant for waste heat recovery with a direct evaporator and a micro expansion turbine. Working fluids that come into consideration are preselected by taking into account property data as well as non-thermodynamic issues, for example economic aspects and the fluid's hazard potential to health, water and the environment. Aside from this, steady-state process simulations have been performed for heat source temperatures in the range of 573 K to 673 K, and the effect of the fluid-specific turbine efficiency on the overall electric efficiency of the ORC has been investigated for three selected fluids. The results show an increase in ORC efficiency when the fluid-specific turbine efficiencies exceed the initially assumed turbine efficiency of 60%. For the example of cyclopentane, a relative improvement in overall electric efficiency of up to around 16% could be observed; moreover, the maximum of the overall ORC efficiency is shifted towards lower pressures. The optimal turbine design and efficiency strongly depend on several parameters, e.g. the operating pressure ratio, the volumetric flow ratio or the specific speed, which are mainly determined by the applied ORC fluid. Hence, the turbine affects both the choice of the proper working fluid and the design point of the ORC plant. Based on the theoretical results, cyclopentane was chosen as the most promising working fluid for the present ORC plant. Process simulations with cyclopentane predict a maximum electric efficiency of 11.65% for a heat source temperature of 623 K, an upper pressure of 20 bar and an isentropic turbine efficiency of 64.3%. Currently, a pilot plant is under construction, consisting of a plate-and-shell heat exchanger for direct evaporation, an axial impulse turbine, a piston diaphragm pump, an air-cooled condenser and a gas burner as the heat source. The authors gratefully acknowledge financial support by the Bayerische Forschungsstiftung.

EXPERIMENTAL TESTING AND CFD SIMULATION OF A SCROLL EXPANDER USED IN ORC
Deng Pan, Naiping Gao, Feibo Xie, Tong Zhu, Hai Wang, Haiying Wang, Wei An
Abstract: The performance of a scroll expander modified from a compressor was tested in an organic Rankine cycle (ORC) system. The ORC system mainly consists of an evaporator, the scroll expander, a plunger pump and a condenser, and the test rig employs R123 as the working fluid. A natural gas burner was employed as the heat source, with a flue-gas temperature of about 250 °C.
The performance of the scroll expander was investigated under different pump capacities and different electric loads. The maximum output power of the expander was 2.56 kW and the maximum isentropic efficiency was 78.75% under the test conditions. The evolution of the output power and isentropic efficiency with the pressure ratio and rotation speed was obtained. In the second part, a 2-D model was established to simulate the working process of the scroll expander by means of computational fluid dynamics (CFD), and the pressure distribution in the scroll expander and its variation with the crank angle were analyzed.

HEAT TRANSFER AND PRESSURE DROP CHARACTERISTICS IN WAVY-CORRUGATED CHANNEL OF THE PRIMARY SURFACE HEAT EXCHANGER
Jang-Won Seo, Sanghyuk Woo, Chanyong Cho, Byung-Sik Park
Abstract: A wavy-corrugated primary surface heat exchanger was tested under two-phase flow conditions using water and R245fa as the hot and cold streams, respectively. Performance experiments on a high-performance, high-effectiveness wavy-corrugated primary surface heat exchanger (PSHE), based on high-density fin folding technology, were performed in this study. The wavy-corrugated PSHE was experimentally investigated for Reynolds numbers in the range of 1~600 under various flow conditions on the hot and cold sides. The hot side inlet temperatures ranged from 70 °C to 80 °C, while the cold side inlet temperature was fixed at 12 °C. The average heat transfer rate, heat transfer performance and pressure drop increase with increasing Reynolds number in all experiments. Increasing the inlet temperature within the experimental range causes the heat transfer performance to increase while the pressure drop decreases slightly. Experimental correlations for the heat transfer coefficient and the pressure drop factor as functions of the Reynolds number are suggested for the wavy-corrugated PSHE.

ECONOMIC STUDY OF ORGANIC RANKINE CYCLE AT GAS STATION USING PINCH TECHNOLOGY
Amir Mohammad Haddad Momeni, Mohammad Reza Jaffari Nasr, Seyed Masoud Haji Seyedi, Venus Shaker
Abstract: In this paper, pinch technology is applied to a gas transmission station to decrease the energy cost of an Organic Rankine Cycle (ORC). A gas station that uses a gas engine to drive the compressor is considered. There is potential for heat recovery from the waste heat at the intercooler and aftercooler of the compressor, as well as from the flue gas and the water and oil coolers of the gas engine. In addition, cold utilities such as an air cooler, a cooling tower and a hybrid cooling tower are available, so an optimization of the process to select the best cold utilities is performed. The considered working fluids are R245fa, Solkatherm SES36, 1234ze and HDR-14. In this study, as a first step, a pre-design model of the ORC and the process flow diagram of the gas station streams were built and simulated with the different working fluids. In the second step, component and system cost models were built and the simulations were carried out again to evaluate the cost effectiveness of the systems associated with the different fluids. The results indicate that, for the same fluid, the points of highest performance and highest cost-effectiveness do not match: the operating point for maximum power does not correspond to that of maximum total revenue.
The benefits of integrating an ORC and the applicability of the proposed methodology have been demonstrated through illustrative examples at an Iranian gas station as a case study. Keywords: Gas Station, Pinch Technology, Organic Rankine Cycle, Economic Optimization.

ORGANIC RANKINE CYCLES: AN ECONOMIC APPROACH
Sanne Lemmens, Aviel Verbruggen
Abstract: A large technical potential exists for the deployment of smaller scale ORC systems, but most commercial applications are in the MW range and only a few are in the kW power range. Today, most research effort is spent on technical ORC improvements, and cost aspects are treated as an ex-post add-on. Recognizing high investment costs as the main impediment to wide application, this research considers the minimization of life-cycle expenses of ORC projects as its main objective.

EXPERIMENTAL AND NUMERICAL ANALYSIS OF BRAZED PLATE HEAT EXCHANGERS FOR ORGANIC FLUIDS
Lars Bennov, Jorrit Wronski, Wiebke Brix Markussen, Fredrik Haglind
Abstract: n-Pentane is a suitable working fluid for ORC applications exploiting temperatures around 180 °C. This work investigates the heat transfer process in brazed plate heat exchangers (BPHE) for n-pentane, providing more accurate information regarding the boiling process, which has not yet been discussed much in the literature. According to Roser et al., two-phase heat transfer is significantly influenced by mass velocity and is therefore dominated by convective boiling, whereas Dario et al. conclude that nucleate boiling dominates due to a strong heat flux dependency. We present a preliminary experimental analysis carried out on a test rig consisting of a plate-type preheater and evaporator as well as an expansion valve, a condenser and a pump. The first tests were made at a maximum temperature and pressure of 145 °C and 5 bar, respectively, with a mass flow of approximately 0.05 kg/s. A numerical model is developed to compare the experimental results with established heat transfer and pressure drop correlations from the literature. Based on the experimental and modelling results, the influence of nucleate and convective boiling is identified alongside other important parameters. The correlation of Focke et al. matches the experimental single-phase heat transfer coefficients, whereas the correlation of Khan et al. matches the two-phase data. New correlations for single- and two-phase heat transfer of n-pentane in BPHEs, suitable for small-scale ORCs, are developed from existing correlations. The molecular structural similarity of alkanes suggests that the results may also be relevant for other alkanes, which is yet to be proven.

THEORETICAL INVESTIGATION ON ADVANCED ORCs
Alessio Tafone, Andrea De Pascale, Jorrit Wronski, Lisa Branchini
Abstract: This work investigates the use of advanced organic Rankine cycle designs to exploit a low temperature and a medium grade energy source, represented by a solar application and waste heat from a marine diesel engine, respectively. For the latter, we consider two different operating points: full load and 60% load. To improve the ORC efficiency and net power output of the classic one stage cycle, different cycle configurations have been considered, such as double stage (DS) and two pressure level (2PL) systems.
The thermodynamics and processes of the different organic Rankine cycles, as well as the heat source models, are described in detail, and the main assumptions and constraints are pointed out. A thermodynamic optimization and fluid comparison has been carried out for each configuration by means of the numerical software EES, which allows the thermo-physical properties of the considered fluids to be computed throughout the whole cycle. Heat exchange is described with the pinch point approach. The performance of each system is compared in terms of different indices, such as cycle efficiency, total energy output and power output. Moreover, in order to partially take into account an economic evaluation of the investigated power plants, we have introduced two more parameters: the volumetric expansion ratio (VER) and the total heat transfer capacity (ΣUAtot). The results show a slight superiority of the advanced systems compared to single-stage configurations in terms of thermal efficiency and power/energy output for both heat sources. Yet, taking into account the economic parameters and the complexity of the advanced power plants due to the introduction of more than one expander and additional heat exchangers, one-stage systems appear to be the better way to utilize both low-grade thermal energy sources. INVESTIGATION OF LOW GWP HYDROFLUOROOLEFINS AS POTENTIAL WORKING FLUIDS IN ORGANIC RANKINE CYCLES Wei Liu, Dominik Meinel, Christoph Wieland, Hartmut Spliethoff Abstract: Working fluids, i.e. different refrigerants, with a variety of thermodynamic properties are of great interest for Organic Rankine Cycles (ORC) operating with different low-temperature heat sources. However, most of the currently used working fluids have destructive effects on the environment, e.g. depletion of the ozone layer or global warming. To reduce these environmental effects, refrigerants have progressed from Chlorofluorocarbons (CFCs) and Hydrochlorofluorocarbons (HCFCs) to Hydrofluorocarbons (HFCs) over the past decades. Hydrofluoroolefins (HFOs), i.e. derivatives of propene, have emerged in recent years as the fourth generation of refrigerants and are considered one of the most promising replacements for third-generation refrigerants like HFCs, as they have considerably lower effects on the environment. This work presents a study of the thermodynamic performance of eight different HFOs as working fluids in ORC applications. The thermodynamic properties of the HFOs are estimated using the Peng-Robinson equation of state in combination with the group contribution method. Simulations are carried out in MATLAB, to which self-developed calling functions are added for the calculation of thermodynamic parameters. In this study the heat source is represented by geothermal brine. The temperature region around 140 °C is of special interest for Germany, since these are common temperatures in the German Molasse basin. On this account, the inlet temperature of the heat source is varied from 120 to 200 °C, while the operating pressure of the ORC system is increased from 15 to 30 bar. At constant system pressure the system efficiency increases strongly for low geothermal temperatures and changes only slightly beyond a specific temperature. This temperature indicates the point where the pinch point changes from the evaporator to the preheater. These points are determined for the investigated ORC fluids at the corresponding system pressures.
Depending on the level of the inlet temperature of the heat source, the operating pressure influences the system efficiency in different ways. Taking R1225yeE as an example, at a lower inlet temperature (≤ 130 °C) the system efficiency reaches a maximum at intermediate operating pressures (between 15 and 17.5 bar). On the other hand, at a higher inlet temperature (> 130 °C) the system efficiency increases monotonically with increasing system pressure. ANALYTIC DERIVATION OF EQUATION OF STATE TO EXPRESS ACCURATE THERMODYNAMIC PROPERTIES IN PROCESS SIMULATORS Eiichi Sakaue, Katsuya Yamashita Abstract: In response to growing interest in the global environment, many low-GWP hydrofluorocarbon fluids have been developed. Some of their accurate physical properties are disclosed to the public by their suppliers or by public organizations, such as the National Institute of Standards and Technology (NIST). To evaluate the performance of an ORC system, process simulators are often used. If a new fluid is used as the ORC's working fluid, its physical properties need to be input to the simulator. An equation of state (EOS), which models the relationship between pressure P, volume V and temperature T, is commonly used to express the thermodynamic properties of the fluid. Many simulators offer an option to select the type of EOS and require the parameters for the selected EOS as input. Cubic EOSs, such as Peng-Robinson, are the most popular type, and only the critical pressure Pc, the critical temperature Tc and the acentric factor ω need to be input. However, they do not perform well near the critical point and do not yield accurate ORC performance even when accurate thermodynamic data for the working fluid are available. Virial-type EOSs, on the other hand, have many degrees of freedom, but their formulae are so complicated that fitting their parameters requires large computational resources and carries the risk of falling into a local minimum. To solve the above problems, a simplified BWR (Benedict-Webb-Rubin) EOS is proposed here. The BWR EOS [1] is a virial-type EOS; a Taylor expansion is applied to obtain its simplified formula. This makes it possible to use the least-squares method for curve fitting from accurate thermodynamic data, so the EOS parameters can be solved analytically with little computational power. The proposed method expresses the PVT relationship well near the critical point by providing plenty of neighbouring data points to the curve fit. The deviation from the original data decreases to 1/6 of that obtained with the Peng-Robinson EOS. This will help to evaluate ORC systems that use a new fluid as the working fluid. REFERENCES [1] K.E. Starling, Fluid Properties for Light Petroleum Systems, Gulf Publishing, 1973. IMPLEMENTATION OF A CONTROL SYSTEM ON AN EXPERIMENTAL ORC SETUP Marcio Verhulst, Andres Hernandez, Bruno Vanslambrouck, Martijn van den Broek, Michel De Paepe Abstract: As Organic Rankine Cycle (ORC) systems are designed by means of parametric calculations and simulations [1,2], tests should be performed to check whether the real setup can deliver the promised specifications. The research group has therefore built a test bench for ORC systems which is capable of delivering thermal oil, Therminol 66, at a maximum temperature of 350°C and with a heat exchange capacity of 250 kW. For the cold side, a cooling loop was built with a cooling capacity of 480 kW at an average coolant exchange temperature of 80°C and an outside air temperature of 20°C.
Because almost every ORC setup is customer-specific, the heat source simulator can provide a wide variety of standard load patterns, such as steady state with added distortion signals, block wave functions, etc. If required, custom load patterns can easily be uploaded and simulated through the LabVIEW [3] control application, which was designed by our own team. As the simulator is built to simulate even very dynamic heat sources, it is capable of making large heat supply jumps within seconds (positive and negative). This way, ORCs can be tested in both steady-state and dynamic behaviour, and control strategies can be designed and tested in a fast, easy manner. Considering the large amount of energy involved for a laboratory environment, and to ensure stable and safe operation, a Programmable Logic Controller (PLC, in this setup a Siemens S7 1200 series) takes care of the execution of the I/O from the LabVIEW control application and has built-in safety procedures. When designing new control strategies for an ORC application, not only the heat and cold sources but also the ORC system itself are controlled by this LabVIEW and PLC configuration, offering direct control over the various ORC components and ensuring optimal measurement data. Another benefit of this system is the continuous safety monitoring of the components and the complete system. Whilst designing a controller or control strategy, a variety of errors can occur during tests, not always keeping the ORC within its design limits. We have therefore implemented an algorithm in the PLC which automatically switches to a standard controller, bringing the ORC back to a steady and safe state if the application goes outside its design limits. This is a poster abstract. OPTIMIZING THE COOLING OF GEOTHERMALLY DRIVEN LOW TEMPERATURE ORC POWER PLANTS Stephanie Frick, Stefan Kranz, Ali Saadat Abstract: ORC power plants using low-temperature heat sources (approx. 100 to 200 °C) are characterized by relatively low conversion efficiencies and high amounts of waste heat. Since low-temperature ORC plants are typically located close to the heat source (e.g. waste heat from industrial processes or low-temperature geothermal resources), once-through cooling is typically not applicable, so wet cooling towers or air-cooled condensers have to be used. The net power output of low-temperature ORC power plants hence depends significantly on the condensation temperature as well as on the auxiliary power demand of the cooling equipment. The reason is that both the gross power output and the auxiliary power demand of the cooling equipment increase with decreasing condensing temperature. Since geothermally driven ORC power plants, in comparison to other ORC applications, are especially dependent on an improved plant design in order to compensate for the technical and financial effort of accessing a deep geothermal reservoir, the optimization of the cooling system is part of geothermal research. Experience from running geothermal power plants, as well as the planning of the GFZ geothermal research power plant, shows that optimization potential exists in the planning and operation of the cooling system. By means of numerical simulation studies in the software environment DYMOLA/Modelica, the influence of changing ambient and cold source conditions on the performance of low-temperature ORC power plants with different cooling system set-ups and operation strategies has been studied.
Based on the study results, the contribution will present and discuss different aspects of optimizing the design and operation of wet cooling towers and air-cooled condensers. Recommendations on how an improved cooling system design could be realized will also be addressed. CHOSEN PROBLEMS OF THE DYNAMIC HEAT SOURCE MODULATION IN ORC SYSTEMS Piotr Kolasiński, Zbigniew Gnutek Abstract: ORC systems used for waste heat recovery are mainly powered by industrial waste heat sources. Industrial waste heat sources have a set of specific characteristics resulting mainly from their nature. Sources with large thermal power and steady output characteristics can be used directly to power an ORC system. In industry, however, there is also a large group of dynamic waste heat sources. Such sources often have large thermal potential, but they appear only periodically. They occur in all industrial energy conversion processes, yet in practice they are not used. Their commonness and large potential make it interesting to consider them for powering ORC systems. For standard ORC systems, dynamic working conditions are inadvisable, and the design of a special ORC system that can adapt to fluctuations in the heat source characteristic would be very difficult and expensive, as special heat exchangers, an expander and a proper control system would have to be worked out. It is therefore interesting, in the authors' opinion, to carry out a theoretical analysis and to find possible methods of modulating the dynamic heat source characteristic. Applying a modulation method results in a steady heat source characteristic which is acceptable for powering a standard ORC system. This paper presents a theoretical study of different methods of dynamic heat source characteristic modulation. The main objectives of this work were therefore proposals of modulation methods and their comparative analysis. Moreover, proposals for the configuration of ORC power systems powered by dynamic heat sources are presented here, together with a set of parameters useful for assessing the quality of system operation. The analysis presented in this paper indicates that modulation of the dynamic characteristic can be an option for applying a standard ORC system to a dynamic heat source. APPLICATION AND OPTIMIZATION OF ORGANIC RANKINE CYCLE POWER PLANTS IN GAS COMPRESSOR STATIONS Venus Shaker, Mohammad Reza Jaffari Nasr, Seyed Masoud Haji Seyedi, Amir Mohammad Haddad Momeni Abstract: This study examines the performance of a gas compressor station combined with an organic Rankine cycle (ORC) to optimize energy efficiency. Two gas stations with different drivers, associated with a reciprocating compressor and an axial-type turbine, are considered. Waste heat can be recovered from the compressor's exhaust gas and from the compressed gas, which has to be cooled in order to push a higher volume of gas through the pipeline. A thermodynamic analysis of the exhaust gas is performed to determine whether adequate recoverable heat exists for use in a Rankine power cycle. Individual models are developed for each component through application of the first and second laws of thermodynamics. The effects of the working fluid type and of operating parameters, such as compressor pressure ratio, evaporator temperature and temperature difference in the evaporator, on the first- and second-law efficiencies and on the cycle exergy destruction rate are studied. Finally, the cycle is thermodynamically optimized to achieve the best efficiency.
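To make the kind of first-law analysis described in the abstract above concrete, the following is a minimal sketch of the state-point calculation for a simple subcritical ORC, written in Python against the open-source CoolProp library (presented later in these abstracts). The fluid, temperatures and component efficiencies are illustrative assumptions, not values from the paper.

```python
# Minimal first-law analysis of a simple subcritical ORC (sketch only).
# Assumed inputs: R245fa, 35 degC condensation, 130 degC evaporation,
# pump/turbine isentropic efficiencies of 0.75/0.80.
from CoolProp.CoolProp import PropsSI

fluid = 'R245fa'
T_cond, T_evap = 308.15, 403.15          # K
eta_pump, eta_turb = 0.75, 0.80

p_low  = PropsSI('P', 'T', T_cond, 'Q', 0, fluid)   # condensing pressure, Pa
p_high = PropsSI('P', 'T', T_evap, 'Q', 1, fluid)   # evaporating pressure, Pa

h1 = PropsSI('H', 'P', p_low, 'Q', 0, fluid)        # saturated liquid leaving condenser
s1 = PropsSI('S', 'P', p_low, 'Q', 0, fluid)
h2s = PropsSI('H', 'P', p_high, 'S', s1, fluid)     # isentropic pump outlet
h2  = h1 + (h2s - h1) / eta_pump                    # actual pump outlet
h3 = PropsSI('H', 'P', p_high, 'Q', 1, fluid)       # saturated vapour leaving evaporator
s3 = PropsSI('S', 'P', p_high, 'Q', 1, fluid)
h4s = PropsSI('H', 'P', p_low, 'S', s3, fluid)      # isentropic turbine outlet
h4  = h3 - eta_turb * (h3 - h4s)                    # actual turbine outlet

w_net = (h3 - h4) - (h2 - h1)                       # specific net work, J/kg
q_in  = h3 - h2                                     # specific heat input, J/kg
print(f"First-law (thermal) efficiency: {w_net / q_in:.3f}")
```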
THE EFFECT OF MASS FLOW RATE AND PINCH POINT ON ORGANIC RANKINE CYCLE USING LOW GRADE HEAT SOURCE Byung-Sik Park, Man-Ki Heo, Dong-Hyun Lee Abstract: Using Organic Rankine Cycle (ORC) power generators is one of the most efficient ways to generate electricity from relatively low-grade heat sources. For given heat source and heat sink conditions, the performance of ORC power generators depends strongly on the mass flow rate of the working fluid. In this study, simulations were conducted to optimize the cycle by varying the mass flow rate of the working fluid. DYNAMIC SIMULATION OF A SOLAR APPLICATION OF THE ORGANIC RANKINE CYCLE FOR SMALL-SCALE DISTRIBUTED GENERATION Melissa Ireland, Adriano Desideri, Matthew Orosz, Sylvain Quoilin, J.G. Brisson Abstract: Organic Rankine cycle (ORC) systems are gaining ground as a means of effectively providing sustainable energy. Coupling small-scale ORCs powered by scroll expander-generators with solar thermal collectors and storage can provide combined heat and power to underserved rural communities. Simulation of such systems is instrumental in optimizing the control strategy. Several authors have simulated solar thermal ORC systems in steady state [1,2], focused on either ORC or thermal storage dynamics in isolation [1,2], or simulated system dynamics assuming a fixed electrical load and a solar loop modeled as a single lumped component [3]. In this work, a model of the dynamics of the solar ORC system is developed to evaluate the impact of variable heat sources and sinks, thermal storage, and the variable loads associated with distributed generation. This model can then be used to assess control schemes that adjust operating conditions for diurnal to annual environmental variation. The Modelica programming language is used to capture the important dynamics of the system, mainly in the storage tank, solar collectors, plate heat exchangers and air-cooled condenser. Detailed steady-state component models are first developed in Engineering Equation Solver and serve as guides for the dynamic models. In particular, a detailed simulation of the fin-tube condenser with a hexagonal tube array provides a better understanding of the influence of the moving liquid boundary under various working conditions. Measurements on a pilot system at Eckerd College are currently underway to validate the steady-state models and ensure an appropriate baseline for the prospective dynamic optimization. Ultimately, the goal of this work is to identify "optimal" control schemes for a small-scale solar ORC. Operating conditions will be controlled through variation of the heat transfer fluid mass flow rate through the solar array, the speed of the ORC expanders, and the speed and number of operating ORC condenser fans. The control strategy will focus on maintaining the pressure ratio across the fixed-volume-ratio scroll expanders necessary to avoid both over- and under-expansion of the working fluid under variable ambient conditions. DEVELOPMENT OF A TOOL FOR THE SIMULTANEOUS OPTIMIZATION OF PROCESS AND WORKING FLUID OF ORC POWER SYSTEMS Akshay Hattiangadi, Tiemo Mathijssen, Matthias Lampe, David Pasquale, Joachim Gross, André Bardow, Piero Colonna Abstract: The selection of a working fluid is key to the design of an Organic Rankine Cycle (ORC) system, given the energy source, the sink and the power capacity. Up to now, the selection of the working fluid has mainly been guided by experience and the use of several system simulations.
In an attempt to approach the engineering problem in a more systematic way, a software tool has been developed which simultaneously optimizes the energy conversion process and selects the optimum working fluid for a given heat source. The program is based on a framework that uses a continuous-molecular targeting approach, which allows for an integrated working fluid and system design \cite{bardow2010continuous} \cite{lampe2012}. The steady-state process is simulated with an in-house program for thermodynamic analysis and optimization of energy conversion systems \cite{cycletempo}. The system model includes a simple model of a radial turbine. Given constrained operating conditions, the ORC system is optimized simultaneously with the molecular parameters defining the fluid properties according to the PC-SAFT equation of state \cite{gross2001perturbed}. The optimizer is provided by a state-of-the-art optimization suite \cite{nexus}. The working fluid is selected by comparing the optimized molecular parameters to those of real fluids. The procedure has been preliminarily tested using as an example the specifications of a waste heat recovery ORC turbogenerator for truck engines \cite{Lang2013AssessmentWasteHeat}. The choice of the working fluid is restricted at the moment to siloxanes. The preliminary design of the turbine governs the optimization; the turbine has been modeled by applying the methodology described by Whitfield and Baines \cite{whitfield1990}. Future work will be devoted to the implementation of refined component models and to the extension of the fluid selection to other organic fluid classes. RESEARCH ON THE POWER GENERATION TECHNOLOGY WHICH COMBINED A DIFFERENT LOW HEAT SOURCE Syunichi Mishima, Yasuyuki Ikegami Abstract: The Kalina cycle is a power generation system for low-temperature waste heat sources and is mainly applied to a single heat source. However, there is an extremely large potential of waste heat available simultaneously in different temperature regions. This research therefore examined ORC systems which can use two or more heat sources effectively in a single system. Four different systems for two waste heat sources were proposed. The four ORC systems were evaluated for waste water (70°C) and exhaust gas (300°C) as hot heat sources. As a result, the power output of the system in which the separator in the latter part of the cycle is warmed directly by the exhaust gas was about 40 percent higher than that of the conventional system. DESIGN OF ORGANIC RANKINE CYCLE SYSTEM AND RADIAL INFLOW TURBINE FOR RECOVERY OF REFINERY LOW-GRADE WASTE HEAT Hyung-Chul Jung, Susan Krumdieck Abstract: In the present study, the design and analysis of an organic Rankine cycle (ORC) system and a radial inflow turbine are presented for a 250 kW pilot binary cycle power plant to recapture low-grade waste heat released from a petroleum refining process. A total of 12 pure fluids and mixtures are investigated. The refinery heat source is a kerosene liquid stream with a flow rate of approximately 6000 ton/day at a temperature range of 105-140°C. The thermodynamic analysis of the ORC is first performed to determine its operating conditions, which are then used as requirements for the preliminary aerodynamic design of a radial turbine for the system. The aerodynamic analysis is based on both the dimensionless parameters, such as the specific speed and the specific diameter, and the stage loading and flow coefficients.
The numerical turbine model developed is validated against experimental data from the published literature. The turbine stage efficiency is estimated by means of rotor flow loss models. Results show that the required kerosene stream flow rate and the turbine size vary significantly with the working fluid: the flow rates range from about 3070 to 3730 t/d, and the rotor blade tip diameter from about 0.20 to 0.43 m. Overall, lower flow rates and smaller sizes are required for the mixtures than for the pure fluids. The turbine design results obey the geometric, flow, structural and vibration design criteria proposed by researchers. The determined geometric and aerodynamic parameters of the turbine stage are considered beneficial for a detailed analysis of the turbine. POTENTIALS FOR INCREASED COST EFFICIENCY OF MODIFIED RANKINE CYCLE PLANTS USING TWO-PHASE EXPANSION FOR POWER GENERATION FROM LOW TEMPERATURES Henrik Ohman, Per Lundqvist Abstract: The task of reducing global carbon dioxide emissions leads to a need to reduce the average CO2 emission of power generation. A more energy-efficient mix of power generation on a national or regional level will require the re-use of waste heat and the use of primary, low-temperature heat for power generation purposes. Modified Rankine Cycles (MRCs), such as Organic Rankine Cycles, Trilateral Flash Cycles and Kalina Cycles, are types of Low Temperature Power Cycles (LTPCs) offering a large degree of freedom in finding technical solutions for such power generation. Theoretical understanding of MRCs advances rapidly, though practical achievements in the field show very humble improvements at first glance. The cost of applying the new knowledge in real applications seems to be an important reason for the discrepancy. As LTPCs generally are small-scale power plants, less than 3 MWe, an obvious cost driver is size itself. However, another strong reason for the high cost level is the diversity of process fluids required and consequently the lack of standardization and industrialization. Use of supercritical power cycle technology tends to cause the same dilemma. New, upcoming regulations prohibiting the use of several process fluids could also lead to remedies increasing plant cost. By using two-phase turbine inlet conditions in MRCs, the need to use many different process fluids is believed to be reduced, allowing simpler and more cost-efficient LTPCs through simplified matching of heat source temperature characteristics. This article explains the corresponding opportunities. Different sample applications for LTPCs have been defined in order to simulate the different power generation opportunities using fundamentally different process fluids in the particular applications. The methodology is suitable for optimization of specific cost, net power output or efficiency. The results indicate a potential to design LTPCs with good efficiency over significantly wider thermal conditions than previously, without changing the fluid. It is concluded that cost optimization of LTPCs is possible through the use of two-phase turbine inlet fluid conditions, allowing cheaper process fluids and standardization of the power plant architecture. Sensitivity to the choice of fluid is reduced to 10% in cost and <5% in FractionOfCarnot and net power output when optimization of two-phase turbine inlet conditions is allowed. A consequence of these conclusions is that LTPCs can be made more commercially attractive and thereby contribute to decreasing the average carbon emissions from power generation.
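As an aside, the "FractionOfCarnot" figure of merit quoted above is, in its simplest reading, the ratio of the achieved thermal efficiency to the Carnot efficiency between the source and sink temperatures; the paper's exact definition (e.g. for finite heat sources) may differ. A minimal sketch with assumed numbers:

```python
# Hedged illustration of a Fraction-of-Carnot calculation; the temperatures
# and measured efficiency below are assumed examples, not the paper's data.
T_hot, T_cold = 363.15, 293.15       # K, assumed source and sink temperatures
eta_actual = 0.08                    # assumed achieved cycle efficiency

eta_carnot = 1.0 - T_cold / T_hot    # Carnot limit between the two reservoirs
print(f"Carnot limit: {eta_carnot:.3f}, "
      f"FractionOfCarnot: {eta_actual / eta_carnot:.2f}")
```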
SIMULATIONS OF COMPRESSIBLE FLOWS IN THE LIQUID-VAPOUR CRITICAL POINT REGION USING NON-CLASSICAL SCALING LAWS Tiemo Mathijssen, Alberto Guardone, Piero Colonna Abstract: As is well known, thermodynamic models based on analytic equations of state fail to reproduce the singular behaviour at the vapour-liquid critical point. For example, cubic equations of state provide inaccurate values of all properties close to the critical point [1]. Multi-parameter equations of state provide accurate estimations of the primary properties thanks to the inclusion of so-called critical terms in the functional form, but derived quantities are affected by the inherently incorrect functional form, and the departure from physical behaviour becomes apparent especially if first- and second-order derivatives of primary properties are considered [2]. Balfour and collaborators formulated an equation of state using the method of non-classical scaling that is capable of accurately predicting the thermodynamic properties at and in the close proximity of the critical point [3]. In order to simulate compressible flows in the vicinity of the critical point, we implemented the non-classical scaling thermodynamic model in our in-house thermodynamic library [4]. A comparison is reported between the values of relevant primary and derived thermodynamic properties for CO2 obtained with the scaling-laws model, the Span-Wagner equation of state [5] and measurement data close to critical conditions, to assess the predictive capabilities of the non-classical scaling model. In particular, the divergence of the fundamental derivative of gasdynamics Γ to minus infinity when approaching the critical point from the two-phase region is predicted by the non-classical scaling model [2]. A negative value of the fundamental derivative of gasdynamics Γ heralds possible non-classical gasdynamic behaviour in the two-phase critical region. To investigate these phenomena, our in-house real-gas solver, coupled to the thermodynamic library, is used. In particular, numerical simulations of the formation and propagation of non-classical two-phase rarefaction shock waves are carried out. The computed shock velocity and strength are assessed against the exact Rankine-Hugoniot theory. Non-classical gasdynamic behaviour at the critical point is predicted to impact the design of fluid devices operating in the close proximity of the critical region, such as expanders for advanced Organic Rankine Cycle power systems [6]. [1] M. M. Abbott, "Cubic equations of state", AIChE J., vol. 19, p. 596, 1973. [2] N. Nannan, A. Guardone, and P. Colonna, "On the fundamental derivative of gas dynamics in the vapor-liquid critical region of single-component typical fluids", Fluid Phase Equilibria, vol. 337, pp. 259-273, 2013. [3] F. W. Balfour, J. V. Sengers, M. R. Moldover, and J. M. H. L. Sengers, "Universality, revisions and corrections to scaling in fluids", Phys. Lett. A, vol. 65, pp. 223-225, 1978. [4] P. Colonna, T. P. van der Stelt, and A. Guardone, "FluidProp (Version 3.0): A program for the estimation of thermophysical properties of fluids", http://www.fluidprop.com/, 2010. A program since 2004. [5] R. Span and W. Wagner, "A new equation of state for carbon dioxide covering the fluid region from the triple-point temperature to 1100 K at pressures up to 800 MPa", J. Phys. Chem. Ref. Data, vol. 25, no. 6, pp. 1509-1596, 1996. [6] E. Casati, A. Galli, and P. Colonna, "Thermal energy storage for solar powered organic Rankine cycle engines", Solar Energy, 2013.
Submitted for publication. A 10KW SOLAR POWER PLANT FOR RURAL ELECTRIFICATION Remi Daccord, Vincent Rieu Abstract: The MICROSOL project, initiated by Schneider Electric, aims at developing a 10 kW solar power plant for rural electrification, in line with projects led by the University of Liege [1], EPFL [2] and the STG NGO [3]. One of the two solutions chosen by Schneider Electric is based on the know-how of two French companies, Exosun and Exoès. Their system is based on parabolic trough concentrators, pressurized water storage, an R245fa Rankine cycle, a scroll turbine, a dry cooler and a water recycling module. The role of Exoès is to convert the thermal power available from the solar plant into electricity, which is then conditioned by Schneider Electric's power electronics. This paper describes the design of the power plant and the optimization of its components, followed by test results and control optimization. GENERAL OVERVIEW The power plant specifications require 24-hour electricity production: 10 kW during the day and 3 kW at night. The source of energy is water at 180°C and 16 bar, produced by a 600 m² field of parabolic troughs and stored in a 20 m³ water tank. The heat is transferred through plate heat exchangers to an R245fa Rankine cycle to produce vapor at 10 to 30 bar. The expanders are two scroll turbines of 325 cc and 108 cc operating at a volumetric expansion ratio of less than 3. A cold loop condenses the vapor and evacuates the waste heat through a dry cooler. The power output of the plant is controlled by the power demand on the grid. According to the available hot and cold temperatures, the feed pumps must quickly adjust the inlet pressure to reach the required power output, while a supercapacitor and a few batteries instantly supply the difference. Thanks to a typical load curve and weather files coupled to dynamic solar modeling, the Rankine cycle has been designed to produce 10 kW during the day with a high outside temperature and a high hot source temperature (50% of operating time). During the night, lower temperatures do not enable the expander to produce more than a third of its maximal load, matching the required 3 kW (25% of operating time). The cycle will not be able to produce the required power when it is too hot or if the hot source is too cold; a load shedding system is thus foreseen. COMPONENTS OPTIMIZATION We chose components providing the best efficiency, and all auxiliaries have been designed to reduce their consumption. In the cold loop, only brushless motors drive the pumps and fans, which enables a higher efficiency than conventional asynchronous motors over a wide range of speeds. On top of that, the dry cooler is large enough to cool down the system even at an extreme 45°C ambient temperature and has low pressure losses, so the fans run slowly most of the time, avoiding the cubic growth of their consumption with speed. We swapped one turbine working around the clock for two parallel turbines, each with its own reserved power range. In both cases, the same 325 cc turbine runs during the day so that the power plant can produce 10 kWe. During the night, we chose to stop it and start the smaller 108 cc turbine to obtain better efficiency and avoid emptying the storage quickly. This study concludes that this expander optimization leads to characteristics that are far more favourable for reaching competitive electricity costs for the power plant. TEST AND CONTROL After a year of modeling and prototyping, lab tests by Exoès began in early 2013.
The expander isentropic efficiency and filling factor can be compared to the state of the art, and a cycle efficiency of the power plant can be determined. In this paper, we describe both the difficulties faced in starting and running the Rankine cycle and the different ways, based on the experiments, to reach better power plant performance. This project continues with field tests in mid-2013 near Marseille, France, conducted by CEA (the French Alternative Energies and Atomic Energy Commission). A second test phase will then take place in Africa in 2015. LITERATURE [1] S. Declaye, S. Quoilin, V. Lemort, "Design and experimental investigation of a small-scale organic Rankine cycle using a scroll expander", 20th International Compressor Engineering Conference, Purdue, 2010. [2] M. Kane, "Integration and optimization of thermoeconomic & environomics hybrid solar thermal power plants", PhD thesis, EPFL, 2002. [3] Solar Turbine Group (STG) NGO program in Lesotho, 2004. SYSTEM AND COMPONENT MODELLING FOR AN EFFICIENT 10kWe ORC UTILISING A TURBO-EXPANDER Martin White, Abdulnaser Sayma Abstract: Despite increasing interest in ORC over recent years, small-scale systems have yet to make their mark due to high costs and the lack of an efficient, ORC-specific expander. However, with careful component selection and design, an efficient and economical system could see widespread use within applications such as solar power and waste heat recovery. For small outputs, volume expanders such as screw and scroll machines have typically been preferred over turbo-expanders due to their lower rotational speeds and ease of conversion from compression machines. However, for a 10 kWe output, screw devices experience high leakage flows, whilst scroll machines remain untested and are limited in efficiency. An efficient, well-designed radial expander could therefore bridge the current gap between the outputs of scroll- and screw-based cycles. This paper describes the development of a steady-state ORC sizing and optimisation tool integrated with real fluid properties. The program, implemented in FORTRAN, advances on current models by combining detailed component models, including off-design performance, with multi-objective optimisation. For a pre-defined set of components, the objective function is the maximisation of work output, which results in an optimal solution coupling component and system performance. Alternatively, the model can also be used for component sizing through an objective function which couples performance with system complexity. A modular modelling approach allows the interchange of different objective functions in addition to different component models. An initial case study is explored, and R-245fa and R-123 are found to be the most suitable working fluids for an experimental system. This selection is based on thermodynamic, environmental and design considerations, in addition to the practicalities of the available lab space. These results will be used to size the rig and construct a prototype expander. After model validation, more novel working fluids will be explored. MEETING THE CHALLENGE OF RANKINE CYCLE BASED WASTE HEAT RECOVERY SIMULATION IN AUTOMOTIVE APPLICATIONS Stephen Streater, Zhiqiu Pan Abstract: The ability to model small-scale vapour cycle systems quickly and accurately is of increasing importance to virtually all sectors of the automotive industry, and indeed to other industries where the internal combustion engine is widely utilised.
This is especially the case for heavy commercial vehicles, as well as for small-scale power generation applications, where duty cycles include prolonged periods at high engine load. The relatively high capital cost and long service life of machinery used in these applications make them particularly suited to maximising the fuel economy benefits associated with Waste Heat Recovery (WHR). Concepts for automotive WHR tend to focus on systems that use water-steam and/or Organic Rankine Cycle (ORC) fluids to recover heat from the vehicle's exhaust, EGR cooler or liquid cooling system. These small-scale Rankine cycle systems are aimed at recovering at least some of the 60-70% of fuel energy that is normally lost to the surroundings. The recovered energy is used to heat the working fluid to a superheated vapour, which is then expanded using either a turbine or a piston machine to extract useful work. This is then returned to the vehicle powertrain as either mechanical or electrical energy. The study shows how Flowmaster has extended its existing vapour cycle modelling capabilities, originally developed for water-steam systems in the power generation industry, to produce an ORC capability for automotive Vehicle Thermal Management System (VTMS) engineers. Important new numerical models have been developed to represent the key components used in the increasingly important application of automotive WHR simulation. These single-component models can be used to build Flowmaster system-level networks and thereby allow the complete Rankine cycle to be simulated. The resulting system-level model uses a recently developed solver based on energy conservation at every network node, thus allowing the behaviour of the entire WHR system to be predicted. In addition to the conventional water-steam cycle, the modelling approach has been successfully applied to two of the more commonly used ORC fluids to better reflect current small-scale WHR concepts. Proper calibration of the component numerical models produces an excellent correlation with measured test data, thus validating their use for the design, layout and development of small-scale WHR systems. The study concludes that this newly developed approach to modelling automotive WHR can effectively meet the new challenges facing VTMS engineers at this time of increased powertrain electrification and engine downsizing. HIGH EFFICIENCY LOW TEMPERATURE ORC SYSTEM Errol Yuksek, Parsa Mirmobin Abstract: Surveying the global heat sources available for the generation of electric power, both in industrial applications and from natural sources, it is clear that the vast majority of such sources are at the lower end of the temperature spectrum (180°F to 220°F). Access Energy has developed a high-efficiency, low-temperature organic Rankine cycle (ORC) system to specifically address these largely untapped heat sources available for power production. Access Energy has leveraged its existing ORC products and technologies to develop this extra-low-temperature (XLT) ORC system. At the heart of the new design is the integrated power module (IPM): a hermetically sealed, high-speed expander coupled to a permanent magnet (PM) generator supported by magnetic bearings. Power from the IPM is fed to an advanced power converter that converts variable-frequency, variable-voltage power to constant-frequency, constant-voltage grid-quality output power with efficiencies in excess of 94%.
These key features combine to create a robust, high-efficiency, maintenance-free power generator. In order to maintain high system efficiency across the wide source temperature range, the turbine is designed to operate across a large range of pressure ratios (2:1 to 8:1). This significant goal has been realized through advanced real-gas CFD analysis of the entire IPM assembly, together with results from the successfully fielded Thermapower™ 125MT system. The design of the XLT ORC system is most heavily influenced by the heat source conditions, the cooling source and ambient conditions, the heat exchanger design, the working fluid selection and the turbo-generator design. Results of the turbine and CFD analysis were fed into a high-fidelity ASPEN Plus plant model. The ASPEN model incorporates real fluid, pump, turbine and heat exchanger characteristics. Together with advanced solver algorithms, a highly accurate performance prediction for the entire ORC system has been achieved. COOLPROP: AN OPEN-SOURCE REFERENCE-QUALITY THERMOPHYSICAL PROPERTY LIBRARY Ian Bell, Sylvain Quoilin, Jorrit Wronski, Vincent Lemort Abstract: Modeling and simulation of thermodynamic cycles requires access to the thermodynamic and transport properties of the working fluids. This is especially true in the case of Organic Rankine Cycles (ORC), for which the properties of organic fluids are not as easily available as those of water (e.g. when simulating traditional steam cycles) or air (e.g. when simulating gas turbines). Therefore, libraries of thermodynamic and transport properties based on high-accuracy equations of state are needed. This work presents a new open-source and computationally efficient thermodynamic property library named CoolProp. This library has been successfully tested for the simulation of refrigeration and ORC systems in steady-state as well as in dynamic models. For all manner of analysis, it is useful to have access to thermodynamic and transport properties for fluids; in truth, it is not possible to conduct research in thermal sciences without access to accurate thermophysical properties. It is for that reason that a library of thermodynamic and transport properties has been developed which covers 86 pure and pseudo-pure fluids, and 21 brines and incompressible liquids. The working fluids available in CoolProp include all the most significant Organic Rankine Cycle working fluids, including R245fa, the siloxanes (MM, MDM, MD2M, MD3M, MD4M, D4, D5, D6), water, and many others. Wrappers have been developed that allow the use of CoolProp with Modelica, MATLAB, Python, C#, Octave, Microsoft Excel, LabVIEW, and EES. CoolProp is cross-platform and can be used on Linux/Unix, Mac OS X and Microsoft Windows. For Organic Rankine Cycles, the ability to capture the transient behavior of the system is very important, and it is here that the routines developed in CoolProp excel. Dynamic modeling involves numerous calls to the thermodynamic properties with p and h as input variables, both during the initialization phase and during the integration phase. Advanced lookup table methods (based on the Tabular Taylor Series Expansion) have been developed that allow for computationally efficient evaluation of the thermophysical properties.
{\bf HELMHOLTZ ENERGY BASED EQUATION OF STATE} {\bf Core formulation} All the working fluids that are implemented in CoolProp are based on Helmholtz energy equations of state. The total non-dimensionalized Helmholtz energy can be given as the sum of two components, the residual and ideal-gas contributions to the Helmholtz energy. Thus the non-dimensionalized Helmholtz energy can be given by $$\alpha = \alpha^0+\alpha^r.$$ The elegance of this formulation is that all other thermodynamic properties can be obtained through analytic derivatives of the terms $\alpha^0$ and $\alpha^r$. For instance, the other fundamental thermodynamic properties can be obtained from $$\frac{p}{\rho RT}=1+\delta \left( \frac{\partial \alpha^r}{\partial \delta} \right)_{\tau}$$ $$\frac{h}{RT}=\tau\left[\left( \frac{\partial \alpha^0}{\partial \tau} \right)_{\delta} + \left( \frac{\partial \alpha^r}{\partial \tau} \right)_{\delta} \right]+\delta \left( \frac{\partial \alpha^r}{\partial \delta} \right)_{\tau}+1$$ $$\frac{s}{R}=\tau\left[\left( \frac{\partial \alpha^0}{\partial \tau} \right)_{\delta} + \left( \frac{\partial \alpha^r}{\partial \tau} \right)_{\delta} \right]-\alpha^0-\alpha^r$$ where $\delta=\rho/\rho_c$ and $\tau=T_c/T$, with $\rho_c$ the critical density and $T_c$ the critical temperature. The exact form of the non-dimensional Helmholtz energy terms is fluid dependent, but a canonical example is the propane equation of state \citep{Lemmon-2009}. Analytic derivatives of $\alpha^0$ and $\alpha^r$ with respect to $\tau$ and $\delta$ are presented in the paper of Lemmon \citeyearpar{Lemmon-2009}. Additionally, other thermodynamic parameters (speed of sound, specific heats, etc.) can be obtained analytically. As the equations of state use temperature and density as the fundamental properties, if other inputs are desired, it is necessary to employ numerical solvers to obtain temperature and density for the given set of inputs. {\bf Saturation curve} In the two-phase region, as well as along the saturation curves, it is necessary to evaluate the phase equilibrium between the saturated liquid and the saturated vapor. For a pure fluid, it is known that at equilibrium the temperatures, pressures and Gibbs free energies of the two phases are the same. A number of numerical methods can be used to carry out the necessary equilibrium calculations, but the algorithm implemented in CoolProp is that of Akasaka \citeyearpar{Akasaka-2008}. When started from the values given by the ancillary equations, this solver generally converges for temperatures up to about 0.1 K below the critical temperature. In the near vicinity of the critical point, the behavior of the saturation solvers becomes significantly less robust, even with good guess values for the saturation densities. As a result, it is necessary to employ other methods to extend the saturation curves all the way up to the critical temperature. In CoolProp, the saturation solver of Akasaka is used to get as close to the critical temperature as possible; beyond that point, a spline curve is used for the saturation curve, where the value and derivative constraints can be obtained directly. This yields a smooth ($C_1$ continuous) transition from the EOS to the critical-region spline.
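As a usage illustration of the formulation above, the short Python sketch below calls CoolProp's high-level interface, which derives each property analytically from the $\alpha^0$ and $\alpha^r$ terms; the call signature follows the published CoolProp API, while the chosen state points are arbitrary examples.

```python
from CoolProp.CoolProp import PropsSI

# Single-phase state: density, enthalpy and entropy of R245fa at 400 K, 10 bar
# (a superheated-vapour state), all obtained from the Helmholtz EOS derivatives.
rho = PropsSI('D', 'T', 400.0, 'P', 10e5, 'R245fa')   # kg/m^3
h   = PropsSI('H', 'T', 400.0, 'P', 10e5, 'R245fa')   # J/kg
s   = PropsSI('S', 'T', 400.0, 'P', 10e5, 'R245fa')   # J/kg/K

# Saturation state: enthalpy of saturated vapour (quality Q = 1) at 3 bar,
# which exercises the saturation solver discussed above.
h_g = PropsSI('H', 'P', 3e5, 'Q', 1, 'R245fa')
print(rho, h, s, h_g)
```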
{\bf TABULAR TAYLOR SERIES EXPANSION INTERPOLATION} While the evaluation of the thermodynamic properties using CoolProp has been optimized in order to achieve computational speeds better than the state-of-the-art thermophysical property databases \cite{Lemmon2010}, the evaluation of thermodynamic properties using the full equation of state is too slow for use in dynamic simulations. For that reason, the tabular Taylor series expansion (TTSE) method has been extended to all the fluids in the CoolProp database. This method was originally proposed for the evaluation of the thermodynamic properties of water \citep{Miyagawa-2001}, but it works just as well for other fluids. The TTSE method is based on a two-dimensional Taylor expansion around each point in a grid of tabulated data points. Thus, the expansion of temperature in terms of pressure and enthalpy can be expressed as $$T = T_{i,j}+\Delta h\left(\frac{\partial T}{\partial h}\right)_{p}+\Delta p\left(\frac{\partial T}{\partial p}\right)_{h}+\frac{\Delta h^2}{2}\left(\frac{\partial^2 T}{\partial h^2}\right)_{p}+\frac{\Delta p^2}{2}\left(\frac{\partial^2T}{\partial p^2}\right)_{h}+\Delta h\Delta p\left(\frac{\partial^2T}{\partial p\partial h}\right)$$ where each of the partial derivatives is evaluated at the $i,j$ grid point. Thus, if the values of $\Delta h = h-h_i$ and $\Delta p = p-p_j$ are known, it is possible to evaluate the dependent variable ($T$ in this case); a minimal numerical sketch of this expansion is given below. The same form of expansion can be carried out with entropy or density as the dependent variable. Pressure and enthalpy are used as the independent variables as they are one of the most computationally expensive pairs of input values and are most commonly used as the state variables in dynamic modeling. In principle this tabular method can be used with any pair of independent variables. Furthermore, a similar methodology can be employed for the saturation properties, which can be evaluated based on a tabular one-dimensional Taylor series with pressure as the independent variable. As with the single-phase tables, pressure is used as the independent variable because it is the input that requires the most computational effort in the two-phase region.
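The following self-contained Python sketch illustrates the two-dimensional, second-order expansion above. A toy analytic surface $T(h,p)$ stands in for the real property data so that the expansion can be checked against the exact value; CoolProp's actual implementation stores the tabulated values and derivatives per grid node.

```python
# Toy demonstration of the TTSE expansion of T(h, p) around a grid node (h_i, p_j).
def T_exact(h, p):
    # Arbitrary smooth stand-in for the property surface (not real fluid data).
    return 300.0 + 1e-3 * h + 2e-5 * p + 1e-9 * h * p

h_i, p_j = 4.0e5, 1.0e6            # "tabulated" grid node
T_ij     = T_exact(h_i, p_j)       # value stored at the node
dTdh     = 1e-3 + 1e-9 * p_j       # (dT/dh)_p at the node
dTdp     = 2e-5 + 1e-9 * h_i       # (dT/dp)_h at the node
d2Tdh2   = 0.0                     # second derivatives of the toy surface
d2Tdp2   = 0.0
d2Tdhdp  = 1e-9                    # cross derivative

def T_ttse(h, p):
    dh, dp = h - h_i, p - p_j
    return (T_ij + dh * dTdh + dp * dTdp
            + 0.5 * dh**2 * d2Tdh2 + 0.5 * dp**2 * d2Tdp2
            + dh * dp * d2Tdhdp)

h, p = 4.2e5, 1.1e6
# The toy surface has no terms above second order, so the expansion is exact here.
print(T_ttse(h, p), T_exact(h, p))
```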
{\bf Speed comparison} For pressure and enthalpy as inputs, the TTSE method is extremely fast. For the same configuration in Modelica (a dynamic modeling programming language), the computational time of CoolProp using the TTSE method is 8.1 times less than that when using the full equation of state. Figure \ref{fig:speeds} compares several different thermophysical property packages in Modelica on the same configuration. Transport properties (viscosity and thermal conductivity) are not calculated. This benchmark example can be found in the CoolProp2Modelica package for Modelica. For the sake of the benchmark, the thermodynamic properties are called 20,000 times along an isobar with the various libraries, spanning the three regions (sub-cooled, two-phase, superheated); one grid point corresponds to one call to the library. The Modelica library CoolProp2Modelica is derived from the ExternalMedia library developed by Casella \citep{Casella-2008}. A speed comparison on a complete ORC model was also performed. The selected ORC model is the one proposed by Quoilin \citeyearpar{Quoilin-2011}, comprising two discretized heat exchangers (20 cells) and pump/expander models based on efficiency curves, plus a control system with a variable set-point temperature based on two PI controllers. The simulated period is 1669 seconds; it is solved in 142 seconds with TILMedia, 91 seconds with CoolProp and 13.5 seconds with the CoolProp/TTSE method. These results should be considered representative, but they are not one-to-one comparisons due to the vagaries of the integrator in Dymola. MODELLING OF SCROLL MACHINES: GEOMETRIC, THERMODYNAMICS AND CFD METHODS Mirko Morini, Claudio Pavan, Michele Pinelli, Eva Romito, Alessio Suman Abstract: The scroll fluid machine has gained popularity since the 1970s as a compressor in air conditioning and refrigeration applications. Its main advantages are the small number of moving parts and its reduced noise and vibration. Recently, this technology has gained renewed interest due to its potential to be adapted for use as an expander in micro-ORC systems. The ever-increasing demand for higher efficiency in machine operation (e.g. eco-design) has led to the need for designers to thoroughly investigate the kinematic and thermodynamic behavior of these machines by means of geometric, thermodynamic and, very recently, CFD methods. Understanding the relationship between the scroll spiral profiles, and therefore the evolution of the scroll pockets, and the overall machine performance, both energetic and mechanical, is the first step towards understanding scroll machine behavior. In [1], a method for the design of spiral profiles that enhances the performance of the whole refrigeration plant is presented. In [2], particular attention is paid to the stress to which the scroll profiles are subjected, as a function of the geometry of the pockets, in order to minimize the thickness of the spiral while preserving the mechanical integrity of the scroll. Scroll machine performance evaluation as a function of spiral geometry can be performed by means of thermodynamic models, taking into consideration the volumetric loss due to leakage flows [3,4,5]. The use of CFD methods for the evaluation of scroll machine performance is not widespread in the literature. In [6], an analysis oriented to the evaluation of the pressure distribution in the pockets and of the leakages through the flank gap is presented. In this paper, geometric, thermodynamic and CFD methods for the modeling of scroll machines are presented, and two geometric models for the design of the scroll spiral profiles are compared. The two methods are then compared by evaluating overall performance by means of a simplified thermodynamic model. Finally, a CFD transient Dynamic Mesh (DM) strategy is implemented, and a sensitivity analysis in terms of grid, boundary conditions and time step is performed. A SIMPLE APPROACH TO HEAT EXCHANGER SIZING OPTIMISATION BY MEANS OF ENTROPY GENERATION MINIMISATION Jarosław Mikielewicz, Dariusz Mikielewicz, Jan Wajs Abstract: In the paper, an attempt is made to find a method for optimizing the microtube diameter with respect to optimal thermal-hydraulic conditions in single-phase shell-and-tube heat exchangers. The approach is based on a consideration of the pumping power under the condition of maximum heat transfer by the heat exchanger tube system. In the optimization method, the tube diameter is first specified and then the appropriate calculations are executed, showing that, from the point of view of heat transfer, the smaller the tube diameter the better the heat transfer, although at the expense of a higher pressure drop; the sketch below illustrates this trade-off.
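The heat-transfer/pressure-drop trade-off described in this abstract can be illustrated with textbook single-phase correlations. The sketch below uses the Dittus-Boelter and Blasius correlations with rough water-like properties for a fixed per-tube mass flow; these are generic assumptions for illustration, not the optimization method of the paper.

```python
# For a fixed mass flow per tube, smaller diameters raise the heat transfer
# coefficient but raise the pressure drop much faster (sketch with assumed data).
from math import pi

mdot, L = 0.02, 1.0                          # kg/s per tube, tube length in m
rho, mu, k, Pr = 998.0, 1.0e-3, 0.6, 7.0     # rough water-like properties

for D in (0.002, 0.004, 0.008):              # tube inner diameters, m
    Re = 4 * mdot / (pi * D * mu)            # Reynolds number in a round tube
    Nu = 0.023 * Re**0.8 * Pr**0.4           # Dittus-Boelter (turbulent flow assumed)
    h  = Nu * k / D                          # heat transfer coefficient, W/m^2/K
    u  = mdot / (rho * pi * D**2 / 4)        # mean velocity, m/s
    f  = 0.316 * Re**-0.25                   # Blasius friction factor
    dp = f * (L / D) * rho * u**2 / 2        # Darcy-Weisbach pressure drop, Pa
    print(f"D={D*1e3:.0f} mm: Re={Re:.0f}, h={h:.0f} W/m2K, dp={dp/1e3:.1f} kPa")
```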
INCREASE OF ORC EFFICIENCY BY MAXIMUM HEAT USE FROM A LOW TEMPERATURE SOURCE Dariusz Mikielewicz, Jarosław Mikielewicz Abstract: In the present study, the cooperation of the ORC with a heat source available as a single-phase or phase-changing fluid is considered. Analytical heat balance models have been developed, which enable simple calculation of the heating fluid temperature variation as well as of the ratio of the flow rates of the heating and working fluids in the ORC. The developed analytical expressions also enable calculation of the outlet temperature of the heating fluid. DESIGN AND TESTING OF A RADIAL-AXIAL MICROTURBINE Jarosław Mikielewicz, Dariusz Mikielewicz, Jan Wajs, Krzysztof Kosowski, Robert Stepien Abstract: The paper presents a new design of a radial-axial microturbine of 3-4 kW capacity for operation with ethanol as the working fluid. PROPOSED ORC SYSTEM WITH TWO STAGE SCREW EXPANDER FOR DUAL HIGH & LOW TEMPERATURE HEAT SOURCE OPERATION Hideharu Yanagi, Michael Khong, Gayle Tan Abstract: Currently available ORCs on the market are in the MW power range, and very few are actually made at the kW scale. For ocean-going ship applications, a moderate scale of ORC is in demand. Currently available ORC machines are generally composed of a turbo expander or scroll expander (1-10 kWe) with a shell-and-tube evaporator and condenser. They are not suitable for installation on board ships under the accelerating loads of pitching and rolling in marine conditions: turbo expanders are considered inappropriate, and evaporators with flooded working fluid are likewise unsuitable. The employment of a screw expander is a key issue in an ORC unit for marine use. The authors plan to develop a modular 200 kW ORC unit which recovers heat from the exhaust of both the main engine and the auxiliary engine of a marine vessel at about 250°C, releasing it at 180°C; its electric output of 200 kW can replace the use of an auxiliary generator, or about 500,000 litres of diesel annually. For a ship with a 1 MW auxiliary generator, this represents a 20% increase in electrical efficiency, or a potential fuel saving of 20%. This paper presents a proposed 20 kW bench-scale test system with a two-stage screw expander for dual-mode operation under a high- or low-temperature heat source. Fig. 1 shows the ORC system using SES36 as the working fluid, with a cycle efficiency of 18% in high-temperature (high-pressure) operation at an evaporating temperature of 170°C, a first-stage inlet pressure of 25.07 bar, a second-stage expander outlet pressure of 1.12 bar and a condensing temperature of 35.64°C. The figure also shows a method for operating an ORC system with a high-pressure expander and a low-pressure expander. Therein, the ORC system comprises a bypass line that extends, in the flow direction of the working fluid, from a branching point before the high-pressure expander to the low-pressure expander. In the high-temperature operating mode, the bypass line is closed and the inlet control valve of the first-stage expander is opened. In the low-temperature operating mode, the bypass line is opened, the inlet control valve of the first-stage expander is closed, and the system operates on the second-stage expander only. STUDIES ON THE FACTORS INFLUENCING THE DESIGN OF A SMALL ORC TURBINE Abdul Nassar, Leonid Moroz, Avinash S.
Ravi, Oleg M Guryev Abstract: The Organic Rankine Cycle (ORC) is named for its use of an organic, high-molecular-mass fluid with a liquid-vapor phase change, or boiling point, occurring at a lower temperature than the water-steam phase change. Because it can use low-temperature heat sources, the applications of the organic Rankine cycle are numerous, such as geothermal power generation, industrial waste heat recovery and power generation using solar troughs. Though the cycle efficiencies of ORCs are lower, they are still a viable choice when the heat source is of low grade, and when used within a larger cycle they complement the overall cycle efficiency by generating power from waste heat. The turbine or expander, being the major piece of equipment, plays a vital role in increasing the overall cycle efficiency. The low-grade heat source and the use of organic fluids make the flow path design interesting and challenging, and the designer is always at a crossroads in deciding whether to choose a radial or an axial flow path for the turbine. This project involves the preliminary and detailed design of a radial and an axial turbine for given specifications, and details the issues related to sizing, performance, limitations and viability of each of these machines for the given application. In the detailed design, many geometrical parameters are selected for optimization, and the influence of the different geometrical parameters on the performance is discussed in detail. This paper describes the procedure for developing an ORC turbine from the conceptual stage to detailed flow-path design and the development of 3D blades. SMALL-SCALE COMBINED HEAT AND POWER GENERATION USING SEMI-PERMEABLE MEMBRANE Jing Li, Gang Pei, Jie Ji Abstract: A novel CHP system using a semi-permeable membrane is proposed. The fundamentals are illustrated, mathematical models are built, and some results are presented. THE APPLICATION OF ROSENBLAD HEAT EXCHANGERS IN THE ORC DOMESTIC SYSTEMS Piotr Kolasiński, Zbigniew Rogala Abstract: One of the problems encountered while designing ORC systems is the proper selection of the heat exchangers, which depends on many factors. Among these factors are the characteristics of the heat source supplying the system, the required parameters, and the type of working medium and auxiliary media of the system. Most frequently, shell-and-tube and plate heat exchangers are used in ORC systems. They are characterized by a low ratio of heat flow to heat transfer surface, which influences the size of the heat exchangers and, furthermore, the amount of material used and the expense of the whole installation. An interesting alternative to the currently applied heat exchangers might be Rosenblad's Spiral Heat Exchangers (SHE). What makes this construction so particular is the relatively high ratio of heat flow to heat transfer surface. A new, modified calculation method dedicated to Rosenblad's SHE is presented in this article. The formulated method was applied to the calculation of a Rosenblad SHE which serves as the evaporator in a prototype ORC system, and the results of these calculations are presented herein. The results of the analysis show that the Rosenblad SHE might be an interesting alternative to the other types of heat exchangers presently applied in ORC systems. Their application creates the possibility of reducing the size of the installation as well as its expense.
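For context on how the heat-flow-to-surface ratio drives exchanger size, the sketch below performs a generic LMTD sizing of a counter-flow heat exchanger; the duty, overall coefficient and temperatures are assumed placeholders and are unrelated to the Rosenblad method presented in the paper.

```python
# Generic counter-flow LMTD sizing (illustrative numbers only).
from math import log

Q = 50e3                              # required duty, W (assumed)
U = 1500.0                            # overall coefficient, W/m^2/K (assumed)
T_hot_in, T_hot_out   = 95.0, 70.0    # hot-side temperatures, degC (assumed)
T_cold_in, T_cold_out = 40.0, 60.0    # cold-side temperatures, degC (assumed)

dT1 = T_hot_in - T_cold_out           # terminal temperature differences
dT2 = T_hot_out - T_cold_in
lmtd = (dT1 - dT2) / log(dT1 / dT2)   # log-mean temperature difference, K
A = Q / (U * lmtd)                    # required heat transfer surface, m^2
print(f"LMTD = {lmtd:.1f} K, required area = {A:.2f} m^2")
```

A higher achievable U, or heat flow per unit surface, as claimed for spiral designs, directly reduces the required area A and hence the material used.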
THERMODYNAMIC OPTIMIZATION OF A SOLAR-BOOSTED OCEAN THERMAL ENERGY CONVERSION SYSTEM BASED ON ORGANIC RANKINE CYCLE
Man Wang, Jiangfeng Wang, Pan Zhao, Yiping Dai
Abstract: This paper presents a solar-boosted Ocean Thermal Energy Conversion system based on an organic Rankine cycle. Flat-plate solar collectors are installed to collect solar radiation and elevate the temperature of the warm surface seawater. By establishing thermodynamic models of the system, parametric optimization is conducted to obtain the optimal system performance using different working fluids. A genetic algorithm is employed to conduct the optimization of the system. The exergy efficiency of the entire system is selected as the objective function under given conditions, and the turbine inlet pressure, turbine back pressure and pinch-point temperature in the evaporator are chosen as the decision variables. Optimization results indicate that all three parameters have a significant impact on system performance. Compared with the other working fluids, the system using R245fa has the best exergy efficiency, at 5.23%, and the net power output can reach 79.16 kW.

DESIGN, MODELLING AND EXPERIMENTATION OF A SMALL-SCALE SOLAR ORC
Olivier Dumont, Sylvain Quoilin, Vincent Lemort, Sebastien Declaye
Abstract: This project presents the design, modelling and experimental testing of an Organic Rankine Cycle for a small-scale solar power plant (2.5 kWe), undertaken in order to test and optimize control strategies. The final layout and the justifications of the technical choices are presented. Simulations predict a global efficiency of 5% and an ORC efficiency of 8.5% for evaporation and condensation temperatures equal to 140°C and 35°C, respectively.

TRANSIENT RESPONSE OF PHASE CHANGE HEAT EXCHANGER IN VARIABLE HEAT SOURCE TEMPERATURE
Byung-Sik Park, Muhammad Usman Aslam, Jae Yong Lee, Dong-Hyun Lee
Abstract: This paper presents a model, based on the moving-boundary approach, describing the dynamics of a two-phase-flow heat exchanger under a variable-temperature heat source. Organic Rankine Cycle (ORC) systems are popular for recovering energy from low-temperature heat sources. At small scale, ORC units are reliable and cost effective. Generally, power systems are designed to be operated with a constant-temperature heat source, but waste-heat recovery applications may involve scenarios where the source temperature varies. If the system operates with a variable heat source temperature, the evaporator pressure and the outlet enthalpy of the working fluid will vary due to the variation in the amount of heat supplied, which in turn changes the operating conditions of the rest of the system.

MULTIVARIABLE EPSAC PREDICTIVE CONTROL FOR ORGANIC RANKINE CYCLE TECHNOLOGY
Andres Hernandez, Adriano Desideri, Clara Ionescu, Sylvain Quoilin, Vincent Lemort, Robin De Keyser
Abstract: The Organic Rankine Cycle (ORC) technology has become very popular, as it is extremely suitable for waste heat recovery from low-grade heat sources. As the ORC is a strongly coupled nonlinear multiple-input multiple-output (MIMO) process, conventional control strategies (e.g. PID) may not achieve satisfactory results. In this contribution our focus is on the accurate regulation of the superheating, in order to increase the efficiency of the cycle and to avoid the formation of liquid droplets that could damage the expander.
To this end, a multivariable Model Predictive Control (MPC) strategy with improved disturbance-rejection capabilities is proposed, and its performance is compared to that of PI controllers for the case of variable waste-heat source profiles.

AN EXPERIMENTAL ANALYSIS OF A LOW-LOSS RECIPROCATING PISTON EXPANDER FOR USE IN SMALL-SCALE ORGANIC RANKINE CYCLES
Ilaria Guarracino, Richard Mathie, Aly Taleb, Christos Markides
Abstract: The current trend of ever-increasing energy prices acts as an important driver for improved efficiency via novel heat-integration and energy-generation schemes. An Organic Rankine Cycle (ORC) equipped with a low-loss two-stage reciprocating piston expander has been designed and is being tested experimentally. The reciprocating expander is a low-cost, low-maintenance, and readily available prime-mover option for these engines, with promising performance characteristics (e.g. efficiency). The tested expander is based on a commercially available unit intended for air-compression applications, which was reconditioned for the purposes of the present tests. A novel rotary valve was developed to guarantee a high efficiency and a low leakage rate. The test bed gives a maximum mechanical output of 3 kWe with R245fa as the working fluid, at pressures limited to 10 bar. The optimal working fluid was chosen from 21 possible refrigerants and alkanes on the basis of theoretical efficiency calculations.

IMPLEMENTATION OF A TWO-STAGE ORGANIC RANKINE CYCLE USING SCROLL EXPANDERS OPERATING UNDER VARIABLE HEAT INPUT
George Kosmadakis, Dimitris Manolakos, Erika Ntavou, George Papadakis
Abstract: A subcritical solar two-stage organic Rankine cycle (ORC) has been designed, according to optimization studies that have been conducted [1], and then manufactured. Some of its components have been suitably modified (e.g. scroll compressors in reverse operation) in order to operate efficiently, while all components have been placed in a test rig equipped with the appropriate measuring equipment for detailed experimental testing. An important feature of this ORC engine, which has a capacity of around 10 kW and an efficiency of 10%, is the use of two similar scroll expanders placed in series. The reason for selecting such a configuration emanates from the requirement to operate these expansion machines at high efficiency (even over 70%), in other words to keep the pressure ratio close to their built-in value, approximately equal to 3. The total pressure ratio at maximum heat input (100 kWth at around 130°C) with a condensation temperature of 30°C is close to 9-10, using the organic fluid R-245fa. At such conditions both expanders operate with high expansion efficiency, while at lower heat input, when the evaporation temperature/pressure is lower, the first expander is totally bypassed and only the second one operates. By doing so, the scroll expanders operate at high efficiency, close to their maximum value, over the whole heat-input range, contributing significantly to a high system performance. The experimental testing of this ORC engine includes controlled heat input from an electric heater (resembling the operation of a solar field), while focus is given to certain operating parameters, such as the organic-fluid mass flow rate, the rotational speeds of the expansion machines and of the pump, and the appropriate timing of bypassing the first expander. Acknowledgement: The present work is conducted within the framework of the project with contract No.
09SYN-32-982, partly funded by the Greek General Secretariat for Research and Technology (GSRT).
REFERENCES
[1] G. Kosmadakis, D. Manolakos, G. Papadakis, "Investigating the double-stage expansion in a solar ORC", presented at the 1st Int. Seminar on ORC Power Systems (ORC2011), Delft, The Netherlands, 22-23 September 2011.

DEMONSTRATION OF OTEC ADVANCED RESEARCH PROTOTYPE
Berend Jan Kleute, Aksel Benlevi, Bram Harmsen, Remi Blokker
Abstract: Ocean Thermal Energy Conversion (OTEC) is the largest untapped source of solar energy in the world. With its capability to generate electricity day and night, year-round, OTEC is destined to become an attractive and essential part of the future global energy mix, enabling low-cost and clean electricity production. Today, multiple OTEC pilot plants are under development worldwide. To demonstrate and improve OTEC technology, Bluerise, in cooperation with Delft University of Technology, has designed and built a room-sized demonstration of an advanced OTEC power plant. This OTEC demo uses a non-azeotropic working fluid suitable for low-grade, large-capacity thermal resources like the ocean, enabling an improved efficiency compared to standard ORC-based OTEC systems. Initial tests focused on stabilizing the OTEC demo operation. Accurate temperature control for the warm and cold water sides was installed in order to resemble actual operational conditions. Through real-time measurements and control of the system pressure, liquid levels and flow rates, stable operation was achieved, validating our theoretical models. Current research is focusing on further optimizing the system behavior and performance. This working, demonstrable plant is an important step in proving the validity of advanced OTEC technology and establishing a research center for (low-grade) thermal energy conversion technologies. Initial test results will be presented at the conference.

PERFORMANCE EVALUATION OF A SCROLL EXPANDER FOR A LOW CAPACITY ORGANIC RANKINE CYCLE SYSTEM
Hyungmook Kang, Sarng Woo Karng, Youhwan Shin, Kwang Ho Kim, Seo Young Kim
Abstract: The Organic Rankine Cycle (ORC) is a promising research field in energy conversion. An ORC recovers energy from various heat sources by converting low- and medium-temperature heat into useful work or electricity. The heat used in an ORC can come from many different sources (e.g. biomass, geothermal, solar, industrial waste heat, etc.), making the process potentially usable in many commercial or industrial applications. Recently, commercial applications of ORC technology have been developed in the power range above 100 kWe. However, low-capacity ORC technologies are still at the research stage. The lack of a suitable expander and the difficulty of selecting an appropriate working fluid are the main problems in the low-capacity ORC research field. The demand for low-capacity ORC systems is expected to increase as distributed combined heat and power supply networks in urban areas are developed. This study conducted a performance evaluation of an automotive scroll compressor used as an expander, which may replace a turbine expander at low power. The performance of the scroll expander was tested with the refrigerant R134a at various expander inlet pressures, temperatures and mass flow rates. The isentropic efficiency of the expander was obtained from the experiments.
From the experimental results, a cycle analysis was carried out as an optimization process using the genetic algorithm, one of the most powerful optimization methods for multi-domain variations. In addition, several candidate working fluids were compared by considering efficiency, flow rate, pressure ratio and flammability. The scroll expander is expected to become an advanced alternative main component for low-capacity ORC systems.
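To make the expander evaluation concrete, here is a minimal sketch of how an expander's isentropic efficiency can be computed from measured inlet and outlet states, as in the scroll-expander tests described above. It uses the open-source CoolProp property library; the state values are illustrative assumptions, not measured data from the paper.

```python
# Minimal sketch: isentropic efficiency of an expander from measured
# inlet/outlet states, here for R134a as in the scroll-expander tests.
# The state values below are illustrative assumptions, not measured data.
from CoolProp.CoolProp import PropsSI

fluid = 'R134a'
T_in, P_in = 360.0, 12e5       # K, Pa  (assumed superheated inlet state)
T_out, P_out = 330.0, 4e5      # K, Pa  (assumed measured outlet state)

h_in = PropsSI('H', 'T', T_in, 'P', P_in, fluid)      # inlet enthalpy
s_in = PropsSI('S', 'T', T_in, 'P', P_in, fluid)      # inlet entropy
h_out = PropsSI('H', 'T', T_out, 'P', P_out, fluid)   # actual outlet enthalpy
h_out_s = PropsSI('H', 'P', P_out, 'S', s_in, fluid)  # ideal (isentropic) outlet

# Isentropic efficiency: actual enthalpy drop over ideal enthalpy drop
eta_is = (h_in - h_out) / (h_in - h_out_s)
print(f"isentropic efficiency = {eta_is:.3f}")
```

The same ratio of actual to ideal enthalpy drop is what a test rig evaluates from its pressure, temperature and mass-flow measurements.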
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 5, "x-ck12": 0, "texerror": 0, "math_score": 0.5569940805435181, "perplexity": 2145.7135764839973}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00431.warc.gz"}
https://proofwiki.org/wiki/Definition:Vector_(Euclidean_Space)
# Definition:Vector (Euclidean Space)

## Definition

A vector is defined as an element of a vector space. We have that $\R^n$, with the operations of vector addition and scalar multiplication, forms a real vector space. Hence a vector in $\R^n$ is defined as any element of $\R^n$.

### $\R^1$

As $\R$ forms a vector space, every real number is a vector. Note that as $\R$ is a vector space over itself, every real number is also a scalar. Hence "a vector in $\R^n$" is sometimes imprecisely used to mean "a vector in $\R^n$, $n > 1$".

### Geometric Interpretation

From Set of Real Numbers is Equivalent to Infinite Straight Line, the real number line $\R$ can be represented by an infinite straight line. By the same token, a vector in $\R$ can be represented by a directed line segment. Formally, a vector $\left\langle{x_1}\right\rangle, \ x_1 \in \R$ is accurately represented by the set of all directed line segments having:
- Length equal to $|x_1|$
- Direction dependent on whether $x_1 < 0$ or $x_1 > 0$

By convention, if only one axis is under consideration, the line is placed horizontally, such that segments oriented towards the right are positive, to the left negative. Note that in such a context the zero vector can be interpreted as a directed line segment beginning and terminating at the same point.

## $\R^2$

We have that $\R^2$ is a vector space. Hence any ordered $2$-tuple of real numbers is a vector.

### Geometric Interpretation

From the definition of the real number plane, we can represent the vector space $\R^2$ by points on the plane. That is, every pair of coordinates $\left({x_1, x_2}\right)$ can be uniquely defined by a point in the plane. An arrow with base at the origin and terminal point $\left({x_1, x_2}\right)$ is defined to have length equal to the magnitude of the vector, and direction defined by the relative location of $\left({x_1, x_2}\right)$ with the origin as the point of reference. Each vector is then represented by the set of all directed line segments with:
- Length equal to the length of $\overrightarrow{\left({0, 0}\right) \left({x_1, x_2}\right)}$
- Direction equal to the direction of $\overrightarrow{\left({0, 0}\right) \left({x_1, x_2}\right)}$

## Vector Notation

Several conventions are found in the literature for annotating a general vector in a style that distinguishes it from a scalar, as follows. Let $\set {x_1, x_2, \ldots, x_n}$ be a collection of scalars which form the components of an $n$-dimensional vector. The vector $\tuple {x_1, x_2, \ldots, x_n}$ can be annotated as:

$\bsx = \tuple {x_1, x_2, \ldots, x_n}$
$\vec x = \tuple {x_1, x_2, \ldots, x_n}$
$\hat x = \tuple {x_1, x_2, \ldots, x_n}$
$\underline x = \tuple {x_1, x_2, \ldots, x_n}$
$\tilde x = \tuple {x_1, x_2, \ldots, x_n}$

To emphasize the arrow interpretation of a vector, we can write:

$\bsv = \sqbrk {x_1, x_2, \ldots, x_n}$

or:

$\bsv = \sequence {x_1, x_2, \ldots, x_n}$

In printed material the boldface $\bsx$ is common. This is the style encouraged and endorsed by $\mathsf{Pr} \infty \mathsf{fWiki}$. However, for handwritten material (where boldface is difficult to render) it is usual to use the underline version $\underline x$.
Also found in handwritten work are the tilde version $\tilde x$ and the arrow version $\vec x$, but as these are more intricate than the simple underline (and therefore more time-consuming and tedious to write), they will usually only be found in fair copy. It is also noted that the tilde over $\tilde x$ does not render well in MathJax under all browsers, and differs little visually from an overline. The hat version $\hat x$ usually has a more specialized meaning, namely to symbolize a unit vector. In computer-rendered materials, the arrow version $\vec x$ is popular, as it is descriptive and relatively unambiguous, and in $\LaTeX$ it is straightforward. However, it does not render well in all browsers, and is therefore (reluctantly) not recommended for use on this website. Because of this method of rendition, some sources refer to vectors as arrows.

### Comment

The reader should be aware that a vector in $\R^n$ is, and only is, an ordered $n$-tuple of $n$ real numbers. The geometric interpretations given above are only representations of vectors. Further, the geometric interpretation of a vector is accurately described as the set of all line segments equivalent to a given directed line segment, rather than any particular line segment.
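As a concrete worked instance of the geometric interpretation above (the specific numbers are our own illustration, not part of the source definition), consider the vector $\tuple {3, 4}$ in $\R^2$:

```latex
\[
  \mathbf v = \left({3, 4}\right), \qquad
  \text{length of } \overrightarrow{\left({0, 0}\right) \left({3, 4}\right)}
    = \sqrt{3^2 + 4^2} = 5, \qquad
  \text{direction} = \arctan \tfrac{4}{3} \approx 53.13^\circ
\]
```

so $\mathbf v$ is represented by every directed line segment of length $5$ pointing in that direction, wherever its base point lies.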
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9289231896400452, "perplexity": 250.75190098941843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203093.63/warc/CC-MAIN-20190323221914-20190324003914-00541.warc.gz"}
https://arxiv.org/abs/math-ph/0507008
# Spectral Gap and Exponential Decay of Correlations

Abstract: We study the relation between the spectral gap above the ground state and the decay of the correlations in the ground state in quantum spin and fermion systems with short-range interactions on a wide class of lattices. We prove that, if two observables anticommute with each other at large distance, then the nonvanishing spectral gap implies exponential decay of the corresponding correlation. When two observables commute with each other at large distance, the connected correlation function decays exponentially under the gap assumption. If the observables behave as a vector under the U(1) rotation of a global symmetry of the system, we use previous results on the large distance decay of the correlation function to show the stronger statement that the correlation function itself, rather than just the connected correlation function, decays exponentially under the gap assumption on a lattice with a certain self-similarity in (fractal) dimensions D < 2. In particular, if the system is translationally invariant in one of the spatial directions, then this self-similarity condition is automatically satisfied. We also treat systems with long-range, power-law decaying interactions.

Comments: 23 pages, no figures; v2: major revisions of Sections 2 and 4, an error in Appendix A corrected, and minor revisions; v3: a major revision of Appendix A, assumptions on the interactions of the models changed, and minor corrections
Subjects: Mathematical Physics (math-ph); Statistical Mechanics (cond-mat.stat-mech); High Energy Physics - Theory (hep-th)
Journal reference: Commun. Math. Phys. 265 (2006) 781-804
DOI: 10.1007/s00220-006-0030-4
Cite as: arXiv:math-ph/0507008

## Submission history

From: Tohru Koma
[v1] Mon, 4 Jul 2005 06:53:10 UTC (13 KB)
[v2] Wed, 23 Nov 2005 09:01:02 UTC (16 KB)
[v3] Fri, 16 Dec 2005 03:00:29 UTC (16 KB)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9157960414886475, "perplexity": 880.7131854423992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606872.19/warc/CC-MAIN-20200122071919-20200122100919-00248.warc.gz"}
https://www.physicsforums.com/threads/independent-events.335089/
Independent events

1. Sep 6, 2009, kumamako:

Let $A_1, A_2, \ldots, A_n$ be subsets of a sample space $\Omega$. Show that if $A_1, A_2, \ldots, A_n$ are independent, then the same is true when any number of the sets $A_i$ are replaced by their complements $A_i^c$. (Hint: First do the case in which just one of the sets is replaced by its complement. Then argue by induction on the number of sets replaced.)

Can someone guide me through this question please? Thanks.

2. Sep 6, 2009, HallsofIvy:

What is your definition of "independent" sets?
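A sketch of the hint's first step (our own outline, assuming the standard product definition of independence, under which every subfamily must satisfy the product rule): for any subfamily containing the replaced set, say $A_1, A_{i_2}, \ldots, A_{i_k}$,

```latex
\begin{align*}
P\bigl(A_1^c \cap A_{i_2} \cap \cdots \cap A_{i_k}\bigr)
  &= P\bigl(A_{i_2} \cap \cdots \cap A_{i_k}\bigr)
   - P\bigl(A_1 \cap A_{i_2} \cap \cdots \cap A_{i_k}\bigr) \\
  &= \prod_{j=2}^{k} P\bigl(A_{i_j}\bigr)
   - P(A_1) \prod_{j=2}^{k} P\bigl(A_{i_j}\bigr)
   = P\bigl(A_1^c\bigr) \prod_{j=2}^{k} P\bigl(A_{i_j}\bigr).
\end{align*}
```

Subfamilies not containing $A_1$ are untouched, so $A_1^c, A_2, \ldots, A_n$ are again independent; replacing the remaining sets one at a time and inducting on the number of replacements then gives the full claim.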
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381370544433594, "perplexity": 2043.727597794557}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105108.31/warc/CC-MAIN-20170818194744-20170818214744-00705.warc.gz"}
http://quantumtheory.physik.unibas.ch/people/stano/
# Peter Stano

## Contact

Department of Physics
University of Basel
Klingelbergstrasse 82
CH-4056 Basel, Switzerland
office: 4.17
tel: +41 (0)61 267 3756 (office)

## Research Interests

- Semiconductor spin qubits: spin-orbit effects in lateral quantum dots
- Mesoscopic spintronics: non-linear effects in transport and spin current measurement
- Quantum decoherence: electron/hole spin in a nuclear spin bath

## Short CV

2012-now: SCIEX fellow with Prof. Dr. Daniel Loss
2009-2011: post-doc with Prof. Philippe Jacquod, University of Arizona, USA
2008-2009: post-doc with Prof. RNDr. Vladimir Buzek DrSc., Slovak Academy of Sciences, Bratislava
2007: PhD in condensed matter theory with Prof. Dr. Jaroslav Fabian, University of Regensburg, Germany
2003: Master in theoretical and mathematical physics, Comenius University, Bratislava, Slovakia

## Publications

1. Topological Superconductivity and Majorana Fermions in RKKY Systems
Jelena Klinovaja, Peter Stano, Ali Yazdani, and Daniel Loss. arXiv:1307.1442
We consider quasi one-dimensional RKKY systems in proximity to an s-wave superconductor. We show that a $2k_F$-peak in the spin susceptibility of the superconductor in the one-dimensional limit supports helical order of localized magnetic moments via RKKY interaction, where $k_F$ is the Fermi wavevector. The magnetic helix is equivalent to a uniform magnetic field and very strong spin-orbit interaction (SOI) with an effective SOI length $1/2k_F$. We find the conditions to establish such a magnetic state in atomic chains and semiconducting nanowires with magnetic atoms or nuclear spins. Generically, these systems are in a topological phase with Majorana fermions. The inherent self-tuning of the helix to $2k_F$ eliminates the need to tune the chemical potential.

2. Circuit QED with Hole-Spin Qubits in Ge/Si Nanowire Quantum Dots
Christoph Kloeffel, Mircea Trif, Peter Stano, and Daniel Loss. arXiv:1306.3596
We propose a setup for universal and electrically controlled quantum information processing with hole spins in Ge/Si core/shell nanowire quantum dots (NW QDs). Single-qubit gates can be driven through electric-dipole-induced spin resonance, with spin-flip times shorter than 100 ps. Long-distance qubit-qubit coupling can be mediated by the cavity electric field of a superconducting transmission line resonator, where we show that operation times below 20 ns seem feasible for the entangling square-root-of-iSWAP gate. The absence of Dresselhaus spin-orbit interaction (SOI) and the presence of an unusually strong Rashba-type SOI enable precise control over the transverse qubit coupling via an externally applied, perpendicular electric field. The latter serves as an on-off switch for quantum gates and also provides control over the g factor, so that single- and two-qubit gates can be operated independently. Remarkably, we find that idle states are insensitive to charge noise and phonons, and we discuss strategies for enhancing noise-limited gate fidelities.

3. Local Spin Susceptibilities of Low-Dimensional Electron Systems
Peter Stano, Jelena Klinovaja, Amir Yacoby, and Daniel Loss. arXiv:1303.1151
We investigate, assess, and suggest possibilities for a measurement of the local spin susceptibility of a conducting low-dimensional electron system. The basic setup of the experiment we envisage is a source-probe one. Locally induced spin density (e.g. by a magnetized atomic force microscope tip) extends in the medium according to its spin susceptibility.
The induced magnetization can be detected as a dipolar magnetic field, for instance, by an ultra-sensitive nitrogen-vacancy center based detector, from which the spatial structure of the spin susceptibility can be deduced. We find that one-dimensional systems, such as semiconducting nanowires or carbon nanotubes, are expected to yield a measurable signal. The signal in a two-dimensional electron gas is weaker, though materials with a high enough $g$-factor (such as InGaAs) seem promising for successful measurements.

4. Suppression of Interactions in Multimode Random Lasers in the Anderson Localized Regime
Peter Stano and Philippe Jacquod. Nature Photonics 7, 66 (2013); arXiv:1210.6462
Understanding random lasing is a formidable theoretical challenge. Unlike conventional lasers, random lasers have no resonator to trap light, they are highly multimode with potentially strong modal interactions, and they are based on disordered gain media, where photons undergo random multiple scattering. Interference effects notoriously modify the propagation of waves in such random media, but their fate in the presence of nonlinearity and interactions is poorly understood. Here, we present a semiclassical theory for multimode random lasing in the strongly scattering regime. We show that Anderson localization, a wave-interference effect, is not affected by the presence of nonlinearities. To the contrary, its presence suppresses interactions between simultaneously lasing modes. Using a recently constructed theory for complex multimode lasers, we show analytically how Anderson localization justifies a noninteracting, single-pole approximation. Consequently, lasing modes in a strongly scattering random laser are given by long-lived, Anderson localized modes of the passive cavity, whose frequency and wave profile do not vary with pumping, even in the multimode regime when modes overlap spatially.

5. Spin ordering in magnetic quantum dots: From core-halo to Wigner molecules
Rafal Oszwaldowski, Peter Stano, Andre G. Petukhov, and Igor Zutic. Phys. Rev. B 86, 201408(R) (2012); arXiv:1210.6422
The interplay of confinement and Coulomb interactions in quantum dots can lead to strongly correlated phases differing qualitatively from the Fermi liquid behavior. We explore how the presence of magnetic impurities in quantum dots can provide additional opportunities to study correlation effects and the resulting ordering in carrier and impurity spin. By employing exact diagonalization we reveal that seemingly simple two-carrier quantum dots lead to a rich phase diagram. We propose experiments to verify our predictions; in particular we discuss interband optical transitions as a function of temperature and magnetic field.

6. Spin-orbit coupled particle in a spin bath
Peter Stano, Jaroslav Fabian, and Igor Zutic. PRB 87, 165303 (2013); arXiv:1208.5606
We consider a spin-orbit coupled particle confined in a quantum dot in a bath of impurity spins. We investigate the consequences of spin-orbit coupling on the interactions that the particle mediates in the spin bath. We show that in the presence of spin-orbit coupling, the impurity-impurity interactions are no longer spin-conserving. We quantify the degree of this symmetry breaking and show how it relates to the spin-orbit coupling strength. We identify several ways in which the impurity ensemble can in this way relax its spin by coupling to phonons. A typical resulting relaxation time for a self-assembled Mn-doped ZnTe quantum dot populated by a hole is 1 $\mu$s.
We also show that decoherence arising from nuclear spins in lateral quantum dots is still removable by a spin echo protocol, even if the confined electron is spin-orbit coupled.

7. Transition from fractional to Majorana fermions in Rashba nanowires
Jelena Klinovaja, Peter Stano, and Daniel Loss. Phys. Rev. Lett. 109, 236801 (2012); arXiv:1207.7322
We study hybrid superconducting-semiconducting nanowires in the presence of Rashba spin-orbit interaction as well as helical magnetic fields. We show that the interplay between them leads to a competition of phases with two topological gaps closing and reopening, resulting in unexpected reentrance behavior. Besides the topological phase with localized Majorana fermions (MFs) we find new phases characterized by fractionally charged fermion (FF) bound states of Jackiw-Rebbi type. The system can be fully gapped by the magnetic fields alone, giving rise to FFs that transmute into MFs upon turning on superconductivity. We find explicit analytical solutions for MF and FF bound states and determine the phase diagram numerically by determining the corresponding Wronskian null space. We show by renormalization group arguments that electron-electron interactions enhance the Zeeman gaps opened by the fields.

8. Theory of Spin Relaxation in Two-Electron Lateral Coupled Si/SiGe Quantum Dots
Martin Raith, Peter Stano, and Jaroslav Fabian. Phys. Rev. B 86, 205321 (2012); arXiv:1206.6906
Highly accurate numerical results of phonon-induced two-electron spin relaxation in silicon double quantum dots are presented. The relaxation, enabled by spin-orbit coupling and the nuclei of 29Si (natural or purified abundance), is investigated for all relevant parameter regimes: the interdot coupling, the magnetic field magnitude and orientation, and the detuning. We calculate all relaxation rates for zero and finite temperatures (100 mK), concluding that all findings for zero temperature qualitatively remain valid also for 100 mK. We confirm the same anisotropic switch of the axis of prolonged spin lifetime with varying detuning as recently predicted in GaAs. However, there is a striking difference compared to the GaAs counterpart. In silicon, the hyperfine-induced relaxation rate is negligible in all cases we studied, even for natural silicon. The spin-orbit coupling, although weak, is the dominant contribution, yielding anisotropic relaxation rates at least two orders of magnitude lower than in GaAs.

9. Theory of Spin Relaxation in Two-Electron Lateral Coupled Quantum Dots
Martin Raith, Peter Stano, Fabio Baruffa, and Jaroslav Fabian. Phys. Rev. Lett. 108, 246602 (2012); arXiv:1111.6724
A global quantitative picture of the phonon-induced two-electron spin relaxation in GaAs double quantum dots is presented using highly accurate numerical calculations. Wide regimes of interdot coupling, magnetic field magnitude and orientation, and detuning are explored in the presence of a nuclear bath. Most important, the unusually strong magnetic anisotropy of the singlet-triplet relaxation can be controlled by detuning, switching the principal anisotropy axes: a protected state becomes unprotected upon detuning, and vice versa. It is also established that nuclear spins can dominate spin relaxation for unpolarized triplets even at high magnetic fields, contrary to common belief. These findings are central to designing quantum dot geometries for spin-based quantum information processing with minimal environmental impact.

10.
Non-linear spin to charge conversion in mesoscopic structures
Peter Stano, Jaroslav Fabian, and Philippe Jacquod. Phys. Rev. B 85, 241301(R) (2012); arXiv:1201.0249
Motivated by recent experiments [Vera-Marun et al., arXiv:1109.5969], we formulate a non-linear theory of spin transport in quantum coherent conductors. We show how a mesoscopic constriction with energy-dependent transmission can convert a spin current injected by a spin accumulation into an electric signal, relying neither on magnetic nor exchange fields. When the transmission through the constriction is spin-independent, the spin-charge coupling is non-linear, with an electric signal that is quadratic in the accumulation. We estimate that gated mesoscopic constrictions have a sensitivity that allows detection of accumulations much smaller than a percent of the Fermi energy.

11. Measuring Spin Accumulations with Current Noise
Jonathan Meair, Peter Stano, and Philippe Jacquod. Phys. Rev. B 84, 073302 (2011); arXiv:1104.2353
We investigate the time-dependent fluctuations of the electric current injected from a reservoir with a non-equilibrium spin accumulation into a mesoscopic conductor. We show how the current noise power directly reflects the magnitude of the spin accumulation in two easily noticeable ways. First, as the temperature is lowered, the small-bias noise saturates at a value determined by the spin accumulation. Second, in the presence of spin-orbit interactions in the conductor, the current noise exhibits a sample-dependent mesoscopic asymmetry under reversal of the electric current direction. These features provide for a purely electric protocol for measuring spin accumulations.

12. Spin-to-Charge Conversion of Mesoscopic Spin Currents
Peter Stano and Philippe Jacquod. Phys. Rev. Lett. 106, 206602 (2011); arXiv:1012.1831
Recent theoretical investigations have shown that spin currents can be generated by passing electric currents through spin-orbit coupled mesoscopic systems. Measuring these spin currents has however not been achieved to date. We show how mesoscopic spin currents in lateral heterostructures can be measured with a single-channel voltage probe. In the presence of a spin current, the charge current $I_{\rm qpc}$ through the quantum point contact connecting the probe is odd in an externally applied Zeeman field $B$, while it is even in the absence of spin current. Furthermore, the zero field derivative $\partial_B I_{\rm qpc}$ is proportional to the magnitude of the spin current, with a proportionality coefficient that can be determined in an independent measurement. We confirm these findings numerically.

13. Theory of Single Electron Spin Relaxation in Si/SiGe Lateral Coupled Quantum Dots
Martin Raith, Peter Stano, and Jaroslav Fabian. Phys. Rev. B 83, 195318 (2011); arXiv:1101.3858
We investigate the spin relaxation induced by acoustic phonons in the presence of spin-orbit interactions in single electron Si/SiGe lateral coupled quantum dots. The relaxation rates are computed numerically in single and double quantum dots, in in-plane and perpendicular magnetic fields. The deformation potential of acoustic phonons is taken into account for both transverse and longitudinal polarizations, and their contributions to the total relaxation rate are discussed with respect to the dilatation and shear potential constants. We find that in single dots the spin relaxation rate scales approximately with the seventh power of the magnetic field, in line with a recent experiment.
In double dots the relaxation rate is much more sensitive to the dot spectrum structure, as it is often dominated by a spin hot spot. The anisotropy of the spin-orbit interactions gives rise to easy passages, special directions of the magnetic field for which the relaxation is strongly suppressed. Quantitatively, the spin relaxation rates in Si are typically 2 orders of magnitude smaller than in GaAs, due to the absence of the piezoelectric phonon potential and generally weaker spin-orbit interactions.

14. Spin-orbit coupling and anisotropic exchange in two-electron double quantum dots
Fabio Baruffa, Peter Stano, and Jaroslav Fabian. Phys. Rev. B 82, 045311 (2010); arXiv:1004.2610
The influence of the spin-orbit interactions on the energy spectrum of two-electron laterally coupled quantum dots is investigated. The effective Hamiltonian for a spin qubit pair proposed in F. Baruffa et al., Phys. Rev. Lett. 104, 126401 (2010) is confronted with exact numerical results in single and double quantum dots in zero and finite magnetic field. The anisotropic exchange Hamiltonian is found quantitatively reliable in double dots in general. There are two findings of particular practical importance: i) The model stays valid even for the maximal possible interdot coupling (a single dot), due to the absence of a coupling to the nearest excited level, a fact following from the dot symmetry. ii) In the weak coupling regime, the Heitler-London approximation gives quantitatively correct anisotropic exchange parameters even in a finite magnetic field, although this method is known to fail for the isotropic exchange. The small discrepancy between the analytical model (which employs the linear Dresselhaus and Bychkov-Rashba spin-orbit terms) and the numerical data for GaAs quantum dots is found to be mostly due to the cubic Dresselhaus term.

15. Spin-dependent tunneling into an empty lateral quantum dot
Peter Stano and Philippe Jacquod. Phys. Rev. B 82, 125309 (2010); arXiv:1005.0024
Motivated by the recent experiments of Amasha et al. [Phys. Rev. B 78, 041306(R) (2008)], we investigate single electron tunneling into an empty quantum dot in the presence of a magnetic field. We numerically calculate the tunneling rate from a laterally confined, few-channel external lead into the lowest orbital state of a spin-orbit coupled quantum dot. We find two mechanisms leading to a spin-dependent tunneling rate. The first originates from different electronic $g$-factors in the lead and in the dot, and favors tunneling into the spin ground (excited) state when the $g$-factor magnitude is larger (smaller) in the lead. The second is triggered by spin-orbit interactions via the opening of off-diagonal spin-tunneling channels. It systematically favors the spin excited state. For physical parameters corresponding to lateral GaAs/AlGaAs heterostructures and the experimentally reported tunneling rates, both mechanisms lead to a discrepancy of $\sim$10% in the spin up vs spin down tunneling rates. We conjecture that the significantly larger discrepancy observed experimentally originates from the enhancement of the $g$-factor in the laterally confined lead.

16. Theory of anisotropic exchange in laterally coupled quantum dots
Fabio Baruffa, Peter Stano, and Jaroslav Fabian. Phys. Rev. Lett. 104, 126401 (2010); arXiv:0908.2961
The effects of spin-orbit coupling on the two-electron spectra in lateral coupled quantum dots are investigated analytically and numerically.
It is demonstrated that in the absence of magnetic field the exchange interaction is practically unaffected by spin-orbit coupling, for any interdot coupling, boosting prospects for spin-based quantum computing. The anisotropic exchange appears at finite magnetic fields. A numerically accurate effective spin Hamiltonian for modeling spin-orbit-induced two-electron spin dynamics in the presence of magnetic field is proposed.

17. Coexistence of quantum operations
Teiko Heinosaari, Daniel Reitzner, Peter Stano, and Mario Ziman. J. Phys. A 42, 365302 (2009); arXiv:0905.4953
Quantum operations are used to describe the observed probability distributions and conditional states of the measured system. In this paper, we address the problem of their joint measurability (coexistence). We derive two equivalent coexistence criteria. The two most common classes of operations - Luders operations and conditional state preparators - are analyzed. It is shown that Luders operations are coexistent only under very restrictive conditions, when the associated effects are either proportional to each other, or disjoint.

18. Notes on Joint Measurability of Quantum Observables
Teiko Heinosaari, Daniel Reitzner, and Peter Stano. Foundations of Physics 38, 1133-1147 (2008); arXiv:0811.0783
For sharp quantum observables the following facts hold: (i) if we have a collection of sharp observables and each pair of them is jointly measurable, then they are jointly measurable all together; (ii) if two sharp observables are jointly measurable, then their joint observable is unique and it gives the greatest lower bound for the effects corresponding to the observables; (iii) if we have two sharp observables and their every possible two-outcome partitionings are jointly measurable, then the observables themselves are jointly measurable. We show that, in general, these properties do not hold. Also some possible candidates which would accompany joint measurability and generalize these apparently useful properties are discussed.

19. Approximate Joint Measurability of Spin Along Two Directions
Teiko Heinosaari, Peter Stano, and Daniel Reitzner. International Journal of Quantum Information 6, 975 (2008); arXiv:0801.2712
We study the existence of jointly measurable POVM approximations to two non-commuting sharp spin observables. We compare two different ways to specify optimal approximations.

20. Coexistence of qubit effects
Peter Stano, Daniel Reitzner, and Teiko Heinosaari. Phys. Rev. A 78, 012315 (2008); arXiv:0802.4248
We characterize all coexistent pairs of qubit effects. This gives an exhaustive description of all pairs of events allowed, in principle, to occur in a single qubit measurement. The characterization consists of three disjoint conditions which are easy to check for a given pair of effects. Known special cases are shown to follow from our general characterization theorem.

21. Control of electron spin and orbital resonance in quantum dots through spin-orbit interactions
Peter Stano and Jaroslav Fabian. Phys. Rev. B 77, 045310 (2008); arXiv:cond-mat/0611228
The influence of a resonant oscillating electromagnetic field on a single electron in coupled lateral quantum dots in the presence of phonon-induced relaxation and decoherence is investigated. Using symmetry arguments it is shown that spin and orbital resonance can be efficiently controlled by spin-orbit interactions.
The control is possible due to the strong sensitivity of the Rabi frequency to the dot configuration (orientation of the dot and of a static magnetic field) as a result of the anisotropy of the spin-orbit interactions. The so-called easy passage configuration is shown to be particularly suitable for magnetic manipulation of spin qubits, ensuring a long spin relaxation time and protecting the spin qubit from electric field disturbances accompanying on-chip manipulations.

22. Semiconductor Spintronics
J. Fabian, A. Matos-Abiague, C. Ertler, P. Stano, and I. Zutic. Acta Physica Slovaca 57, No. 4&5, 565-907 (2007); arXiv:0711.1461
Spintronics refers commonly to phenomena in which the spin of electrons in a solid state environment plays the determining role. In a more narrow sense spintronics is an emerging research field of electronics: spintronics devices are based on a spin control of electronics, or on an electrical and optical control of spin or magnetism. This review presents selected themes of semiconductor spintronics, introducing important concepts in spin transport, spin injection, Silsbee-Johnson spin-charge coupling, and spin-dependent tunneling, as well as spin relaxation and spin dynamics. The most fundamental spin-dependent interaction in nonmagnetic semiconductors is spin-orbit coupling. Depending on the crystal symmetries of the material, as well as on the structural properties of semiconductor based heterostructures, the spin-orbit coupling takes on different functional forms, giving a nice playground of effective spin-orbit Hamiltonians. The effective Hamiltonians for the most relevant classes of materials and heterostructures are derived here from realistic electronic band structure descriptions. Most semiconductor device systems are still theoretical concepts, waiting for experimental demonstrations. A review of selected proposed, and a few demonstrated, devices is presented, with a detailed description of two important classes: magnetic resonant tunnel structures and bipolar magnetic diodes and transistors. In most cases the presentation is of tutorial style, introducing the essential theoretical formalism at an accessible level, with case-study-like illustrations of actual experimental results, as well as with brief reviews of relevant recent achievements in the field.

23. Orbital and spin relaxation in single and coupled quantum dots
Peter Stano and Jaroslav Fabian. Phys. Rev. B 74, 045320 (2006); arXiv:cond-mat/0604633
Phonon-induced orbital and spin relaxation rates of single electron states in lateral single and double quantum dots are obtained numerically for realistic materials parameters. The rates are calculated as a function of magnetic field and interdot coupling, at various field and quantum dot orientations. It is found that orbital relaxation is due to deformation potential phonons at low magnetic fields, while piezoelectric phonons dominate the relaxation at high fields. Spin relaxation, which is dominated by piezoelectric phonons, in single quantum dots is highly anisotropic due to the interplay of the Bychkov-Rashba and Dresselhaus spin-orbit couplings. Orbital relaxation in double dots varies strongly with the interdot coupling due to the cyclotron effects on the tunneling energy. Spin relaxation in double dots has an additional anisotropy due to anisotropic spin hot spots which otherwise cause giant enhancement of the rate at useful magnetic fields and interdot couplings.
Conditions for the absence of the spin hot spots in in-plane magnetic fields (easy passages) and perpendicular magnetic fields (weak passages) are formulated analytically for different growth directions of the underlying heterostructure. It is shown that easy passages disappear (spin hot spots reappear) if the double dot system loses symmetry by an xy-like perturbation.

24. Theory of phonon-induced spin relaxation in laterally coupled quantum dots
Peter Stano and Jaroslav Fabian. Phys. Rev. Lett. 96, 186602 (2006); arXiv:cond-mat/0512713
Phonon-induced spin relaxation in coupled lateral quantum dots in the presence of spin-orbit coupling is calculated. The calculation for single dots is consistent with experiment. Spin relaxation in double dots at useful interdot couplings is dominated by spin hot spots that are strongly anisotropic. Spin hot spots are ineffective for a diagonal crystallographic orientation of the dots with a transverse in-plane field. This geometry is proposed for spin-based quantum information processing.

25. Spin properties of single electron states in coupled quantum dots
Peter Stano and Jaroslav Fabian. Phys. Rev. B 72, 155410 (2005); arXiv:cond-mat/0506610
Spin properties of single electron states in laterally coupled quantum dots in the presence of a perpendicular magnetic field are studied by exact numerical diagonalization. Dresselhaus (linear and cubic) and Bychkov-Rashba spin-orbit couplings are included in a realistic model of confined dots based on GaAs. A group theoretical classification of quantum states with and without spin-orbit coupling is provided. Spin-orbit effects on the g-factor are rather weak. It is shown that the frequency of coherent oscillations (tunneling amplitude) in coupled dots is largely unaffected by spin-orbit effects due to symmetry requirements. The leading contribution to the frequency involves the cubic term of the Dresselhaus coupling. Spin-orbit coupling in the presence of magnetic field leads to a spin-dependent tunneling amplitude, and thus to the possibility of spin to charge conversion, namely spatial separation of spin by coherent oscillations in a uniform magnetic field. It is also shown that spin hot spots exist in coupled GaAs dots already at moderate magnetic fields, and that spin hot spots at zero magnetic field are due to the cubic Dresselhaus term only.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8402817845344543, "perplexity": 1853.6795838206654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648594.80/warc/CC-MAIN-20180323200519-20180323220519-00019.warc.gz"}
http://www.5htreceptor.com/2017/09/page/11/
## Was measured by densitometry. This was plotted against the inhibitory activity

Was measured by densitometry. This was plotted against the inhibitory activity of every sample to make sure that inhibition of MGC formation was not a straightforward function of the concentration of the full-length fusion protein (PubMed ID: http://jpet.aspetjournals.org/content/120/2/255). Monocyte fusion assay: Peripheral blood monocytes were derived from peripheral whole blood of healthy volunteers by Ficoll-Hypaque ...

## Ysis was performed as described in whereas CD14 expression was drastically

Ysis was performed as described in, whereas CD14 expression was drastically increased after culturing the cells in DC medium for 24 h. Discussion: The present study aimed at investigating the effects of very low LPS concentrations on human immune cells. We show that CD1c+ dendritic cells especially may be activated by minimal amounts of LPS, equivalent ...

## Ination: n =R-DC:n = 1 D-DC:n =D1 Hauben(2008)(D)H-(R

(Only table residue from the source article survives in this excerpt: treatment-group codes such as imDC, mDC, VAF347 and anti-CD154/anti-IL10R antibody combinations across MHC-mismatch models; the table layout is not reconstructible here.) ...

## Cancer, COPD, and anorexia nervosa [26]. This study investigates the hypothesis that

Cancer, COPD, and anorexia nervosa [26]. This study investigates the hypothesis that patients with newly diagnosed TB display abnormal regulation of hormones which relate to appetite and nutritional status, and that these abnormalities trend back towards normal values as patients are treated. A better understanding of the mechanisms of appetite suppression in TB may reveal ...

## Ce between Th1 and Th2 cells, The functions of IL-12 have

Ce between Th1 and Th2 cells. The functions of IL-12 have been fairly well characterized; however, the role of IFN-γ in asthma has been controversial. Although Caenorhabditis elegans extract was reported to ameliorate asthma symptoms by increasing IFN-γ expression, hydrocortisone, which is used to treat asthma, has been shown to decrease IFN-γ expression [28]. Previous ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8712338209152222, "perplexity": 16385.361185950896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00177.warc.gz"}
https://www.physicsforums.com/threads/how-do-you-know-for-sure-what-quadrant.17365/
# Homework Help: How do you know for sure what quadrant?

1. Mar 29, 2004, mathzeroh:

Hello all! How do you know for sure which quadrant "they" want you to have your measure in? For example:

Write each equation in normal form. Then find p, the measure of its normal, and "phi", the angle the normal makes with the positive x-axis.

21. -10x + 5 = -5y

I've got all the other stuff; it's just that when it comes to the angle measure of "phi", I get confused. I don't know how to recognize which quadrant it should be in. For this one I thought the measure was -26.57..., but the correct answer was approximately 333 degrees. I know that they got this by adding 360 to -26 degrees, but WHY I don't know.

Thanks in advance for any help.

2. Mar 29, 2004, Janitor:

For the original line, the rise over run is 2/1. I'm sure you got that far. The angle that such a line makes to the horizontal axis is arctan(2). The angle the normal to that line makes to the horizontal axis is arctan(2) - 90, and it is pointing into quadrant IV, so it can be thought of as a negative angle. That gives you -26.56 degrees, or so says my calculator. Looking at it as an angle swung counterclockwise (the positive direction of rotation in the plane, by convention) from a ray going horizontally to the right, the angle is 360 - 26.56 = 333.43.

Last edited: Mar 30, 2004
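Worked through explicitly for problem 21 (our own sketch, using the convention that the normal form $x \cos\varphi + y \sin\varphi = p$ requires $p \ge 0$, which is what pins down the quadrant):

```latex
\[
-10x + 5 = -5y
\;\Longrightarrow\; 10x - 5y = 5
\;\Longrightarrow\; \frac{2}{\sqrt 5}\, x - \frac{1}{\sqrt 5}\, y = \frac{1}{\sqrt 5}
\]
```

dividing by $+\sqrt{10^2 + 5^2} = 5\sqrt 5$ so that $p = 1/\sqrt 5 \approx 0.447 > 0$. Then $\cos\varphi = 2/\sqrt 5 > 0$ and $\sin\varphi = -1/\sqrt 5 < 0$, a sign pattern possible only in quadrant IV, so $\varphi = 360^\circ - \arctan\tfrac 1 2 \approx 360^\circ - 26.57^\circ = 333.43^\circ$. This is why the $-26.57^\circ$ from the calculator gets $360^\circ$ added: the quadrant is fixed by the signs of $\cos\varphi$ and $\sin\varphi$ once $p$ is taken positive.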
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9448340535163879, "perplexity": 626.1280890364417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156690.32/warc/CC-MAIN-20180920234305-20180921014705-00545.warc.gz"}
http://mathhelpforum.com/algebra/67427-equation-involving-modulus-help.html
# Math Help - Equation involving modulus -- help..

1. ## Equation involving modulus -- help..

I need help with solving this equation: $|x - 1| = |x| + 1$

I know that $|x| = x$ when $x \ge 0$ and $|x| = -x$ when $x < 0$, but I am not sure how to apply it here.

When $x - 1 \ge 0$, i.e. when $x \ge 1$: $x - 1 = x + 1 \Rightarrow \ldots$

Probably my reasoning is wrong thus far, so I need help...

2. Hi, I can see 2 possibilities to answer.

1) The equation is equivalent to $|x-1| - |x| = 1$. You can build a table like this:

Code:
                 x < 0     0 < x < 1     x > 1
|x - 1|          1 - x       1 - x       x - 1
|x|               -x           x           x
|x-1| - |x|        1         1 - 2x        -1
Now you can easily conclude: the difference equals $1$ throughout $x < 0$ and at $x = 0$ (where $1 - 2x = 1$), and never for $x > 0$; hence $x \le 0$.

2) An alternative way is to square the equation (valid as an equivalence, since both sides are non-negative):

$|x-1| = |x| + 1$
$|x-1|^2 = (|x| + 1)^2$
$x^2 - 2x + 1 = x^2 + 2|x| + 1$ (because $|x|^2 = x^2$)
$-x = |x|$,

which holds exactly when $x \le 0$.

3. Hello, struck!

Solve: $|x-1| = |x| + 1$

I'll do it the Long Way ... Since both $x-1$ and $x$ can be positive or negative, there are four cases to consider:

[1] Both positive: $x - 1 > 0,\; x > 0 \Rightarrow x > 1$. Then we have: $x - 1 = x + 1 \Rightarrow 0 = 2$ ... impossible.

[2] Positive-negative: $x - 1 > 0,\; x < 0$. But this means $(x > 1) \wedge (x < 0)$ ... impossible.

[3] Negative-positive: $x - 1 < 0,\; x > 0$. Then we have: $1 - x = x + 1 \Rightarrow x = 0$.

[4] Negative-negative: $x - 1 < 0,\; x < 0 \Rightarrow x < 0$. Then we have: $1 - x = -x + 1 \Rightarrow 0 = 0$ ... always true.

The equation is satisfied in cases [3] and [4]. The solution is: $x \le 0$.

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

If we graph the two functions, we can "eyeball" the solution.

The graph of $y = |x|$ is a $\vee$, vertex at $(0,0)$. [1]

The graph of $y = |x-1|$ is [1], moved one unit to the right:

Code:
  |\          /
  | \        /
  |  \      /
  |   \    /
  |    \  /
  |     \/
--+------*------
  |      1
The graph of $y = |x| + 1$ is [1], moved one unit upward:

Code:
  \    |    /
   \   |   /
    \  |  /
     \ | /
      \|/
       * 1
       |
-------+-------
       |
Sketch them on the same coordinate system and we can see the solution: $x \le 0$.
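A compact cross-check of the answer (our addition, not from the thread): by the triangle inequality,

```latex
\[
|x - 1| = |x + (-1)| \le |x| + |{-1}| = |x| + 1,
\]
```

with equality if and only if the two terms do not have opposite signs, i.e. $x \cdot (-1) \ge 0$, i.e. $x \le 0$. So the equation holds exactly on $x \le 0$, agreeing with all three methods above.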
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9791640043258667, "perplexity": 712.9430710539331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648297.22/warc/CC-MAIN-20141024030048-00098-ip-10-16-133-185.ec2.internal.warc.gz"}
Question 1: Find the following products:

i) $(y+9)(y-9)$          ii) $(4+b)(4-b)$          iii) $(3x-5)(3x+5)$          iv) $\left(a-\frac{2}{3}\right)\left(a+\frac{2}{3}\right)$

i) $(y+9)(y-9) = y^2-81$

ii) $(4+b)(4-b) = 16-b^2$

iii) $(3x-5)(3x+5) = 9x^2-25$

iv) $\left(a-\frac{2}{3}\right)\left(a+\frac{2}{3}\right) = a^2-\frac{4}{9}$

Question 2: Find the following products:

i) $(3x-5)(3x+5)$          ii) $(2+7x)(2-7x)$          iii) $\left(\frac{a}{2}+3\right)\left(\frac{a}{2}-3\right)$          iv) $(4x+3y)(4x-3y)$

i) $(3x-5)(3x+5) = 9x^2-25$

ii) $(2+7x)(2-7x) = 4-49x^2$

iii) $\left(\frac{a}{2}+3\right)\left(\frac{a}{2}-3\right) = \frac{a^2}{4}-9$

iv) $(4x+3y)(4x-3y) = 16x^2-9y^2$

Question 3: Find the following products:

i) $\left(\frac{a}{3}-\frac{b}{4}\right)\left(\frac{a}{3}+\frac{b}{4}\right)$          ii) $\left(\frac{t}{2}-\frac{1}{3}\right)\left(\frac{t}{2}+\frac{1}{3}\right)$

i) $\left(\frac{a}{3}-\frac{b}{4}\right)\left(\frac{a}{3}+\frac{b}{4}\right) = \frac{a^2}{9}-\frac{b^2}{16}$

ii) $\left(\frac{t}{2}-\frac{1}{3}\right)\left(\frac{t}{2}+\frac{1}{3}\right) = \frac{t^2}{4}-\frac{1}{9}$

Question 4: Find the following products:

i) $\left(\frac{2}{x}+\frac{3}{y}\right)\left(\frac{2}{x}-\frac{3}{y}\right)$          ii) $\left(\frac{1}{a}-\frac{1}{b}\right)\left(\frac{1}{a}+\frac{1}{b}\right)$          iii) $\left(\frac{1}{3x}+\frac{2}{5y}\right)\left(\frac{1}{3x}-\frac{2}{5y}\right)$          iv) $(1.1x-0.3y)(1.1x+0.3y)$

i) $\left(\frac{2}{x}+\frac{3}{y}\right)\left(\frac{2}{x}-\frac{3}{y}\right) = \frac{4}{x^2}-\frac{9}{y^2}$

ii) $\left(\frac{1}{a}-\frac{1}{b}\right)\left(\frac{1}{a}+\frac{1}{b}\right) = \frac{1}{a^2}-\frac{1}{b^2}$

iii) $\left(\frac{1}{3x}+\frac{2}{5y}\right)\left(\frac{1}{3x}-\frac{2}{5y}\right) = \frac{1}{9x^2}-\frac{4}{25y^2}$

iv) $(1.1x-0.3y)(1.1x+0.3y) = 1.21x^2-0.09y^2$

Question 5: Find the following products:

i) $(a^2+2b^2)(a^2-2b^2)$          ii) $(6x^2-7y^2)(6x^2+7y^2)$          iii) $(4x^2+2yz)(2x^2-yz)$          iv) $\left(ab-\frac{3}{2}cd\right)(2ab+3cd)$

i) $(a^2+2b^2)(a^2-2b^2) = a^4-4b^4$

ii) $(6x^2-7y^2)(6x^2+7y^2) = 36x^4-49y^4$

iii) $(4x^2+2yz)(2x^2-yz) = 8x^4+4x^2yz-4x^2yz-2y^2z^2 = 8x^4-2y^2z^2$

iv) $\left(ab-\frac{3}{2}cd\right)(2ab+3cd) = 2a^2b^2+3abcd-3abcd-\frac{9}{2}c^2d^2 = 2a^2b^2-\frac{9}{2}c^2d^2$

Question 6: Find the following products:

i) $(2x+3)(2x-3)(4x^2+9)$          ii) $(x+2y)(x-2y)(x^2+4y^2)$          iii) $(a+bc)(a-bc)(a^2+b^2c^2)$          iv) $\left(\frac{2}{5}+x\right)\left(\frac{2}{5}-x\right)\left(\frac{4}{25}+x^2\right)$

i) $(2x+3)(2x-3)(4x^2+9) = (4x^2-9)(4x^2+9) = 16x^4-81$

ii) $(x+2y)(x-2y)(x^2+4y^2) = (x^2-4y^2)(x^2+4y^2) = x^4-16y^4$

iii) $(a+bc)(a-bc)(a^2+b^2c^2) = (a^2-b^2c^2)(a^2+b^2c^2) = a^4-b^4c^4$

iv) $\left(\frac{2}{5}+x\right)\left(\frac{2}{5}-x\right)\left(\frac{4}{25}+x^2\right) = \left(\frac{4}{25}-x^2\right)\left(\frac{4}{25}+x^2\right) = \frac{16}{625}-x^4$

Question 7: Using the identity $(a+b)(a-b) = a^2-b^2$, evaluate the following:

i) $88 \times 112$      ii) $153 \times 167$      iii) $10.8 \times 9.2$      iv) $3\frac{1}{3} \times 4\frac{2}{3}$      v) $9\frac{1}{4} \times 15\frac{3}{4}$

i) $88 \times 112 = (100-12)(100+12) = 10000-144 = 9856$

ii) $153 \times 167 = (160-7)(160+7) = 25600-49 = 25551$

iii) $10.8 \times 9.2 = (10+0.8)(10-0.8) = 10^2-0.8^2 = 100-0.64 = 99.36$

iv) $3\frac{1}{3} \times 4\frac{2}{3} = \left(4-\frac{2}{3}\right)\left(4+\frac{2}{3}\right) = 16-\frac{4}{9} = 15\frac{5}{9}$

v) $9\frac{1}{4} \times 15\frac{3}{4} = \left(\frac{25}{2}-\frac{13}{4}\right)\left(\frac{25}{2}+\frac{13}{4}\right) = \frac{625}{4}-\frac{169}{16} = 145.6875$
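Every product in this exercise is an instance of $(a+b)(a-b) = a^2-b^2$. A short sympy sketch (my own addition, assuming sympy is available; not part of the original exercise) verifies a few of the answers, including the $16x^4-81$ in Question 6:

```python
# Symbolic verification of the difference-of-squares identity and of
# representative answers from Questions 1, 6 and 7.
from sympy import symbols, expand

a, b, x = symbols('a b x')
assert expand((a + b)*(a - b)) == a**2 - b**2
assert expand((3*x - 5)*(3*x + 5)) == 9*x**2 - 25
assert expand((2*x + 3)*(2*x - 3)*(4*x**2 + 9)) == 16*x**4 - 81
print(88 * 112 == (100 - 12)*(100 + 12))  # True: both equal 9856
```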
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 230, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9217851161956787, "perplexity": 1002.2719970858745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886095.7/warc/CC-MAIN-20200704073244-20200704103244-00441.warc.gz"}
# JEE Main 2020 Physics Paper With Solutions Sept 4 (Shift 2)

The solutions of the JEE Main 2020 Physics paper (September 4, Shift 2) are available on this page. IIT aspirants will find them informative and helpful, and practising these questions will help in cracking the examination. Candidates should refer to these questions to understand the pattern of the question paper, the weightage given to each concept, etc.

### September 4 Shift 2 - Physics

1. A circular coil has moment of inertia 0.8 kg m² around any diameter and is carrying current to produce a magnetic moment of 20 Am². The coil is kept initially in a vertical position and it can rotate freely around a horizontal diameter. When a uniform magnetic field of 4 T is applied along the vertical, it starts rotating around its horizontal diameter. The angular speed the coil acquires after rotating by 60° will be:

1) 10 π rad s⁻¹

3) 20 π rad s⁻¹

Solution: By energy conservation, Ui + Ki = Uf + Kf:

-MB cos 60° + 0 = -MB cos 0° + (1/2)Iω²

(-MB/2) + MB = (1/2)Iω²

$$\omega =\sqrt{\frac{MB}{I}}=\sqrt{\frac{20\times 4}{0.8}}=\sqrt{100}=10\, rad/s$$

2. A person pushes a box on a rough horizontal platform surface. He applies a force of 200 N over a distance of 15 m. Thereafter, he gets progressively tired and his applied force reduces linearly with distance to 100 N. The total distance through which the box has been moved is 30 m. What is the work done by the person during the total movement of the box?

1) 5690 J

2) 5250 J

3) 2780 J

4) 3280 J

Solution: Work done = area under the force–distance graph = area of the trapezium + area of the rectangle

= (1/2)(30 + 15)(100) + (100)(30) = 2250 + 3000 = 5250 J

3. Match the thermodynamic processes taking place in a system with the correct conditions. In the table: ∆Q is the heat supplied, ∆W is the work done and ∆U is the change in internal energy of the system.

Process - Condition
(I) Adiabatic - (1) ∆W = 0
(II) Isothermal - (2) ∆Q = 0
(III) Isochoric - (3) ∆U ≠ 0, ∆W ≠ 0, ∆Q ≠ 0
(IV) Isobaric - (4) ∆U = 0

1) (I) - (1), (II) - (1), (III) - (2), (IV) - (3)

2) (I) - (1), (II) - (2), (III) - (4), (IV) - (4)

3) (I) - (2), (II) - (4), (III) - (1), (IV) - (3)

4) (I) - (2), (II) - (1), (III) - (4), (IV) - (3)

Solution: Adiabatic: ∆Q = 0. Isothermal: ∆U = 0. Isochoric: ∆W = ∫P dV = 0. Isobaric: ∆U ≠ 0, ∆W ≠ 0, ∆Q ≠ 0.

4. The driver of a bus approaching a big wall notices that the frequency of his bus's horn changes from 420 Hz to 490 Hz when he hears it after it gets reflected from the wall. Find the speed of the bus if the speed of sound is 330 ms⁻¹.

1) 81 kmh⁻¹

2) 91 kmh⁻¹

3) 71 kmh⁻¹

4) 61 kmh⁻¹

Solution: With the bus moving at speed u towards the wall, the frequency heard after reflection is f′ = f(v + u)/(v − u). So 490/420 = 7/6 = (330 + u)/(330 − u), giving 2310 − 7u = 1980 + 6u, i.e. u = 330/13 ≈ 25.4 m/s ≈ 91 km/h.

5. A small ball of mass m is thrown upward with velocity u from the ground. The ball experiences a resistive force mkv² where v is its speed. The maximum height attained by the ball is:

1) $$\frac{1}{k}tan^{-1}\frac{ku^{2}}{2g}$$

2) $$\frac{1}{2k}\ln\left [1+\frac{ku^{2}}{g} \right ]$$

3) $$\frac{1}{2k}\ln\left [1+\frac{ku^{2}}{2g} \right ]$$

4) $$\frac{1}{2k}tan^{-1}\left [\frac{ku^{2}}{g} \right ]$$

Solution: Using $$mv\frac{dv}{dx} = -m(g+kv^{2})$$ and integrating from v = u at x = 0 to v = 0 at x = H_max:

$$H_{max}=\frac{1}{2k}ln\left [ \frac{g+ku^{2}}{g} \right ]$$

$$H_{max}=\frac{1}{2k}ln\left [1+ \frac{ku^{2}}{g} \right ]$$

6. Consider two uniform discs of the same thickness and different radii R₁ = R and R₂ = αR made of the same material. If the ratio of their moments of inertia I₁ and I₂, respectively, about their axes is I₁ : I₂ = 1 : 16 then the value of α is

1) √2

2) 2

3) 2√2

4) 4

Solution: For a disc of thickness t and density ρ, I = (1/2)MR² = (1/2)(ρπR²t)R² ∝ R⁴. Hence I₁/I₂ = 1/α⁴ = 1/16, so α = 2.

7. A series L-R circuit is connected to a battery of emf V.
If the circuit is switched on at t = 0, then the time at which the energy stored in the inductor reaches (1/n) times its maximum value is:

1) $$\frac{L}{R}\ln\left [ \frac{\sqrt{n}}{\sqrt{n}+1} \right ]$$

2) $$\frac{L}{R}\ln\left [ \frac{\sqrt{n}}{\sqrt{n}-1} \right ]$$

3) $$\frac{L}{R}\ln\left [ \frac{\sqrt{n}+1}{\sqrt{n}-1} \right ]$$

4) $$\frac{L}{R}\ln\left [ \frac{\sqrt{n}-1}{\sqrt{n}} \right ]$$

Solution: Energy stored in the inductor:

$$U=\frac{1}{2}LI^{2}$$

$$U\propto I^{2}$$

$$\frac{U}{U_{0}}=\left [ \frac{I}{I_{0}} \right ]^{2}$$

$$\frac{1}{n}=\left [ \frac{I}{I_{0}} \right ]^{2}$$

$$I= \frac{I_{0}}{\sqrt{n}}$$

$$I= I_{0}(1-e^{-\frac{R}{L}t})$$

$$\frac{I_{0}}{\sqrt{n}}= I_{0}(1-e^{-\frac{R}{L}t})$$

Taking logarithms and solving, we get

$$t = \frac{L}{R}\ln\left [ \frac{\sqrt{n}}{\sqrt{n}-1} \right ]$$

8. The electric field of a plane electromagnetic wave is given by $$\vec{E}=E_{0}(\hat{x}+\hat{y})sin(kz-\omega t)$$. Its magnetic field will be given by:

1) $$\frac{E_{0}}{c}(\hat{x}+\hat{y})sin(kz-\omega t)$$

2) $$\frac{E_{0}}{c}(\hat{x}-\hat{y})sin(kz-\omega t)$$

3) $$\frac{E_{0}}{c}(\hat{x}-\hat{y})cos(kz-\omega t)$$

4) $$\frac{E_{0}}{c}(-\hat{x}+\hat{y})sin(kz-\omega t)$$

Solution: $$\vec{E}\times \vec{B}$$ should be in the direction of propagation $$\vec{v}$$, so

$$\vec{B} = \frac{E_{0}}{c}(-\hat{x}+\hat{y})sin(kz-\omega t)$$

9. A cube of metal is subjected to a hydrostatic pressure of 4 GPa. The percentage change in the length of the side of the cube is close to: (Given bulk modulus of metal, B = 8 × 10¹⁰ Pa)

1) 0.6

2) 20

3) 1.67

4) 5

Solution:

$$B=-\frac{\Delta P}{\frac{\Delta V}{V}}$$

$$\left | {\frac{\Delta V}{V}} \right |=\frac{\Delta P}{B}={\frac{3\Delta L}{L}}$$

Therefore,

$${\frac{\Delta L}{L}}=\frac{\Delta P}{3B}=\frac{4\times 10^{9}}{3\times8 \times10^{10}} = \frac{1}{60}$$

The percentage change is $${\frac{\Delta L}{L}}\times 100 \approx 1.67$$

10. A paramagnetic sample shows a net magnetisation of 6 A/m when it is placed in an external magnetic field of 0.4 T at a temperature of 4 K. When the sample is placed in an external magnetic field of 0.3 T at a temperature of 24 K, then the magnetisation will be:

1) 4 A/m

2) 1 A/m

3) 0.75 A/m

4) 2.25 A/m

Solution:

$$M = \frac{CB_{ext}}{T}$$

$$6 = \frac{C\times 0.4}{4}$$

$$\Rightarrow C=60$$

Therefore,

$$M = \frac{60\times 0.3}{24}=0.75\, A/m$$

11. A body is moving in a low circular orbit about a planet of mass M and radius R. The radius of the orbit can be taken to be R itself. Then the ratio of the speed of this body in the orbit to the escape velocity from the planet is:

1) 2

2) √2

3) 1

4) 1/√2

Solution: The orbital speed is $$v_{o}=\sqrt{\frac{GM}{R}}$$ and the escape velocity is $$v_{e}=\sqrt{\frac{2GM}{R}}$$, so the ratio is $$\frac{v_{o}}{v_{e}}=\frac{1}{\sqrt{2}}$$

12. A particle of charge q and mass m is subjected to an electric field E = E₀(1 – ax²) in the x-direction, where a and E₀ are constants. Initially the particle was at rest at x = 0. Other than the initial position, the kinetic energy of the particle becomes zero when the distance of the particle from the origin is:

1) $$\sqrt{\frac{2}{a}}$$

2) a

3) $$\sqrt{\frac{3}{a}}$$

4) $$\sqrt{\frac{1}{a}}$$

Solution: W = ∆KE

$$\int_{0}^{x}Fdx = 0$$

$$\int_{0}^{x}qEdx = 0$$

$$q \int_{0}^{x} E_{0}\left(1-a x^{2}\right) d x=0$$

$$q E_{0}\left[\int_{0}^{x} d x-a \int_{0}^{x} x^{2} d x\right]=0$$

$$q E_{0}\left[x-\frac{a x^{3}}{3}\right]=0$$

$$x\left(1-\frac{a x^{2}}{3}\right)=0$$

x = 0 or $$\left(1-\frac{a x^{2}}{3}\right)=0$$

$$\frac{a x^{2}}{3}=1 \quad\Rightarrow\quad x = \sqrt{\frac{3}{a}}$$

13. A capacitor C is fully charged with voltage V₀.
After disconnecting the voltage source, it is connected in parallel with another uncharged capacitor of capacitance C/2. The energy loss in the process after the charge is distributed between the two capacitors is:

1) $$\frac{1}{2}CV_{0}^{2}$$

2) $$\frac{1}{4}CV_{0}^{2}$$

3) $$\frac{1}{3}CV_{0}^{2}$$

4) $$\frac{1}{6}CV_{0}^{2}$$

Solution:

$$v_{f}=\frac{C V_{0}}{\frac{3C}{2}}=\frac{2 V_{0}}{3}$$

$$u_{i}=\frac{1}{2} C V_{0}^{2}$$

$$u_{f}=\frac{1}{2}\left(\frac{3 C}{2}\right) \frac{4 V_{0}^{2}}{9}=\frac{C V_{0}^{2}}{3}$$

$$u_{i}-u_{f}=\frac{1}{2} C V_{0}^{2}-\frac{C V_{0}^{2}}{3}=C V_{0}^{2}\left(\frac{1}{2}-\frac{1}{3}\right)=\frac{C V_{0}^{2}}{6}$$

14. Find the binding energy per nucleon for $$^{120}_{50}Sn$$. Mass of proton mp = 1.00783 u, mass of neutron mn = 1.00867 u and mass of tin nucleus mSn = 119.902199 u. (Take 1 u = 931 MeV.)

1) 8.0 MeV

2) 9.0 MeV

3) 7.5 MeV

4) 8.5 MeV

Solution: BE = ∆mc² = ∆m × 931

∆m = (50 × 1.00783) + (70 × 1.00867) − 119.902199 = 120.9984 − 119.902199 = 1.0962 u

BE = 1.0962 × 931 = 1020.5622 MeV

BE per nucleon ≈ 1020.5622/120 ≈ 8.5 MeV

15. The value of current i₁ flowing from A to C in the circuit diagram is:

1) 4 A

2) 5 A

3) 2 A

4) 1 A

Solution:

16. Two identical cylindrical vessels are kept on the ground and each contains the same liquid of density d. The area of the base of both vessels is S but the height of liquid in one vessel is x₁ and in the other, x₂. When both cylinders are connected through a pipe of negligible volume very close to the bottom, the liquid flows from one vessel to the other until it comes to equilibrium at a new height. The change in energy of the system in the process is:

1) gdS(x₂ + x₁)²

2) gdS(x₂² + x₁²)

3) (1/4)gdS(x₂ – x₁)²

4) (3/4)gdS(x₂ – x₁)²

Solution:

$$\Delta E = dSg\left [ \frac{x_{1}^{2}}{2} + \frac{x_{2}^{2}}{2} - \frac{(x_{1}+x_{2})^{2}}{4}\right ] = \frac{dSg}{4} (x_{1} - x_{2})^{2}$$

17. A quantity x is given by (IFv²/WL⁴) in terms of moment of inertia I, force F, velocity v, work W and length L. The dimensional formula for x is the same as that of:

1) coefficient of viscosity

2) energy density

3) force constant

4) Planck's constant

Solution:

$$[x]=\frac{I F v^{2}}{W L^{4}}=\frac{\left(M L^{2}\right)\left(M L T^{-2}\right)\left(L T^{-1}\right)^{2}}{\left(M L^{2} T^{-2}\right) L^{4}}$$

= ML⁻¹T⁻² = energy density

18. For a uniform rectangular sheet shown in the figure, the ratio of moments of inertia about the axes perpendicular to the sheet and passing through O (the centre of mass) and O' (corner point) is:

1) 1/2

2) 2/3

3) 1/4

4) 1/8

Solution: For a rectangular sheet of sides a and b, I_O = m(a² + b²)/12. By the parallel-axis theorem, I_O' = I_O + m[(a/2)² + (b/2)²] = m(a² + b²)/3, so I_O : I_O' = 1 : 4.

19. Identify the operation performed by the circuit given below:

1) NOT

2) OR

3) AND

4) NAND

Solution:

20. In a photoelectric effect experiment, the graph of stopping potential V versus reciprocal of wavelength obtained is shown in the figure. As the intensity of incident radiation is increased:

1) Straight line shifts to right

2) Straight line shifts to left

3) Slope of the straight line gets more steep

4) Graph does not change

Solution: eV = hν − W (W = work function), so V = (h/e)ν − W/e. Since h/e and W/e are constants independent of intensity, the graph does not change.

21. The speed versus time graph for a particle is shown in the figure. The distance travelled (in m) by the particle during the time interval t = 0 to t = 5 s will be ________.

Solution: Distance = area under the speed–time graph = (1/2) × 8 × 5 = 20 m

22. Four resistances 40 Ω, 60 Ω, 90 Ω and 110 Ω make the arms of a quadrilateral ABCD.
Across AC is a battery of emf 40 V and negligible internal resistance. The potential difference across BD in V is _______.

Solution:

$$V_{B}-\left [ \frac{40}{100}\times 60 \right ]+\left [ 110\times \frac{40}{200} \right ]-V_{D}=0$$

V_B – V_D = 24 − 22 = 2 V

23. The change in the magnitude of the volume of an ideal gas when a small additional pressure ∆P is applied at constant temperature is the same as the change when the temperature is reduced by a small quantity ∆T at constant pressure. The initial temperature and pressure of the gas were 300 K and 2 atm respectively. If │∆T│= C│∆P│, then the value of C in (K/atm) is _________.

Solution: At constant temperature, PV = nRT gives

P∆V + V∆P = 0, so $$\Delta V=-\frac{\Delta P}{P}V$$

At constant pressure, V = nRT/P gives $$\Delta V=\frac{nR\Delta T}{P}$$

Equating the magnitudes:

$$\frac{\Delta P}{P}V=\frac{nR\Delta T}{P}\Rightarrow \Delta T=\Delta P\frac{V}{nR}$$

Since │∆T│= C│∆P│,

$$C=\frac{\Delta T}{\Delta P}=\frac{V}{nR}=\frac{T}{P}=\frac{300}{2}=150$$

24. Orange light of wavelength 6000 × 10⁻¹⁰ m illuminates a single slit of width 0.6 × 10⁻⁴ m. The maximum possible number of diffraction minima produced on both sides of the central maximum is ___________.

Solution: For minima, d sin θ = nλ, i.e. sin θ = nλ/d. The maximum value of sin θ is 1, so

$$n\leq \frac{d}{\lambda}=\frac{0.6 \times 10^{-4}}{6000\times 10^{-10}}=100$$

Counting both sides: 100 + 100 = 200

25. The distance between an object and a screen is 100 cm. A lens can produce a real image of the object on the screen for two different positions between the screen and the object. The distance between these two positions is 40 cm. If the power of the lens is close to (N/100) D where N is an integer, the value of N is _________.

Solution:

$$f=\frac{D^{2}-d^{2}}{4D}=\frac{100^{2}-40^{2}}{400}=\frac{10000-1600}{400}=\frac{84}{4}=21$$

$$p=\frac{1}{f}=\frac{1}{21}=\frac{1}{21}\times \frac{100}{100}=\left [ \frac{4.76}{100} \right ]=\frac{N}{100}$$

Therefore, N ≈ 5
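A few of the numerical answers above are easy to cross-check in a couple of lines of Python (an illustrative sketch, not part of the official solutions; it follows the paper's own arithmetic):

```python
import math

# Q1: omega = sqrt(M*B/I)
print(math.sqrt(20 * 4 / 0.8))                  # 10.0 rad/s

# Q14: binding energy per nucleon of Sn-120
dm = 50 * 1.00783 + 70 * 1.00867 - 119.902199   # mass defect in u
print(dm * 931 / 120)                           # ~8.5 MeV

# Q23: C = T/P
print(300 / 2)                                  # 150 K/atm

# Q25: displacement method, f = (D^2 - d^2)/(4D), then N per the paper
f = (100**2 - 40**2) / (4 * 100)                # 21 cm
print(round(100 / f))                           # N = 5
```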
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8097221255302429, "perplexity": 1627.682155473855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662525507.54/warc/CC-MAIN-20220519042059-20220519072059-00426.warc.gz"}
# 03 - Semantic Networks

Semantic networks are a knowledge representation scheme. This lesson will cover the following topics:

• Knowledge representations
• Semantic networks
• Problem-solving with semantic networks
• Represent & Reason

## Representation

Each knowledge representation has a language, and that language has a vocabulary. In addition, the representation contains some content (or knowledge).

### Example: Newton's 2nd Law of Motion

$$F = ma$$

Force is equal to mass times acceleration.

## Introduction to Semantic Networks

How to represent Raven's Progressive Matrices using a semantic network, given state A and state B:

1. Label all objects (x is the circle, y is the diamond, z is the black dot), and represent them as nodes.
2. Represent the relationships between the nodes in each state (frame), both A and B.
3. Represent the transformations of the nodes between states A and B.

### Structure of Semantic Networks

• 1. Lexically: nodes
• 2. Structurally: directed links between nodes
• 3. Semantically: application-specific labels

### Characteristics of Good Representations

• Make relationships explicit
• Expose natural constraints
• Bring objects and relations together
• Exclude extraneous details
• Transparent, concise, complete, fast, computable

## Guards and Prisoners Problem

### Description

• Three guards and three prisoners must cross a river.
• The boat may take only one or two people at a time.
• Prisoners may never outnumber guards on either bank (though prisoners may be alone on either bank).

### Modeling using a Semantic Network

Lexicon: Consider each node to be a unique state, represented by:
- the number of prisoners and guards on the left bank
- the number of prisoners and guards on the right bank
- the side that the boat is on

Structure: links connect states reachable from one another by a single legal boat crossing. Semantic: labels on the links indicate who crosses.
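The lexicon above fully determines the state space, so a breadth-first search over it recovers the classic 11-crossing plan. The following Python sketch is a hypothetical illustration of that modeling, not part of the lesson; the state encoding (guards-left, prisoners-left, boat side) follows the lexicon, with the right bank implied by the totals:

```python
from collections import deque

def moves(state):
    gl, pl, boat = state           # guards-left, prisoners-left, boat side
    d = -1 if boat == 'L' else 1   # the boat carries people away from its side
    for g, p in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:  # 1 or 2 in the boat
        ng, np_ = gl + d * g, pl + d * p
        if 0 <= ng <= 3 and 0 <= np_ <= 3:
            # guards may never be outnumbered on either bank
            if (ng == 0 or ng >= np_) and (3 - ng == 0 or 3 - ng >= 3 - np_):
                yield ng, np_, 'R' if boat == 'L' else 'L'

def solve(start=(3, 3, 'L'), goal=(0, 0, 'R')):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in moves(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

print(solve())  # shortest plan: 12 states, i.e. 11 crossings
```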
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7131189107894897, "perplexity": 8010.840706508759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463613780.89/warc/CC-MAIN-20170530031818-20170530051818-00336.warc.gz"}
# Sound intensity

#### markosheehan

Hi, I am stuck on this question. If I knew the sound intensity I could use the formula B = (10 dB) log₁₀(I/I₀) to determine the dB, but unfortunately I am only given the power of the source in watts, so I am unsure what to do.

#### romsek (Math Team)

First we have to express the $200~W$ in $dB$. I see they refer to the picowatt for this, so

$P_0 = 10 \log_{10}\left(\dfrac{200}{10^{-12}}\right) = 143.01 ~dB$

At $100~m$ the loss due to spherical spreading is

$A = 10\log_{10}\left(\dfrac{1}{4\pi (100)^2}\right) = -50.99~dB$

Thus at $100~m$ we have

$P = 143.01 - 50.99 = 92.02~dB$

So given the choices I'd choose $92~dB$.

Last edited:

#### skeeter (Math Team)

$I = \dfrac{\text{Power}}{4\pi r^2}$

$I_0 = 10^{-12}~\dfrac{W}{m^2}$

#### markosheehan

(quoting romsek's reply) I did not realize you could substitute the 200 watts in for the intensity in the formula. I thought the units for intensity were always W/m².

#### markosheehan

(quoting skeeter's reply) Thanks for this formula. I was not given it before tackling this problem.
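Equivalently, one can plug skeeter's intensity formula straight into B = 10 log₁₀(I/I₀); a short Python check (my own addition, not from the thread, assuming the same 200 W source and r = 100 m) reproduces romsek's ≈ 92 dB:

```python
import math

P, r, I0 = 200, 100, 1e-12            # source power (W), distance (m), reference intensity
I = P / (4 * math.pi * r**2)          # intensity at 100 m, in W/m^2
print(10 * math.log10(I / I0))        # ~92.02 dB, matching 143.01 - 50.99
```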
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9741904139518738, "perplexity": 1539.4632197311817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510352.43/warc/CC-MAIN-20200403061648-20200403091648-00329.warc.gz"}
# Equivalences of $S^n$ vs. $\Omega^nS^n$

Let $H(n)$ be the group of self-homotopy-equivalences of $S^n$ preserving the basepoint. I read that $H(n)$ may be identified with "two components of $\Omega^nS^n$". What does this mean and how can I see it?

Maps in $[S^n,S^n]$ are classified by degree. For each $z\in\mathbb{Z}$ there is a component of $\Omega^nS^n$. The only maps that are homotopy equivalences are those of degree $1$ and $-1$.

EDIT: In answer to the comment below from Jason DeVito: For any finite CW complex $X$, $[S^1,X]$ is the same thing as $[S^0,\Omega X]$, since $\Omega$ and $\Sigma$ (reduced suspension) are adjoint. And based maps from $S^0$ to $Y$ pick out the points of $Y$, so $$[S^1,X]=[S^0,\Omega X].$$ Iterate this and you get $$[S^n,S^n]=[\Sigma S^{n-1},S^n]=[S^{n-1},\Omega S^n]=\cdots =[S^0,\Omega^n S^n].$$

I feel as though I'm being dense, but I have an intuitive way of thinking about $\Omega S^n$. How does one think about $\Omega^n S^n$ in terms of $[S^n,S^n]$? For example, how does a loop of loops in $S^2$ correspond to a map from $S^2$ to itself? Is each loop coming from, say, a latitude of $S^2$, with the north and south poles, up to homotopy, sent to the same point (making a trivial loop)? –  Jason DeVito Jul 3 '12 at 14:38

@JasonDeVito: Does my edit answer your question? –  Joe Johnson 126 Jul 3 '12 at 20:57

It does - I was being very dense. I've actually used these facts in answers I've posted on Mathoverflow - I'm just running really low on sleep. Sorry to be a bother! –  Jason DeVito Jul 3 '12 at 21:17

If these are homotopy classes then isn't $\mathbb Z = \pi_n(S^n) = [S^n, S^n] =[S^0, \Omega^n S^n ] = \pi_0(\Omega^n S^n)$, with the two path components corresponding to $1$ and $-1$? –  Justin Young Jul 4 '12 at 21:57
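Putting the answer and the comments together, the identification the question asks about can be summarised in one display (my paraphrase, not part of the original thread):

```latex
% Degree classifies the components of the n-fold loop space,
% and only degrees +1 and -1 are invertible up to homotopy.
\[
  \pi_0\bigl(\Omega^n S^n\bigr) \;\cong\; [S^n, S^n] \;\cong\; \pi_n(S^n) \;\cong\; \mathbb{Z},
  \qquad
  H(n) \;\subset\; \Omega^n S^n \text{ is the union of the components over } \pm 1 .
\]
```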
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7292863726615906, "perplexity": 379.6626375027276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768309.18/warc/CC-MAIN-20141217075248-00141-ip-10-231-17-201.ec2.internal.warc.gz"}