https://www2.perimeterinstitute.ca/conferences/tensor-networks-quantum-field-theories-ii?qt-pi_page_blocks_quicktabs=5
# Tensor Networks for Quantum Field Theories II

Conference Date: Tuesday, April 18, 2017 (All day) to Friday, April 21, 2017 (All day)

Scientific Areas: Quantum Matter, Quantum Foundations, Quantum Gravity, Quantum Information

Tensor networks have proven to be an extremely useful tool for examining quantum many-body systems. More recently, they have also emerged in the study of the holographic principle in quantum gravity. While these discussions and successes have involved applying tensor networks to discrete lattice systems, there has been growing interest in, and progress toward, extending these techniques to continuous quantum field theories (QFTs). The purpose of this meeting is to discuss current research in this direction. Continuous matrix product states and the continuous multi-scale entanglement renormalization ansatz (cMERA) can tackle QFTs directly, without the need to put them on a lattice. They offer a non-perturbative, wavefunctional-based, variational approach to QFTs, with a variety of potential applications, including the efficient simulation of relativistic and non-relativistic continuous systems and the study of their renormalization group flow. On the other hand, hyperbolic tensor networks, such as the MERA, the exact holographic mapping, and holographic quantum error correction codes, are currently being investigated for their conjectured relation to the AdS/CFT correspondence of quantum gravity. The continuous versions of these constructions, such as the cMERA, are natural candidates for realizing the AdS/CFT correspondence more accurately.

Topics:
• Non-relativistic QFTs
• Renormalization group
• Conformal field theory
• Holography

Registration for this event is now closed. Perimeter Institute has launched a new program whereby child care support may be available to facilitate your participation in workshops and conferences.
Please visit http://www.perimeterinstitute.ca/research/conferences/child-care-support-conference-participants for more information.

Speakers:
• Bartek Czech, Institute for Advanced Study
• Glen Evenbly, University of Sherbrooke
• Martin Ganahl, Perimeter Institute
• Jutho Haegeman, University of Ghent
• Janet Hung, Fudan University
• Robert Leigh, University of Illinois at Urbana-Champaign
• Ashley Milsted, Perimeter Institute
• Robert Myers, Perimeter Institute
• *Tobias Osborne, University of Hannover
• Xiaoliang Qi, Stanford University
• Volker Scholz, Ghent University
• Miles Stoudenmire, University of California, Irvine
• Jamie Sully, McGill University
• Brian Swingle, MIT, Harvard University & Brandeis University
• Tadashi Takayanagi, Yukawa Institute for Theoretical Physics
• Frank Verstraete, University of Ghent
• Guifre Vidal, Perimeter Institute
• Steven White, University of California, Irvine

*via teleconference

Participants:
• Javier Arguello, Perimeter Institute
• Ganapathy Baskaran, Institute of Mathematical Sciences Chennai
• Lakshya Bhardwaj, Perimeter Institute
• Arpan Bhattacharyya, Fudan University
• Dean Carmi, Perimeter Institute
• Shira Chapman, Perimeter Institute
• Jordan Cotler, Stanford University
• Bartek Czech, Institute for Advanced Study
• Clement Delcamp, Perimeter Institute
• Bianca Dittrich, Perimeter Institute
• Glen Evenbly, University of Sherbrooke
• Matthew Fishman, California Institute of Technology
• Adrian Franco Rubio, Perimeter Institute
• Adil Gangat, National Taiwan University
• Martin Ganahl, Perimeter Institute
• Jutho Haegeman, University of Ghent
• Muxin Han, Florida Atlantic University
• Markus Hauru, Perimeter Institute
• Joshuah Heath, Boston College
• Michal Heller, Albert Einstein Institute
• Qi Hu, Perimeter Institute
• Janet Hung, Fudan University
• Nick Hunter-Jones, California Institute of Technology
• Robert Jefferson, Perimeter Institute
• Robert Leigh, University of Illinois at Urbana-Champaign
• Adam Lewis, Perimeter Institute
• Shengqiao Luo, Perimeter Institute
• Hugo Marrochio, Perimeter Institute
• Alex May, University of British Columbia
• Roger Melko, Perimeter Institute & University of Waterloo
• Ashley Milsted, Perimeter Institute
• Sebastian Mizera, Perimeter Institute
• Robert Myers, Perimeter Institute
• Xiaoliang Qi, Stanford University
• Jason Pye, University of Waterloo
• Hammam Qassim, Institute for Quantum Computing
• Djordje Radicevic, Perimeter Institute
• Julian Rincon, Perimeter Institute
• Burak Sahinoglu, California Institute of Technology
• Volker Scholz, Ghent University
• Didina Serban, Perimeter Institute
• Andrei Shieber, Perimeter Institute
• Vasudev Shyam, Perimeter Institute
• Joan Simon, University of Edinburgh
• Kevin Slagle, University of Toronto
• Barbara Soda, Perimeter Institute
• Miles Stoudenmire, University of California, Irvine
• Jamie Sully, McGill University
• Brian Swingle, MIT, Harvard University & Brandeis University
• Tadashi Takayanagi, Yukawa Institute for Theoretical Physics
• Nick Van den Broeck, Perimeter Institute
• Guillaume Verdon-Akzam, Institute for Quantum Computing
• Frank Verstraete, University of Ghent
• Guifre Vidal, Perimeter Institute
• Steven White, University of California, Irvine
• Gabriel Wong, University of Virginia
• Shuo Yang, Perimeter Institute
• Beni Yoshida, Perimeter Institute
• Jose Zapata, Centro de Ciencias Matematicas
• Yijian Zou, Perimeter Institute

Tuesday, April 18, 2017

| Time | Event | Location |
| --- | --- | --- |
| 9:00 – 9:30am | Registration | Reception |
| 9:30 – 9:35am | Guifre Vidal, Perimeter Institute: Welcome and Opening Remarks | Bob Room |
| 9:35 – 10:35am | Steven White, University of California: Discretizing the many-electron Schrodinger Equation | Bob Room |
| 10:35 – 11:00am | Coffee Break | Bistro – 1st Floor |
| 11:00 – 12:00pm | Ashley Milsted, Perimeter Institute: Emergence of conformal symmetry in critical spin chains | Bob Room |
| 12:00 – 2:00pm | Lunch | Bistro – 2nd Floor |
| 2:00 – 2:40pm | Miles Stoudenmire, University of California: Applying DMRG to Non-relativistic Continuous Systems in 1D and 3D | Bob Room |
| 2:40 – 3:20pm | Martin Ganahl, Perimeter Institute: Solving Non-relativistic Quantum Field Theories with continuous Matrix Product States | Bob Room |
| 3:20 – 3:50pm | Coffee Break | Bistro – 1st Floor |
| 3:50 – 4:30pm | Jutho Haegeman, University of Ghent: Bridging Perturbative Expansions with Tensor Networks | Bob Room |

Wednesday, April 19, 2017

| Time | Event | Location |
| --- | --- | --- |
| 9:30 – 10:30am | Guifre Vidal, Perimeter Institute: The continuous multi-scale entanglement renormalization ansatz (cMERA) | Bob Room |
| 10:30 – 11:00am | Coffee Break | Bistro – 1st Floor |
| 11:00 – 12:00pm | Robert Leigh, University of Illinois at Urbana-Champaign: Unitary Networks from the Exact Renormalization of Wavefunctionals | Bob Room |
| 12:00 – 2:00pm | Lunch | Bistro – 2nd Floor |
| 2:00 – 2:40pm | Brian Swingle, MIT, Harvard University & Brandeis University: Tensor networks and Legendre transforms | Bob Room |
| 2:40 – 3:20pm | Volkher Scholz, University of Ghent: Analytic approaches to tensor networks for field theories | |
| 3:20 – 3:50pm | Coffee Break | Bistro – 1st Floor |
| 3:50 – 4:50pm | Frank Verstraete, University of Ghent: Tensor network renormalization and real space Hamiltonian flows | Bob Room |
| 5:00 – 6:00pm | Poster Session | Atrium |
| 6:00pm | Banquet | Bistro – 2nd Floor |

Thursday, April 20, 2017

| Time | Event | Location |
| --- | --- | --- |
| 9:30 – 10:30am | Jamie Sully, McGill University: Tensor Networks and Holography | Bob Room |
| 10:30 – 11:00am | Coffee Break | Bistro – 1st Floor |
| 11:00 – 12:00pm | Tadashi Takayanagi, Yukawa Institute for Theoretical Physics: Two Continuous Approaches to AdS/Tensor Network duality | Bob Room |
| 12:00 – 2:00pm | Lunch | Bistro – 2nd Floor |
| 2:00 – 3:00pm | Robert Myers, Perimeter Institute: Complexity, Holography & Quantum Field Theory | Bob Room |
| 3:00 – 3:30pm | Coffee Break | Bistro – 1st Floor |
| 3:30 – 4:10pm | Bartek Czech, Institute for Advanced Study: How Tensor Network Renormalization quantifies circuit complexity and why this is a problem of [considerable] gravity | Bob Room |

Friday, April 21, 2017

| Time | Event | Location |
| --- | --- | --- |
| 9:00 – 10:00am | Xiaoliang Qi, Stanford University: Random tensor networks and holographic coherent states | Bob Room |
| 10:00 – 10:30am | Coffee Break | Bistro – 1st Floor |
| 10:30 – 11:10am | Tobias Osborne, University of Hannover [via teleconference]: Dynamics for holographic codes | Bob Room |
| 11:10 – 11:50am | Janet Hung, Fudan University: Tensor network and (p-adic) AdS/CFT | Bob Room |
| 11:50 – 12:30pm | Glen Evenbly, University of Sherbrooke: Hyper-invariant tensor networks and holography | Bob Room |
| 12:30pm | Lunch | Bistro – 2nd Floor |

Bartek Czech, Institute for Advanced Study
How Tensor Network Renormalization quantifies circuit complexity and why this is a problem of [considerable] gravity

According to a recent proposal, in the AdS/CFT correspondence the circuit complexity of a CFT state is dual to the Einstein-Hilbert action of a certain region in the dual space-time. If the proposal is correct, it should be possible to derive Einstein's equations by varying the complexity in a class of circuits that prepare the requisite CFT state. This talk attempts such a derivation in very special settings: Virasoro descendants of the CFT2 ground state, which are dual to locally AdS3 geometries. By applying Tensor Network Renormalization to the discretized Euclidean path integral that prepares the CFT state, I will justify the recent suggestion by Caputa et al. that the complexity of a path integral is quantified by the Liouville action. The Liouville field specifies the conformal frame in which the path integral is evaluated; in the most efficient (least complex) frame, the Liouville field is closely related to entanglement entropies of CFT2 intervals. Assuming the Ryu-Takayanagi proposal, the said entanglement entropies are lengths of geodesics living in the dual space-time. The Liouville equation of motion satisfied by the minimal-complexity Liouville field is a geodesic-wise rewriting of the non-linear vacuum Einstein's equations in 3d with a negative cosmological constant.
I emphasize that this is very much work in progress; I hope the audience will help me to sharpen the arguments.

Glen Evenbly, University of Sherbrooke
Hyper-invariant tensor networks and holography

I will propose a new class of tensor network states as a model for the AdS/CFT correspondence and holography. This class will be demonstrated to retain key features of the multi-scale entanglement renormalization ansatz (MERA), in that it describes quantum states with algebraic correlation functions, has free variational parameters, and is efficiently contractible. Yet, unlike the MERA, these networks are built according to a uniform tiling of hyperbolic space, without inherent directionality or preferred locations in the holographic bulk, and thus circumvent key arguments made against the MERA as a model for AdS/CFT. Novel holographic features of this tensor network class will be examined, such as an equivalence between the causal cone C[R] and the entanglement wedge E[R] of connected boundary regions R.

Martin Ganahl, Perimeter Institute
Solving Non-relativistic Quantum Field Theories with continuous Matrix Product States

Since their proposal in the breakthrough paper [F. Verstraete, J.I. Cirac, Phys. Rev. Lett. 104, 190405 (2010)], continuous Matrix Product States (cMPS) have emerged as a powerful tool for obtaining non-perturbative ground state and excited state properties of interacting quantum field theories (QFTs) in (1+1)d. At the heart of the cMPS lies an efficient parametrization of many-body wavefunctionals directly in the continuum, which enables one to obtain ground states of QFTs via imaginary time evolution. In the first part of my talk I will give a general introduction to the cMPS formalism. In the second part, I will then discuss a new method for cMPS optimization, based on energy gradients instead of the usual imaginary time evolution. This new method overcomes several problems associated with imaginary time evolution, and allows one to perform calculations at much lower cost and higher accuracy than previously possible.

Jutho Haegeman, University of Ghent
Bridging Perturbative Expansions with Tensor Networks

We demonstrate that perturbative expansions for quantum many-body systems can be rephrased in terms of tensor networks, thereby providing a natural framework for interpolating perturbative expansions across a quantum phase transition. This approach leads to classes of tensor-network states parameterized by a few parameters with a clear physical meaning, while still providing excellent variational energies. We also demonstrate how to construct perturbative expansions of the entanglement Hamiltonian, whose eigenvalues form the entanglement spectrum, and how the tensor-network approach gives rise to order parameters for topological phase transitions.

Janet Hung, Fudan University
Tensor network and (p-adic) AdS/CFT

We will describe how the reconstruction of a bulk operator can be organised systematically. With a suitable parametrisation, an analogue of the HKLL formula emerges, involving a smearing function satisfying a Klein-Gordon equation on the graph. The parametrisation also allows us to read off interaction vertices, and to build up loop diagrams systematically. When we interpret the Bruhat-Tits tree as a tensor network, we recover (partially) features of the p-adic AdS/CFT dictionary discussed recently in the literature.

Robert Leigh, University of Illinois at Urbana-Champaign
Unitary Networks from the Exact Renormalization of Wavefunctionals

The exact renormalization group (ERG) for O(N) vector models at large N on flat Euclidean space admits an interpretation as the bulk dynamics of a holographically dual higher spin gauge theory on AdS_{d+1}.
The generating functional of correlation functions of single trace operators is reproduced by the on-shell action of this bulk higher spin theory, which is most simply presented in a first-order (phase space) formalism. This structure arises because of an enormous non-local symmetry of free fixed point theories. In this talk, I will review the ERG construction and describe its extension to the RG flow of the wavefunctionals of arbitrary states of the O(N) vector model at the free fixed point. One finds that the ERG flow of the ground state and a specific class of excited states is implemented by the action of unitary operators which can be chosen to be local. Thus the ERG equations provide a continuum notion of a tensor network. We compare this tensor network with the entanglement renormalization networks, MERA and cMERA. The ERG tensor network appears to share the general structure of cMERA but differs in important ways.

Ashley Milsted, Perimeter Institute
Emergence of conformal symmetry in critical spin chains

We demonstrate that 1+1D conformal symmetry emerges in critical spin chains by constructing a lattice ansatz H_n for (certain combinations of) the Virasoro generators L_n. The generators H_n offer a new way of extracting conformal data from the low energy eigenstates of the lattice Hamiltonian on a finite circle. In particular, for each energy eigenstate, we can now identify which Virasoro tower it belongs to, as well as determine whether it is a Virasoro primary or a descendant (and similarly for global conformal towers and global conformal primaries/descendants). The central charge is obtained from a simple ground-state expectation value. Non-universal, finite-size corrections are the main source of error. We propose and demonstrate the use of periodic Matrix Product States, together with an improved ground state solver, to reach larger system sizes. Importantly, we find that the MPS single-particle excitation ansatz accurately describes all low energy excited states.

Robert Myers, Perimeter Institute
Complexity, Holography & Quantum Field Theory

I will describe some recent work studying proposals for computational complexity in holographic theories and in quantum field theories. In particular, I will discuss some interesting properties of the new gravitational observables and of complexity in the boundary theory.

Tobias Osborne, University of Hannover
Dynamics for holographic codes

In this talk I discuss the problem of introducing dynamics for holographic codes. To do this it is necessary to take a continuum limit of the holographic code. As I argue, a convenient kinematical continuum limit space is given by Jones' semicontinuous limit. Dynamics are then furnished by a unitary representation of a discrete analogue of the conformal group known as Thompson's group T. I will describe these representations in detail in the simplest case of a discrete AdS geometry modelled by trees. Consequences such as the ER=EPR argument are then realised in this setup. Extensions to more general tessellations with a MERA structure are possible, and will be (very) briefly sketched.

Xiaoliang Qi, Stanford University
Random tensor networks and holographic coherent states

A tensor network is a constructive description of many-body quantum entangled states starting from few-body building blocks. Random tensor networks provide useful models that naturally incorporate various important features of holographic duality, such as the Ryu-Takayanagi formula for the entropy-area relation, and the operator correspondence between bulk and boundary. In this talk I will overview the setup and key properties of random tensor networks, and then discuss how to describe quantum superposition of geometries in this formalism.
By introducing quantum link variables, we show that random tensor networks on all geometries form an overcomplete basis of the boundary Hilbert space, such that each boundary state can be mapped to a superposition of (spatial) geometries. We discuss how small fluctuations around each geometry form a "code subspace" in which bulk operators can be mapped to the boundary isometrically. We further compute the overlap between distinct geometries, and show that the overlap is suppressed exponentially in an area-law fashion, consistent with the holographic principle. In summary, random tensor networks on all geometries form an overcomplete basis of "holographic coherent states" which may provide a new starting point for describing quantum gravity physics.

References:
[1] Patrick Hayden, Sepehr Nezami, Xiao-Liang Qi, Nathaniel Thomas, Michael Walter, Zhao Yang, JHEP 11 (2016) 009
[2] Xiao-Liang Qi, Zhao Yang, Yi-Zhuang You, arXiv:1703.06533

Volker Scholz, Ghent University
Analytic approaches to tensor networks for field theories

I will discuss analytic approaches to constructing tensor network representations of quantum field theories, more specifically conformal field theories in 1+1 dimensions. A key insight is that we should understand how well the tensor network can reproduce the correlation functions of the quantum field theory. Based on this measure of closeness, I will present rigorous results allowing for explicit error bounds which show that both matrix product states (MPS) and the multi-scale entanglement renormalization ansatz (MERA) do approximate conformal field theories. In particular, I will discuss the case of Wess-Zumino-Witten models.
Based on joint work with Robert Koenig (MPS), and with Brian Swingle and Michael Walter (MERA).

Miles Stoudenmire, University of California, Irvine
Applying DMRG to Non-relativistic Continuous Systems in 1D and 3D

The density matrix renormalization group works very well for one-dimensional (1D) lattice systems, and can naively be adapted for non-relativistic continuum systems in 1D by discretizing real space using a grid. I will discuss challenges inherent in this approach and successful applications. Recently, the success of the grid approach in 1D motivated us to extend the approach to 3D by treating the transverse directions with a basis set. This hybrid grid/basis-set approach allows DMRG to scale much better for long molecules, and we obtain state-of-the-art results with modest computing resources. A key component of the approach is a powerful algorithm for compressing long-range interactions into a matrix product operator, which I will present in some detail.

James Sully, Stanford Linear Accelerator Center
Tensor Networks and Holography

Brian Swingle, MIT, Harvard University & Brandeis University
Tensor networks and Legendre transforms

Tensor networks have primarily, though not exclusively, been used to describe quantum states of lattice models where there is some inherent discreteness in the system. This raises issues when trying to describe quantum field theories using tensor networks, since the field theory is continuous (or at least the regulator should not play a central role). I'll present some work in progress studying tensor networks designed to directly compute correlation functions instead of the full state. Here the discreteness arises from our choice of where and how to probe the field theory. This approach is roughly analogous to studying a Legendre transform of the state. I'll discuss the properties of such networks and show how to construct them in some cases of interest, including non-interacting fermion field theories.
Partly based on work with Volkher Scholz and Michael Walter.

Tadashi Takayanagi, Yukawa Institute for Theoretical Physics
Two Continuous Approaches to AdS/Tensor Network duality

In this talk, I would like to discuss how we can realize the correspondence between AdS/CFT and tensor networks in quantum field theories (i.e. in the continuous limit). As the first approach I will discuss a possible connection between continuous MERA and AdS/CFT. Next I will introduce the second approach, based on the optimization of the Euclidean path integral, where the structures of hyperbolic spaces and entanglement wedges emerge naturally. This second approach is closely related to the idea of tensor network renormalization.

Frank Verstraete, University of Ghent
Tensor network renormalization and real space Hamiltonian flows

We will review the topic of tensor network renormalization, relate it to real space Hamiltonian flows, and discuss the emergence of matrix product operator algebras as symmetries of the renormalization fixed points. Joint work with Matthias Bal, Michael Marien and Jutho Haegeman.

Guifre Vidal, Perimeter Institute
The continuous multi-scale entanglement renormalization ansatz (cMERA)

The first half of the talk will introduce the cMERA, as proposed by Haegeman, Osborne, Verschelde and Verstraete in 2011 [1], as an extension to quantum field theories (QFTs) in the continuum of the MERA tensor network for lattice systems. The second half of the talk will review recent results [2] that show how a cMERA optimized to approximate the ground state of a conformal field theory (CFT) retains all of its spacetime symmetries, although these symmetries are realized quasi-locally. In particular, the conformal data of the original CFT can be extracted from the optimized cMERA.

[1] J. Haegeman, T. J. Osborne, H. Verschelde, F. Verstraete, Entanglement renormalization for quantum fields, Phys. Rev. Lett. 110, 100402 (2013), arXiv:1102.5524
[2] Q. Hu, G.
Vidal, Spacetime symmetries and conformal data in the continuous multi-scale entanglement renormalization ansatz, arXiv:1703.04798

Steven White, University of California, Irvine
Discretizing the many-electron Schrodinger Equation

Large parts of condensed matter theoretical physics and quantum chemistry have as a central goal discretizing and solving the continuum many-electron Schrodinger Equation. What do we want to get from these calculations? What are the key problems of interest? What sorts of approaches are used? I'll start with a broad overview of these questions using the renormalization group as a conceptual framework. I'll then progress towards our recent tensor network approaches for the many-electron problem, discussing along the way issues of the area law, wavelet techniques and Wilson's related work, wavelets and MERA, and discretizations that combine grids and basis sets.

Arpan Bhattacharyya, Fudan University
AdS/CFT via Tensor Networks: Bulk-Boundary Reconstruction

We will demonstrate how to reconstruct a bulk operator starting from the local boundary, using a tensor network model built from a perfect tensor plus small perturbations away from it. We will show that it has features similar to those of the HKLL construction, thereby making the connection with holography (AdS/CFT) concrete. We will also demonstrate the connection between the linear part of the operator reconstruction and the wavelet transform. Further, we will show that the non-linear part of the reconstruction may give rise to the "geodesic Witten diagram". Finally, we will consider the example of the p-adic tree, where all of this can be written down explicitly.

Jordan Cotler, Stanford University
cMERA for Interacting Scalar Fields

We upgrade cMERA to a systematic variational ansatz and develop techniques for its application to interacting quantum field theories in arbitrary spacetime dimensions.
By establishing a correspondence between the first two terms in the variational expansion and the Gaussian Effective Potential, we can exactly solve for a variational approximation to the cMERA entangler. As examples, we treat scalar ϕ^4 theory and the Gross-Neveu model and extract non-perturbative behavior. We also comment on the connection between generalized squeezed coherent states and more generic entanglers.

Matthew Fishman, California Institute of Technology
Improving the Corner Transfer Matrix Renormalization Group Method with Fixed Points

We present an explicitly translationally invariant version of the Corner Transfer Matrix Renormalization Group (CTMRG) method, which allows us to reformulate the method in terms of a set of fixed point equations. This leads to speedups in the convergence time of the algorithm, particularly for systems near criticality. To show the performance of the algorithm, we present various benchmarks for contracting 2D statistical mechanics models as well as 2D quantum models written as projected entangled pair states (PEPS).

Adrian Franco Rubio, Perimeter Institute
Entanglement structure and UV regularization in cMERA

The continuous multi-scale entanglement renormalization ansatz, or cMERA, provides a variational ansatz for the ground state of a quantum field theory. Such states come equipped with an intrinsic length scale that acts as an ultraviolet cutoff. We provide evidence for the existence of this cutoff based on the entanglement structure of a particular family of cMERA states, namely Gaussian states optimized for free bosonic and fermionic CFTs. Our findings reflect that short-distance entanglement is not fully present in the ansatz states, thus hinting at ultraviolet regularization.
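For orientation, several abstracts on this page refer to the cMERA. Its defining expression, from the Haegeman-Osborne-Verschelde-Verstraete paper cited elsewhere on this page (arXiv:1102.5524), can be sketched as follows; conventions and the precise form of the entangler vary between papers:

```latex
% cMERA (schematic): a scale-dependent family of states generated from an
% unentangled reference state |\Omega\rangle
\begin{equation}
  |\Psi(s)\rangle \;=\; \mathcal{P}\exp\!\left[-i\int_{s_0}^{s} ds'\,
  \big(K(s') + L\big)\right]|\Omega\rangle ,
\end{equation}
% where L generates rescaling (dilations), K(s) is the variational
% "entangler" that introduces entanglement at momentum scales
% of order \Lambda e^{-s}, and \mathcal{P} denotes path ordering
% in the scale parameter s.
```

The variational freedom resides in the choice of $K(s)$; the Gaussian states mentioned in the abstract above correspond to entanglers quadratic in the field operators.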
Adil Gangat, National Taiwan University
Steady States of Infinite-Size Dissipative Quantum Chains via Imaginary Time Evolution

Directly in the thermodynamic limit, we show how to combine imaginary and real time evolution of tensor networks to efficiently and accurately find the nonequilibrium steady states (NESS) of one-dimensional dissipative quantum lattices governed by the Lindblad master equation. The imaginary time evolution first bypasses any highly correlated portions of the real-time evolution trajectory by directly converging to the weakly correlated subspace of the NESS, after which real time evolution completes the convergence to the NESS with high accuracy. We demonstrate the power of the method with the dissipative transverse field quantum Ising chain. We show that a crossover of an order parameter shown to be smooth in previous finite-size studies remains smooth in the thermodynamic limit.

Markus Hauru, Perimeter Institute
Topological conformal defects with tensor networks

The critical two-dimensional classical Ising model on the square lattice has two topological conformal defects: the $\mathbb{Z}_2$ symmetry defect $D_{\epsilon}$ and the Kramers-Wannier duality defect $D_{\sigma}$. These two defects implement antiperiodic boundary conditions and a more exotic form of twisted boundary conditions, respectively. On the torus, the partition function $Z_{D}$ of the critical Ising model in the presence of a topological conformal defect $D$ is expressed in terms of the scaling dimensions $\Delta_{\alpha}$ and conformal spins $s_{\alpha}$ of a distinct set of primary fields (and their descendants, or conformal towers) of the Ising conformal field theory. This characteristic conformal data $\{\Delta_{\alpha}, s_{\alpha}\}_{D}$ can be extracted from the eigenvalue spectrum of a transfer matrix $M_{D}$ for the partition function $Z_D$.
We present results from a recent paper (arXiv:1512.03846), where we investigate the use of tensor network techniques to both represent and coarse-grain the partition functions $Z_{D_\epsilon}$ and $Z_{D_\sigma}$ of the critical Ising model with either a symmetry defect $D_{\epsilon}$ or a duality defect $D_{\sigma}$. We also explain how to coarse-grain the corresponding transfer matrices $M_{D_\epsilon}$ and $M_{D_\sigma}$, from which we can extract accurate numerical estimates of $\{\Delta_{\alpha}, s_{\alpha}\}_{D_{\epsilon}}$ and $\{\Delta_{\alpha}, s_{\alpha}\}_{D_{\sigma}}$. Two key new ingredients of our approach are (i) coarse-graining of the defect $D$, which applies to any (i.e.\ not just topological) conformal defect and yields a set of associated scaling dimensions $\Delta_{\alpha}$, and (ii) construction and coarse-graining of a generalized translation operator using a local unitary transformation that moves the defect, which exists only for topological conformal defects and yields the corresponding conformal spins $s_{\alpha}$.

Qi Hu, Perimeter Institute
Continuous Multi-scale Entanglement Renormalization Ansatz

The generalization of the multi-scale entanglement renormalization ansatz (MERA) to continuous systems, or cMERA, is a variational ansatz for the ground state of quantum field theories. For a conformal field theory, it can capture the space-time symmetries of the ground state, and we can extract the conformal data from the cMERA.

Adam Lewis, Perimeter Institute
Matrix Product State Simulations of Quantum Fields in an Expanding Universe

The matrix product state (MPS) ansatz makes possible computationally efficient representations of weakly entangled many-body quantum systems with gapped Hamiltonians near their ground states, notably including massive, relativistic quantum fields on the lattice. No Wick rotation is required to apply the time evolution operator, enabling study of time-dependent Hamiltonians.
Using free massive scalar field theory on the 1+1 Robertson-Walker metric as a toy example, I present early efforts to exploit this fact to model quantum fields in curved spacetime. We use the ADM formalism to write the appropriate Hamiltonian witnessed by a particular class of normal observers. Possible applications include simulations of gravitational particle production in the presence of interactions, studies of the slicing-dependence of entanglement production, and inclusion of the expectation value of the stress-energy tensor as a matter source in a numerical relativity simulation.

Alex May, University of British Columbia
Tensor networks for dynamic spacetimes

Existing tensor network models of holography are limited to representing the geometry of constant time slices of static spacetimes. We study the possibility of describing the geometry of a dynamic spacetime using tensor networks. We find it is necessary to give a new definition of length in the network, and propose a definition based on the mutual information. We show that by associating a set of networks with a single quantum state and making use of the mutual-information-based definition of length, a network analogue of the maximin formula can be used to calculate the entropy of boundary regions.

Hugo Marrochio, Perimeter Institute
Holographic complexity and related progress towards a cMERA realization

Julian Rincon, Perimeter Institute
Continuous matrix product representations for mixed states

The continuous matrix product state (cMPS) is a powerful variational ansatz for the ground state of interacting quantum field theories in 1+1 spacetime dimensions [F. Verstraete, J.I. Cirac, Phys. Rev. Lett. 104, 190405 (2010)]. Here we propose a density matrix generalization of the cMPS, the continuous matrix product density operator (cMPDO), and investigate its suitability to represent thermal states and master equation dynamics.
We show the existence of the cMPDO by taking the continuum limit of a lattice MPDO and characterize its mathematical properties. For thermal states of field theories, we find that the cMPDO offers an accurate description of their corresponding density matrix. We argue that these results can also be extended for the case of master equation dynamics. Yijian Zou, Perimeter Institute Extracting conformal data with periodic boundary matrix product states We construct Virasoro generators on a finite critical lattice system with the periodic boundary condition, and use them to identify conformal towers. Ground state and excited states corresponding to scaling operators are found with periodic boundary matrix product states. Scaling dimensions and central charge are estimated with high accuracy from finite size scaling. ## Hyper-invariant tensor networks and holography Friday Apr 21, 2017 Speaker(s): I will propose a new class of tensor network state as a model for the AdS/CFT correspondence and holography. This class shall be demonstrated to retain key features of the multi-scale entanglement renormalization ansatz (MERA), in that they describe quantum states with algebraic correlation functions, have free variational parameters, and are efficiently contractible. Scientific Areas: ## Tensor network and (p-adic) AdS/CFT Friday Apr 21, 2017 Speaker(s): We will describe how the reconstruction of a bulk operator can be organised systematically. With a suitable parametrisation, an analogue of the HKLL formula emerges, involving a smearing function satisfying a Klein Gordon equation in the graph. The parametrisation also allows us to read off interaction vertices, and build up loop diagrams systematically. When we interpret the Bruhat-Tits tree as a tensor network, we recover (partially) features of the p-adic AdS/CFT dictionary discussed recently in the literature. 
Scientific Areas:

## Dynamics for holographic codes

Friday Apr 21, 2017
Speaker(s):

In this talk I discuss the problem of introducing dynamics for holographic codes. To do this it is necessary to take a continuum limit of the holographic code. As I argue, a convenient kinematical continuum limit space is given by Jones’ semicontinuous limit. Dynamics are then furnished by a unitary representation of a discrete analogue of the conformal group known as Thompson’s group T. I will describe these representations in detail in the simplest case of a discrete AdS geometry modelled by trees. Consequences such as the ER=EPR argument are then realised in this setup.

Scientific Areas:

## Random tensor networks and holographic coherent states

Friday Apr 21, 2017
Speaker(s):

A tensor network is a constructive description of many-body quantum entangled states starting from few-body building blocks. Random tensor networks provide useful models that naturally incorporate various important features of holographic duality, such as the Ryu-Takayanagi formula for the entropy-area relation, and the operator correspondence between bulk and boundary. In this talk I will overview the setup and key properties of random tensor networks, and then discuss how to describe quantum superposition of geometries in this formalism.

Scientific Areas:

## How Tensor Network Renormalization quantifies circuit complexity and why this is a problem of [considerable] gravity

Thursday Apr 20, 2017
Speaker(s):

According to a recent proposal, in the AdS/CFT correspondence the circuit complexity of a CFT state is dual to the Einstein-Hilbert action of a certain region in the dual space-time. If the proposal is correct, it should be possible to derive Einstein's equations by varying the complexity in a class of circuits that prepare the requisite CFT state. This talk attempts such a derivation in very special settings: Virasoro descendants of the CFT2 ground state, which are dual to locally AdS3 geometries.
Scientific Areas:

## Complexity, Holography & Quantum Field Theory

Thursday Apr 20, 2017
Speaker(s):

I will describe some recent work studying proposals for computational complexity in holographic theories and in quantum field theories. In particular, I will discuss some interesting properties of the new gravitational observables and of complexity in the boundary theory.

Scientific Areas:

## Two Continuous Approaches to AdS/Tensor Network duality

Thursday Apr 20, 2017
Speaker(s):

In this talk, I would like to discuss how we can realize the correspondence between AdS/CFT and tensor networks in quantum field theories (i.e. the continuous limit). As the first approach I will discuss a possible connection between continuous MERA and AdS/CFT. Next I will introduce the second approach based on the optimization of the Euclidean path-integral, where the structures of hyperbolic spaces and entanglement wedges emerge naturally. This second approach is closely related to the idea of tensor network renormalization.

Scientific Areas:

## Tensor Networks and Holography

Thursday Apr 20, 2017
Speaker(s):

Scientific Areas:

## Tensor network renormalization and real space Hamiltonian flows

Wednesday Apr 19, 2017
Speaker(s):

We will review the topic of tensor network renormalization, relate it to real space Hamiltonian flows, and discuss the emergence of matrix product operator algebras as symmetries of the renormalization fixed points. Joint work with Matthias Bal, Michael Marien and Jutho Haegeman.

Scientific Areas:

## Analytic approaches to tensor networks for field theories

Wednesday Apr 19, 2017
Speaker(s):

I will discuss analytic approaches to construct tensor network representations of quantum field theories, more specifically conformal field theories in 1+1 dimensions. A key insight is that we should understand how well the tensor network can reproduce the correlation functions of the quantum field theory.
Based on this measure of closeness, I will present rigorous results allowing for explicit error bounds, which show that both matrix product states (MPS) and the multi-scale entanglement renormalization ansatz (MERA) do approximate conformal field theories.

Scientific Areas:

Scientific Organizers:

• Robert Myers, Perimeter Institute
• Tadashi Takayanagi, Yukawa Institute for Theoretical Physics
• Frank Verstraete, University of Ghent
• Guifre Vidal, Perimeter Institute
• Steven White, University of California, Irvine
https://byjus.com/area-of-a-pentagon-formula
# Area of a Pentagon Formula

A pentagon is a five-sided polygon in geometry. It may be simple or self-intersecting in shape. Pentagons can be regular or irregular, and convex or concave. A regular pentagon is one with all sides and all angles equal; its interior angles are 108 degrees and its exterior angles are 72 degrees. An irregular pentagon is a shape that does not have equal sides and/or angles, and therefore its angles are not fixed in advance. A convex pentagon is one whose vertices, the points where the sides meet, all point outwards, as opposed to a concave pentagon, which has a vertex pointing inwards. Imagine a collapsed roof of a house.

The Area of a (regular) Pentagon Formula is,

A = $\frac{5}{2}$sa

Where,

s is the side of the pentagon.

a is the apothem length.

### Solved Examples

Question 1: Find the area of a pentagon of side 10 cm and apothem length 5 cm?

Solution:

Given,

s = 10 cm

a = 5 cm

Area of a pentagon = $\frac{5}{2}$ sa = $\frac{5}{2}$ $\times$ 10 $\times$ 5 cm² = 125 cm²
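The formula is easy to check in a few lines of code. The helper below is a sketch (the function names are my own, not from the article); the second function derives the apothem of a regular pentagon from its side length using the standard relation a = s / (2 tan(π/5)):

```python
import math

def pentagon_area(side, apothem):
    # Area of a regular pentagon: A = (5/2) * s * a.
    return 2.5 * side * apothem

def apothem_from_side(side):
    # Apothem of a regular pentagon: a = s / (2 * tan(pi/5)).
    return side / (2 * math.tan(math.pi / 5))

# The worked example above: s = 10 cm, a = 5 cm.
print(pentagon_area(10, 5))  # 125.0
```

Note that for a regular pentagon the side determines the apothem (for s = 10 the geometric apothem is about 6.88), so the example's s = 10, a = 5 should be read as independent given quantities.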
http://cs.stackexchange.com/questions/6202/maximum-schedulable-set-zero-lateness-deadline-scheduling
# Maximum Schedulable Set Zero-Lateness Deadline Scheduling

This is a homework problem for my introduction to algorithms course.

Recall the scheduling problem from Section 4.2 in which we sought to minimize the maximum lateness. There are $n$ jobs, each with a deadline $d_i$ and a required processing time $t_i$, and all jobs are available to be scheduled starting at time $s$. For a job $i$ to be done, it needs to be assigned a period from $s_i \geq s$ to $f_i = s_i + t_i$, and different jobs should be assigned nonoverlapping intervals. As usual, such an assignment of times will be called a schedule.

In this problem, we consider the same setup, but want to optimize a different objective. In particular, we consider the case in which each job must either be done by its deadline or not at all. We’ll say that a subset $J$ of the jobs is schedulable if there is a schedule for the jobs in $J$ so that each of them finishes by its deadline. Your problem is to select a schedulable subset of maximum possible size and give a schedule for this subset that allows each job to finish by its deadline.

(a) Prove that there is an optimal solution $J$ (i.e., a schedulable set of maximum size) in which the jobs in $J$ are scheduled in increasing order of their deadlines.

(b) Assume that all deadlines $d_i$ and required times $t_i$ are integers. Give an algorithm to find an optimal solution. Your algorithm should run in time polynomial in the number of jobs $n$, and the maximum deadline $D = \max_i d_i$.

I've solved the problem as worded with the recurrence

$$Opt(i, d) = \begin{cases} \max\{\, Opt(i-1, d - t_i) + 1,\; Opt(i-1, d) \,\} & \text{if } d \leq d_i \\ Opt(i-1, d) & \text{otherwise} \end{cases}$$

but our instructor added a new requirement that our algorithm must not be dependent on $D$. This recurrence seems like it would produce an $O(nD)$ running time if implemented with dynamic programming. I can't figure out how to reduce its running time from $O(nD)$ to $O(n^k)$.
To me it seems like it's a variation on the knapsack problem with all values equal to 1, in which case it seems like this is the best that can be done. If I'm doing something wrong could someone point me in the right direction, or if I've done everything right so far, could someone at least give me a hint as to how I can make an $O(n^k)$ recurrence or algorithm.

"Recall the scheduling problem from Section 4.2" -- what was that? Can you summarise the problem in a meaningful way, and focus on the parts essential to your question? –  Raphael Oct 31 '12 at 9:58

## 1 Answer

Use a dynamic programming algorithm to compute an $n \times n$ table $T$, where the entry $T(j,k)$ answers the question: suppose you wish to schedule $j$ out of the first $k$ jobs (with jobs indexed in increasing order of deadline, as justified by part (a)). What is the earliest time you can complete processing these?

How do we compute $T(j,k+1)$? Either the job $k+1$ is included in the best set of $j$ out of $k+1$ jobs, so $$T(j,k+1) = T(j-1,k) + t_{k+1},$$ or job $k+1$ is not included in this set, so $$T(j,k+1) = T(j,k).$$ We also have to worry about the deadline. We can do this by making the entries of $T$ where the task is impossible equal to $\infty$, and checking whether we have exceeded the deadline at every step. So the pseudocode for computing the $T(j,k+1)$ entry of the table is

if T[j-1,k] + t[k+1] > d[k+1] then T[j,k+1] = T[j,k] else T[j,k+1] = min ( T[j-1,k] + t[k+1], T[j,k] )

We initialize by setting $T(j,k) = \infty$ if $j > k$, $T(0,k) = 0$ for all $k$, and $T(1,1) = \infty$ if $t_1 > d_1$, and $T(1,1) = t_1$ otherwise.

I already used dynamic programming. If you compute the values to sub problems in an efficient order (i.e. dynamic programming), you still end up with $O(nD)$. This homework is done, and if I remember correctly I came up with an answer that worked, so I'll post that later if I have time. –  Joseph Shanak Oct 31 '12 at 14:34

@Joseph: There can be different algorithms that use dynamic programming to solve the same problem, and have different running times.
This hint is for a dynamic programming algorithm that runs in $O(n^2)$ time. I'll elaborate later if I have time. –  Peter Shor Oct 31 '12 at 15:46
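One way the hint might be implemented is sketched below (my own rendering in Python, not code from the answer). It collapses the table to one dimension: after sorting by deadline, `T[j]` plays the role of $T(j,k)$ for the prefix of jobs processed so far, and iterating `j` downward ensures each job is used at most once, as in 0/1 knapsack:

```python
INF = float("inf")

def max_schedulable(jobs):
    """jobs: list of (t_i, d_i) pairs. Returns the size of a largest
    schedulable subset, i.e. the largest j with T[j] finite."""
    # By part (a), an optimal subset can be scheduled in increasing
    # deadline order, so sort earliest-deadline-first.
    jobs = sorted(jobs, key=lambda job: job[1])
    n = len(jobs)
    T = [0] + [INF] * n  # T[0] = 0: scheduling zero jobs finishes at time 0
    for t, d in jobs:
        # Downward sweep: each job extends subsets built without it.
        for j in range(n, 0, -1):
            if T[j - 1] + t <= d:  # job still meets its deadline
                T[j] = min(T[j], T[j - 1] + t)
    return max(j for j in range(n + 1) if T[j] < INF)
```

For example, `max_schedulable([(1, 2), (2, 3), (3, 5)])` returns 2: jobs with processing times 1 and 2 fit, but all three would finish at time 6, past the last deadline of 5. The total running time is $O(n^2)$ after the $O(n \log n)$ sort, independent of $D$.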
http://www.ck12.org/physical-science/Isotopes-in-Physical-Science/lecture/Isotopes-of-Carbon/r1/
# Isotopes

## Atoms that have the same atomic number, but different mass numbers due to the number of neutrons.

Isotopes of Carbon

A video about the different types of isotopes that carbon forms.
http://www.r-bloggers.com/2012/12/page/27/
# Monthly Archives: December 2012

## write.table with proper column number in the header

December 5, 2012

Did you notice that the file generated from write.table() in R is missing a tab (\t) in the top-left corner when row.names=T (by default)? I found the solution here: http://stackoverflow.com/questions/2478352/write-table-in-r-screws-up-header-when-has-r...

## Forbes Graph Makeover Contest Entry #1

December 5, 2012

Naomi Robbins is running a graph makeover challenge over at her Forbes blog and this is my entry for the B2B/B2C Traffic Sources one (click for larger version): And, here’s the R source for how to generate it: library(ggplot2)   df = read.csv("b2bb2c.csv")   ggplot(data=df,aes(x=Site,y=Percentage,fill=Site)) + geom_bar(stat="identity") + facet_grid(Venue ~ .) + coord_flip() + opts(legend.position

## Population dynamics using INLA

December 5, 2012

Summary: Two methods of inferring (effective) population dynamics from genetic variation are compared: (i) Markov chain Monte Carlo (MCMC; using BEAST); and (ii) integrated nested Laplace approximation (INLA; using the R interface of that name). INLA runs >1000 times faster than … Continue reading →

## Bottom-up creation of data-driven capabilities: show don’t tell

December 5, 2012

I’ve been writing lately on what to do when people who make decisions in an organization say they want data-driven capabilities but then ignore or attack the results of data-driven analysis for not saying what they think the data ought to say. Some of the most productive things you can do in that situation include

## APSRtable: Getting Tables from R to $$\LaTeX$$

December 5, 2012

Oftentimes you might be writing in LaTeX and trying to push your results from R into your .tex file. This, at times, can be very frustrating. Luckily, there’s apsrtable, an R package that automatically produces the LaTeX code for your R model ...
## Modis QC Bits

December 5, 2012

In the course of working through my MODIS LST project and reviewing the steps that Imhoff and Zhang took, as well as the data preparations other researchers have taken (Neteler), the issue of MODIS Quality control bits came up. Every MODIS HDF file comes with multiple SDS or multiple layers of data. For

## Function Closures and S4 Methods

December 4, 2012

This brief tutorial illustrates how to combine S4 object oriented capabilities with function closures in order to develop classes with built in methods. Thanks to Hadley Wickham for the great contribution of material and tutorials made available on the web and to Bill Venables and Stefano Iacus for their kind reviews. Regular … Continue reading →

## Plotting Likert Scales

December 4, 2012

Graphs can provide an excellent way to emphasize a point and to quickly and efficiently show important information. Sadly, poor graphs can be a good way to waste space in an article, take up time in a presentation, and waste a lot of ink all while providing little to no information. Excel has made it

## Shiny Server – Earthshattering News

December 4, 2012

As you probably know, I am one of the strongest proponents of the Shiny package for developing interactive web applications. Amongst the latest news from RStudio is that what was planned to be commercial software will now be free and Open Source (AGPLv3 license). To celebrate this momentous announcement, I have produced an Earthquake app.
https://brilliant.org/problems/breaking-the-rod/
# Breaking The Rod!!

A wooden rod is 6 m long. Two points A and B on the rod are chosen uniformly and independently. The rod is then cut at both points to obtain a smaller rod AB. Find the chance that the rod AB will be at least 1 m in length.

If the chance can be written as $$\frac{a}{b}$$, where $$a, b$$ are positive coprime integers, find $$a + b$$.

Image credit: Adapted from Wikipedia, ArnoldReinhold.
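A quick sanity check of the geometric probability involved (my own addition, not part of the problem statement — skip it if you want to solve the problem yourself): the rod AB has length |A − B|, so we want P(|A − B| ≥ 1) for A, B uniform on [0, 6]. In the 6 × 6 square of outcomes, that event is two right triangles with legs 6 − 1 = 5:

```python
import random
from fractions import Fraction

# Exact: two triangles of legs (6 - 1) inside the 6 x 6 square,
# so P = 2 * (1/2) * 5^2 / 6^2 = (5/6)^2.
L, m = 6, 1
p_exact = Fraction(L - m, L) ** 2
a, b = p_exact.numerator, p_exact.denominator
print(p_exact, a + b)  # 25/36 61

# Monte Carlo confirmation of the same probability.
random.seed(1)
trials = 200_000
hits = sum(abs(random.uniform(0, L) - random.uniform(0, L)) >= m
           for _ in range(trials))
print(hits / trials)  # close to 25/36 ≈ 0.694
```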
https://rd.springer.com/article/10.1007/s00229-014-0657-y
Manuscripta Mathematica, Volume 144, Issue 3–4, pp 439–456

# On the number of reducible polynomials of bounded naive height

• Artūras Dubickas

Article

## Abstract

We prove an asymptotical formula for the number of reducible integer polynomials of degree d and of naive height at most T when $${T \to \infty}$$. The main term turns out to be of the form $${\kappa_d T^d}$$ for each $${d \geq 3}$$, where the constant $${\kappa_d}$$ is given in terms of some infinite Dirichlet series involving the volumes of symmetric convex bodies in $${\mathbb{R}^d}$$. For d = 2, we prove that there are asymptotically $${\kappa_2 T^2 \log T}$$ of such polynomials, where $${\kappa_2 := 6(3\sqrt{5} + 2\log(1+\sqrt{5}) - 2\log 2)/\pi^2}$$. Earlier results in this direction were given by van der Waerden, Pólya and Szegö, Dörge, Chela, and Kuba.

Mathematics Subject Classification: 11R09 · 12E05

## References

1. Apostol T.M.: Introduction to Analytic Number Theory. Springer, New York (1998)
2. Chela R.: Reducible polynomials. J. London Math. Soc. 38, 183–188 (1963)
3. Chern S.-J., Vaaler J.D.: The distribution of values of Mahler’s measure. J. Reine Angew. Math. 540, 1–47 (2001)
4. Dörge K.: Abschätzung der Anzahl der reduziblen Polynome. Math. Ann. 160, 59–63 (1965)
5. Dubickas A.: Polynomials irreducible by Eisenstein’s criterion. Appl. Algebra Eng. Commun. Comput. 14, 127–132 (2003)
6. Fel’dman, N.I.: Approximations of algebraic numbers. Moskov. Gos. Univ., Moscow (in Russian) (1981)
7. Heyman R., Shparlinski I.E.: On the number of Eisenstein polynomials of bounded height. Appl. Algebra Eng. Commun. Comput. 24, 149–156 (2013)
8. Knobloch H.W.: Zum Hilbertschen Irreduzibilitätssatz. Abh. Math. Sem. Univ. Hamburg 19, 176–190 (1955)
9. Konyagin S.V.: On the number of irreducible polynomials with 0,1 coefficients. Acta Arith. 88, 333–350 (1999)
10. Koyuncu F., Özbudak F.: Probabilities for absolute irreducibility of multivariate polynomials by the polytope method. Turkish J. Math. 35, 367–377 (2011)
11. Kuba G.: On the distribution of reducible polynomials. Math. Slovaca 59, 349–356 (2009)
12. Masser, D., Vaaler, J.D.: Counting algebraic numbers with large height. I. In: Schlickewei, Hans Peter et al. (eds.) Diophantine approximation. Festschrift for Wolfgang Schmidt, Vienna, Austria, 2003. Springer, Developments in Mathematics 16, 237–243 (2008)
13. Masser D., Vaaler J.D.: Counting algebraic numbers with large height. II. Trans. Am. Math. Soc. 359, 427–455 (2007)
14. Peter M.: Lattice points in convex bodies with planar points on the boundary. Monatsh. Math. 135, 37–57 (2002)
15. Pólya G., Szegö G.: Problems and Theorems in Analysis, Vol II. Springer, Berlin, Heidelberg, New York (1976)
16. Schanuel S.H.: Heights in number fields. Bull. Soc. Math. France 107, 433–449 (1979)
17. Schinzel A.: Polynomials with special regard to irreducibility. CUP, Cambridge (2000)
18. Schmidt W.M.: Northcott’s theorem on heights I. A general estimate. Monatsch. Math. 115, 169–181 (1993)
19. van der Waerden B.L.: Die Seltenheit der Gleichungen mit Affekt. Math. Ann. 109, 13–16 (1934)
20. Widmer M.: Counting primitive points of bounded height. Trans. Am. Math. Soc. 362, 4793–4829 (2010)
https://www.analyzemath.com/calculus/Integrals/rules-of-integrals-with-examples-and-solutions.html
# Rules of Integrals with Examples

A tutorial, with examples and detailed solutions, on using the rules of indefinite integrals in calculus. A set of questions with solutions is also included. In what follows, C is a constant of integration and can take any value.

## 1 - Integral of a power function: f(x) = x^n

∫ x^n dx = x^(n+1) / (n + 1) + C

Example: Evaluate the integral ∫ x^5 dx

Solution: ∫ x^5 dx = x^(5+1) / (5 + 1) + C = x^6 / 6 + C

## 2 - Integral of a function f multiplied by a constant k: k f(x)

∫ k f(x) dx = k ∫ f(x) dx

Example: Evaluate the integral ∫ 5 sin(x) dx

Solution: According to the above rule

∫ 5 sin(x) dx = 5 ∫ sin(x) dx

∫ sin(x) dx is given by 2.1 in the table of integral formulas, hence

∫ 5 sin(x) dx = - 5 cos(x) + C

## 3 - Integral of a Sum of Functions.

∫ [f(x) + g(x)] dx = ∫ f(x) dx + ∫ g(x) dx

Example: Evaluate the integral ∫ [x + e^x] dx

Solution: According to the above property

∫ [x + e^x] dx = ∫ x dx + ∫ e^x dx

∫ x dx is given by 1.3 and ∫ e^x dx by 4.1 in the table of integral formulas, hence

∫ [x + e^x] dx = x^2 / 2 + e^x + C

## 4 - Integral of a Difference of Functions.

∫ [f(x) - g(x)] dx = ∫ f(x) dx - ∫ g(x) dx

Example: Evaluate the integral ∫ [2 - 1/x] dx

Solution: According to the above property

∫ [2 - 1/x] dx = ∫ 2 dx - ∫ (1/x) dx

∫ 2 dx is given by 1.2 and ∫ (1/x) dx by 1.4 in the table of integral formulas, hence

∫ [2 - 1/x] dx = 2x - ln |x| + C

## 5 - Integration by Substitution.

∫ [f(u) du/dx] dx = ∫ f(u) du

Example: Evaluate the integral ∫ (x^2 - 1)^20 2x dx

Solution: Let u = x^2 - 1, so du/dx = 2x, and the given integral can be written as

∫ (x^2 - 1)^20 2x dx = ∫ u^20 (du/dx) dx = ∫ u^20 du   (according to the above property)

= u^21 / 21 + C = (x^2 - 1)^21 / 21 + C

## 6 - Integration by Parts.

∫ f(x) g'(x) dx = f(x) g(x) - ∫ f'(x) g(x) dx

Example: Evaluate the integral ∫ x cos(x) dx

Solution: Let f(x) = x and g'(x) = cos(x), which gives f'(x) = 1 and g(x) = sin(x). From the integration by parts formula above,

∫ x cos(x) dx = x sin(x) - ∫ 1 · sin(x) dx = x sin(x) + cos(x) + C

## More Questions with Solutions

Use the table of integral formulas and the rules above to evaluate the following integrals. [Note that you may need to use more than one of the above rules for one integral.]

1. ∫ (1/2) ln(x) dx
2. ∫ [sin(x) + x^5] dx
3. ∫ [sinh(x) - 3] dx
4. ∫ - x sin(x) dx
5. ∫ sin^10(x) cos(x) dx

## Solutions to the Above Questions

1. This is the integral of ln(x) multiplied by 1/2, so we use rule 2 above to obtain:

∫ (1/2) ln(x) dx = (1/2) ∫ ln(x) dx

We now use formula 4.3 in the table of integral formulas to evaluate ∫ ln(x) dx. Hence

∫ (1/2) ln(x) dx = (1/2) (x ln(x) - x) + C

2. Use rule 3 (integral of a sum) to obtain

∫ [sin(x) + x^5] dx = ∫ sin(x) dx + ∫ x^5 dx

We use formula 2.1 in the table of integral formulas to evaluate ∫ sin(x) dx and rule 1 above to evaluate ∫ x^5 dx. Hence

∫ [sin(x) + x^5] dx = - cos(x) + x^6 / 6 + C

3. Use rule 4 (integral of a difference) to obtain

∫ [sinh(x) - 3] dx = ∫ sinh(x) dx - ∫ 3 dx

We use formula 7.1 in the table of integral formulas to evaluate ∫ sinh(x) dx and the integral of the constant 3 to obtain

∫ [sinh(x) - 3] dx = cosh(x) - 3x + C

4. The integrand is the product of the two functions x and sin(x), so we try integration by parts (rule 6) as follows:

Let f(x) = x and g'(x) = sin(x), and therefore g(x) = - cos(x). Hence

∫ - x sin(x) dx = - ∫ f(x) g'(x) dx = - ( f(x) g(x) - ∫ f'(x) g(x) dx )

Substitute f(x), f'(x), g(x) and g'(x) by x, 1, - cos(x) and sin(x) respectively to write the integral as

= - x (- cos(x)) + ∫ 1 · (- cos(x)) dx

Use formula 2.2 in the table of integral formulas to evaluate ∫ cos(x) dx and simplify to obtain

= x cos(x) - sin(x) + C

5. Let u = sin(x), and therefore du/dx = cos(x). Hence the given integral can be written as

∫ sin^10(x) cos(x) dx = ∫ ( u^10 du/dx ) dx

Use rule 5 to write

= ∫ u^10 du

which gives

= u^11 / 11 + C

Substitute u by sin(x) to obtain

= (1/11) sin^11(x) + C

## More References and Links

Table of Integral Formulas
Integrals and their applications in calculus.
Evaluate integrals.
Integration by Substitution.
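As a numerical cross-check of the worked examples above (my own addition, not part of the original tutorial), each claimed antiderivative can be differentiated with a central difference and compared against its integrand:

```python
import math

def deriv(F, x, h=1e-6):
    # Central-difference approximation of F'(x).
    return (F(x + h) - F(x - h)) / (2 * h)

# Rule 1 example: d/dx [x^6 / 6] should equal x^5.
assert abs(deriv(lambda x: x**6 / 6, 1.3) - 1.3**5) < 1e-4

# Integration by parts example: d/dx [x sin(x) + cos(x)] should equal x cos(x).
assert abs(deriv(lambda x: x * math.sin(x) + math.cos(x), 0.7)
           - 0.7 * math.cos(0.7)) < 1e-6

# Substitution example: d/dx [sin^11(x) / 11] should equal sin^10(x) cos(x).
assert abs(deriv(lambda x: math.sin(x)**11 / 11, 0.5)
           - math.sin(0.5)**10 * math.cos(0.5)) < 1e-6

print("all antiderivatives check out")
```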
https://bioinformatics.stackexchange.com/questions/2769/how-can-i-specify-to-deseq2-to-only-perform-comparisons-on-which-i-am-interested
How can I specify to DESeq2 to only perform the comparisons I am interested in? I am currently performing a large RNA-seq analysis from mice PBMCs. The dataset contains around 6,000 transcriptomic profiles and I would like to use DESeq2 to identify the sets of differentially expressed genes in the different conditions. In total, I have 100 biological stimulations, and for each stimulation I have 30 control samples and 30 samples treated with a molecule of interest (so I have: (30 controls + 30 treated) * 100 stimulations = 6,000 samples). I want to identify, for each stimulation, the set of differentially expressed genes between control samples and treated samples. I do not want to compare samples from the different stimulations. In total, I would thus like to have 100 lists of differentially expressed genes. I have started to use DESeq2 to identify these lists, but DESeq2 is spending a lot of time performing comparisons I am not interested in (comparisons between the biological stimulations). For now, I have a table sampleTable which looks like this: I am using DESeq2 with the following command: DESeqDataSetFromHTSeqCount(sampleTable = sampleTable, directory = directory, design = ~ condition) Could you please help me with that? How can I tell DESeq2 not to perform the comparisons between the biological stimulations, but rather between control and treated samples within each stimulation? Thank you and best, • I am not sure that you can do that... why are you not interested in comparing control and treated across the different stimulations? – Henry Nov 4 '17 at 12:57 You can specify the exact comparisons you want in the results() function.
So:
dds = DESeqDataSetFromHTSeqCount(sampleTable = sampleTable, directory = directory, design = ~ condition)
dds = DESeq(dds)
res = results(dds, contrast = c("condition", "treatment1", "control1"))
Note that condition should not be "stim001-control1, stim001-control2, etc.", but instead "stim001control, stim001control, stim001control, stim001treatment, etc.". Don't put minus signs in your levels and don't number your samples in them (they're not in different groups). The last command would be repeated for each of the comparisons. Note that this will still be quite slow due to the number of samples and the number of groups. With so many samples, you might just make a separate sampleTable for each of the comparisons you want to make. Each would then only contain the samples relevant for that comparison, so 60 total in each. Alternatively, if you want to fit all of the groups at once then it's likely that limma/voom will prove to have better performance. It uses different math that happens to be quicker with large models. BTW, if the slowness is coming from DESeqDataSetFromHTSeqCount() then you can merge everything on the command line into a single matrix. A couple of lines of python or even the join command can do that quickly enough.
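The "separate sampleTable per comparison" suggestion can be written as a simple loop. This is only a sketch in R: the column names `stimulation` and `treatment` and the level names `treated`/`control` are assumptions, since the question's sampleTable is not shown.

```r
# Sketch: fit DESeq2 once per stimulation, using only that stimulation's
# 60 samples. Column names `stimulation` and `treatment` are assumed.
library(DESeq2)

res_list <- list()
for (stim in unique(sampleTable$stimulation)) {
  sub <- sampleTable[sampleTable$stimulation == stim, ]   # the 60 relevant samples
  dds <- DESeqDataSetFromHTSeqCount(sampleTable = sub,
                                    directory   = directory,
                                    design      = ~ treatment)
  dds <- DESeq(dds)
  res_list[[stim]] <- results(dds,
                              contrast = c("treatment", "treated", "control"))
}
```

Each iteration fits a small two-level model, so the cross-stimulation coefficients are never estimated at all, which sidesteps the slowness described in the question.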
http://tug.org/pipermail/texhax/2015-August/021874.html
# [texhax] Why is \rm deprecated?

Michael Barr barr at math.mcgill.ca
Tue Aug 18 01:49:36 CEST 2015

The fact that it is is clear. I would like to know why it is deprecated. I continue to use it because it is familiar and reduces the number of new things I have to use. It works well in both text and math and, as was clear from my original question, in some situations it works better than the alternatives. There must be some substantive reason it is deprecated. In a similar vein, why is it recommended to use \( ... \) for in-text equations and \[ ... \] for displays? I have defined a single function that lays down a pair of $ and leaves the cursor between them. Hit it twice and I am in display mode. Meanwhile I use \( as a macro for \left( and similarly for the others. TeX is a tool, not a religion. Why are people trying to enforce conformity? Foolish hobgoblins.
https://tex.stackexchange.com/questions/7226/using-pgfplot-externalize-with-eps-output
# Using pgfplots externalize with .eps output

I'm trying to use pgfplots' externalization feature because I've got many large plots that I don't want to regenerate every time I typeset my thesis. Other components of my thesis require compiling via latex > ps > pdf (i.e. "latex" followed by "dvips" followed by "ps2pdf"), so I'm trying to stick with that. I'm having trouble getting the externalization to work, unfortunately. Here is my minimal example, which I have produced by following the instructions in the pgfplots manual for getting .eps output (pg 240).

\documentclass{article}
\usepackage{pgfplots}
\usetikzlibrary{pgfplots.groupplots}
\usepgfplotslibrary{external}
\tikzexternalize[shell escape=-enable-write18]
\tikzset{external/system call={latex \tikzexternalcheckshellescape -halt-on-error -interaction=batchmode -jobname "\image" "\texsource" & dvips -o "\image".ps "\image".dvi}}
\begin{document}
\begin{tikzpicture}
\begin{axis}[xlabel=x,ylabel=y]
\end{axis}
\end{tikzpicture}
\end{document}

When I do the compile, the external .dvi and .ps files for the figure are successfully generated and there are no errors, but the figure is absent from the actual compiled document .pdf. The pgfplots manual suggested I would get .eps output, but this doesn't happen (perhaps this is the trouble?). Anyone have any tips for what I'm doing wrong?

• Wild guess here, but I've noticed some funny things in pgfplots go away with a second pass. I haven't verified that from the documentation, but maybe it's needed. Can you change the system call to run latex twice? – Matthew Leingang Dec 16 '10 at 15:31
• At your suggestion, I tried using the same call to latex twice in a row and it didn't fix the problem. – Matt Williams Dec 16 '10 at 16:02
• Well, it was worth a shot... – Matthew Leingang Dec 16 '10 at 16:18
• Here's one solution that probably wasn't as intended by the pgfplots people (so if anyone has other thoughts, I'd still really love to hear them)...
If you add & perl ps2eps.pl -f "\image".ps to the end of the system call, assuming you have perl and ps2eps installed (with ps2eps.pl in your current directory in this example), it will compile as desired with no errors. – Matt Williams Dec 16 '10 at 17:32
• This works for me with TeX Live 2010 when I remove the line \tikzexternalize[shell escape=-enable-write18]. Which TeX system do you use, and can you post your log? – Joseph Wright Dec 16 '10 at 21:01

It looks like you made two typos in the \tikzset line; changing it to:

\tikzset{external/system call={latex \tikzexternalcheckshellescape -halt-on-error -interaction=batchmode -jobname "\image" "\texsource" && dvips -o "\image".eps "\image".dvi}}

will make the shell command generate an .eps file from each .dvi file after running the latex command to create that .dvi file. (I replaced a .ps in your file with a .eps, and a & with a &&. Interestingly, the & seemed to work in MiKTeX to invoke latex asynchronously; I wonder why it did that? I don't think cmd.exe supports that, though it does support &&.) But unfortunately none of that seems to help: though I end up with a postscript file for the whole document that includes the contents of the .eps, it doesn't seem to display any of it, though the .eps file renders fine on its own, and I don't get any error messages about it either :-(.

Usually people forget to properly activate unrestricted shell escape:

latex -shell-escape foo.tex
https://www.physicsforums.com/threads/solve-this-simple-equation.748063/
# Solve this simple equation

1. Apr 10, 2014 — skrat

1. The problem statement, all variables and given/known data
$z^4-6z^2+1=0$

2. Relevant equations

3. The attempt at a solution
$z^{2}=\frac{6\pm \sqrt{36-4}}{2}=3\pm 2\sqrt{2}$. Now this has to be wrong already, yet I don't know why. Wolfram gives me the solutions $1+\sqrt{2}$, $1-\sqrt{2}$, $-1+\sqrt{2}$, $-1-\sqrt{2}$... Now how can I find them without Wolfram Alpha?

2. Apr 10, 2014 — jbunniii

No, it's fine. Note that you have solved for $z^2$, not $z$. If the goal is to solve for $z$, you need to do some more work. Start with one of your solutions, say $z^2 = 3 + 2 \sqrt{2}$. There are two values of $z$ that satisfy this equation, namely $z = \pm \sqrt{3 + 2 \sqrt{2}}$. Note that this expression can be simplified. Indeed, $(1 + \sqrt{2})^2 = 3 + 2 \sqrt{2}$, so $\sqrt{3 + 2 \sqrt{2}} = 1 + \sqrt{2}$ is one solution. See if you can find the other solutions. There should be four since the original polynomial $z^4 - 6z^2 + 1$ has degree 4.

3. Apr 10, 2014 — skrat

See, I couldn't see that. Now it is obvious. Thanks!

4. Apr 10, 2014 — LCKurtz

Your original question asked how to get the roots without Wolfram. Once you know that you have$$z=\pm\sqrt{3+2\sqrt 2}$$it is easy enough to verify that $z=1+\sqrt 2$ works, but it's another thing to find that form in the first place. You might find it instructive to set$$(a+b\sqrt 2)^2 = 3+2\sqrt 2$$expand the left-hand side, equate the rational and irrational parts, and solve for $a$ and $b$.
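The four candidate roots can be sanity-checked numerically. The sketch below (plain Python, not part of the thread) plugs each of them back into $z^4 - 6z^2 + 1$.

```python
import math

def p(z):
    # the quartic from the problem statement
    return z**4 - 6 * z**2 + 1

r = math.sqrt(2)
for z in (1 + r, 1 - r, -1 + r, -1 - r):
    assert abs(p(z)) < 1e-9   # each candidate is (numerically) a root
```

Since the polynomial is even, the roots come in ± pairs, which is why checking $1+\sqrt 2$ already gives you $-1-\sqrt 2$ for free.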
https://wap.sciencenet.cn/blog-669170-1306531.html
## Control is hopeless
http://blog.sciencenet.cn/u/controlhopeless — I just wonder how things are put together and then what happens
1653 reads | 2021-10-2 17:37

[Translated from Chinese] Back to "interesting things found while reading". The previous story concerned the prehistory of C. N. Yang's non-Abelian gauge theory. In 1988 the first Oscar Klein Medal was awarded to C. N. Yang, and Yang delivered the Klein memorial lecture.

Oscar Klein's forgotten paper "On the theory of charged fields" was presented in French at conferences in Poland and France in 1938 and 1939, and was translated into English in 1986:

"Sur la theorie des champs associes a des particules chargees," Les Nouvelles Theories de la Physique, Collection Scientific, Institute International de Cooperation Intellectuel, Paris (1939), p. 81. Proceedings of Symposium in Warsaw, 30 May - 3 June, 1938.

Reproduced here are the passages from the first Oscar Klein lecture that mention this work.

1. English translation from the Swedish, by Inga Fischer-Hjalmars and Bertel Laurent, Department of Physics, Stockholm University, Stockholm, Sweden:

"Gauge Theory

A less well-known work, carried out by Oskar Klein in 1939, points far ahead in time. The essential idea in this work was repeated by C. N. Yang and R. L. Mills in 1954, obviously without any knowledge of Oskar Klein's achievements fifteen years earlier. The work of Yang and Mills has later become the model of methods that are at present mainly employed for theoretical approaches to elementary particles as well as gravitation. Presumably, there are several reasons why so little attention has been paid to the above-mentioned work by Oskar Klein: It is included in a publication that is difficult to obtain (and written in French) and it appeared perhaps too early. Typically, it was inspired by the five-dimensional theory; a theory with which most elementary particle physicists were not very well acquainted at that time."

2. Yang's response in his own lecture:

"I have mentioned above the paper [13] by O. Klein which was his report at a 1938 conference in Warsaw.
It is appropriate at this Oskar Klein Memorial Lecture to pay tribute to this very remarkable paper which presented a theory of fields satisfying equations that contain nonlinear terms very similar to those of Eq. (1) above. How did Klein arrive at these terms? The answer is: he had started from the Kaluza-Klein theory which, being based on general relativity, had nonlinear terms. Unfortunately, as already mentioned in remark (1), general relativity (i.e. tangent bundle) does not easily lend itself to generalizations to other gauge fields. Thus Klein did not discover non-Abelian gauge symmetry and his remarkable paper did not produce strong impact."

[Translated from Chinese] Developing new work independently, unaware of earlier work, is not unusual. What is unusual is the effort, once the earlier work becomes known, to restore the historical record.
https://dmoj.ca/problem/coci21c1p4/editorial
## Editorial for COCI '21 Contest 1 #4 Set

Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist. Submitting an official solution before solving the problem yourself is a bannable offence.

For the first subtask, it is sufficient to try out every triplet of cards and check whether it forms a set, in $O(n^3)$. For the second subtask, one should notice that if we choose two cards, the third card needed for a set is uniquely determined. Namely, for each position, if the corresponding characters in the first two cards are the same, then the third character has to be equal to them, and if they are different, the third character will be the remaining one. With time complexity $O(n^2)$, we can now try out every pair of cards and check whether there exists a third card which will form a set with them. Of course, we should first record in an array of size $3^k$ for each type of card whether it shows up in the input.

From now on, we will assume that the characters in question are $0, 1, 2$ instead of $1, 2, 3$, so that we can view each card as a number in base $3$.

Let's try to come up with a simple rule that associates a pair of characters with the third character needed for a set. In other words, we are looking for a rule $(x, y) \mapsto z$. We can notice that $z \equiv -x - y \pmod 3$, or equivalently, the characters $x, y, z$ form a set if and only if $x + y + z \equiv 0 \pmod 3$.

Now let's make an analogy with the bitwise xor operation. This operation is denoted by $\oplus$ and represents addition modulo $2$ with the digits in base $2$. In this problem, we will consider addition modulo $3$ with the digits in base $3$, which we will denote by $\oplus_3$. Having in mind the things mentioned above, three cards form a set if and only if $a \oplus_3 b \oplus_3 c = 0$, where the cards $a, b, c$ are thought of as numbers in base $3$.

Let's fix some card $a$ and try to figure out how many pairs of cards $b$ and $c$ exist so that $a \oplus_3 b \oplus_3 c = 0$.
For each individual card $a$, we can calculate this in $O(3^k)$, so the total complexity is $O(9^k)$, but we will show a way to find the answer for all cards simultaneously in the complexity $O(3^k \cdot k)$.

If instead of the operation $\oplus_3$ we had the operation $+$, the problem could be solved by calculating the desired $+$-convolution with fast multiplication of polynomials using FFT. In this problem, we will therefore try to modify this idea to calculate the $\oplus_3$-convolution. The operations $+$ and $\oplus_3$ are very similar and the $\oplus_3$-convolution is calculated in the same manner as the xor-convolution. Thus, what follows is a description of the modification of the 'fast Walsh–Hadamard transformation' (FWHT) to work modulo $3$. More about this can be found on this Codeforces blog. (P.S. the day before the contest, another great blog on the topic appeared on Codeforces: link)

At a high level, the idea is the following:

• The given deck of cards is represented by a polynomial.
• Each term in the polynomial represents a certain type of card.
• The coefficients of the polynomial represent the number of times a card appears in the deck. In the beginning, all of the coefficients are either $0$ or $1$.
• We will square the polynomial to get new coefficients which represent the result of the desired convolution. (For now, this corresponds to the operation $+$.)
• Before multiplying, we will convert the polynomial from coefficient form to point-value form as a sequence of calculated values, which is more desirable for multiplication.
• The result of the multiplication should be converted back to coefficient form.

Regular multiplication of polynomials corresponds to the operation $+$, that is $x^a \cdot x^b = x^{a+b}$. We would like to make a modification so that $x^a \cdot x^b = x^{a \oplus_3 b}$. We need to make two modifications:

1. Addition should be done separately for each digit.
2. Addition should be done modulo $3$.

Problem 1 can be solved by introducing a polynomial with $k$ variables $x_1, x_2, \ldots, x_k$.
For example, a pair of cards $a$ and $b$ (that is, with digits $a_1 a_2 \ldots a_k$ and $b_1 b_2 \ldots b_k$) is represented by the polynomial $x_1^{a_1} x_2^{a_2} \cdots x_k^{a_k} + x_1^{b_1} x_2^{b_2} \cdots x_k^{b_k}$. Multiplication now corresponds to addition of the digits separately. Looking at each of the variables separately, the polynomial is of degree at most $2$, but when squaring, the degree might grow larger.

To convert a polynomial (which remember has $3^k$ coefficients) to point-value form, we will calculate the value of the polynomial at $3^k$ different points. We will choose three values for each variable and calculate the polynomial for each possible combination of them, of which there are also $3^k$. In the implementation, we will therefore have a transformation that converts a sequence of $3^k$ coefficients to a sequence of $3^k$ calculated values.

Since the product of two polynomials has more coefficients than the original polynomials, if we wanted to calculate the true product, we would have had to extend the polynomials to bigger powers, making $5^k$ new coefficients. However, to solve problem 2, that is precisely what we will not do. When doing the inverse transformation which returns the coefficient form, we will purposefully demand that the result has $3^k$ coefficients. Additionally, for the values, we will choose the third roots of unity (both real and complex), that is the numbers $1$, $\varepsilon = e^{2\pi i/3}$ and $\varepsilon^2$, so that $\varepsilon^3 = 1$. The effect of these two things is that the coefficients of larger powers in the resulting product will get added to the coefficients of the smaller powers - precisely with the smallest power that is the same modulo $3$ (because of the choice of the roots of unity). Thus, the powers will reduce modulo $3$ and we will get exactly the desired coefficients of the $\oplus_3$-convolution.

Let us illustrate this on an example with $k = 2$. For the polynomial, we take $P(x_1, x_2)$, which can also be written as $P(x_1, x_2) = p_0(x_2) + p_1(x_2)\, x_1 + p_2(x_2)\, x_1^2$, where $p_0, p_1, p_2$ are polynomials in $x_2$ alone. First, we will apply the transformation on each of the polynomials $p_0, p_1, p_2$ separately.
Using the label $\varepsilon = e^{2\pi i/3}$, we have the three values $p_j(1)$, $p_j(\varepsilon)$, $p_j(\varepsilon^2)$ for each $p_j$. In this way, the sequence of coefficients turns into a new list of coefficients, which we will label $b_0, b_1, \ldots, b_8$. We would like to obtain the list of values of $P(x_1, x_2)$ at all nine combinations $x_1, x_2 \in \{1, \varepsilon, \varepsilon^2\}$. What remains is to replace the triplets with new values, in the same manner as we did when calculating the values of the $p_j$. For bigger values of $k$, the process is analogous. In each of the iterations, we gradually transform the coefficients in the described way, each time jumping by a larger power of $3$. The inverse transformation is almost identical to the original, having the same formula with $\varepsilon$ replaced by $\varepsilon^{-1} = \varepsilon^2$ and with an overall division by $3^k$ at the end. It should be noted that in the implementation, there is no need to make calculations with complex numbers, whose real and imaginary parts are stored with a floating point type. Instead, we can notice that at each moment every number will be of the form $a + b\varepsilon$, where $a$ and $b$ are whole numbers which fit in long long int. When calculating, it is useful to keep in mind that $\varepsilon^2 = -1 - \varepsilon$ and $\varepsilon^3 = 1$.
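The transform described above can be sketched compactly. The Python below (mine, not the official solution; it uses floating-point complex arithmetic instead of the exact $a + b\varepsilon$ representation, which is fine for small sizes) applies the radix-3 butterfly digit by digit and counts ordered triples whose digitwise sum is $0$ modulo $3$.

```python
import cmath

def fwht3(a):
    """In-place digitwise mod-3 transform; len(a) must be a power of 3."""
    n = len(a)
    w = cmath.exp(2j * cmath.pi / 3)   # primitive third root of unity
    step = 1
    while step < n:
        for start in range(0, n, 3 * step):
            for i in range(start, start + step):
                x, y, z = a[i], a[i + step], a[i + 2 * step]
                # 3-point DFT on one ternary digit
                a[i] = x + y + z
                a[i + step] = x + w * y + w * w * z
                a[i + 2 * step] = x + w * w * y + w * z
        step *= 3

def ordered_zero_triples(cnt):
    """Ordered triples (x, y, z), repetition allowed, with x +3 y +3 z = 0 digitwise."""
    a = [complex(c) for c in cnt]
    fwht3(a)
    # convolution theorem: the 0-th coefficient of the triple convolution
    # is the mean of the cubed transform values
    return round(sum((v ** 3).real for v in a) / len(a))
```

For the actual task one would still discard degenerate triples that reuse the same card and divide by the number of orderings; the sketch only demonstrates the convolution itself.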
https://applying-maths-book.com/chapter-4/integration-intro.html
# 4 Integration

Any book on chemistry, physics, mathematical biology, quantum mechanics, optics, astronomy, and material science is peppered with examples of integration and differential equations. It is essential to be familiar with the essence of integration and then to know when a problem is difficult enough to turn to a book or to computer algebra for help. Even when using programs such as Python and SymPy, some knowledge of integration is always necessary.
https://math.libretexts.org/Bookshelves/Scientific_Computing_Simulations_and_Modeling/Book%3A_Introduction_to_Social_Network_Methods_(Hanneman)/02%3A_Why_Formal_Methods%3F/2.05%3A_Summary
# 2.5: Summary

There are three main reasons for using "formal" methods in representing social network data:

• Matrices and graphs are compact and systematic: They summarize and present a lot of information quickly and easily; and they force us to be systematic and complete in describing patterns of social relations.
• Matrices and graphs allow us to apply computers to analyzing data: This is helpful because doing systematic analysis of social network data can be extremely tedious if the number of actors or number of types of relationships among the actors is large.
Most of the work is dull, repetitive, and uninteresting, but requires accuracy; exactly the sort of thing that computers do well, and we don't. • Matrices and graphs have rules and conventions: Sometimes these are just rules and conventions that help us communicate clearly. But sometimes the rules and conventions of the language of graphs and mathematics themselves lead us to see things in our data that might not have occurred to us to look for if we had described our data only with words. So, we need to learn the basics of representing social network data using matrices and graphs. The next several chapters (3, 4, 5, and 6) introduce these basic tools.
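To make the matrix idea concrete, here is a small illustrative sketch (mine, not from the text): a directed "advice-seeking" relation among three hypothetical actors stored as an adjacency matrix, from which simple summaries fall out mechanically.

```python
actors = ["Bob", "Carol", "Ted"]
# adj[i][j] == 1 means actors[i] seeks advice from actors[j] (made-up data)
adj = [
    [0, 1, 1],   # Bob -> Carol, Bob -> Ted
    [1, 0, 0],   # Carol -> Bob
    [0, 1, 0],   # Ted -> Carol
]

# row sums: how many ties each actor sends
out_degree = {actors[i]: sum(row) for i, row in enumerate(adj)}
# column sums: how many ties each actor receives
in_degree = {actors[j]: sum(adj[i][j] for i in range(len(adj)))
             for j in range(len(adj))}
# reciprocated (mutual) ties are easy to read off: adj[i][j] == adj[j][i] == 1
mutual = [(actors[i], actors[j])
          for i in range(len(adj)) for j in range(i + 1, len(adj))
          if adj[i][j] == 1 and adj[j][i] == 1]
```

The point of the sketch is the one the text makes: once the relation is a matrix, tedious counting (degrees, reciprocity) becomes a mechanical row/column operation a computer does reliably.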
https://proofwiki.org/wiki/Definition:Sigma-Ring
# Definition:Sigma-Ring

## Definition

### Definition 1

A $\sigma$-ring is a ring of sets which is closed under countable unions. That is, a ring of sets $\Sigma$ is a $\sigma$-ring if and only if:

$\ds A_1, A_2, \ldots \in \Sigma \implies \bigcup_{n \mathop = 1}^\infty A_n \in \Sigma$

### Definition 2

Let $\Sigma$ be a system of sets. $\Sigma$ is a $\sigma$-ring if and only if $\Sigma$ satisfies the $\sigma$-ring axioms:

$(\text {SR} 1)$: Empty Set: $\O \in \Sigma$
$(\text {SR} 2)$: Closure under Set Difference: $\forall A, B \in \Sigma: A \setminus B \in \Sigma$
$(\text {SR} 3)$: Closure under Countable Unions: $\forall A_n \in \Sigma: n = 1, 2, \ldots: \ds \bigcup_{n \mathop = 1}^\infty A_n \in \Sigma$

### Definition 3

Let $\Sigma$ be a system of sets. $\Sigma$ is a $\sigma$-ring if and only if $\Sigma$ satisfies the $\sigma$-ring axioms:

$(\text {SR} 1')$: Empty Set: $\O \in \Sigma$
$(\text {SR} 2')$: Closure under Set Difference: $\forall A, B \in \Sigma: A \setminus B \in \Sigma$
$(\text {SR} 3')$: Closure under Countable Disjoint Unions: $\forall A_n \in \Sigma: n = 1, 2, \ldots: \ds \bigsqcup_{n \mathop = 1}^\infty A_n \in \Sigma$

## Linguistic Note

The $\sigma$ in $\sigma$-ring is the Greek letter sigma, which equates to the letter s. $\sigma$ stands for somme, which is French for sum (here, the union of sets).
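As an aside not on the original page: the axioms $(\text{SR} 1)$–$(\text{SR} 3)$ already imply closure under countable intersections, via a standard identity that uses only set differences and a countable union:

```latex
% A sigma-ring is closed under countable intersections:
% each A_1 \setminus A_n is in \Sigma by (SR2), their countable union
% is in \Sigma by (SR3), and one final difference by (SR2) gives
\bigcap_{n \mathop = 1}^\infty A_n
  \;=\; A_1 \setminus \bigcup_{n \mathop = 1}^\infty \paren {A_1 \setminus A_n}
```

To check the identity: $x$ belongs to the right-hand side if and only if $x \in A_1$ and, for every $n$, $x \notin A_1 \setminus A_n$, i.e. $x \in A_n$ for every $n$.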
https://imadali.net/posts/environment-variables/
Basically, environment variables are stored in the shell's environment and are available to any shell process spawned from it. They are conventionally upper case and follow Bash syntax rules. You can set an environment variable in a one-line command using `export`, and print its value with `printenv`:

```shell
export KEY=value
printenv KEY
# value
```

To remove the environment variable we can use `unset` (which unsets shell/environment variables):

```shell
unset KEY
```

The other option is to create a plain shell variable with an assignment and then `export` it:

```shell
KEY=value
export KEY
printenv KEY
```

If you set the environment variable at the command line of one shell, a new, unrelated shell process will not see it: printing its value there returns nothing. In order for environment variables to be recognized in new shell processes, you need to set them in your bash profile. For example, in `~/.bash_profile` you could have the following:

```shell
export KEY=value
```

If you save the file, open up a new shell, and run `printenv KEY` from the command line, then `value` will be returned.

The `PATH` environment variable is super useful. Programs search the directories it lists for executables. For example, one of the paths in my `PATH` variable points to the location of `texbin`, so that the LaTeX tools can be found when I try to build LaTeX code.
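As a worked example of the `PATH` mechanism (the directory `$HOME/tools/bin` is hypothetical, chosen just for illustration):

```shell
# Append a hypothetical directory to the end of PATH for this shell
# and any child processes it spawns.
export PATH="$PATH:$HOME/tools/bin"

# PATH is colon-separated; show the last directory searched.
printenv PATH | tr ':' '\n' | tail -n 1
```

Putting the same `export` line in `~/.bash_profile` makes the change take effect in every new login shell.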
https://worldbuilding.stackexchange.com/questions/94764/how-could-steampunk-civilizations-get-lifting-gas/94765
# How could steampunk civilizations get lifting gas?

Steampunk airships typically use either helium or hydrogen as a source of lift (depending on the specific world). However, the source of the lifting gas is often not fully explained, or frequently even not mentioned at all. How could a steampunk civilization with limited (although rapidly expanding) understanding of chemistry acquire the large amounts of helium and hydrogen that are required to lift airships?

• No chemistry degree needed: steam (water vapour) is lighter than air and much safer too, and even a Neanderthal can do it! – user6760 Oct 13 '17 at 4:58
• @user6760 And with a sufficiently rigid hull one could even put much more of it into the airship. More gas, more lift, eh? :) - But, all joking aside, I actually didn't know - and didn't expect - water vapor to be a better lifting gas than plain hot air. – I'm with Monica Oct 13 '17 at 6:45
• @AlexanderKosubek Interesting. The only problem would be condensation, but I could see an interesting form of hot air balloon, er, steam balloon? – Gryphon - Reinstate Monica Oct 13 '17 at 6:47
• @Gryphon: People have a tendency to romanticise steam. Truth is that water vapour, especially under pressure, is awful stuff. A steam balloon just seems like a fast-track ticket to airmen getting the skin rendered off their faces!!! – Joe Bloggs Oct 13 '17 at 8:16
• @AndyD273 Pedantic point: The Hindenburg was actually designed for helium. Political considerations (distaste for the Nazis) led the United States to change its mind about selling helium to Germany. The Germans made the best of what they regarded as a bad situation and added additional passenger cabins to make use of the additional lift, so you are correct that the Hindenburg in its final state and fully loaded couldn't have been lifted by helium. – John Coleman Oct 13 '17 at 13:10

It turns out it's not that hard to make hydrogen gas for an airship, and you can do it yourself.

1. Take some dilute sulphuric acid.
2. Put small pieces of metal in the acid - iron, zinc, and aluminum should all work.
3. Capture the resulting hydrogen gas, and then put it in your airship.

This method of hydrogen production was first specifically used for airships in the 18th century, in a time of technology even less advanced than your steampunk civilization - and at the time, hydrogen had only recently been identified as a distinct gas! It seems quite plausible that they could produce the gas in large quantities.

• The Americans used this during the Civil War to lift spotting balloons. The Union Army Balloon Corps Chief Aeronaut Thaddeus Lowe built the first mobile hydrogen gas generators so that he could fill his balloons anywhere he needed to. – Thucydides Oct 13 '17 at 3:31
• A Balloon Corps Chief Aeronaut! Now there's a job title to conjure with. But that's a fascinating fact about early lighter-than-air aviation. – a4android Oct 13 '17 at 3:45
• Aluminium also works with sodium hydroxide, but aluminium is relatively hard to make and was not discovered until much later, so it might not be the best option – Slarty Oct 13 '17 at 12:56
• @Slarty I believe they already knew of Al, but only discovered a viable ($) way to produce it at large scale in the late 1800's. Before that, pure Al could be more expensive than Au – jean Oct 13 '17 at 14:01
• @jean yes true. I believe that Napoleon had an Aluminium set of cutlery to take on campaign with him because it was so light, but that was very much the exception in his day. – Slarty Oct 13 '17 at 16:34

Helium was first found in large quantity in gas from oil wells in 1902, and large-scale industrial production for airships started during WW1. This is late, but arguably still within the steampunk timeframe. Oil and natural gas wells were operating from the mid 19th century and are suitably steampunk technology.
• helium was still pretty rare back then - Germany was unable to find enough for the Hindenburg (mostly because the US had banned exports since 1927 or so) – Jeutnarg Oct 13 '17 at 15:56
• The Helium Control Act was passed to give the US a virtual monopoly on the gas. At the time, they were the only ones that could produce it in the quantities needed to lift the Hindenburg. Possibly due to a potential arms race in military Zeppelins. Also, the Hindenburg wasn't the worst Zeppelin disaster... four years earlier the USS Akron, a US Navy airship, crashed into the ocean with 73 lives lost (as opposed to the 36 on the Hindenburg). That moment had people reconsidering the craft, but the Hindenburg helped kill it for good. – hszmv Oct 13 '17 at 19:24

My guess is that you can't really have a steampunk civilization without a roughly 19th-century understanding of chemistry, and with that, you're probably home free. As evidence I would like to quote Wikipedia:

The first gas balloon made its flight in August 1783. Designed by professor Jacques Charles and Les Frères Robert, it carried no passengers or cargo. On 1 December 1783 their second hydrogen-filled balloon made a manned flight piloted by Jacques Charles and Nicolas-Louis Robert, 10 days after the first manned flight in a Montgolfier hot air balloon.

Extrapolating from this, I'd say that the lighter-than-air properties of hydrogen (and its manufacture/extraction) were known as early as the 18th century. This coincides nicely in time with James Watt's continuous rotary motion steam engine, suggesting that the components of a steampunk-style civilization align rather nicely. For a history of hydrogen extraction (and further evidence in the case of lighter-than-air flight), we can turn to Ebbe Almqvist's "History of Industrial Gases", which states:
During the 18th century many worked on ideas surrounding the lighter-than-air principle, but suitable means were not available until in 1766 Henry Cavendish succeeded in producing hydrogen gas in pure form (then known as inflammable air), and discovered that it was 14 times lighter than air. The dream of the century, that of "air sailing," could now become a reality.

In order to produce the gas for the Charles/Robert balloon trip, Almqvist tells us, 500 kilograms of iron and 250 kilograms of sulfuric acid were used, suggesting, once more, that industrial production of hydrogen was well within the limits of late 18th-century chemical science.

A more effective and ultimately cheaper way of producing hydrogen is the electrolysis method, where you use electricity to split water into hydrogen and oxygen. This is also well within the capacity of most steampunk-type civilizations (although it takes us into the 19th century).

In 1800, Alessandro Volta presented his so-called voltaic pile, the forerunner of the electric battery. A few weeks later, William Nicholson and Anthony Carlisle constructed a voltaic battery and manufactured considerable quantities of oxygen and hydrogen. The electrolysis method remained expensive until the Belgian Zénobe Gramme invented the first steam-driven dynamo in 1873. From 1890 on, when large hydroelectric power stations were built, the method was used on a large scale wherever cheap hydroelectric power was found.

The acid-and-metal reaction was already mentioned, but usually a cheaper way was used. The Lavoisier-Meusnier iron-steam process, invented in 1784, generated hydrogen by passing water vapor over a bed of red-hot iron at 600 °C, so only some iron rods and fuel were consumed - no expensive acid required. The Union Army Balloon Corps' mobile hydrogen generators used the acid-metal reaction, so I guess iron-steam generators were too bulky for mobile use.
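For a sense of scale, the figures quoted above (500 kg of iron, 250 kg of sulfuric acid) can be run through textbook stoichiometry. The sketch below assumes the simple single-displacement reaction Fe + H2SO4 -> FeSO4 + H2 (one mole of gas per mole of the scarcer reactant) and ideal-gas behaviour at 20 °C; it is an illustration, not a reconstruction of the 1783 procedure:

```python
# Rough yield of hydrogen from Fe + H2SO4 -> FeSO4 + H2 (1:1:1:1 stoichiometry).
M_FE = 55.85        # molar mass of iron, g/mol
M_H2SO4 = 98.08     # molar mass of sulfuric acid, g/mol
R, T, P = 8.314, 293.15, 101325.0   # gas constant, 20 degC in kelvin, 1 atm in Pa

def hydrogen_yield(iron_kg, acid_kg):
    """Return (moles of H2, volume in cubic metres) for given reactant masses."""
    # One mole of H2 per mole of whichever reactant runs out first.
    mol_h2 = min(iron_kg * 1000.0 / M_FE, acid_kg * 1000.0 / M_H2SO4)
    volume_m3 = mol_h2 * R * T / P      # ideal gas law: V = nRT/P
    return mol_h2, volume_m3

mol_h2, vol = hydrogen_yield(500.0, 250.0)
```

With these masses the acid is the limiting reagent (about 2,550 mol, roughly 61 m³ of gas), which suggests the quoted masses are round historical figures rather than an exact recipe; either way, the scale of the operation is clearly within 18th-century reach.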
• You might want to think about the implications of heating iron to 600 C and producing hydrogen - in a field situation. The odds of a stray flame/gas combination are unnervingly high. This shouldn't be a problem in a fixed industrial facility, but portable use is a different matter. Can you say "Hindenburg"? I knew you could. – WhatRoughBeast Oct 13 '17 at 15:56 Since I don't see a hard science tag.... Surely we can do better than mundane hydrogen or helium? Even the most ill-educated brute knows that the sky consists of two layers, Aer - the dim, lower part of the sky, and the Aether, the brighter upper part. Using observation we thus know that the brighter, cleaner air is - the higher it rises. So our goal is to cause an airship to rise. Logically this means we must trap cleaner air within the gas bag - so how do we clean the air? Obviously, as educated folks, we know that the purest element is Quintessence - and the studies done by the royal alchemist's guild show that we can create Quintessence and infuse it into the air - transmuting mundane Aer into Aether! The most cost-effective method is, of course, mixing together the transcendent elements Sulphur and Mercury. So we combine them, and using some sort of heating element - I recommend a glass lens focusing sunlight during the day or heated Luminiferous Aether at night to further enhance the heavenly properties of the admixture - burning the impurities out of the Aer and creating Aether - which will lift the airship! Allow for vents at the top of the bag to release the Aether and you can control your descent as well. https://en.wikipedia.org/wiki/Aether_(classical_element)#Quintessence https://en.wikipedia.org/wiki/Air_%28classical_element%29 https://www.thoughtco.com/alchemical-sulfur-mercury-and-salt-96036 • hahaha, for the time this is set in I would assume this wording would be the most accurate description. however wouldn't heated Luminiferous Aether be just a gas lamp though? 
– The Last Remnant Jul 26 '18 at 9:24 • Probably a really fancy one. – Brizzy Jul 26 '18 at 9:29 • The role playing Space 1889 used Aether as a lifting gas, both in the atmosphere and in space. – VBartilucci Mar 8 at 16:58 ### Mining helium gas If you don't mind doing a little handwaving, you've got two potential sources of helium that I think would both be great. Most commercial helium is extracted from natural gas deposits that have a high (1-10%) fraction of helium. We build refineries to separate the helium and natural gas. I'm not sure if the typical steampunk world would allow for the mining of hydrocarbons, but oil mining started in the mid 1800's, so that sounds about the right time period. If you want to make it easier on your people then they could find a deposit of nearly pure helium, so it's just a matter of getting it out of the ground. Such deposits have never been found on earth, but that doesn't mean that it is impossible or obviously violates the laws of physics. ### Nuclear Helium Another source of helium is from nuclear reactions. Helium-3 has a number of uses in modern industry and technology, and virtually all Helium-3 used for these purposes is produced by the decay of tritium. Of course tritium itself isn't something you find just lying around (if you did, you could just use the hydrogen as your lifting gas), but it still gives some hints for helium production. Alpha radiation is actually just energetic helium, so any radioactive substance that decays through alpha radiation effectively generates helium gas (which includes Uranium 235). You would need some handwaving, but if you want to get really dangerous then your steampunkers might discover that this strange, cakey, yellow substance produces lightweight gas all on its own. Fortunately for them, it is near impossible to generate a critical nuclear reaction using natural uranium (which is primarily composed of the more stable U-238 nucleus). 
To get sufficient production of He through alpha decay you may need a less stable element anyway, which is where more handwaving comes in (because less stable elements aren't normally naturally occurring). So this avenue may require more handwaving than you want, or it may be worth a lot of handwaving to mine helium gas from natural nuclear power (and who doesn't want to blow up a steampunk city with an accidental nuclear explosion, right??).

### Water Electrolysis

Depending on your level of "technology" you can always have your people produce hydrogen via the electrolysis of water. You don't need to understand the chemistry for your people to try to do it, and it is so simple that just about everyone has the materials to do it at home. Doing it on an industrial scale and keeping the hydrogen gas separate are a bit trickier of course, but depending on your steampunk world, generating hydrogen via electrolysis should be quite doable.

If you want to keep the use of electricity simple then you can always come up with some sort of "lightning farm" that uses natural lightning in a stormy environment to split water for hydrogen. That's probably not possible in practice, but this is fiction after all and might be fun for a story.
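The electrolysis option also lends itself to a quick energy estimate via Faraday's law (two moles of electrons per mole of H2). The 2 V cell voltage below is an assumed practical figure, not something from the answer above; volumes are ideal-gas at 20 °C:

```python
# Energy cost of electrolytic hydrogen, via Faraday's law.
F = 96485.0                          # Faraday constant, C per mol of electrons
R, T, P = 8.314, 293.15, 101325.0    # gas constant, 20 degC in kelvin, 1 atm in Pa

def kwh_per_cubic_metre(cell_voltage=2.0):
    """Electrical energy (kWh) needed to electrolyse one cubic metre of H2."""
    mol_per_m3 = P / (R * T)         # ideal gas: about 41.6 mol of H2 per m^3
    coulombs = 2.0 * F * mol_per_m3  # 2 electrons transferred per H2 molecule
    return coulombs * cell_voltage / 3.6e6   # joules -> kilowatt-hours
```

At 2 V this works out to roughly 4.5 kWh per cubic metre of lifting gas, which fits the historical note above that the method only scaled up once cheap hydroelectric power was available.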
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1139.93010
Zbl 1139.93010

Tsai, Zhi-Ren; Chang, Yau-Zen; Hwang, Jiing-Dong; Lee, Jye

Robust fuzzy stabilization of dithered chaotic systems using island-based random optimization algorithm. (English) [J] Inf. Sci. 178, No. 4, 1171-1188 (2008). ISSN 0020-0255

Summary: Applying dither to highly nonlinear systems may suppress chaotic phenomena, but dynamic performance, such as convergence rate and disturbance attenuation, is usually not guaranteed. This paper presents a dithered $H_{\infty }$ robust fuzzy control scheme to stabilize chaotic systems that ensures disturbance attenuation bounds. In the proposed scheme, Takagi-Sugeno (T-S) fuzzy linear models are used to describe the relaxed models of the dithered chaotic system, and fuzzy controllers are designed based on an extension to the concept of parallel distributed compensation (PDC). Sufficient conditions for the existence of the $H_{\infty }$ robust fuzzy controllers are presented in terms of a novel linear matrix inequality (LMI) form which takes full consideration of modeling error and disturbances, but cannot be solved by the standard procedures. In order to solve the LMI problem and to identify the chaotic systems as T-S fuzzy models, we propose a compound optimization strategy called the Island-based Random-walk Algorithm (IRA). The algorithm is composed of a set of communicating random-walk optimization procedures concatenated with the down-hill simplex method. The design procedure and validity of the proposed scheme are demonstrated via numerical simulation of the dithered fuzzy control of a chaotic system.
MSC 2000:
- 93B35 Sensitivity (robustness) of control systems
- 93C42 Fuzzy control
- 37D45 Strange attractors, chaotic dynamics

Keywords: chaotic systems; dither; $H_{\infty }$ robust fuzzy control; linear matrix inequality
http://link.springer.com/article/10.1007%2Fs10336-012-0864-9
Journal of Ornithology, Volume 154, Issue 1, pp 19–33

# Standardising distance sampling surveys of parrots in New Caledonia

• Andrew Legault
• Jörn Theuerkauf
• Emilie Baby
• Laetitia Moutin
• Sophie Rouys
• Maurice Saoumoé
• Ludovic Verfaille
• Nicolas Barré
• Vivien Chartendrault
• Roman Gula

Open Access | Original Article

DOI: 10.1007/s10336-012-0864-9

Legault, A., Theuerkauf, J., Baby, E. et al. J Ornithol (2013) 154: 19. doi:10.1007/s10336-012-0864-9

## Abstract

Standardised surveys are essential for monitoring populations and identifying areas that are critical for conservation. With the aim of developing a standardised method of surveying parrots in the rainforests of New Caledonia, we used distance sampling to estimate densities of New Caledonian Parakeets (Cyanoramphus saisseti), Horned Parakeets (Eunymphicus cornutus), Ouvéa Parakeets (E. uvaeensis), and New Caledonian Rainbow Lorikeets (Trichoglossus haematodus deplanchii). We carried out surveys in the early morning and late afternoon, when parrots were easiest to detect. To minimise errors associated with estimating distances and flock sizes by ear, we conducted brief searches to locate parrots, then measured their distance from the transect line. We recorded birds in flight and consider these records to be important when estimating parakeet populations. In agreement with existing knowledge on distance sampling, we found line transects to be more efficient than point transects for estimating the density of parakeets. Our results indicate that parrots located beyond 50–70 m from the transect line have little influence upon density estimates. In addition, surveys on roads are likely to underestimate densities if not corrected for road width. We generated relatively stable and precise density estimates (CV < 0.25) with approximately 40–50 detections, yet additional effort may be warranted under different study conditions.
Although we aimed to improve parrot surveys in New Caledonia, our suggestions may be useful to other researchers studying rainforest birds, and can be adapted to suit different species or environments.

### Keywords

Conservation · Distance sampling · New Caledonia · Parrots · Survey methods

## Zusammenfassung

Standardisierung von „Distance Sampling“ zur Erfassung von Populationsdichten der Papageien Neukaledoniens

Standardisierte Erhebungen sind wichtig für die Überwachung von Populationsentwicklungen und die Erfassung von Bereichen, die für die Arterhaltung von entscheidender Bedeutung sind. Mit dem Ziel, eine standardisierte Methode zur Erfassung von Papageien der Regenwälder Neukaledoniens zu entwickeln, haben wir mit Hilfe von „Distance Sampling“ Bestände von Neukaledoniensittichen (Cyanoramphus saisseti), Hornsittichen (Eunymphicus cornutus), Ouvéa-Sittichen (E. uvaeensis) und Neukaledonien-Allfarbloris (Trichoglossus haematodus deplanchii) geschätzt. Wir führten die Zählungen am frühen Morgen und späten Nachmittag durch, wenn Papageien am leichtesten zu entdecken sind. Um Fehler bei der Schätzung von Entfernungen und Gruppengrößen per akustischem Wahrnehmen zu minimieren, haben wir die Papageien schnell gesucht und dann ihre Entfernung zur Transektlinie gemessen. Wir haben fliegende Vögel gleichfalls notiert und denken, dass diese ebenfalls in Schätzungen der Sittich-Populationsdichte eingehen sollten. In Übereinstimmung mit anderen Arbeiten über „Distance Sampling“ fanden wir, dass zur Dichteschätzung der Sittiche Linientransekte effizienter als Punkttransekte sind. Unsere Ergebnisse zeigen, dass Beobachtungen von Papageien, die über 50–70 m von der Transektlinie entfernt waren, wenig Einfluss auf die Dichteschätzungen haben. Darüber hinaus unterschätzen Zählungen entlang von Straßen die Populationsdichte, wenn die Fahrbahnbreite nicht zur Korrektur der Entfernungen zur Transektlinie hinzugezogen wird.
Wir haben relativ stabile und präzise geschätzte Dichten (CV < 0,25) mit etwa 40–50 Beobachtungen ermittelt, jedoch könnten unter abweichenden Studienbedingungen weitere Beobachtungen nötig sein. Obwohl es unser Ziel war, Populationsdichteschätzungen der Papageien in Neukaledonien zu verbessern, können unsere Vorschläge nützlich für Dichteschätzungen anderer Regenwaldvögel sein und entsprechend für andere Arten oder Lebensräume angepasst werden. ## Introduction Population estimates play a critical role in identifying species that require protection, and in setting priorities for conservation (IUCN 2011), yet obtaining reliable estimates remains an elusive task for certain taxa (Cassey et al. 2007). One of the most commonly used methods to approximate animal abundance is index counting, where the observer records the number of individuals detected around a point, or along a transect (Rosenstock et al. 2002), usually over a defined time period. Fauna monitoring programs are often dependent upon such indices, as they are useful for comparing temporal and spatial patterns of abundance (Karubian et al. 2005), and gauging population growth or decline (Amar et al. 2008). In spite of this, indices only provide a relative measure of abundance, and do not account for the conspicuousness of the species under study, variations in the surrounding environment, or the differing skills of observers (Cassey et al. 2007). Although indices can play an important role in wildlife surveys (Hutto and Young 2003; Johnson 2008), approaches that consider the detectability of the species under study are often considered more appropriate, particularly when the goal is to estimate absolute numbers (Rosenstock et al. 2002; Thompson 2002; Norvell et al. 2003). Examples of these survey techniques include variable strip transects (Emlen 1971, 1977), variable circular plots (Reynolds et al. 1980), double-observer sampling (Nichols et al. 2000), independent-observer methods (Alldredge et al. 
2006), and time-of-detection approaches (Farnsworth et al. 2002; Alldredge et al. 2007a). The method most commonly used to account for detectability is distance sampling, where the observer measures the distance to each animal (or group of animals) detected from a point or transect line (Burnham et al. 1980; Buckland et al. 1993, 2001). The distances are then used to calculate the probability of encountering an animal as a function of distance from the transect line. Because the probability of detection declines in a quantifiable and predictable manner, the detection function can be used to estimate the number of animals within the area that was effectively surveyed. Distance sampling is advantageous in this regard, as it provides an approximation of animal density, yet it does not require the detection of all individuals present, or prior knowledge of the size of the area sampled. The method is based on the assumption that individuals on the line or point are detected with certainty, individuals are detected at their initial location, and distances are measured exactly. All of these assumptions may be relaxed under certain circumstances (Buckland et al. 1993; Thomas et al. 2010), although the reliability of estimates may suffer if they are not met (Bächler and Liechti 2007). Distance sampling has been used to survey a variety of fauna, and is often employed in bird surveys. It is generally less intrusive than alternative methods such as mist netting (e.g. Meyers 1994; Whitman et al. 1997) and mark-recapture (e.g. Sandercock and Beissinger 2002), and provides an efficient means of estimating abundance when it is not feasible to conduct plot searches (e.g. Rodriguez-Estrella et al. 1992) or roost counts (e.g. Gnam and Burchsted 1991). In studies of parrots, distance sampling has proven useful for estimating population size (e.g. Walker and Cahill 2000; Rivera-Milán et al. 2005), assessing abundance in different habitats (e.g. Marsden et al. 
2001; Marsden and Symes 2006), and evaluating conservation actions (e.g. Jepson et al. 2001; Barré et al. 2010). However, little consideration has been given to the application of distance sampling techniques for surveying birds in tropical rainforests, in spite of the fact that methods that work in other locations may need to be adapted to suit rainforest conditions (Raman 2003; Buckland et al. 2008; Lee and Marsden 2008; Gale et al. 2009). Rainforest birds are often cryptic and difficult to spot amongst the dense vegetation, so researchers frequently rely upon aural cues to estimate distances (e.g. Marsden et al. 2006; Gale et al. 2009). Difficult terrain can also make it impractical to place transects randomly, thus surveys are occasionally conducted along roads or trails (e.g. Jones et al. 1995; Marsden 1999; Gale and Thongaree 2006; Lee and Marsden 2008), or other non-random features such as ridges (e.g. Simon et al. 2002). Temporal variations in detectability can also pose problems when surveying tropical forest birds, therefore survey periods may need to be carefully regimented (Marsden 1999; Simon et al. 2002; Buckland et al. 2008). In New Caledonia, few attempts have been made to estimate the abundance of rainforest birds (Chartendrault and Barré 2005, 2006). Density estimates have played a key role in monitoring the population of endangered (IUCN 2011) Ouvéa Parakeets Eunymphicus uvaeensis (Avibase ID: 7CF9DDC2A21A1D9A; http://avibase.bsc-eoc.org), and in evaluating the consequences of parakeet conservation programs on Ouvéa (Barré et al. 2010). However, the parakeets of mainland New Caledonia have received comparatively little attention, despite the fact that New Caledonian Parakeets Cyanoramphus saisseti (Avibase ID: 75F9612EBA158702) and Horned Parakeets E. cornutus (Avibase ID: FC7AB945C8292D66) are both categorised as vulnerable by the IUCN (2011). 
Anecdotal evidence suggests that populations of mainland parakeets have fallen over the past century (Layard and Layard 1882; Bregulla 1993; Hahn 1993). Although these accounts provide cause for concern, there are almost no field data available to determine the extent of such declines. In the absence of detailed information about parakeet populations, it is difficult to detect population trends or identify critical areas for conservation. The primary aim of this research was to develop and test a standardised method for estimating parrot density in New Caledonia, so that populations can be monitored effectively. Standardised methods of surveying are essential for comparing parrot populations across different time frames or geographical regions as the resulting data can be interpreted with much greater confidence than data collected using an assortment of different techniques. Using distance sampling, we surveyed New Caledonian Parakeets, Horned Parakeets, and New Caledonian Rainbow Lorikeets Trichoglossus haematodus deplanchii (Avibase ID: E3C3CC2E71949308), which are endemic to mainland New Caledonia, and Ouvéa Parakeets, which are endemic to the neighbouring island of Ouvéa. In doing so, we attempted to find a balance between efficient and accurate counts by modifying various elements of our survey design to suit the conditions encountered in New Caledonian rainforests. We assessed whether line transects or point transects are likely to be more appropriate for surveying parakeets in New Caledonia, and established the amount of effort required to achieve a suitable level of confidence in each density estimate. Additionally, we analysed the relationship between the relative abundance and absolute density of parakeets on the mainland in order to increase the utility of existing indices. ## Methods ### Study areas and focal species Our study sites were located at three locations in New Caledonia (Fig. 
1): Parc Provincial de la Rivière Bleue (PPRB; 22°07′ S, 166°40′ E), Parc des Grandes Fougères (PGF; 21°37′ S, 165°46′ E), and the island of Ouvéa (20°36′ S, 166°34′ E). All three sites have been recently designated as ‘Important Bird Areas’ as they provide valuable habitat for parakeets and other threatened bird species (Spaggiari et al. 2007). PPRB is a 90 km2 reserve located in the south of New Caledonia. We carried out our research in the valley of the Rivière Bleue (Theuerkauf et al. 2009). Around 93 % of the vegetation at the study site is rainforest, and the remaining 7 % is maquis (shrubland). The mean annual rainfall in the Rivière Bleue valley is 3,200 mm, which makes it one of the wettest lowland areas in New Caledonia (Bonnet de Larbogne et al. 1991). PGF is a 45 km² reserve located near the centre of New Caledonia. The vegetation at the study site consists of approximately 68 % rainforest, 22 % secondary regrowth (a mix of young forest and savannah), and 10 % scrub and pine plantations. The mean annual rainfall in the region is 1,800 mm (Jaffré and Veillon 1995). Ouvéa is a 130 km² raised coral atoll located approximately 100 km northeast of mainland New Caledonia. A narrow isthmus connects the north and south of the island. The vegetation at the study site consists of approximately 90 % rainforest and 10 % plantations (mainly of coconut). The island receives an average annual rainfall of 1,250 mm (Barré et al. 2010). All of the parrots we studied are mostly green, medium-sized birds, yet the vocalisations of each species are distinct. Rainbow Lorikeets occasionally form large flocks containing dozens of birds, but New Caledonian Parakeets, Horned Parakeets, and Ouvéa Parakeets rarely occur in flocks of more than three or four birds (Barré et al. 2010; Legault et al. 2012). Introduced Pacific Rats Rattus exulans, Black Rats Rattus rattus, and Feral Cats Felis catus are considered to be predators of parrots in New Caledonia (Robinet et al. 
1998; Gula et al. 2010). All of these species inhabit the study areas on mainland New Caledonia (Rouys and Theuerkauf 2003), and all but the Black Rat are present on Ouvéa (Robinet et al. 1998). Although parakeet poaching has been an issue on Ouvéa in the past (Robinet et al. 1996), it is unlikely to have been an important factor over the period of our study (Pain et al. 2006; Barré et al. 2010).

### Surveys

We used distance sampling (Buckland et al. 1993, 2001) to carry out line transect surveys at each of the study sites. Surveys were conducted by multiple observers as we were interested in developing methods that would allow monitoring to continue indefinitely. A pair of observers undertook 25 surveys along a 5.4 km transect at PPRB from November 2004 to January 2005. Two other individuals conducted 30 surveys along the same transect from March to June 2008. Another observer carried out 31 surveys along a 5.1 km transect at PGF from January to March 2009. It took approximately 2.5 h to complete each of these surveys. We collected additional data at both of these sites in order to compare different distance sampling techniques, as specified in the following section.

Several groups of observers conducted surveys on the island of Ouvéa in December 2008, December 2009, and August 2011. Most of these groups comprised two or three individuals. Transects ranged in length from approximately 1–12 km, and followed similar routes each year, although we added several new transects in 2009 and 2011. We surveyed 163 km of transects in total, including 77 km (23 transects) in the north of Ouvéa, and 87 km (17 transects) in the south. The main purpose of these surveys was to estimate densities in the north and south of the island using a standardised methodology.
We did not record Rainbow Lorikeets during surveys on Ouvéa or at PGF because the results of the initial survey at PPRB suggested that the transient nature of this species would pose difficulties in estimating local abundance, and there is only a small, introduced population of Rainbow Lorikeets on Ouvéa (Barré et al. 2010).

In May 2008, we tested point transect distance sampling (Buckland et al. 1993; Thomas et al. 2002) at PPRB to see how this method compared to line transect distance sampling in practice. We selected 24 points along the 5.4 km path that we used for line transects. Points were spaced 200 m apart, and we spent 3 min listening for parrots at each one. We carried out point transect surveys during the afternoon because mornings were dedicated to line transect surveys. From start to finish, it took approximately 2 h to survey each transect (including travel between points). We only conducted six point transect surveys as we found line transects to be more effective for surveying parakeets.

### Line transect survey method

During surveys, we found it practical to have at least two observers, so that tasks could be shared. However, some of the surveys were carried out by a single observer due to limitations in the availability of field workers. We mainly followed tracks or dirt roads, and sometimes a compass bearing through forest (on Ouvéa). For the most part, the tracks were narrow routes through rainforest. The dirt access road at PPRB was approximately 4–8 m wide, although some parts were covered entirely by the forest canopy, and several short sections of the road were cleared of vegetation beyond 10 m.

We usually walked transects between 0.5 and 3 h after dawn, but for testing purposes, we surveyed about half of the transects at PGF in the afternoon, from 3 to 0.5 h before dusk. To increase sampling efficiency, we also surveyed half of the Ouvéa transects in the afternoon.
We chose these periods of the day as they are when parrots are most active in New Caledonia (Robinet et al. 2003; Legault et al. 2012). In order to maximise the probability of detecting birds near the transect line, we only carried out surveys in fair weather (i.e. no rain or strong wind), and walked each transect at a slow pace (around 2 km per hour). We walked quietly during surveys so that birds could be easily detected, and listened for the sounds of wings flapping or parrots chattering.

Whenever we saw or heard parrots, we made our way to their location as quickly as possible. We noted the number of birds, and measured the perpendicular distance (to the nearest metre) from the centre of the flock to the transect line using a measuring tape. Due to the difficulty in spotting parrots, it was occasionally necessary to take measurements from the tree where the calls came from. We only spent a few minutes searching for birds in order to reduce the likelihood of them moving prior to being located. If we suspected movement during the search period, or if we could not locate parrots during searches, then we estimated their original location based on their calls. In addition, we recorded the GPS coordinates (WGS 84, UTM) of each parrot or flock detected. We recorded flying birds at the location where we first saw them, and paid attention to their flight path in order to minimise the possibility of counting them twice along the same transect.

During surveys at PPRB and PGF, we noted whether parrots were flying or perched in order to compare densities with and without birds in flight. In addition, we estimated the height of birds detected during surveys at these sites. If the height of birds influenced detectability, we would not expect the detections to be normally distributed with respect to height. Therefore, we tested the height data for normality using a Kolmogorov–Smirnov test.
At PPRB, we also measured the width of the road at each point of detection in order to compensate for the lack of habitat above the road.

### Analyses

We analysed the distance sampling data with Distance 6.0, Release 2 (Thomas et al. 2010). We used the program’s default CDS (Conventional Distance Sampling) engine, which analyses transect data using an approach described by Buckland et al. (1993, 2001). We used exact distance measurements and cluster sizes to estimate densities. We calculated the variance of each density estimate empirically, based on the variance in observations between samples.

To compensate for potential differences in detectability, we generated separate detection functions for each species at each site. We tried various combinations of key functions (uniform, half-normal, hazard-rate) and series adjustments (cosine, simple polynomial, Hermite polynomial) and used Akaike’s Information Criterion (AIC) to evaluate the fit of each model (Thomas et al. 2010). Among the models with the lowest AIC values, we selected the one that appeared most suitable based on a visual examination of histograms and the results of goodness of fit tests (Buckland et al. 1993; Buckland 2006; Thomas et al. 2010). To facilitate comparison between different treatments, we assigned a truncation distance for each species at each site. We usually used the largest recorded distance for this purpose, though we truncated one or two of the most distant records if doing so improved the shape of the detection function (Buckland et al. 2001; Thomas et al. 2010).

We calculated the density (including 95 % confidence intervals) of parrots after surveying each transect, and plotted these data to determine the number of detections, and length of transects, required for the mean density to stabilise. The aim was to identify the minimum number of detections required to achieve stable density estimates that were approximately equal to those achieved using all detections.
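All of our analyses used the CDS engine of Distance, but the core line transect calculation can be illustrated in a few lines. The sketch below is ours, not part of the original analysis (function names and the sample numbers are invented for illustration): it fits a half-normal key function with no adjustment terms by maximum likelihood, converts it to an effective strip width, and scales the encounter rate into a density.

```python
import math

def fit_half_normal(distances_m):
    """MLE of the half-normal detection function g(x) = exp(-x^2 / (2 sigma^2))
    from perpendicular detection distances in metres."""
    sigma = math.sqrt(sum(x * x for x in distances_m) / len(distances_m))
    esw = math.sqrt(math.pi / 2.0) * sigma  # effective strip width, untruncated
    return sigma, esw

def line_transect_density(distances_m, cluster_sizes, effort_km):
    """Density in birds/km^2: D = n * mean cluster size / (2 * ESW * L)."""
    _, esw_m = fit_half_normal(distances_m)
    n = len(distances_m)
    mean_size = sum(cluster_sizes) / len(cluster_sizes)
    return (n * mean_size) / (2.0 * (esw_m / 1000.0) * effort_km)
```

The half-normal case is shown only because its maximum-likelihood fit has a closed form; in practice Distance also fits uniform and hazard-rate key functions with series adjustments and compares the candidates by AIC, as described above.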
Using a fixed truncation distance made it easier to evaluate when densities stabilised over the course of the study. Otherwise, we found that densities would change abruptly on days when more distant detections were recorded, as a result of the increased truncation distance and corresponding decrease in detection efficiency (Norvell et al. 2003). We also plotted the relationship between the coefficient of variation (CV) of density estimates and the number of records accumulated for each species at each site. We used a CV of 0.25 to identify an upper threshold of precision for comparing densities at different sites, although a CV of 0.20 or less might be more appropriate for management and monitoring purposes (Buckland et al. 1993). Researchers interested in obtaining the Distance files to carry out comparative analyses are encouraged to contact us.

By analysing the data in various ways, we were able to determine how different survey techniques might influence density estimates. At PGF, we assessed whether the time of day affected densities by comparing morning and afternoon surveys. At PPRB and PGF, we also analysed how the inclusion or exclusion of flying birds influenced densities. To compensate for the presence of the road at PPRB, we subtracted half the width of the road from each detection distance. In doing so, we effectively shifted the centreline to the edge of the road (similar to Heydon et al. 2000). Had we not done this, the records closest to the centreline would have been largely restricted to birds in flight, due to the lack of vegetation over the road.

### Conversion between relative abundance and absolute density of parakeets

On several of the days when we surveyed transects at PPRB and PGF, we additionally spent the rest of the day recording observations of New Caledonian Parakeets and Horned Parakeets in forested areas near the transect line.
We carried out the equivalent of ten full-day counts at PPRB in 2004/2005, five full-day counts at PPRB in 2008, and ten full-day counts at PGF in 2009. This provided us with a measure of the average daily encounter rate ($E$) and the standard deviation of the daily encounter rate ($\mathrm{SD}_E$) for parakeets at both of the mainland sites. We undertook these counts in order to determine the relationship between encounter rates and absolute densities of parakeets.

With the absolute density ($D$) and lower and upper 95 % confidence limits (LCL, UCL) provided by Distance 6.0, we calculated the standard deviation of the absolute density ($\mathrm{SD}_D$), as follows (modified from Thomas et al. 2002):

$$\mathrm{SD}_{D} = \sqrt{\left( e^{\left( \frac{\ln C}{1.96} \right)^{2}} - 1 \right) \times D^{2}} \tag{1}$$

where

$$C = \frac{D}{95\,\%\;\mathrm{LCL}} = \frac{95\,\%\;\mathrm{UCL}}{D} \tag{2}$$

We calculated a coefficient ($K = D/E$) to convert from relative abundance (i.e. from daily encounter rates) to absolute densities of parakeets (i.e. from line transect distance sampling). In addition, we calculated 95 % confidence intervals for the coefficient based on standard deviations ($\mathrm{SD}_K$), which we estimated as follows (Theuerkauf et al. 2008; modified from Goodman 1960):

$$\mathrm{SD}_{K} = \sqrt{\frac{D^{2}\,\mathrm{SD}_{E}^{2} + E^{2}\,\mathrm{SD}_{D}^{2} + \mathrm{SD}_{D}^{2}\,\mathrm{SD}_{E}^{2}}{E^{4}}} \tag{3}$$

We also plotted densities (birds/km²) against encounter rates (birds/day) to verify that their relationship was linear. In order to increase the number of estimates per species, we split each survey period into three consecutive intervals and calculated densities and encounter rates over each interval. To improve the precision of density estimates, we pooled detection functions over consecutive intervals.
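Equations (1)–(3) can be applied directly in code. The following sketch is our illustration (the function names are ours): it recovers $\mathrm{SD}_D$ from a published density and its log-normal confidence limits, then combines it with an encounter rate to give $K$ and its standard deviation. Because published limits are rounded, the code takes $C = \sqrt{\mathrm{UCL}/\mathrm{LCL}}$, which is equivalent to Eq. (2) when the limits are exact.

```python
import math

def sd_from_ci(d, lcl, ucl):
    """Eq. (1): standard deviation of a density D from its log-normal 95 % CI."""
    c = math.sqrt(ucl / lcl)  # Eq. (2); the square-root form absorbs rounding
    var_ratio = math.exp((math.log(c) / 1.96) ** 2) - 1.0
    return math.sqrt(var_ratio * d ** 2)

def conversion_coefficient(d, sd_d, e, sd_e):
    """K = D/E with its standard deviation from Eq. (3) (after Goodman 1960)."""
    k = d / e
    sd_k = math.sqrt((d ** 2 * sd_e ** 2 + e ** 2 * sd_d ** 2
                      + sd_d ** 2 * sd_e ** 2) / e ** 4)
    return k, sd_k
```

As a check, the 2004/05 New Caledonian Parakeet estimate at PPRB (D = 13.5, CI 8.6–21.3; E = 7.9 encounters/day; Tables 2, 3) gives back the published CV of 0.23 and K ≈ 1.7.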
## Results

We recorded 651 flocks of parrots during line transect distance sampling surveys in New Caledonia (Table 1). We detected 67 % of the parrots within 20 m of the transect line, and only 5 % beyond 50 m. However, detectability varied depending on the species and site (Fig. 2). Detections beyond 50–70 m generally contributed little to the density estimates (Table 2). In some cases, truncating the most distant records allowed us to model the data more reliably, yet this removed no more than 2.5 % of the detections (at most 2 records) for any given species at any given site (Table 2). Detections of New Caledonian Parakeets (P = 0.582) and Horned Parakeets (P = 0.114) were normally distributed with respect to height, whereas those of Rainbow Lorikeets (P = 0.007) were not.

Table 1 Average flock size and number of flocks detected (n) during line transect distance sampling of parrots at Parc Provincial de la Rivière Bleue (PPRB), Parc des Grandes Fougères (PGF), and North and South Ouvéa. Values are mean ± SD (n)

| Study site | New Caledonian Parakeet | Horned Parakeet | Rainbow Lorikeet | Ouvéa Parakeet |
| --- | --- | --- | --- | --- |
| PPRB 2004/05 | 1.4 ± 0.6 (46) | 1.5 ± 1.1 (39) | 2.1 ± 1.1 (100) | |
| PPRB 2008 | 1.9 ± 1.0 (40) | 1.9 ± 1.1 (63) | | |
| PGF | 1.3 ± 0.6 (50) | 1.8 ± 1.3 (92) | | |
| North Ouvéa | | | | 2.0 ± 1.1 (153) |
| South Ouvéa | | | | 1.8 ± 0.9 (68) |
| Pooled | 1.5 ± 0.7 (136) | 1.8 ± 1.2 (194) | 2.1 ± 1.1 (100) | 2.0 ± 1.1 (221) |

Table 2 Density estimates of New Caledonian Parakeets, Horned Parakeets, Rainbow Lorikeets, and Ouvéa Parakeets from line transect distance sampling at Parc Provincial de la Rivière Bleue (PPRB), Parc des Grandes Fougères (PGF), and North and South Ouvéa, showing the effect of deviating from the ‘standard’ method, as described in the “Methods”

| Species | Site | Year | Effort (km) | Method | D (CI) | CV | n | w | ESW |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| New Caledonian Parakeet | PPRB | 2004/05 | 135.5 | Standard (untruncated) | 13.5 (8.6–21.3) | 0.23 | 46 | 47 | 19 |
| | | | | ex. flying birds | 8.4 (4.8–14.4) | 0.28 | 37 | 47 | 24 |
| | | | | unadj. | 11.8 (7.3–19.1) | 0.25 | 46 | 50 | 23 |
| | PPRB | 2008 | 162.6 | Standard (truncated) | 6.2 (4.2–9.2) | 0.20 | 39 | 73 | 38 |
| | | | | untruncated | 6.0 (4.0–8.9) | 0.20 | 40 | 78 | 41 |
| | | | | ex. flying birds | 5.6 (3.7–8.5) | 0.21 | 37 | 73 | 40 |
| | | | | unadj. | 5.7 (3.8–8.5) | 0.21 | 39 | 75 | 42 |
| | PGF | 2009 | 158.1 | Standard (truncated) | 7.8 (5.5–11.2) | 0.18 | 49 | 70 | 29 |
| | | | | untruncated | 9.0 (6.0–13.4) | 0.20 | 50 | 80 | 26 |
| | | | | ex. flying birds | 7.1 (4.8–10.3) | 0.19 | 46 | 70 | 29 |
| | | | | ex. afternoons | 9.3 (5.9–14.6) | 0.22 | 32 | 70 | 29 |
| | | | | ex. mornings | 6.3 (3.7–10.8) | 0.26 | 17 | 70 | 29 |
| Horned Parakeet | PPRB | 2004/05 | 135.5 | Standard (untruncated) | 10.7 (6.8–16.8) | 0.23 | 39 | 52 | 20 |
| | | | | ex. flying birds | 8.9 (5.4–14.8) | 0.26 | 35 | 52 | 22 |
| | | | | unadj. | 8.3 (5.1–13.6) | 0.25 | 39 | 62 | 26 |
| | PPRB | 2008 | 162.6 | Standard (truncated) | 21.3 (15.0–30.1) | 0.18 | 62 | 56 | 18 |
| | | | | untruncated | 20.7 (14.6–29.2) | 0.18 | 63 | 95 | 19 |
| | | | | ex. flying birds | 21.6 (15.2–30.7) | 0.18 | 59 | 56 | 18 |
| | | | | unadj. | 16.9 (11.2–25.4) | 0.21 | 62 | 60 | 23 |
| | PGF | 2009 | 158.1 | Standard (truncated) | 32.1 (24.6–42.0) | 0.14 | 90 | 60 | 19 |
| | | | | untruncated | 33.2 (25.4–43.3) | 0.14 | 92 | 90 | 19 |
| | | | | ex. flying birds | 30.8 (23.3–40.6) | 0.14 | 84 | 60 | 19 |
| | | | | ex. afternoons | 30.6 (21.1–44.4) | 0.19 | 47 | 60 | 19 |
| | | | | ex. mornings | 33.6 (24.7–45.6) | 0.15 | 43 | 60 | 19 |
| Rainbow Lorikeet | PPRB | 2004/05 | 135.5 | Standard (truncated) | 48.7 (35.7–66.5) | 0.16 | 99 | 46 | 15 |
| | | | | untruncated | 48.9 (35.5–67.4) | 0.16 | 100 | 103 | 15 |
| | | | | ex. flying birds | 15.9 (9.0–28.0) | 0.29 | 43 | 46 | 24 |
| | | | | unadj. | 41.4 (30.0–57.1) | 0.16 | 99 | 50 | 18 |
| Ouvéa Parakeet | N. Ouvéa | 2008–2011 | 76.6 | Standard (untruncated) | 61.4 (45.5–82.9) | 0.15 | 153 | 70 | 30 |
| | S. Ouvéa | 2008–2011 | 86.5 | Standard (untruncated) | 23.8 (14.4–39.4) | 0.25 | 68 | 68 | 27 |

D density, CI 95 % confidence intervals, CV coefficient of variation, n number of detections, w truncation distance, ESW effective strip width, ex. excluding, unadj. unadjusted (for road width)

To achieve relatively stable density estimates of parakeets on the mainland, approximately 15–20 replicate transects were required, or 75–100 km of transects in total (Fig. 3). By this stage, most of the estimates had reached a plateau, and precision increased slowly with the addition of new transects.
The relationship between the coefficient of variation (CV) of density estimates and the number of observations accumulated for each species at each site indicates that 30–40 observations were usually required to reduce the CV to 0.25, and 40–50 observations were usually required to achieve a CV of 0.20 (Fig. 4).

Excluding flying birds from the analysis had the greatest effect upon the Rainbow Lorikeet, leading to a 67 % decrease in the 2004/2005 density estimate at PPRB (Table 2). The omission of flying birds reduced density estimates by an average of 19 % for New Caledonian Parakeets, and 7 % for Horned Parakeets (Table 2). The overlap in confidence intervals at each site suggests that these reductions may not be meaningful, although excluding these records slightly decreased the precision of several estimates. Correcting for the width of the road at PPRB increased density estimates by an average of 12 % for New Caledonian Parakeets, 27 % for Horned Parakeets, and 18 % for Rainbow Lorikeets (Table 2). New Caledonian Parakeet densities were slightly lower during the afternoon than in the morning at PGF, but the sample size was too small to achieve a precise estimate for each period. The estimated density of Horned Parakeets was comparable between morning and afternoon surveys (Table 2).

Assuming a linear relationship between daily encounter rates and densities from line transect distance sampling at PPRB and PGF (Fig. 5), we estimate that a single encounter with a New Caledonian Parakeet per day represents an average density of 1.3 (1.2–1.4) birds/km², and a single encounter with a Horned Parakeet per day represents an average density of 2.3 (2.1–2.5) birds/km² (Table 3). From the six experimental point transect surveys (each with 24 points) conducted at PPRB, we estimated the density of New Caledonian Parakeets and Horned Parakeets to be 3.6 (CI = 1.3–10.0; CV = 0.46; n = 7) and 12.3 (CI = 6.0–25.0; CV = 0.32; n = 8) birds/km², respectively.
Table 3 Densities (D; birds/km²) from line transect distance sampling and average encounter rates (E; birds/day) of parakeets (including 95 % confidence intervals) at Parc Provincial de la Rivière Bleue (PPRB) and Parc des Grandes Fougères (PGF)

| Site | Index | New Caledonian Parakeet | Horned Parakeet |
| --- | --- | --- | --- |
| PPRB 2004/05 | D | 13.5 (8.6–21.3) | 10.7 (6.8–16.8) |
| | E | 7.9 (5.4–10.4) | 5.9 (4.0–7.8) |
| | K | 1.7 (1.3–2.1) | 1.8 (1.4–2.2) |
| PPRB 2008 | D | 6.2 (4.2–9.2) | 21.3 (15.0–30.1) |
| | E | 6.6 (4.3–8.9) | 8.2 (6.1–10.3) |
| | K | 0.9 (0.8–1.1) | 2.6 (2.3–2.9) |
| PGF | D | 7.8 (5.5–11.2) | 32.1 (24.6–42.0) |
| | E | 6.3 (5.0–7.6) | 13.9 (11.3–16.5) |
| | K | 1.2 (1.1–1.4) | 2.3 (2.1–2.5) |
| Pooled | K | 1.3 (1.2–1.4) | 2.3 (2.1–2.5) |

The conversion between absolute densities and daily encounter rates is achieved using the coefficient K = D/E

## Discussion

The behaviour of New Caledonian Parakeets, Horned Parakeets, and Ouvéa Parakeets lends itself well to distance sampling. They occur in small clusters, their vocalisations are distinct, and they are relatively easy to detect during foraging periods because they often chatter as they feed. Additionally, none of the species appeared to react to our presence when we searched for them. When parakeets do take flight, they often announce their departure with a series of raucous calls, which makes it possible to identify their initial location. They also tend to fly over short distances and usually take short rests between flights. It is likely that many other parrots share these characteristics, and would also be appropriate subjects for distance sampling.

Certain species, like the Rainbow Lorikeet, may prove difficult to survey in rainforest, not only due to their swift flight and the distances they cover, but also due to their highly variable flock sizes. However, we have successfully estimated densities of Blue-crowned Lorikeets Vini australis in relatively open habitats on the island of Futuna (J. Theuerkauf, unpublished data).
Thus, it seems that even highly mobile, nectarivorous parrots can be surveyed with distance sampling, provided that the habitat structure does not greatly limit visibility.

### Point transects versus line transects

Having experimented with both point and line transect surveys, we consider the former method less appropriate for estimating parakeet densities in New Caledonia. One of the main drawbacks of point transect distance sampling is that birds are excluded if they are detected while observers are travelling between points (Thomas et al. 2002). This can result in a critical loss of information for rare species, like the parakeets of New Caledonia. Despite walking briskly between points, we spent only approximately 60 % of the total time counting at points, with the rest spent travelling between them. While it is possible to study a larger area in a shorter amount of time with point transects, this is only achievable if observers move quickly between points.

Point transect surveys have been suggested as an alternative method to consider if dense vegetation makes line transects difficult to carry out, primarily because it is easier to reach a point than to navigate along a line in difficult terrain (Buckland et al. 1993, 2008). This may be worth considering if the location and size of clusters can be estimated with some certainty from afar (e.g. from a vantage point overlooking the canopy). However, any distance errors are squared during density calculations, so accuracy is particularly important during point transect surveys (Marsden 1999). In order to obtain accurate distances, one must be able to identify exactly where the bird is located, and in dense vegetation this may only be possible by searching for birds and measuring distances. We found it easier to measure perpendicular distances along a transect line than to measure straight-line distances from a point in the rainforest.
After realising the disadvantages of point transect surveys, we decided to abandon this method. Because of this, and the fact that we only sampled point transects in the afternoon, we are unable to compare the two methods in terms of accuracy or precision. However, other studies have shown that point transects are more biased than line transects (e.g. Raman 2003; Buckland 2006; Cassey et al. 2007; Gale et al. 2009). Line transects should also produce more precise estimates than point transects given the same amount of effort (Casagrande and Beissinger 1997). In order to obtain similar levels of precision, the sample size of point transect surveys should be approximately 25 % larger than that of line transect surveys (Buckland et al. 1993). Point transect surveys typically require a large number of points to obtain reliable data about uncommon species (Gale and Thongaree 2006), and are probably better suited to studying species that occur at higher densities (Greene et al. 2010), or to studying many species at once (Barraclough 2000), as it is easier to focus on the task of recording detections from a fixed point (Buckland et al. 2008).

To maximise the efficiency of surveys, we recommend the use of line transects for estimating parakeet densities in New Caledonia. Point transect surveys might be a useful alternative for surveying Rainbow Lorikeets because the inefficiencies associated with this method are probably irrelevant when surveying such a common species.

### Assessing distances and flock sizes

Estimating numbers and distances by ear can be imprecise (Hutto and Young 2003; Alldredge et al. 2007b), and yet such estimates are commonly used in distance sampling (e.g. Marsden 1999; Marsden et al. 2000; Jepson et al. 2001; Marsden and Pilgrim 2003; Gale and Thongaree 2006). To compensate for this imprecision, estimates are occasionally assigned to specific distance intervals in the field (Buckland et al. 1993).
However, this approach may lead to errors if detections end up in the wrong interval, especially near the transect line. In view of the difficulty in estimating distances and flock sizes in dense rainforests, we recommend tracking down birds whenever possible, so that individuals can be counted and distances can be accurately measured. In this study, we only estimated the location of birds that we were unable to see, and in most of these cases, we were still able to take measurements from where the calls came from. These occasional estimates are unlikely to have a considerable effect on the resulting density because the potential for distance estimation error is low near the transect line, and distant birds have little influence on density estimates.

Some studies suggest that it may be possible to achieve relatively robust density estimates despite a high dependence upon aural cues (e.g. Gale et al. 2009). Although we did not calculate the degree of imprecision associated with estimating distances, this could be accomplished by measuring and estimating distances simultaneously. If bias is systematic, an estimate-correction factor could also be calculated, based on the average degree of error associated with distance estimation (Buckland et al. 1993). Hence, whenever distances cannot be measured, the correction factor could be used to adjust estimates accordingly.

Distance estimation errors can potentially be reduced further by employing several well-trained observers, rather than just one (Marsden 1999). When multiple observers are present, tasks can be shared and distances can be measured more easily. Furthermore, one observer can remain on the transect line while the other confirms the location of detections, which helps to maximise the probability of detecting birds near the line. One of the underlying assumptions of distance sampling is that individuals are detected at their initial location (Buckland et al. 1993).
The ability to meet this assumption comes into question with mobile species. For example, if parrots fly towards or away from an observer, this can affect detection distances, and may result in biased density estimates. A low detection frequency close to the observer is suggestive of bias, either due to evasive bird movement, or failure to detect birds near the transect line (Buckland et al. 1993; Casagrande and Beissinger 1997). Although the lack of habitat above the road at PPRB may have reduced our ability to detect individuals near the transect line, we compensated for this during the analysis by shifting the transect line to the edge of the road. Having done so, the resulting fall-off in detection frequencies (Fig. 2) suggests that there was minimal movement prior to detection (Buckland et al. 1993; Marsden 1999). Parakeets were usually perched high in the treetops, and we had no problem approaching them during searches, so we have no reason to believe that birds flew away as we advanced along the transect line. Also, the height of parakeets did not appear to influence detectability, as detections were normally distributed with respect to height.

During prolonged searches, there is a greater chance that birds will leave their original location, and possibly even enter or exit the study area. Additionally, as distances increase, estimation errors are more likely to result from variations in the terrain (e.g. hills). Observers should therefore focus on locating birds close to the transect line instead of spending time searching for distant birds (Ekblom 2010). This should help to ensure that the probability of detecting birds close to the transect line is high, which is fundamentally important for distance sampling (Buckland et al. 1993; Marsden 1999). In some tropical forests, it is considered acceptable to focus on the nearest 30 m from the transect line (Buckland et al.
2008), and in the forests of Papua New Guinea, a search radius of 50 m was used to survey parrots and hornbills (Marsden and Pilgrim 2003), as well as cockatoos (Marsden et al. 2001). In order to minimise the effort expended searching for parrots in future surveys in New Caledonia, observers should concentrate on locating birds within approximately 50–70 m of the transect line. The vast majority (96 %) of the birds detected in this study were within 50 m of the transect line, and birds located farther away had very little impact on densities.

We rarely needed to estimate flock sizes in our study because we searched for any birds heard during surveys. Even when we estimated distances, we were usually close enough to have a good idea of how many birds were present. However, if aural detections are frequent, it may be necessary to compensate for instances where flock sizes are unknown. This can be accomplished by substituting the average flock size from visual detections for the unknown flock size (Marsden 1999; Marsden et al. 2000; Lee and Marsden 2008).

### Birds in flight

As previously noted, birds can pose problems in distance sampling due to their high mobility. However, as long as their movement is not in response to observer presence, and their average speed is slow in relation to the observer, then bias is likely to be small (Buckland et al. 1993, 2001, 2008). Parakeets in New Caledonia spend most of their time in the canopy and typically cover relatively short distances during flight, so it is often possible to track them through forest (Legault et al. 2011). Therefore, records of flying parakeets are unlikely to generate substantial bias in density estimates, provided that line transects are used. Rainbow Lorikeets fly much farther and faster than parakeets, often in response to the availability of flowering or fruiting plants (Franklin and Noske 1999; Higgins 1999).
Thus, including observations of Rainbow Lorikeets in flight could potentially result in an overestimate of density if many are just passing through the study area. Achieving realistic density estimates may therefore be difficult for this species, and a different strategy may be required to deal with the high proportion of birds observed in flight. Ideally, the distance to flying birds should be recorded when they are perpendicular to the observer (Buckland et al. 2001, 2008). However, it can be difficult to do this accurately in dense rainforests, and some birds are likely either to be lost from view before reaching this point, or only spotted afterwards. Marsden (1999) provides an alternative means of compensating for birds detected in flight, which is based on the average proportion of time that a particular species spends flying versus perching. Marsden (1999) used this method to adjust point count density estimates of Rainbow Lorikeets, but it is also applicable to line transect surveys (Buckland et al. 2001, 2008).

In general, we recommend that observers record birds in flight during surveys, as these records can always be omitted later. The decision to include, exclude, or compensate for flying birds in density estimates should be based on the habits of the particular species under study. Ignoring birds in flight without taking this into consideration may lead to an underestimate of density, and could make it more difficult to estimate populations of rare or cryptic species.

### Path width

If possible, line transect surveys should be carried out along straight, random lines (Buckland et al. 1993, 2001, 2008). Unfortunately, this is not practical in some environments, such as tropical rainforests, where the terrain often makes it difficult to maintain a steady pace, and one’s ability to hear birds may be reduced by the rustling of vegetation while walking.
As trails and dirt roads provide access to many remote areas in New Caledonia, we were interested in establishing whether they could be used for surveying parrots. Although paths are not representative of the landscape (Ellingson and Lukacs 2003), they are unlikely to affect the distribution of parrots unless they are so wide that they create considerable disturbances in the surrounding habitat (Hutto et al. 1995). Furthermore, it should not be assumed that all birds are attracted or deterred by paths (Venturato et al. 2010).

Our results indicate that densities may be underestimated if surveys are carried out on roads, although this problem can be reduced by compensating for the width of the road during the analysis stage. The lack of vegetation above roads is of particular concern, as it means that only flying birds will be detected directly above the transect line. Large gaps in the canopy may also be a deterrent for parrots, especially for Horned Parakeets, as they tend to avoid edges (Legault et al. 2012). Thus, in environments where it is not feasible to position transects randomly, it may be best to conduct surveys along narrow trails, where there is minimal vegetation loss or disturbance.

### Survey period and parrot activity patterns

Parrot activity varies over the course of a day, and this can influence the detectability of a species. Ideally, surveys should be carried out during periods of maximum detectability and minimal bird movement (Marsden 1999). Foraging periods are good for this, as parrots tend to chatter while they feed, and they usually feed for a short while in one location (authors’ observations). In order to standardise density estimates, we recommend that parrot surveys be undertaken from 0.5 to 3 h after dawn, or from 3 to 0.5 h before dusk. These are the main foraging periods for parrots in New Caledonia, and when most parrot encounters take place (Robinet et al. 2003; Legault et al. 2012).
These survey times would likely be appropriate for other parrots in the tropics, which typically exhibit a bimodal pattern of activity (e.g. Hardy 1965; Pizo et al. 1997; Gilardi and Munn 1998). Although densities of New Caledonian Parakeets appeared to be lower in the afternoon than in the morning, this may have been caused by small sample sizes. In comparison, there was little difference between morning and afternoon densities of Horned Parakeets. Nevertheless, the possibility that birds may be less active during certain periods of the day should be taken into consideration when planning surveys, as cryptic behaviour may decrease the probability of detecting birds near the transect line, and could potentially reduce density estimates. If time is not a factor, then it might be preferable to only survey transects at one period of the day, such as in the morning. However, by surveying transects during the afternoon as well as the morning, precise density estimates can be attained in half the number of days it would otherwise take. Additionally, surveying at different periods of the day can be useful for avoiding poor weather. Provided that the differences in detectability between survey periods are relatively minor, this should not generate biased density estimates due to the pooling robustness of the detection function (Buckland et al. 2004, 2008). Multiple-covariate distance sampling can also be used to compensate for such variations in detectability, and may be useful to increase the precision of density estimates (Marques et al. 2007). When interpreting the results of our study, it is also important to consider the time of year when we conducted surveys. In 2004/2005, we surveyed transects from November to January, which coincides with the breeding season of parrots in New Caledonia (Hannecart and Létocart 1980, 1983), a time when most trees flower and fruit (Carpenter et al. 2003). However, in 2008 we carried out surveys from March to June. 
Therefore, any differences in densities between these two periods may be the result of seasonal variations in habitat selection. To avoid this problem, surveys should be conducted at the same location and time of year (Marsden 1999). We believe that the best time of the year to carry out surveys is during the breeding season, roughly from November to February, when parakeets are likely to remain near their nesting areas. Differences in observer ability can also bias density estimates (e.g. Norvell et al. 2003). However, observer turnover is unlikely to have been responsible for the observed variation in densities between seasons as encounter rates and densities were linearly correlated (Fig. 5). This potential source of error could be eliminated by using the same observers each year, or minimised through consistent training. Observer bias can also be reduced by randomly rotating observers between transects (e.g. Peres 1999). ### Survey effort As survey effort increases, more observations are recorded, and the precision of the density estimate usually improves. However, expending an excessive amount of effort to attain a modest increase in precision is an inefficient use of resources, and may be unacceptable if it delays conservation action. On the other hand, if the sample size is inadequate, then little information will be available in relation to density (Buckland et al. 1993). Our surveys provide an indication of the amount of effort that will be required to estimate parakeet densities elsewhere in New Caledonia. In most cases, we obtained stable density estimates with 40–50 observations, yet sites with particularly high densities (e.g. over 20 birds/km²) may require more. Additional effort will improve the precision of the estimate, and may be warranted at key monitoring sites. Buckland et al. (1993) indicate that a sample size of 40 may be adequate under certain circumstances, but generally recommend a sample size of at least 60–80. 
Achieving such numbers in practice may require a substantial amount of time when studying uncommon or cryptic species. Even if samples are gathered over a long period, there is a possibility that the coefficients of variation will remain high due to the amount of spatial variation associated with species that occur at low densities. As conservation resources are often limited, correlating indices of abundance with absolute densities might provide an efficient alternative for monitoring population trends and estimating population sizes when distance sampling is not practical. Additional testing would be useful, however, as other studies suggest that such correlations may not hold up over time (e.g. Norvell et al. 2003). In conclusion, we encourage the adoption of line transect distance sampling for surveying parakeet populations in New Caledonia, and believe that standardisation of survey techniques will facilitate comparison between different areas and time periods. Our suggestions may be useful to other researchers interested in estimating bird populations in tropical rainforests, and can be adapted to suit different species or environments. ## Acknowledgments This study was part of the research project “Impact of introduced mammals and habitat loss on endemic birds of New Caledonia”, done in cooperation with the Direction de l’Environnement (Province Sud, New Caledonia), which issued all permits for this study, and financed by the Loro Parque Fundación (Spain), Polish Ministry of Science and Higher Education (Grant 2P04F 001 29), Conservation des Espèces et Populations Animales (France), La Fondation Nature et Découvertes (France), Fonds für bedrohte Papageien—Zoologische Gesellschaft für Arten- und Populationsschutz (Germany), and doctoral grants from the University of Tasmania (to A. Legault) and Province Sud (to S. Rouys). 
Field work on Ouvéa was financed by the Province des Iles Loyauté (New Caledonia), the ASPO (New Caledonia), the Société Calédonienne d’Ornithologie (New Caledonia) and the British Birdwatching Fair (U.K.). We thank A. Barnaud, A. Bauma, S. Baoutuau, M.F. Barré, M. Broersen, M. Capoa, J.F. Chaouri, J.B. Dao, P. Dialla, O. Hebert, B. Michielsen, V. Mindia, T. Sanchez, B. Tangopi, C. Vanhoye, W. Wamo, B. Waneux, L. Wéa and A. Wétéwéa for their help during field work, and A. Richardson (University of Tasmania), F. Huettmann, T. Müller, and an anonymous reviewer for providing valuable feedback on the manuscript. ## Authors and Affiliations • Andrew Legault (1, 2) • Jörn Theuerkauf (2) • Emilie Baby (3, 6) • Laetitia Moutin (3) • Sophie Rouys (3, 6) • Maurice Saoumoé (4) • Ludovic Verfaille (4) • Nicolas Barré (5) • Vivien Chartendrault (5) • Roman Gula (2) 1. School of Zoology, University of Tasmania, Hobart, Australia 2. Museum and Institute of Zoology, Polish Academy of Sciences, Warsaw, Poland 3. Conservation Research New Caledonia, Nouméa Cedex, New Caledonia 4. Province des Iles Loyauté, Direction du Développement Economique, New Caledonia 5. Institut Agronomique néo-Calédonien (IAC/CIRAD), Païta, New Caledonia 6. Société Calédonienne d’Ornithologie, Nouméa Cedex, New Caledonia
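The density estimates discussed throughout the paper come from conventional line transect distance sampling. As a rough illustration of that machinery, here is a minimal sketch of the standard estimator D = n / (2wLP), with the average detection probability P taken from a half-normal detection function. All numbers below (detections, transect length, strip half-width, detection scale) are invented for illustration and are not data from this study.

```python
import math

def half_normal_p(w: float, sigma: float, steps: int = 10_000) -> float:
    """Average detection probability within strip half-width w, for a
    half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)),
    computed by midpoint numerical integration."""
    dx = w / steps
    total = sum(
        math.exp(-(((i + 0.5) * dx) ** 2) / (2.0 * sigma**2)) for i in range(steps)
    )
    return total * dx / w

# Illustrative inputs only (not from the study above):
n = 45        # detections (cf. the 40-50 observations cited as a stable sample)
L = 20.0      # total transect length, km
w = 0.1       # strip half-width, km (100 m)
sigma = 0.05  # half-normal scale parameter, km

P = half_normal_p(w, sigma)
density = n / (2.0 * w * L * P)  # birds per km^2
print(round(density, 1))
```

The point of the sketch is only the structure of the estimator: widening the strip (w) or lengthening the transects (L) lowers the estimate for a fixed number of detections, while a poorer detection function (smaller P) raises it.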
https://homework.cpm.org/category/CON_FOUND/textbook/ac/chapter/5/lesson/5.1.6/problem/5-62
### Home > AC > Chapter 5 > Lesson 5.1.6 > Problem 5-62

5-62. How many yearbooks should your school order? Your student government surveyed three homeroom classes, and $55$ of $90$ students said that they would definitely buy a yearbook. If your school has $2000$ students, approximately how many books should be ordered? Show and organize your work. Set up a proportion to solve this problem: $\frac{55}{90}=\frac{x}{2000}$. Solve the proportion for $x$. Your school should order about $1222$ yearbooks.
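The proportion above can be checked with a couple of lines of arithmetic (the variable names are just for illustration):

```python
# Scale the sample proportion up to the whole school: 55/90 = x/2000.
surveyed = 90
will_buy = 55
school_size = 2000

x = will_buy / surveyed * school_size  # cross-multiplying: x = 55 * 2000 / 90
print(round(x))  # → 1222
```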
http://planetmath.org/NumberTheory
# number theory

## Primary tabs

Keywords: number, integer, Gauss, Legendre, Euler, Dedekind, Wiles, Weil, Grothendieck, Deligne, Faltings, Serre, prime

Type of Math Object: Topic

Major Section: Reference

## Mathematics Subject Classification

### Fermat's work in the 1800's

Can you help me with this... Assume p is prime. Prove that p divides 2^p - 2.

### Re: Fermat's work in the 1800's

Hi, p = 2 is trivial. So assume p is not equal to 2; therefore gcd(2, p) = 1. But you have 2[2^{p-1} - 1]/p. So you must show that 2^{p-1} - 1 divides p. Do a left click on pahio's link, read, and you get it. perucho

### Re: Fermat's work in the 1800's

OOOPS! 2^{p-1} - 1 divides p???? p divides 2^{p-1} - 1! Sorry.

### Sequence

Consider the sequence a[n] = 4 + 7(n-1) = 7n - 3. In my text this sequence is represented as follows: a, a+d, a+2d, a+3d, ..., a+n(n-1)d, ... My comment is that the a+n(n-1)d terms represent some of the terms but not each and every successive term. Please feel free to comment. Thanks. z

### Re: Fermat's work in the 1800's

OK, I just multiplied Fermat's little theorem by a to get a^p is congruent to a mod p, which is the same form as 2^p is congruent to 2 mod p. So p divides a^p - p.

### Correction

Sorry, that's p divides a^p - a. Must be a typo.

### Re: Sequence

What would be the typo? The equality 4 + 7(n - 1) = 7n - 3 checks out. To make sense of a, a + d, a + 2d, a + 3d, ... a + n(n - 1)d, ... plug in a = 4 and d = 7. Then the whole thing becomes just another way of writing a + (n - 1)d = 4 + 7(n - 1).

### Re: Fermat's work in the 1800's

Check out Fermat's little theorem.
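The claim in the thread above is easy to check numerically. The sketch below verifies that p divides 2^p - 2 for small primes, and also shows why the converse cannot be used as a primality test (341 is the smallest base-2 Fermat pseudoprime):

```python
# Empirical check of Fermat's little theorem in the form p | 2^p - 2.
def p_divides_2p_minus_2(p: int) -> bool:
    return (2**p - 2) % p == 0

small_primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(all(p_divides_2p_minus_2(p) for p in small_primes))  # → True

# The converse is false: 341 = 11 * 31 is composite, yet 341 divides 2^341 - 2.
print(p_divides_2p_minus_2(341))  # → True
```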
http://gmatclub.com/forum/in-the-xy-plane-what-is-the-slope-of-line-perpendicular-108171.html?kudos=1
# In the xy-plane, what is the slope of a line perpendicular to line K?

**Question (MichelleSavina):** In the xy-plane, what is the slope of the line perpendicular to line K?

(1) Line K intersects the line with equation 2x + 3y = 4 at point Q (2, 0)
(2) Line K does not intersect the line with equation y = 2x - 5

**Answer (Math Expert):** You should know two important things:

1. For one line to be perpendicular to another, their slopes must be negative reciprocals: if the slope of one line is $$m$$, then a line perpendicular to it has slope $$-\frac{1}{m}$$. In other words, two lines are perpendicular if and only if the product of their slopes is -1.
2. Parallel lines have the same slope.

(1) Line K intersects the line with equation 2x + 3y = 4 at point Q (2, 0). Clearly insufficient, as we don't know the angle at which line K intersects this line (we just have some line and know that K intersects it at some point). Not sufficient.

(2) Line K does not intersect the line with equation y = 2x - 5. K is therefore parallel to this line, so it has the same slope: slope of K = 2. The line perpendicular to line K will have slope -1/2. Sufficient.

For more on these issues, check the Coordinate Geometry chapter of the Math Book: math-coordinate-geometry-87652.html

**Follow-up (Stiv):** If the statement says that line K does not intersect the line with equation y = 2x - 5, and therefore K is parallel to this line and has the same slope, how do I know that these two lines are parallel?

**Reply (Math Expert):** By definition: two lines in a plane that do not intersect are called parallel lines. So "K does not intersect the line with equation y = 2x - 5" means that these lines are parallel.
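The slope relationships used in the solution can be sketched in a few lines (the function name is illustrative):

```python
# Perpendicular slopes are negative reciprocals: m and -1/m multiply to -1.
def perpendicular_slope(m: float) -> float:
    """Slope of a line perpendicular to one with nonzero slope m."""
    if m == 0:
        raise ValueError("perpendicular to a horizontal line is vertical (undefined slope)")
    return -1.0 / m

# Statement (2): line K is parallel to y = 2x - 5, so the slope of K is 2.
slope_k = 2.0
print(perpendicular_slope(slope_k))  # → -0.5

# Sanity check: the product of perpendicular slopes is -1.
assert slope_k * perpendicular_slope(slope_k) == -1.0
```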
https://worldwidescience.org/topicpages/p/phase+equilibrium+diagrams.html
Sample records for phase equilibrium diagrams

1. Para-equilibrium phase diagrams

International Nuclear Information System (INIS)
Pelton, Arthur D.; Koukkari, Pertti; Pajarre, Risto; Eriksson, Gunnar
2014-01-01

Highlights: • A rapidly cooled system may attain a state of para-equilibrium. • In this state rapidly diffusing elements reach equilibrium but others are immobile. • Application of the Phase Rule to para-equilibrium phase diagrams is discussed. • A general algorithm to calculate para-equilibrium phase diagrams is described. - Abstract: If an initially homogeneous system at high temperature is rapidly cooled, a temporary para-equilibrium state may result in which rapidly diffusing elements have reached equilibrium but more slowly diffusing elements have remained essentially immobile. The best known example occurs when homogeneous austenite is quenched. A para-equilibrium phase assemblage may be calculated thermodynamically by Gibbs free energy minimization under the constraint that the ratios of the slowly diffusing elements are the same in all phases. Several examples of calculated para-equilibrium phase diagram sections are presented and the application of the Phase Rule is discussed. Although the rules governing the geometry of these diagrams may appear at first to be somewhat different from those for full equilibrium phase diagrams, it is shown that in fact they obey exactly the same rules with the following provision. Since the molar ratios of non-diffusing elements are the same in all phases at para-equilibrium, these ratios act, as far as the geometry of the diagram is concerned, like “potential” variables (such as T, pressure or chemical potentials) rather than like “normal” composition variables which need not be the same in all phases. A general algorithm to calculate para-equilibrium phase diagrams is presented.
In the limit, if a para-equilibrium calculation is performed under the constraint that no elements diffuse, then the resultant phase diagram shows the single phase with the minimum Gibbs free energy at any point on the diagram; such calculations are of interest in physical vapor deposition when deposition is so rapid that phase

2. Mapping Isobaric Aging onto the Equilibrium Phase Diagram

DEFF Research Database (Denmark)
Niss, Kristine
2017-01-01

The linear volume relaxation and the nonlinear volume aging of a glass-forming liquid are measured, directly compared, and used to extract the out-of-equilibrium relaxation time. This opens a window to investigate how the relaxation time depends on temperature, structure, and volume in parts... of phase space that are not accessed by the equilibrium liquid. It is found that the temperature dependence of relaxation time is non-Arrhenius even in the isostructural case, challenging the Adam-Gibbs entropy model. Based on the presented data and the idea that aging happens through quasiequilibrium... states, we suggest a mapping of the out-of-equilibrium states during isobaric aging to the equilibrium phase diagram. This mapping implies the existence of isostructural lines in the equilibrium phase diagram. The relaxation time is found to depend on the bath temperature, density, and just a single...

3. Mapping Isobaric Aging onto the Equilibrium Phase Diagram

Science.gov (United States)
Niss, Kristine
2017-09-15

The linear volume relaxation and the nonlinear volume aging of a glass-forming liquid are measured, directly compared, and used to extract the out-of-equilibrium relaxation time. This opens a window to investigate how the relaxation time depends on temperature, structure, and volume in parts of phase space that are not accessed by the equilibrium liquid. It is found that the temperature dependence of relaxation time is non-Arrhenius even in the isostructural case, challenging the Adam-Gibbs entropy model.
Based on the presented data and the idea that aging happens through quasiequilibrium states, we suggest a mapping of the out-of-equilibrium states during isobaric aging to the equilibrium phase diagram. This mapping implies the existence of isostructural lines in the equilibrium phase diagram. The relaxation time is found to depend on the bath temperature, density, and just a single structural parameter, referred to as an effective temperature.

4. The Prognosis of the Phase Equilibrium Diagram of the System Al-Cu-Si

Directory of Open Access Journals (Sweden)
Florentina Cziple
2007-10-01

Full Text Available The paper presents a model for establishing the mathematical functions of the liquidus and solidus curves, from the binary diagrams Al-Si, Si-Cu, Cu-Al and their use in the prognosis of the phase equilibrium diagram from the ternary system Al-Cu-Si. We have studied the model of the non-ideal liquid solution of the regular type. The calculus and graphic plotting of the equations for the binary systems has been performed on the computer

5. The equilibrium hydrogen pressure-temperature diagram for the liquid sodium-hydrogen-oxygen system

International Nuclear Information System (INIS)
Knights, C.F.; Whittingham, A.C.
1982-01-01

The underlying equilibria in the sodium-hydrogen-oxygen system are presented in the form of a complementary hydrogen equilibrium pressure-temperature diagram, constructed by using published data and supplemented by experimental measurements of hydrogen equilibrium pressures over condensed phases in the system. Possible applications of the equilibrium pressure-temperature phase diagram and limitations regarding its use are outlined

6. Estimated D₂-DT-T₂ phase diagram in the three-phase region

International Nuclear Information System (INIS)
Souers, P.C.; Hickman, R.G.; Tsugawa, R.T.
1976-01-01

A composite of experimental eH₂-D₂ phase-diagram data at the three-phase line is assembled from the literature.
The phase diagram is a smooth cigar shape without a eutectic point, indicating complete miscibility of the liquid and solid phases. Additional data are used to estimate the D₂-T₂, D₂-DT, and DT-T₂ binary phase diagrams. These are assembled into the ternary D₂-DT-T₂ phase diagram. A surface representing the chemical equilibrium of the three species is added to the phase diagram. At chemical equilibrium, it is estimated that 50-50 liquid D-T at 19.7 K is in equilibrium with 42 mole percent T vapor and 54 percent T solid. Infrared spectroscopy is suggested as a means of component analysis of liquid and solid mixtures

7. The equilibrium phase diagram of the magnesium-copper-yttrium system

International Nuclear Information System (INIS)
Mezbahul-Islam, Mohammad; Kevorkov, Dmytro; Medraj, Mamoun
2008-01-01

Thermodynamic modelling of the Mg-Cu-Y system is carried out as part of thermodynamic database construction for Mg alloys. This system is being modelled for the first time using the modified quasichemical model, which considers the presence of short range ordering in the liquid. A self-consistent thermodynamic database for the Mg-Cu-Y system was constructed by combining the thermodynamic descriptions of the constituent binaries, Mg-Cu, Cu-Y, and Mg-Y. All three binaries have been re-optimized based on the experimental phase equilibrium and thermodynamic data available in the literature. The constructed database is used to calculate and predict thermodynamic properties, the binary phase diagrams and liquidus projections of the ternary Mg-Cu-Y system. The current calculation results are in good agreement with the experimental data reported in the literature

8. The equilibrium diagram and some properties of alloys of the Gd₅Sb₃-Tb₅Sb₃ system

International Nuclear Information System (INIS)
Azizov, Yu.S.; Abulkhaev, V.D.; Ganiev, I.N.
2001-01-01

The purpose of the present work is to investigate the equilibrium diagram of the Gd₅Sb₃-Tb₅Sb₃ system over the total range of concentrations. The equilibrium diagram of the Gd₅Sb₃-Tb₅Sb₃ system was investigated by differential thermal, X-ray phase and metallographic analyses. For the first time, on the basis of differential thermal, X-ray phase and metallographic analyses, the equilibrium diagram of the Gd₅Sb₃-Tb₅Sb₃ system was constructed. The crystal-chemical parameters of solid solutions with the general formula GdₓTb₅₋ₓSb₃ were determined

9. Refined phase diagram of boron nitride

International Nuclear Information System (INIS)
Solozhenko, V.; Turkevich, V.Z.
1999-01-01

The equilibrium phase diagram of boron nitride thermodynamically calculated by Solozhenko in 1988 has now been refined on the basis of new experimental data on BN melting and extrapolation of heat capacities of BN polymorphs into the high-temperature region using the adapted pseudo-Debye model. As compared with the above diagram, the hBN ⇄ cBN equilibrium line is displaced by 60 K toward higher temperatures. The hBN-cBN-L triple point has been calculated to be at 3480 ± 10 K and 5.9 ± 0.1 GPa, while the hBN-L-V triple point is at T = 3400 ± 20 K and p = 400 ± 20 Pa, which indicates that the region of thermodynamic stability of vapor in the BN phase diagram is extremely small. It has been found that the slope of the cBN melting curve is positive, whereas the slope of the hBN melting curve varies from positive between ambient pressure and 3.4 GPa to negative at higher pressures

10. Phase diagrams of superconducting materials: Metallurgy, fabrication, and applications

International Nuclear Information System (INIS)
Flukiger, R.
1981-01-01

Because a large number of investigations on superconducting materials have been made on insufficiently characterized samples, and with temperature phase diagrams which contained serious errors, phase diagrams are studied.
It is seen that the variation of critical temperature as a function of chemical composition for a given compound can be used as a supplementary tool in determining composition with greater accuracy. The consequent search for higher critical temperature values in specified materials has led to a new concept in determining high temperature phase diagrams. Most of this paper is devoted to the study of bulk binary, pseudobinary, or ternary superconductors at their equilibrium state. As will be shown in several cases, these data serve as standard values and are of great help in understanding the superconducting behavior in materials produced by non-equilibrium methods, i.e., splat-cooling, thin film preparation by either sputtering, co-evaporation, or CVD, and diffusion processes in multifilamentary composite wires. An example of the departure from thermal equilibrium is the retention of metastable composition by a fast quenching rate

11. Phase diagram and structural evolution of tin/indium (Sn/In) nanosolder particles: from a non-equilibrium state to an equilibrium state

Science.gov (United States)
Shu, Yang; Ando, Teiichi; Yin, Qiyue; Zhou, Guangwen; Gu, Zhiyong
2017-08-31

A binary system of tin/indium (Sn/In) in the form of nanoparticles was investigated for phase transitions and structural evolution at different temperatures and compositions. The Sn/In nanosolder particles in the composition range of 24-72 wt% In were synthesized by a surfactant-assisted chemical reduction method under ambient conditions. The morphology and microstructure of the as-synthesized nanoparticles were analyzed by scanning electron microscopy (SEM), high resolution transmission electron microscopy (HRTEM), selected area electron diffraction (SAED) and X-ray diffraction (XRD). HRTEM and SAED identified InSn₄ and In, with some Sn being detected by XRD, but no In₃Sn was observed.
The differential scanning calorimetry (DSC) thermographs of the as-synthesized nanoparticles exhibited an endothermic peak at around 116 °C, which is indicative of the metastable eutectic melting of InSn₄ and In. When the nanosolders were subjected to heat treatment at 50-225 °C, the equilibrium phase In₃Sn appeared while Sn disappeared. The equilibrium state was effectively attained at 225 °C. A Tammann plot of the DSC data of the as-synthesized nanoparticles indicated that the metastable eutectic composition is about 62% In, while that of the DSC data of the 225 °C heat-treated nanoparticles yielded a eutectic composition of 54% In, which confirmed the attainment of the equilibrium state at 225 °C. The phase boundaries estimated from the DSC data of heat-treated Sn/In nanosolder particles matched well with those in the established Sn-In equilibrium phase diagram. The phase transition behavior of Sn/In nanosolders leads to a new understanding of binary alloy particles at the nanoscale, and provides important information for their low temperature soldering processing and applications.

12. High temperature phase equilibria and phase diagrams

CERN Document Server
Kuo, Chu-Kun; Yan, Dong-Sheng
2013-01-01

High temperature phase equilibria studies play an increasingly important role in materials science and engineering. They are especially significant in research into the properties of materials and the ways in which they can be improved. This is achieved by observing equilibrium and by examining the phase relationships at high temperature. The study of high temperature phase diagrams of nonmetallic systems began in the early 1900s, when silica and mineral systems containing silica were focussed upon. Since then technical ceramics emerged and more emphasis has been placed on high temperature

13. The Computerised Calculus in the Prognosis of the Phase Equilibrium Diagram of the Ternary System Al-Cu-Si

Directory of Open Access Journals (Sweden)
Florentina A.
Cziple 2006-10-01 Full Text Available The paper presents a model for establishing the mathematical functions of the liquidus and solidus curves, from the binary diagrams Al-Si, Si-Cu, Cu-Al and their use in the prognosis of the phase equilibrium diagram of the ternary system Al-Cu-Si. We have studied the model of the non-ideal liquid solution of the regular type. The calculus and graphic plotting of the equations for the binary systems has been performed on the computer with the software programmes MathCad 2000 Professional, Statistica 5, Curve Expert, and for the ternary system Al-Cu-Si, with the 3D StudioMax software 14. A proposed phase equilibrium diagram for Pt-Zr system International Nuclear Information System (INIS) Arias, D.E.; Gribaudo, L. 1993-01-01 A revision of the phase diagram of the Pt-Zr system is presented using up to date information from recent publications. The proposed change concerning the invariant transformation in the Pt-rich zone is supported by simplified thermodynamic evaluations. (author). 12 refs., 1 fig 15. The structural phase diagram and oxygen equilibrium partial pressure of YBa2Cu3O6+x studied by neutron powder diffraction and gas volumetry DEFF Research Database (Denmark) Andersen, N.H.; Lebech, B.; Poulsen, H.F. 1990-01-01 An experimental technique based on neutron powder diffraction and gas volumetry is presented and used to study the structural phase diagram of YBa2Cu3O6+x under equilibrium conditions in an extended part of (x, T)-space (0.15 < x < 0.92) 16. CALPHAD calculation of phase diagrams : a comprehensive guide CERN Document Server Saunders, N; Miodownik, A P 1998-01-01 This monograph acts as a benchmark to current achievements in the field of Computer Coupling of Phase Diagrams and Thermochemistry, often called CALPHAD which is an acronym for Computer CALculation of PHAse Diagrams.
It also acts as a guide to both the basic background of the subject area and the cutting edge of the topic, combining comprehensive discussions of the underlying physical principles of the CALPHAD method with detailed descriptions of their application to real complex multi-component materials. Approaches which combine both thermodynamic and kinetic models to interpret non-equilibrium phase transformations are also reviewed. 17. Thermodynamic analysis of 6xxx series Al alloys: Phase fraction diagrams Directory of Open Access Journals (Sweden) Cui S. 2018-01-01 Full Text Available Microstructural evolution of 6xxx Al alloys during various metallurgical processes was analyzed using accurate thermodynamic database. Phase fractions of all the possible precipitate phases which can form in the as-cast and equilibrium states of the Al-Mg-Si-Cu-Fe-Mn-Cr alloys were calculated over the technically useful composition range. The influence of minor elements such as Cu, Fe, Mn, and Cr on the amount of each type of precipitate in the as-cast and equilibrium conditions were analyzed. Phase fraction diagrams at 500 °C were mapped in the composition range of 0-1.1 wt.% Mg and 0-0.7 wt.% Si to investigate the as-homogenized microstructure. In addition, phase fraction diagram of Mg2Si at 177 °C was mapped to understand the microstructure after final annealing of 6xxx Al alloy. Based on the calculated diagrams, the design strategy of 6xxx Al alloy to produce highest strength due to Mg2Si is discussed. 18. The phase equilibrium diagrams as a tool for the design and use of refractories; Los diagramas de equilibrio de fases como una herramienta para el diseno y comprension del comportamiento en servicio de los materiales refractarios Energy Technology Data Exchange (ETDEWEB) Aza, A. H. de; Pena, P.; Caballero, A.; Aza, S. de 2011-07-01 Refractories are complex materials used at high temperature, in severely corrosive atmospheres and in contact with aggressive liquids. 
The high temperatures imply that such systems tend to equilibrium and this is frequently attained during service; at least local equilibrium is achieved. This allows the basic principles of phase diagrams to be used in this technology. Traditionally, refractories have been designed to be close to equilibrium so that in-service changes were restricted. Currently, additions of raw materials are often made that will react in use, in a controlled manner, to give favorable effects under the service conditions. Equilibrium diagrams are valid not only for determining the thermodynamic tendency but also for predicting the final equilibrium state and for knowing the path by which the material moves toward that final state. In this context equilibrium diagrams become a powerful tool for a better understanding of the behavior of refractories during service. After a general consideration on the importance of phase equilibrium diagrams in this field, criteria for using equilibrium diagrams, as a tool for improving traditional refractories and/or designing advanced or new refractories, will be given. Pertinent examples in different systems will be discussed. This paper compiles and reviews the last plenary lecture given by Professor Salvador De Aza on the subject. (Author) 58 refs. 19. The Zr-Pt system. Experimental determination of the phase equilibrium conditions, and determination of the diagram by thermodynamic modeling International Nuclear Information System (INIS) Alonso, Regina P. 1997-01-01 Two regions in the zirconium-platinum system (Zr-Pt) were investigated, namely, the zirconium-rich and the platinum-rich regions. For this purpose, five alloys were obtained. The experiments consisted of heat treatments and measurements of the variation of electrical resistivity with temperature. The appearing phases were analyzed by optical and scanning electron microscopy (SEM), quantitative microanalysis and X-ray diffraction techniques.
Besides that, the existing phases in the zirconium-rich region between 0 and 50 at.% Pt were thermodynamically modelled and the resulting diagram was calculated by means of the Thermocalc computational program. Several proposals were formulated: a) a change in the eutectoid transformation temperature βZr ↔ αZr + pp (800 °C according to this work); b) the existence of the phase Zr3Pt in the equilibrium diagram; c) the existence of the peritectic transformation Liquid + Zr5Pt3 ↔ Zr3Pt; d) the occurrence of the two-phase region ZrPt3 + ZrPt8 between 1050 and 1320 °C; and finally e) the occurrence of the peritectic transformation ZrPt3 + Liquid ↔ γPt was verified. (author) 20. Determination of the UO2-ZrO2-BaO equilibrium diagram International Nuclear Information System (INIS) Paschoal, J.O.A.; Kleykamp, H.; Thuemmler, F. 1984-01-01 The equilibrium diagram of UO2-ZrO2-BaO is determined in order to interpret and predict changes in the chemical properties of ceramic (oxide) nuclear fuels during irradiation. The isothermal section of the system at 1700 °C was determined experimentally, utilizing the techniques of ceramography, X-ray diffraction analysis, microprobe analysis and differential thermal analysis. The solid solubility limits at 1700 °C between UO2 and ZrO2, UO2 and BaO, ZrO2 and BaO, and BaUO3 and BaZrO3 are presented. The influence of oxygen potential in relation to the different phases is discussed and the phase diagram of the system presented. (M.C.K.) [pt 1. The Establishment, Plotting and Statistic-Mathematical Interpretation of the Liquidus Surface from the Phase Equilibrium Diagram of the Ternary System Al-Cu-Si Directory of Open Access Journals (Sweden) Florentina A. Cziple 2006-10-01 Full Text Available The paper forwards the conclusions of a survey performed on a mathematical model of the phase equilibrium of the ternary system Al-Cu-Si.
The author presents the calculation of the statistical equation of the liquidus surface model from this diagram, the plotting and statistical-mathematical interpretation of the results obtained. 2. Applications of phase diagrams in metallurgy and ceramics International Nuclear Information System (INIS) Carter, G.C. 1978-03-01 The workshop represents an effort to coordinate and reinforce the current efforts on compilation of phase diagrams of alloys and ceramics. Many research groups and individual scientists throughout the world are concerned with phase equilibrium data. Specialized expertise exists in small institutions as well as large laboratories. If this talent can be effectively utilized through a cooperative effort, the needs for such data can be met. The Office of Standard Reference Data, which serves as the program management office for the National Standard Reference Data System, is eager to work with all groups concerned with this problem. Through a cooperative international effort we can carry out a task which has become too large for an individual. Volume 2 presents computational techniques for phase diagram construction 3. Phase relationships in Cu-rich corner of the Cu-Cr-Zr phase diagram International Nuclear Information System (INIS) Zeng, K.J.; Haemaelaeinen, M.; Lilius, K. 1995-01-01 In the available experimental information on the Cu-Cr-Zr ternary system, there exist different opinions concerning the phase relationships in the Cu-rich corner of the Cu-Cr-Zr phase diagram. Glazov et al. and Zakharov et al. investigated the Cu-rich corner of the Cu-Cr-Zr phase diagram within the composition range up to 3.5 Cr and 3.5 Zr (wt.%). A quasi-eutectic reaction L → (Cu) + αCr2Zr was observed to occur at 1020 °C and several isothermal sections were constructed within the temperature range from 600 to 1000 °C to show the (Cu)-αCr2Zr two-phase equilibrium. Therefore, a pseudobinary Cu-Cr2Zr system was proposed.
Afterwards, Dawakatsu et al., Fedorov et al., and Kuznetsov et al. studied the Cu-rich corner of the phase diagram in a wider composition range up to 5 Cr and 20 Zr (at.%). Contrary to Glazov et al. and Zakharov et al., they found no Cr2Zr phase in their samples. Hence, the pseudobinary Cu-Cr2Zr system does not exist. In this study an experimental investigation is presented on the phase relationships in the Cu-rich corner of the Cu-Cr-Zr phase diagram at 940 °C in order to clear up the confusion 4. Equilibrium triple point pressure and pressure-temperature phase diagram of polyethylene NARCIS (Netherlands) Hikosaka, M.; Tsukijima, K.; Rastogi, S.; Keller, A. 1992-01-01 The equilibrium triple point and the pressure-temperature phase diagram of polyethylene were obtained by in-situ optical microscopy and X-ray observations of the melting temperature of hexagonal and orthorhombic isolated extended-chain single crystals at high pressure. The melting temperatures of extended-chain crystals 5. Equilibrium phase diagram of the Ag-Au-Pb ternary system International Nuclear Information System (INIS) Hassam, S.; Bahari, Z. 2005-01-01 The phase diagram of the ternary system Ag-Au-Pb has been established using differential thermal analysis and X-ray powder diffraction analysis. Four vertical sections were studied: X(Pb) = 0.40, X(Au)/X(Pb) = 1/3, X(Ag)/X(Au) = 4/1 and X(Ag)/X(Au) = 1/1. Two ternary transitory peritectics and one ternary eutectic were characterized. A schematic representation of the ternary equilibria is given 6. Phase diagram of the ABC model with nonconserving processes International Nuclear Information System (INIS) Lederhendler, A; Cohen, O; Mukamel, D 2010-01-01 The three-species ABC model of driven particles on a ring is generalized to include vacancies and particle-nonconserving processes. The model exhibits phase separation at high densities.
For equal average densities of the three species, it is shown that although the dynamics is local, it obeys detailed balance with respect to a Hamiltonian with long-range interactions, yielding a nonadditive free energy. The phase diagrams of the conserving and nonconserving models, corresponding to the canonical and grand-canonical ensembles, respectively, are calculated in the thermodynamic limit. Both models exhibit a transition from a homogeneous to a phase-separated state, although the phase diagrams are shown to differ from each other. This conforms with the expected inequivalence of ensembles in equilibrium systems with long-range interactions. These results are based on a stability analysis of the homogeneous phase and exact solution of the continuum equations of the models. They are supported by Monte Carlo simulations. This study may serve as a useful starting point for analyzing the phase diagram for unequal densities, where detailed balance is not satisfied and thus a Hamiltonian cannot be defined 7. Kinetics of the (solid + solid) transformations for the piracetam trimorphic system: Incidence on the construction of the p–T equilibrium phase diagram International Nuclear Information System (INIS) Corvis, Yohann; Spasojević-de Biré, Anne; Alzina, Camille 2016-01-01 Highlights: • Thermal analyses and X-ray diffraction experiments are performed. • Scan-rate dependence of the transition points is highlighted. • A new phase diagram of piracetam is proposed. • The new hierarchy of polymorphs stability is now coherent with all published data. - Abstract: The three common polymorphs of piracetam have been characterized by associating thermal analysis, X-ray diffraction and densimetry. DSC experiments showed that the (solid + solid) transition temperature between Forms II and I and between Forms III and I is scan-rate dependent. 
The transition temperatures decrease when the DSC scan rate decreases and the thermodynamic temperatures were confirmed by isothermal X-ray diffraction. These new results in terms of temperature and enthalpy of transition allow us to propose a new equilibrium phase diagram establishing the relative thermodynamic stability of the three common polymorphs of piracetam as a function of the temperature and the pressure. The diagram suggests that Form II presents a small stability domain located just above the stability domain of Form I. As a consequence, Form I should transform into Form II, which itself can turn into Form III when placed under pressure. 8. Simple method for the calculation and use of CVD phase diagrams with applications to the Ti-B-Cl-H system, 1200 to 800 K International Nuclear Information System (INIS) Randich, E.; Gerlach, T.M. 1980-03-01 A simple method for calculating multi-component gas-solid equilibrium phase diagrams for chemical vapor deposition (CVD) systems is presented. The method proceeds in three steps: determination of stable solid assemblages, evaluation of gas-solid stability relations, and calculation of conventional phase diagrams using a new free energy minimization technique. The phase diagrams can be used to determine (1) bulk compositions and phase fields accessible by CVD techniques; (2) expected condensed phases for various starting gas mixtures; and (3) maximum equilibrium yields for specific CVD process variables. The three-step thermodynamic method is used to calculate phase diagrams for the example CVD system Ti-B-Cl-H at 1200 and 800 K. Examples of applications of the diagrams for yield optimization and experimental accessibility studies are presented and discussed. Experimental verification of the TiB2 + Gas/Gas phase field boundary at 1200 K, H/Cl = 1 confirms the calculated boundary and indicates that equilibrium is closely and rapidly approached under laboratory conditions 9.
Evaluation of self-interaction parameters from binary phase diagrams International Nuclear Information System (INIS) Ellison, T.L. 1977-10-01 The feasibility of calculating Wagner self-interaction parameters from binary phase diagrams was examined. The self-interaction parameters of 22 non-ferrous liquid solutions were calculated utilizing an equation based on the equality of the chemical potentials of a component in two equilibrium phases. Utilization of the equation requires the evaluation of the first and second derivatives of various liquidus and solidus data at infinite dilution of the solute component. Several numerical methods for evaluating the derivatives of tabular data were examined. A method involving power series curve fitting and subsequent differentiation of the power series was found to be the most suitable for the interaction parameter calculations. Comparison of the calculated self-interaction parameters with values obtained from thermodynamic measurements indicates that the Wagner self-interaction parameter can be successfully calculated from binary phase diagrams 10. Cluster Mean-Field Approach to the Steady-State Phase Diagram of Dissipative Spin Systems Directory of Open Access Journals (Sweden) Jiasen Jin 2016-07-01 Full Text Available We show that short-range correlations have a dramatic impact on the steady-state phase diagram of quantum driven-dissipative systems. This effect, never observed in equilibrium, follows from the fact that ordering in the steady state is of dynamical origin, and is established only at very long times, whereas in thermodynamic equilibrium it arises from the properties of the (free) energy. To this end, by combining the cluster methods extensively used in equilibrium phase transitions with quantum trajectories and tensor-network techniques, we extend them to nonequilibrium phase transitions in dissipative many-body systems.
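The power-series fit-and-differentiate step described in the self-interaction-parameter record above can be sketched numerically: fit a polynomial to liquidus data and evaluate its first and second derivatives at infinite dilution of the solute. The data below are synthetic (T = 1000 − 300x + 500x², not from the report), and the derivatives are what would feed the interaction-parameter equation:

```python
import numpy as np

# Synthetic liquidus: T = 1000 - 300 x + 500 x^2 (K), x = solute mole fraction
x = np.linspace(0.0, 0.10, 11)
T = 1000.0 - 300.0 * x + 500.0 * x**2

p = np.poly1d(np.polyfit(x, T, 2))   # power-series (quadratic) fit
dT0 = p.deriv(1)(0.0)                # dT/dx at infinite dilution (x -> 0)
d2T0 = p.deriv(2)(0.0)               # d2T/dx2 at infinite dilution
print(f"{dT0:.1f} {d2T0:.1f}")       # -> -300.0 1000.0
```

With tabular experimental data the polynomial degree would be chosen to balance fit quality against oscillation, as the record's comparison of numerical methods implies.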
We analyze in detail a model of spin-1/2 on a lattice interacting through an XYZ Hamiltonian, each of them coupled to an independent environment that induces incoherent spin flips. In the steady-state phase diagram derived from our cluster approach, the location of the phase boundaries and even its topology radically change, introducing reentrance of the paramagnetic phase as compared to the single-site mean field where correlations are neglected. Furthermore, a stability analysis of the cluster mean field indicates a susceptibility towards a possible incommensurate ordering, not present if short-range correlations are ignored. 11. Zr-Fe-Sn Ternary System Phase Diagrams - New Experimental Results International Nuclear Information System (INIS) Nieva, N; Gomez, A; Arias, D 2004-01-01 New experimental results for the Zr-Fe-Sn ternary system are presented in this paper. The phases present and equilibrium relations for the 900 °C isothermal section on the central zone of the Gibbs triangle are analysed. A set of ternary alloys was designed and obtained, and they were analysed by semi-quantitative SEM-EDS, XRD, and metallography. The resulting ternary phase diagrams are presented here (JCH) 12. CERPHASE: Computer-generated phase diagrams International Nuclear Information System (INIS) Ruys, A.J.; Sorrell, C.C.; Scott, F.H. 1990-01-01 CERPHASE is a collection of computer programs written in the programming language BASIC and developed for the purpose of teaching the principles of phase diagram generation from the ideal solution model of thermodynamics. Two approaches are used in the generation of the phase diagrams: freezing point depression and minimization of the free energy of mixing. Binary and ternary phase diagrams can be generated as can diagrams containing the ideal solution parameters used to generate the actual phase diagrams.
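The freezing-point-depression approach that CERPHASE teaches can be sketched with the ideal-solution (Schroeder-van Laar) liquidus, ln x_i = −(ΔH_i/R)(1/T − 1/T_m,i), which indeed requires only each component's heat of fusion and melting point. The component data below are hypothetical, not from CERPHASE:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def liquidus_T(x, dH, Tm):
    """Ideal-solution liquidus temperature at mole fraction x of the component."""
    return 1.0 / (1.0 / Tm - R * math.log(x) / dH)

def eutectic(dH_a, Tm_a, dH_b, Tm_b, n=10000):
    """Locate the intersection of the two liquidus branches by a composition scan."""
    gap = lambda x: abs(liquidus_T(x, dH_a, Tm_a) - liquidus_T(1.0 - x, dH_b, Tm_b))
    xe = min((i / n for i in range(1, n)), key=gap)
    return xe, liquidus_T(xe, dH_a, Tm_a)

# Hypothetical components A (dH = 10 kJ/mol, Tm = 1000 K) and B (12 kJ/mol, 900 K)
xe, Te = eutectic(10000.0, 1000.0, 12000.0, 900.0)
print(f"x_A = {xe:.3f}, Te = {Te:.0f} K")
```

The two branches fall from the pure-component melting points and cross at the eutectic, which necessarily lies below both melting points in this model.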
Since the diagrams generated utilize the ideal solution model, data input required from the operator is minimal: only the heat of fusion and melting point of each component. CERPHASE is menu-driven and user-friendly, containing simple instructions in the form of screen prompts as well as a HELP file to guide the operator. A second purpose of CERPHASE is in the prediction of phase diagrams in systems for which no experimentally determined phase diagrams are available, enabling the estimation of suitable firing or sintering temperatures for otherwise unknown systems. Since CERPHASE utilizes ideal solution theory, there are certain limitations imposed on the types of systems that can be predicted reliably. 6 refs., 13 refs 13. Sedimentation stacking diagram of binary colloidal mixtures and bulk phases in the plane of chemical potentials International Nuclear Information System (INIS) Heras, Daniel de las; Schmidt, Matthias 2015-01-01 We give a full account of a recently proposed theory that explicitly relates the bulk phase diagram of a binary colloidal mixture to its phase stacking phenomenology under gravity (de las Heras and Schmidt 2013 Soft Matter 9 8636). As we demonstrate, the full set of possible phase stacking sequences in sedimentation-diffusion equilibrium originates from straight lines (sedimentation paths) in the chemical potential representation of the bulk phase diagram. From the analysis of various standard topologies of bulk phase diagrams, we conclude that the corresponding sedimentation stacking diagrams can be very rich, even more so when finite sample height is taken into account. We apply the theory to obtain the stacking diagram of a mixture of nonadsorbing polymers and colloids. We also present a catalog of generic phase diagrams in the plane of chemical potentials in order to facilitate the practical application of our concept, which also generalizes to multi-component mixtures. (paper) 14. 
Stereo 3D spatial phase diagrams Energy Technology Data Exchange (ETDEWEB) Kang, Jinwu, E-mail: kangjw@tsinghua.edu.cn; Liu, Baicheng, E-mail: liubc@tsinghua.edu.cn 2016-07-15 Phase diagrams serve as the fundamental guidance in materials science and engineering. Binary P-T-X (pressure–temperature–composition) and multi-component phase diagrams are of complex spatial geometry, which brings difficulty for understanding. The authors constructed 3D stereo binary P-T-X, typical ternary and some quaternary phase diagrams. A phase diagram construction algorithm based on the calculated phase reaction data in PandaT was developed. And the 3D stereo phase diagram of Al-Cu-Mg ternary system is presented. These phase diagrams can be illustrated by wireframe, surface, solid or their mixture, isotherms and isopleths can be generated. All of these can be displayed by the three typical display ways: electronic shutter, polarization and anaglyph (for example red-cyan glasses). Especially, they can be printed out with 3D stereo effect on paper, and watched by the aid of anaglyph glasses, which makes 3D stereo book of phase diagrams come to reality. Compared with the traditional illustration way, the front of phase diagrams protrude from the screen and the back stretches far behind of the screen under 3D stereo display, the spatial structure can be clearly and immediately perceived. These 3D stereo phase diagrams are useful in teaching and research. - Highlights: • Stereo 3D phase diagram database was constructed, including binary P-T-X, ternary, some quaternary and real ternary systems. • The phase diagrams can be watched by active shutter or polarized or anaglyph glasses. • The print phase diagrams retains 3D stereo effect which can be achieved by the aid of anaglyph glasses. 15. Thermodynamic behavior of poly(3-alkyl thiophene) blends: Equilibrium cocrystal formation and phase segregation. 
Science.gov (United States) Pal, Susmita; Nandi, Arun K 2005-02-24 The equilibrium cocrystal formation of poly(3-alkyl thiophene) (P3AT) blends has been studied by isothermal cocrystallization in a differential scanning calorimeter (DSC-7). The equilibrium melting points (Tm0) of the cocrystals are measured using the Hoffman-Weeks extrapolation procedure. The equilibrium phase diagrams are of three different types: (a) concave upward, (b) linear, and (c) linear with phase separation at higher content of the lower-melting component. The phase diagram nature depends on the regioregularity difference and also on the difference in the number of carbon atoms in the pendent alkyl group of the components. The origin of the biphasic nature of the type "c" phase diagram has been explored from the glass transition temperature (Tg) measurement using a dynamic mechanical analyzer. The biphasic compositions show two glass transition temperatures (Tg) as well as two beta transition temperatures (Tβ). The Tg values of the phase-separated regions correspond almost to the component values, but one Tβ corresponds to that of the lower-Tβ component while the other is higher than that of the higher-Tβ component. Possible reasons are discussed from the interchain lamella thickness in the P3AT blends and molecular modeling using a molecular mechanics program. 16. Phase diagrams from ab-initio calculations: Re-W and Fe-B Energy Technology Data Exchange (ETDEWEB) Hammerschmidt, Thomas; Bialon, Arthur; Palumbo, Mauro; Fries, Suzana G.; Drautz, Ralf [ICAMS, Ruhr-Universitaet Bochum (Germany) 2011-07-01 The CALPHAD (CaLculation of Phase Diagrams) method relies on Gibbs energy databases and is of limited predictive power in cases where only limited experimental data is available for constructing the Gibbs energy databases.
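The Hoffman-Weeks extrapolation mentioned in the P3AT record above obtains the equilibrium melting point Tm0 by fitting observed melting points linearly against crystallization temperature and intersecting the fit with the line Tm = Tc. A minimal sketch with synthetic data (slope and Tm0 chosen arbitrarily, not the P3AT measurements):

```python
import numpy as np

# Synthetic Hoffman-Weeks data lying exactly on Tm = 0.4 Tc + 300 (K)
Tc = np.array([400.0, 420.0, 440.0, 460.0])  # crystallization temperatures
Tm = 0.4 * Tc + 300.0                        # observed melting points

a, b = np.polyfit(Tc, Tm, 1)                 # linear fit Tm = a*Tc + b
Tm0 = b / (1.0 - a)                          # intersection with the Tm = Tc line
print(f"{Tm0:.1f} K")                        # -> 500.0 K
```

With real DSC data the scatter of (Tc, Tm) pairs makes the fitted slope and the extrapolated Tm0 uncertainty-bearing quantities rather than exact values.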
This is problematic for, e.g., the calculation of the phase transformation kinetics within phase field simulations that not only require the thermodynamic equilibrium data but also information on metastable phases. Such information is difficult to obtain directly from experiment but ab-initio calculations may supplement experimental databases as they comprise metastable phases and arbitrary chemical compositions. We present simulations for two prototypical systems: Re-W and Fe-B. For both systems we calculate the heat of formation for an extensive set of structures using ab-initio calculations and employ the total energies in CALPHAD in order to determine the corresponding phase diagrams. We account for the configurational entropy within the Bragg-Williams approximation and neglect the phenomenological excess-term that is commonly used in CALPHAD as well as the contribution of phonons and electronic excitations to the free energy. According to our calculations the complex intermetallic phases in Re-W are stabilized by the configurational entropy. For Fe-B, we calculate metastable and stable phase diagrams including recently predicted new stable phases. 17. Two-Phase Equilibrium Properties in Charged Topological Dilaton AdS Black Holes Directory of Open Access Journals (Sweden) Hui-Hua Zhao 2016-01-01 Full Text Available We discuss phase transition of the charged topological dilaton AdS black holes by Maxwell equal area law. The two phases involved in the phase transition could coexist and we depict the coexistence region in P-v diagrams. The two-phase equilibrium curves in P-T diagrams are plotted, the Clapeyron equation for the black hole is derived, and the latent heat of isothermal phase transition is investigated. We also analyze the parameters of the black hole that could have an effect on the two-phase coexistence. The results show that the black holes may go through a small-large phase transition similar to that of a usual nongravity thermodynamic system. 
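The Clapeyron relation invoked in the black-hole record above, dP/dT = L/(T Δv), can be rearranged to extract the latent heat of an isothermal phase transition from the slope of a two-phase equilibrium curve. The sketch below uses generic ice/water numbers near 273 K rather than the record's black-hole quantities:

```python
# Latent heat from the Clapeyron relation: L = T * dv * (dP/dT).
# Approximate ice -> water values at the melting point (illustrative only).
T = 273.15            # K
dv = -9.05e-5         # m^3/kg, specific volume change on melting (water contracts)
dPdT = -1.35e7        # Pa/K, slope of the ice/water melting curve

L = T * dv * dPdT     # latent heat, J/kg (two negatives give a positive L)
print(f"{L/1e3:.0f} kJ/kg")   # -> 334 kJ/kg
```

The recovered value is close to the accepted latent heat of fusion of ice, which is the consistency check the Clapeyron equation provides for any measured coexistence curve.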
18. Structural phase diagram and equilibrium oxygen partial pressure of YBa2Cu3O6+x DEFF Research Database (Denmark) Andersen, N.H.; Lebech, B.; Poulsen, H.F. 1990-01-01 An experimental technique by which in-situ gas volumetric measurements are carried out on a neutron powder diffractometer is presented and used for simultaneous studies of the oxygen equilibrium partial pressure, the structural phase diagram of YBa2Cu3O6+x and the ordering of oxygen. Experimental data were collected under near-equilibrium conditions at 350 points in (x, T)-space with 0.15 < x < 0.92, the oxygen contents being established by gas volumetry in connection with iodometric titration and structural analyses. The oxygen equilibrium partial pressure shows significant variations with temperature and concentration which indicate that x = 0.15 and x = 0.92 are minimum and maximum oxygen concentrations. Measurements of oxygen in-diffusion flow show relaxation-type behaviour. 19. Use of S-α diagram for representing tokamak equilibrium International Nuclear Information System (INIS) Takahashi, H.; Chance, M.; Kessel, C.; LeBlanc, B.; Manickam, J.; Okabayashi, M. 1991-05-01 A use of the S-α diagram is proposed as a tool for representing the plasma equilibrium with a qualitative characterization of its stability through pattern recognition. The diagram is an effective tool for visually presenting the relationship between the shear and dimensionless pressure gradient of an equilibrium. In the PBX-M tokamak, an H-mode operating regime with high poloidal β and an L-mode regime with high toroidal β, obtained using different profile modification techniques, are found to have distinct S-α trajectory patterns. Pellet injection into a plasma in the H-mode regime results in favorable qualities of both regimes.
The β collapse process and ELM event also manifest themselves as characteristic changes in the S-α pattern 20. Comments on the equilibrium diagram of the Ti-Zr system International Nuclear Information System (INIS) Ruch, M.; Arias, D. 1993-01-01 The Ti-Zr system is a continuous series of solid solutions in both the α- and β-phases, with a congruent minimum at Ti-50 at.% Zr. The equilibrium diagram has been reviewed by Murray in 1981, who accepts the α/β temperature for this minimum determined by Farrar and Adler by metallographic techniques. Etchessahar and Debuigne measured by dilatometry a transformation temperature of (894 ±)K and (859±2)K for α/α + β and β/α + β respectively, and later in a high temperature Calvet microcalorimeter, 883 K. Blacktop et al. find that this value is consistent with their measurements of the α/β transformation temperature in Ti-40%Zr and Ti-60%Zr in a high temperature calorimeter. In the present work, the α/β transformation temperature was measured by several techniques. The effect of impurities is considered in both transformation temperature and microstructure of product phases 1. 450 °C isothermal section of the Fe-Zn-Si ternary phase diagram Energy Technology Data Exchange (ETDEWEB) Su, Xuping [Inst. of Materials Research, School of Mechanical Engineering, Xiangtan Univ., Xiangtan, Hunan (China); Univ. of Toronto, Dept. of Materials Science and Engineering, Toronto, Ontario (Canada); Tang, Nai-Yong [Cominco Ltd., Product Technology Centre, Mississauga, Ontario (Canada); Toguri, J.M. [Univ. of Toronto, Dept. of Materials Science and Engineering, Toronto, Ontario (Canada) 2001-07-01 The 450 °C isothermal section of the Fe-Zn-Si ternary phase diagram has been determined experimentally using optical microscopy, scanning electron microscopy (SEM) coupled with energy dispersive X-ray spectroscopy (EDS) and X-ray diffractometry.
The focus of the work has been concentrated on the Zn-rich corner which is relevant to general galvanizing. The present study has confirmed the existence of the equilibrium state between the liquid, the ζ phase and the FeSi phase. This three-phase equilibrium state prevents the equilibrium between the liquid and the δ phase suggested by some researchers. Experimental results indicate that Si solubility in all four binary Zn-Fe compounds is limited. The Fe solubility in molten Zn was found to decrease with increasing Si content in the melt. The liquid phase boundary was determined using a model-based phenomenological approach. (author) 2. Glass and liquid phase diagram of a polyamorphic monatomic system Science.gov (United States) Reisman, Shaina; Giovambattista, Nicolas 2013-02-01 We perform out-of-equilibrium molecular dynamics (MD) simulations of a monatomic system with Fermi-Jagla (FJ) pair potential interactions. This model system exhibits polyamorphism both in the liquid and glass state. The two liquids, low-density (LDL) and high-density liquid (HDL), are accessible in equilibrium MD simulations and can form two glasses, low-density (LDA) and high-density amorphous (HDA) solid, upon isobaric cooling. The FJ model exhibits many of the anomalous properties observed in water and other polyamorphic liquids and thus, it is an excellent model system to explore qualitatively the thermodynamic properties of such substances. The liquid phase behavior of the FJ model system has been previously characterized. In this work, we focus on the glass behavior of the FJ system. Specifically, we perform systematic isothermal compression and decompression simulations of LDA and HDA at different temperatures and determine "phase diagrams" for the glass state; these phase diagrams vary with the compression/decompression rate used. We obtain the LDA-to-HDA and HDA-to-LDA transition pressure loci, PLDA-HDA(T) and PHDA-LDA(T), respectively.
In addition, the compression-induced amorphization line, at which the low-pressure crystal (LPC) transforms to HDA, PLPC-HDA(T), is determined. As originally proposed by Poole et al. [Phys. Rev. E 48, 4605 (1993); doi:10.1103/PhysRevE.48.4605], the simulations suggest that the PLDA-HDA(T) and PHDA-LDA(T) loci are extensions of the LDL-to-HDL and HDL-to-LDL spinodal lines into the glass domain. Interestingly, our simulations indicate that the PLPC-HDA(T) locus is an extension, into the glass domain, of the LPC metastability limit relative to the liquid. We discuss the effects of compression/decompression rates on the behavior of the PLDA-HDA(T), PHDA-LDA(T), PLPC-HDA(T) loci. The competition between glass polyamorphism and crystallization is also addressed. At our "fast rate," crystallization can be partially suppressed and the 3. Algorithmic phase diagrams Science.gov (United States) Hockney, Roger 1987-01-01 Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation. 4. Phase diagram of nanoscale alloy particles used for vapor-liquid-solid growth of semiconductor nanowires.
Science.gov (United States) Sutter, Eli; Sutter, Peter 2008-02-01 We use transmission electron microscopy observations to establish the parts of the phase diagram of nanometer-sized Au-Ge alloy drops at the tips of Ge nanowires (NWs) that determine their temperature-dependent equilibrium composition and, hence, their exchange of semiconductor material with the NWs. We find that the phase diagram of the nanoscale drop deviates significantly from that of the bulk alloy, which explains discrepancies between actual growth results and predictions on the basis of the bulk-phase equilibria. Our findings provide the basis for tailoring vapor-liquid-solid growth to achieve complex one-dimensional materials geometries. 5. Phase diagrams of the elements International Nuclear Information System (INIS) Young, D.A. 1975-01-01 A summary of the pressure-temperature phase diagrams of the elements is presented, with graphs of the experimentally determined solid-solid phase boundaries and melting curves. Comments, including theoretical discussion, are provided for each diagram. The crystal structure of each solid phase is identified and discussed. This work is aimed at encouraging further experimental and theoretical research on phase transitions in the elements 6. The structural phase diagram and oxygen equilibrium partial pressure of YBa2Cu3O6+x studied by neutron powder diffraction and gas volumetry Science.gov (United States) Andersen, N. H.; Lebech, B.; Poulsen, H. F. 1990-12-01 An experimental technique based on neutron powder diffraction and gas volumetry is presented and used to study the structural phase diagram of YBa2Cu3O6+x under equilibrium conditions in an extended part of the (x, T)-plane (0.15 < x < 0.92 and 25 °C < T < 725 °C). Our experimental observations lend strong support to a recent two-dimensional anisotropic next-nearest-neighbour Ising model calculation (the ASYNNNI model) of the basal-plane oxygen ordering based on first-principles interaction parameters.
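As an aside, the flavor of such lattice-model calculations of ordering can be conveyed by a minimal Metropolis sketch of an anisotropic two-dimensional Ising model. This is an illustration only: the couplings Jx and Jy below are hypothetical, and the real ASYNNNI model includes next-nearest-neighbour interactions not present here.

```python
import math
import random

def ising2d_mag(Jx=1.0, Jy=0.5, T=0.5, L=16, sweeps=400, seed=2):
    """Metropolis sketch of an anisotropic 2D Ising model on an L x L torus.

    Returns |magnetization| per spin after `sweeps` lattice sweeps, starting
    from the fully ordered state. Hypothetical, illustrative parameters only.
    """
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # local field from the four nearest neighbours, with different
            # couplings along the two lattice directions
            h = Jx * (s[i][(j + 1) % L] + s[i][(j - 1) % L]) \
              + Jy * (s[(i + 1) % L][j] + s[(i - 1) % L][j])
            dE = 2.0 * s[i][j] * h
            if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
    m = sum(sum(row) for row in s) / (L * L)
    return abs(m)
```

Well below the ordering temperature the sketch stays near |m| ≈ 1; well above it the magnetization collapses, which is the qualitative content of an order-disorder phase boundary of the kind mapped in the abstract above.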
Simultaneous measurements of the oxygen equilibrium partial pressure show anomalies, one of which proves the thermodynamic stability of the orthorhombic OII double-cell structure. A striking similarity with the predictions of recent model calculations supports the interpretation that another anomaly results from local one-dimensional fluctuations in the distribution of oxygen atoms in the basal plane of tetragonal YBCO. Our pressure data also indicate that x=0.92 is the maximum obtainable oxygen concentration for oxygen pressures below 760 Torr. 7. Another dimension to metamorphic phase equilibria: the power of interactive movies for understanding complex phase diagram sections Science.gov (United States) Moulas, E.; Caddick, M. J.; Tisato, N.; Burg, J.-P. 2012-04-01 The investigation of metamorphic phase equilibria, using software packages that perform thermodynamic calculations, involves a series of important assumptions whose validity can often be questioned but is difficult to test. For example, potential influences of deformation on phase relations, and modification of the effective reactant composition (X) at successive stages of equilibrium, may both introduce significant uncertainty into phase diagram calculations. This is generally difficult to model with currently available techniques, and is typically not well quantified. We present here a method to investigate such phenomena along pre-defined pressure-temperature (P-T) paths, calculating local equilibrium via Gibbs energy minimization. An automated strategy to investigate complex changes in the effective equilibration composition has been developed. This demonstrates the consequences of specified X modification and, more importantly, permits automated calculation of X changes that are likely along the requested path if considering several specified processes.
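The core of such a local-equilibrium calculation can be sketched with a toy Gibbs energy minimization: two candidate phases with hypothetical quadratic free-energy curves, and a brute-force search for the composition split that minimizes total G at fixed bulk composition X (the common-tangent construction). The curves g_alpha and g_beta below are placeholders, not real thermodynamic data.

```python
# Toy Gibbs energy minimization at fixed bulk composition X.
# g_alpha and g_beta are hypothetical free-energy curves, not real data.
def g_alpha(x):
    return 10.0 * (x - 0.2) ** 2

def g_beta(x):
    return 10.0 * (x - 0.8) ** 2 + 0.5

def minimize_G(X, n=201):
    """Grid search over the compositions of two coexisting phases.

    Returns (G, x_alpha, x_beta, f_beta), where f_beta is the beta-phase
    fraction fixed by the lever rule; single-phase states are covered by
    the f -> 0 and f -> 1 limits of the grid.
    """
    best = None
    for i in range(n):
        xa = i / (n - 1)
        for j in range(i + 1, n):
            xb = j / (n - 1)
            f = (X - xa) / (xb - xa)  # lever rule
            if not 0.0 <= f <= 1.0:
                continue
            G = (1.0 - f) * g_alpha(xa) + f * g_beta(xb)
            if best is None or G < best[0]:
                best = (G, xa, xb, f)
    return best
```

With these placeholder curves, the minimum at X = 0.5 is a two-phase mixture (x_alpha ≈ 0.24, x_beta ≈ 0.84) whose total G lies well below the single-phase value g_alpha(0.5) = 0.9.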
Here we describe calculations considering two such processes and show an additional example of a metamorphic texture that is difficult to model with current techniques. Firstly, we explore the assumption that although water saturation and bulk-rock equilibrium are generally considered to be valid assumptions in the calculation of phase equilibria, the saturation of thermodynamic components ignores mechanical effects that the fluid/melt phase can impose on the rock, which in turn can modify the effective equilibrium composition. Secondly, we examine how mass fractionation caused by porphyroblast growth at low temperatures or progressive melt extraction at high temperatures successively modifies X out of the plane of the initial diagram, complicating the process of determining best-fit P-T paths for natural samples. In particular, retrograde processes are poorly modeled without careful consideration of prograde 8. Phase equilibrium engineering CERN Document Server Brignole, Esteban Alberto 2013-01-01 Traditionally, the teaching of phase equilibria emphasizes the relationships between the thermodynamic variables of each phase in equilibrium rather than its engineering applications. This book changes the focus from the use of thermodynamics relationships to compute phase equilibria to the design and control of the phase conditions that a process needs. Phase Equilibrium Engineering presents a systematic study and application of phase equilibrium tools to the development of chemical processes. The thermodynamic modeling of mixtures for process development, synthesis, simulation, design and 9. Phase diagram of power law and Lennard-Jones systems: Crystal phases International Nuclear Information System (INIS) Travesset, Alex 2014-01-01 An extensive characterization of the low temperature phase diagram of particles interacting with power law or Lennard-Jones potentials is provided from Lattice Dynamical Theory. 
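The T = 0 end of the fcc/hcp competition discussed in this entry can be reproduced with textbook Lennard-Jones lattice sums (standard tabulated values, e.g. in Kittel): the static hcp energy is lower by only about 0.01%, so the entropy difference noted in the abstract is what tips the balance at finite temperature.

```python
# Static (T = 0) Lennard-Jones crystal energy per particle from lattice sums:
#   E/N = 2*eps*(A12*(sigma/r)**12 - A6*(sigma/r)**6)
# Standard lattice-sum values for fcc and hcp:
A12_FCC, A6_FCC = 12.13188, 14.45392
A12_HCP, A6_HCP = 12.13229, 14.45489

def e_per_particle(r_over_sigma, A12, A6, eps=1.0):
    x = 1.0 / r_over_sigma
    return 2.0 * eps * (A12 * x**12 - A6 * x**6)

def r_min(A12, A6):
    # dE/dr = 0  =>  (r/sigma)**6 = 2*A12/A6
    return (2.0 * A12 / A6) ** (1.0 / 6.0)
```

The minimum sits near r/σ ≈ 1.09 with E/N ≈ -8.61 ε for fcc, and the hcp value is marginally lower, consistent with the delicate fcc/hcp balance described above.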
For power law systems, only two lattice structures are stable for certain values of the exponent (or softness) (A15, body-centered cubic (bcc)) and two more (face-centered cubic (fcc), hexagonal close-packed (hcp)) are always stable. Among them, only the fcc and bcc are equilibrium states. For Lennard-Jones systems, the equilibrium states are either hcp or fcc, with a coexistence curve in pressure and temperature that shows reentrant behavior. The hcp solid never coexists with the liquid. In all cases analyzed, for both power law and Lennard-Jones potentials, the fcc crystal has higher entropy than the hcp. The role of anharmonic terms is thoroughly analyzed and a general thermodynamic integration to account for them is proposed 10. Urea-temperature phase diagrams capture the thermodynamics of denatured state expansion that accompany protein unfolding Science.gov (United States) Tischer, Alexander; Auton, Matthew 2013-01-01 We have analyzed the thermodynamic properties of the von Willebrand factor (VWF) A3 domain using urea-induced unfolding at variable temperature and thermal unfolding at variable urea concentrations to generate a phase diagram that quantitatively describes the equilibrium between native and denatured states. From this analysis, we were able to determine consistent thermodynamic parameters with various spectroscopic and calorimetric methods that define the urea-temperature parameter plane from cold denaturation to heat denaturation. Urea and thermal denaturation are experimentally reversible and independent of the thermal scan rate, indicating that all transitions are at equilibrium, and the van't Hoff and calorimetric enthalpies obtained from analysis of individual thermal transitions are equivalent, demonstrating two-state character.
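The two-state analysis described here can be sketched numerically with the linear extrapolation model, in which the unfolding free energy is linear in denaturant concentration. All parameter values below (ΔH, Tm, m) are hypothetical placeholders, not the published VWF A3 values, and the ΔCp term responsible for cold denaturation is omitted.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def frac_unfolded(urea_M, T, dH=300.0, Tm=330.0, m_value=5.0):
    """Two-state fraction unfolded vs urea concentration (M) and temperature (K).

    van't Hoff free energy dG = dH*(1 - T/Tm) minus the linear urea term
    m*[urea]; dH in kJ/mol, Tm in K, m in kJ/(mol*M). Hypothetical parameters
    for illustration; no dCp term, hence no cold denaturation in this sketch.
    """
    dG = dH * (1.0 - T / Tm) - m_value * urea_M
    K = math.exp(-dG / (R * T))  # unfolding equilibrium constant
    return K / (1.0 + K)
```

At T = Tm and zero urea the protein is half unfolded; adding urea shifts the midpoint to lower temperature, tracing out the native/denatured boundary in the urea-temperature plane that the phase-diagram analysis quantifies.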
Global analysis of the urea–temperature phase diagram results in a significantly higher enthalpy of unfolding than obtained from analysis of individual thermal transitions and significant cross correlations describing the urea dependence of and that define a complex temperature dependence of the m-value. Circular dichroism (CD) spectroscopy illustrates a large increase in secondary structure content of the urea-denatured state as temperature increases and a loss of secondary structure in the thermally denatured state upon addition of urea. These structural changes in the denatured ensemble make up ∼40% of the total ellipticity change indicating a highly compact thermally denatured state. The difference between the thermodynamic parameters obtained from phase diagram analysis and those obtained from analysis of individual thermal transitions illustrates that phase diagrams capture both contributions to unfolding and denatured state expansion and by comparison are able to decipher these contributions. PMID:23813497 11. Phase diagrams for an ideal gas mixture of fermionic atoms and bosonic molecules DEFF Research Database (Denmark) Williams, J. E.; Nygaard, Nicolai; Clark, C. W. 2004-01-01 We calculate the phase diagrams for a harmonically trapped ideal gas mixture of fermionic atoms and bosonic molecules in chemical and thermal equilibrium, where the internal energy of the molecules can be adjusted relative to that of the atoms by use of a tunable Feshbach resonance. We plot...... diagrams obtained in recent experiments on the Bose-Einstein condensation to Bardeen-Cooper-Schrieffer crossover, in which the condensate fraction is plotted as a function of the initial temperature of the Fermi gas measured before a sweep of the magnetic field through the resonance region.... 12. Ring diagrams and phase transitions International Nuclear Information System (INIS) Takahashi, K. 
1986-01-01 Ring diagrams at finite temperatures carry the most infrared-singular parts among Feynman diagrams. Their effect on effective potentials is in general so significant that one must incorporate them as well as 1-loop diagrams. The author illustrates these circumstances with some examples of supercooled phase transitions 13. Phase diagrams of (vapour + liquid) equilibrium for binary mixtures of α,α,α-trifluorotoluene with ethanol, or benzene, or chloroform at pressure 101.4 kPa International Nuclear Information System (INIS) 2008-01-01 (Vapour + liquid) equilibrium (VLE) of binary mixtures of (ethanol + α,α,α-trifluorotoluene), (benzene + α,α,α-trifluorotoluene), and (chloroform + α,α,α-trifluorotoluene) has been investigated at the pressure 101.4 kPa using the dynamic-ebulliometry method over the whole composition range. The correlated VLE phase diagrams were adequately described by means of the NRTL and UNIQUAC thermodynamic models. Fair attractive energies in the first two systems are capable of yielding azeotropes, while moderate repulsive energies in the latter system make it zeotropic 14. Phase equilibrium study of the binary systems (N-hexyl-3-methylpyridinium tosylate ionic liquid + water, or organic solvent) International Nuclear Information System (INIS) Domanska, Urszula; Krolikowski, Marek 2011-01-01 Highlights: → Synthesis, DSC, and measurements of phase equilibrium of N-hexyl-3-methylpyridinium tosylate. → Solvents used: water, alcohols, benzene, alkylbenzenes, and aliphatic hydrocarbons. → Correlation with UNIQUAC, Wilson and NRTL models. → Comparison with different tosylate-based ILs.
- Abstract: The (solid + liquid) phase equilibrium (SLE) and (liquid + liquid) phase equilibrium (LLE) for the binary systems of the ionic liquid (IL) N-hexyl-3-methylpyridinium tosylate (p-toluenesulfonate), {[HM3Py][TOS] + water, or an alcohol (1-butanol, or 1-hexanol, or 1-octanol, or 1-decanol), or an aromatic hydrocarbon (benzene, toluene, or ethylbenzene, or propylbenzene), or an alkane (n-hexane, n-heptane, n-octane)}, have been determined at ambient pressure using a dynamic method. Simple eutectic systems with complete miscibility in the liquid phase were observed for the systems involving water and alcohols. The phase equilibrium diagrams of the IL and aromatic or aliphatic hydrocarbons exhibit eutectic systems with immiscibility in the liquid phase and an upper critical solution temperature, as for most ILs. The correlation of the experimental data has been carried out using the UNIQUAC, Wilson and non-random two-liquid (NRTL) correlation equations. The results reported here have been compared with analogous phase diagrams reported by our group previously for systems containing tosylate-based ILs. 15. Characterization of Low-Symmetry Structures from Phase Equilibrium of Fe-Al System-Microstructures and Mechanical Properties. Science.gov (United States) Matysik, Piotr; Jóźwiak, Stanisław; Czujko, Tomasz 2015-03-04 Fe-Al intermetallic alloys with aluminum content over 60 at% lie in an area of the phase equilibrium diagram that is considerably less investigated than the high-symmetry Fe₃Al and FeAl phases. Ambiguous crystallographic information and incoherent data referring to the phase equilibrium diagrams in the high-aluminum range have caused confusion and misinformation. Even now, an unequivocal description of the material properties of FeAl₂, Fe₂Al₅ and FeAl₃ intermetallic alloys is incomplete. In this paper, the influence of aluminum content and processing parameters on phase composition is presented.
The occurrence of low-symmetry FeAl₂, Fe₂Al₅ and FeAl₃ structures determined by chemical composition and phase transformations was defined by scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS) examinations. These results served to verify diffraction investigations (XRD) and to explain the mechanical properties of cast materials such as: hardness, Young's modulus and fracture toughness evaluated using the nano-indentation technique. 16. Common phase diagram for low-dimensional superconductors International Nuclear Information System (INIS) Michalak, Rudi 2003-01-01 A phenomenological phase diagram which has been derived for high-temperature superconductors from NMR Knight-shift measurements of the pseudogap is compared to the phase diagram that is obtained for organic superconductors and spin-ladder superconductors, both low-dimensional systems. This is contrasted to the phase diagram of some Heavy Fermion superconductors, i.e. superconductors not constrained to a low dimensionality 17. Wave packet dynamics, time scales and phase diagram in the IBM-Lipkin-Meshkov-Glick model Science.gov (United States) Castaños, Octavio; de los Santos, Francisco; Yáñez, Rafael; Romera, Elvira 2018-02-01 We derive the phase diagram of a scalar two-level boson model by studying the equilibrium and stability properties of its energy surface. The plane of control parameters is enlarged with respect to previous studies. We then analyze the time evolution of wave packets centered around the ground state at various quantum phase transition boundary lines. In particular, classical and revival times are computed numerically. 18. Scheil-Gulliver Constituent Diagrams Science.gov (United States) Pelton, Arthur D.; Eriksson, Gunnar; Bale, Christopher W. 2017-06-01 During solidification of alloys, conditions often approach those of Scheil-Gulliver cooling in which it is assumed that solid phases, once precipitated, remain unchanged. 
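For a dilute binary alloy with a constant partition coefficient k, the contrast between the Scheil-Gulliver assumption just stated and full equilibrium (lever-rule) solidification reduces to two closed-form composition profiles; the values of k and C0 below are hypothetical.

```python
# Solid composition at the solidification front as a function of the
# solid fraction fs, for a constant partition coefficient k = Cs/Cl.
def scheil_cs(fs, c0=1.0, k=0.2):
    # Scheil-Gulliver: no diffusion in the solid, complete mixing in the
    # liquid  ->  Cs = k*C0*(1 - fs)**(k - 1), diverging as fs -> 1
    return k * c0 * (1.0 - fs) ** (k - 1.0)

def lever_cs(fs, c0=1.0, k=0.2):
    # Equilibrium (lever rule): complete diffusion in the solid
    # ->  Cs = k*C0 / (1 - (1 - k)*fs)
    return k * c0 / (1.0 - (1.0 - k) * fs)
```

Both profiles start at k·C0, but under Scheil conditions the last liquid is strongly enriched in solute, which is why Scheil constituent diagrams can differ markedly from their equilibrium counterparts.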
That is, they no longer react with the liquid or with each other. In the case of equilibrium solidification, equilibrium phase diagrams provide a valuable means of visualizing the effects of composition changes upon the final microstructure. In the present study, we propose for the first time the concept of Scheil-Gulliver constituent diagrams, which play the same role for Scheil-Gulliver cooling as equilibrium phase diagrams do for equilibrium solidification. It is shown how these diagrams can be calculated and plotted by the currently available thermodynamic database computing systems that combine Gibbs energy minimization software with large databases of optimized thermodynamic properties of solutions and compounds. Examples calculated using the FactSage system are presented for the Al-Li and Al-Mg-Zn systems, and for the Au-Bi-Sb-Pb system and its binary and ternary subsystems. 19. Comparison of forcefields for molecular dynamics simulations of hydrocarbon phase diagrams Science.gov (United States) Pisarev, V. V.; Zakharov, S. A. 2018-01-01 Molecular dynamics calculations of the vapor-liquid equilibrium of a methane-n-butane mixture are performed. Three force-field models are tested: the TraPPE-UA united-atom forcefield, the LOPLS-AA all-atom forcefield and a fully flexible version of the TraPPE-EH all-atom forcefield. All of these forcefields reproduce well the composition of the liquid phase in the mixture as a function of pressure along the 300 K isotherm, while significant discrepancies from the experimental data are observed in the saturated vapor compositions with the LOPLS-AA and TraPPE-UA forcefields. The best agreement with the experimental phase diagram is found with the TraPPE-EH forcefield, which accurately reproduces the compositions of both the liquid and the vapor phase. This forcefield can be recommended for simulation of two-phase hydrocarbon systems. 20.
Phase diagram of classical electronic bilayers International Nuclear Information System (INIS) Ranganathan, S; Johnson, R E 2006-01-01 Extensive molecular dynamics calculations have been performed on classical, symmetric electronic bilayers at various values of the coupling strength Γ and interlayer separation d to delineate their phase diagram in the Γ-d plane. We studied the diffusion, the amplitude of the main peak of the intralayer static structure factor and the peak positions of the intralayer pair correlation function with the aim of defining equivalent signatures of freezing and constructing the resulting phase diagram. It is found that for Γ greater than 75, crystalline structures exist for a certain range of interlayer separations, while liquid phases are favoured at smaller and larger d. It is seen that there is good agreement between our phase diagram and previously published ones. 2. Solid Phase Equilibrium Relations in the CaO-SiO2-Nb2O5-La2O3 System at 1273 K Science.gov (United States) Qiu, Jiyu; Liu, Chengjun 2018-02-01 The silicate slag system with Nb and RE additions, formed in the utilization of REE-Nb-Fe ore deposit resources in China, has industrial uses as a metallurgical slag system. The lack of phase diagram, theoretical, and thermodynamic information for the multi-component system restricts the comprehensive utilization process. In the current work, solid phase equilibrium relations in the CaO-SiO2-Nb2O5-La2O3 quaternary system at 1273 K (1000 °C) were investigated experimentally by high-temperature equilibrium experiments followed by X-ray diffraction, scanning electron microscopy, and energy dispersive spectrometry. Six spatially independent tetrahedron fields in the CaO-SiO2-Nb2O5-La2O3 system phase diagram were determined by the Gibbs phase rule. The current work combines the mass fractions of the equilibrium phases with the corresponding geometric relations. A determinant method was deduced to calculate the mass fraction of each equilibrium phase in the quaternary system according to the law of mass conservation, the Gibbs phase rule, the lever rule, and Cramer's rule. 3. Lattice and Phase Diagram in QCD International Nuclear Information System (INIS) Lombardo, Maria Paola 2008-01-01 Model calculations have produced a number of very interesting expectations for the QCD phase diagram, and the task of lattice calculations is to put these studies on quantitative grounds. I will give an overview of the current status of the lattice analysis of the QCD phase diagram, from the quantitative results of mature calculations at zero and small baryochemical potential, to the exploratory studies of the colder, denser phase. 5. Multicritical phase diagrams of the Blume-Emery-Griffiths model with repulsive biquadratic coupling including metastable phases: the pair approximation and the path probability method with pair distribution International Nuclear Information System (INIS) Keskin, Mustafa; Erdinc, Ahmet 2004-01-01 As a continuation of the previously published work, the pair approximation of the cluster variation method is applied to study the temperature dependences of the order parameters of the Blume-Emery-Griffiths model with repulsive biquadratic coupling on a body-centered cubic lattice.
We obtain metastable and unstable branches of the order parameters besides the stable branches, and the phase transitions of these branches are investigated extensively. We study the dynamics of the model by the path probability method with pair distribution in order to make sure that we find and define the metastable and unstable branches of the order parameters completely and correctly. We present the metastable phase diagram in addition to the equilibrium phase diagram, and the first-order phase transition line for the unstable branches of the quadrupole order parameter is superimposed on the phase diagrams. It is found that the metastable phase diagram and the first-order phase boundary for the unstable quadrupole order parameter always exist at low temperatures, which is consistent with experimental and theoretical works 6. Study of equilibrium phase diagrams of the system In-As-Pb International Nuclear Information System (INIS) Baranov, A.N.; Gorelenok, A.A.; Litvak, A.M.; Sherstnev, V.V.; Yakovlev, Yu.P. 1992-01-01 Experimental data on the fusibility diagram of the In-As-Pb system are presented. A model calculation of the fusibility diagram of this system and the corresponding binary subsystems was conducted using the EF LCP method. Preliminary studies demonstrate the possibility of changing the type of intrinsic defect in InAs when growing from arsenous carbon. This becomes possible when using lead as a neutral solvent 7. Studies on the phase diagram of LiBr-SrBr2 system International Nuclear Information System (INIS) Mahendran, K.H.; Sujatha, K.; Sridharan, R.; Gnanasekaran, T. 2003-01-01 The binary LiBr-SrBr2 system was investigated using differential scanning calorimetry (DSC) and the equilibrium phases at different compositions were identified using X-ray diffraction (XRD). This system has a compound LiSr2Br5, and exhibits a eutectic reaction between this compound and LiBr at 434 °C; the eutectic has a composition of 35 mol% SrBr2.
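As an illustrative aside, the position of a eutectic like this can be estimated for an ideal binary liquid from the Schröder-van Laar equation; the melting enthalpies and temperatures below are hypothetical placeholders, not LiBr or SrBr2 data.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def liquidus_T(x, dHf, Tm):
    # Schroeder-van Laar, ideal liquid: ln x = -(dHf/R)*(1/T - 1/Tm)
    return 1.0 / (1.0 / Tm - R * math.log(x) / dHf)

def eutectic(dHf_a=20e3, Tm_a=700.0, dHf_b=20e3, Tm_b=760.0, n=10000):
    """Scan composition for the intersection of the two liquidus branches."""
    best = None
    for i in range(1, n):
        x = i / n  # mole fraction of component A
        Ta = liquidus_T(x, dHf_a, Tm_a)        # A-rich liquidus branch
        Tb = liquidus_T(1.0 - x, dHf_b, Tm_b)  # B-rich liquidus branch
        if best is None or abs(Ta - Tb) < best[0]:
            best = (abs(Ta - Tb), x, 0.5 * (Ta + Tb))
    return best[1], best[2]  # eutectic composition and temperature
```

With these placeholder values the two branches cross near x ≈ 0.57 at roughly 600 K, below both pure-component melting points, as a eutectic must be.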
The compound LiSr2Br5 undergoes peritectic decomposition at 484 °C. From the DSC and XRD results, the phase diagram of the LiBr-SrBr2 system is constructed 8. Method of non-interacting thermodynamic calculation of binary phase diagrams containing p disordered phases with variable composition and q phases with constant composition at (p, q) ≤ 10 International Nuclear Information System (INIS) Udovskij, A.L.; Karpushkin, V.N.; Nikishina, E.A. 1991-01-01 A method of non-interacting thermodynamic calculation of the state diagram of binary systems containing p disordered phases with variable composition and q phases with constant composition for the case (p, q) ≤ 10 is developed. Determination of all possible solutions of the phase equilibrium equations is realized in the method. Application examples of the computer-realized method of T-x thermodynamic calculation on a PC for the Cr-W, Ni-W, Ni-Al, and Ni-Re binary systems are given 9. Pitfalls and feedback when constructing topological pressure-temperature phase diagrams Science.gov (United States) Ceolin, R.; Toscani, S.; Rietveld, Ivo B.; Barrio, M.; Tamarit, J. Ll. 2017-04-01 The stability hierarchy between different phases of a chemical compound can be accurately reproduced in a topological phase diagram. This type of phase diagram may appear to be the result of simple extrapolations; however, experimental complications increase quickly in the case of crystalline trimorphism (and higher-order polymorphism). To ensure the accurate positioning of stable phase domains, a topological phase diagram needs to be consistent. This paper gives an example of how thermodynamic feedback can be used in the topological construction of phase diagrams to ensure overall consistency, based on the case of piracetam crystalline trimorphism. 10.
Mapping Isobaric Aging onto the Equilibrium Phase Diagram DEFF Research Database (Denmark) Niss, Kristine 2017-01-01 The linear volume relaxation and the nonlinear volume aging of a glass-forming liquid are measured, directly compared, and used to extract the out-of-equilibrium relaxation time. This opens a window to investigate how the relaxation time depends on temperature, structure, and volume in parts of p... 11. Phase diagram of an extended Agassi model Science.gov (United States) García-Ramos, J. E.; Dukelsky, J.; Pérez-Fernández, P.; Arias, J. M. 2018-05-01 Background: The Agassi model [D. Agassi, Nucl. Phys. A 116, 49 (1968), 10.1016/0375-9474(68)90482-X] is an extension of the Lipkin-Meshkov-Glick (LMG) model [H. J. Lipkin, N. Meshkov, and A. J. Glick, Nucl. Phys. 62, 188 (1965), 10.1016/0029-5582(65)90862-X] that incorporates the pairing interaction. It is a schematic model that describes the interplay between particle-hole and pair correlations. It was proposed in the 1960s by D. Agassi as a model to simulate the properties of the quadrupole-plus-pairing model. Purpose: The aim of this work is to extend a previous study by Davis and Heiss [J. Phys. G: Nucl. Phys. 12, 805 (1986), 10.1088/0305-4616/12/9/006], generalizing the Agassi model, and to analyze in detail the phase diagram of the model as well as the different regions with coexistence of several phases. Method: We solve the model Hamiltonian through the Hartree-Fock-Bogoliubov (HFB) approximation, introducing two variational parameters that play the role of order parameters. We also compare the HFB calculations with the exact ones. Results: We obtain the phase diagram of the model and classify the order of the different quantum phase transitions appearing in the diagram. The phase diagram presents broad regions where several phases, up to three, coexist. Moreover, there is also a line and a point where four and five phases are degenerate, respectively.
Conclusions: The phase diagram of the extended Agassi model presents a rich variety of phases. Phase coexistence is present in extended areas of the parameter space. The model could be an important tool for benchmarking novel many-body approximations. 12. Vortex phase diagram and vortex dynamics at low temperature in a thick a-MgxB1-x film International Nuclear Information System (INIS) Okuma, S.; Kohara, M. 2007-01-01 We report on the equilibrium vortex phase diagram and vortex dynamics at low temperature T in a thick amorphous (a-)MgxB1-x film based on measurements of the dc resistivity ρ and the time (t)-dependent component of the flux-flow voltage, δV(t), respectively. Both ρ(T) in perpendicular fields and the vortex phase diagram are qualitatively similar to those for the a-MoxSi1-x films, in which evidence for the quantum-vortex-liquid (QVL) phase has been obtained. In either material system we observe anomalous vortex flow with an asymmetric distribution of δV(t) in the QVL phase, suggesting that the anomalous flow is a universal phenomenon commonly observed for disordered amorphous films, independent of material 13. Non-equilibrium phase transitions CERN Document Server Henkel, Malte; Lübeck, Sven 2009-01-01 This book describes two main classes of non-equilibrium phase transitions: (a) the statics and dynamics of transitions into an absorbing state, and (b) dynamical scaling in far-from-equilibrium relaxation behaviour and ageing. The first volume begins with an introductory chapter which recalls the main concepts of phase transitions, set for the convenience of the reader in an equilibrium context. The extension to non-equilibrium systems is made by using directed percolation as the main paradigm of absorbing phase transitions and, in view of the richness of the known results, an entire chapter is devoted to it, including a discussion of recent experimental results.
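A feel for directed percolation as an absorbing-state transition can be had from a crude Monte Carlo sketch of bond DP on a 1+1-dimensional lattice (illustrative code only; the known critical point for this geometry is p_c ≈ 0.6447).

```python
import random

def dp_survival(p, steps=200, size=400, trials=20, seed=1):
    """Fraction of seeded runs still active after `steps` time slices.

    Each active site independently activates each of its two downstream
    neighbours with probability p; the empty state is absorbing.
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        active = {size // 2}  # single seed in the middle of the row
        for _ in range(steps):
            nxt = set()
            for s in active:
                for d in (0, 1):
                    if rng.random() < p:
                        nxt.add((s + d) % size)
            active = nxt
            if not active:
                break  # absorbing state reached
        if active:
            survived += 1
    return survived / trials
```

Well below p_c the activity always dies out; well above it a finite fraction of runs survives indefinitely. Locating this transition and its scaling exponents is the subject of the directed-percolation universality class discussed in the book.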
Scaling theories and a large set of both numerical and analytical methods for the study of non-equilibrium phase transitions are thoroughly discussed. The techniques used for directed percolation are then extended to other universality classes and many important results on model parameters are provided for easy reference. 14. Ferroelectric Phase Diagram of PVDF:PMMA OpenAIRE Li, Mengyuan; Stingelin, Natalie; Michels, Jasper J.; Spijkman, Mark-Jan; Asadi, Kamal; Feldman, Kirill; Blom, Paul W. M.; de Leeuw, Dago M. 2012-01-01 We have investigated the ferroelectric phase diagram of poly(vinylidene fluoride) (PVDF) and poly(methyl methacrylate) (PMMA). The binary nonequilibrium temperature composition diagram was determined and melting of alpha- and beta-phase PVDF was identified. Ferroelectric beta-PVDF:PMMA blend films were made by melting, ice quenching, and subsequent annealing above the glass transition temperature of PMMA, close to the melting temperature of PVDF. Addition of PMMA suppresses the crystallizatio... 15. 
Thermodynamic studies of mixtures for topical anesthesia: Lidocaine-salol binary phase diagram Energy Technology Data Exchange (ETDEWEB) Lazerges, Mathieu [Laboratoire de Chimie Physique (EA 4066), Faculte des Sciences Pharmaceutiques et Biologiques, Universite Paris Descartes, 4 Avenue de l'Observatoire, 75270 Paris Cedex 06 (France); Rietveld, Ivo B., E-mail: ivo.rietveld@parisdescartes.fr [Laboratoire de Chimie Physique (EA 4066), Faculte des Sciences Pharmaceutiques et Biologiques, Universite Paris Descartes, 4 Avenue de l'Observatoire, 75270 Paris Cedex 06 (France); Corvis, Yohann; Ceolin, Rene; Espeau, Philippe [Laboratoire de Chimie Physique (EA 4066), Faculte des Sciences Pharmaceutiques et Biologiques, Universite Paris Descartes, 4 Avenue de l'Observatoire, 75270 Paris Cedex 06 (France) 2010-01-10 The lidocaine-salol binary system has been investigated by differential scanning calorimetry, direct visual observations, and X-ray powder diffraction, resulting in a temperature-composition phase diagram with a eutectic equilibrium. The eutectic mixture, found at 0.423 ± 0.007 lidocaine mole-fraction, melts at 18.2 ± 0.5 °C with an enthalpy of 17.3 ± 0.5 kJ mol⁻¹. This indicates that the liquid phase around the eutectic composition is stable at room temperature. Moreover, the undercooled liquid mixture does not easily crystallize. The present binary mixture exhibits eutectic behavior similar to the prilocaine-lidocaine mixture in the widely used EMLA topical anesthetic preparation. 16.
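The eutectic construction in the lidocaine-salol record above can be sketched numerically: for an ideal liquid mixture, each liquidus branch follows the Schröder-van Laar equation, and the eutectic is where the two branches intersect. A minimal sketch; the melting temperatures and fusion enthalpies below are illustrative placeholders, not the measured values from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def liquidus_T(x, T_m, dH_fus):
    """Ideal (Schroeder-van Laar) liquidus temperature at mole fraction x
    of the crystallizing component: ln x = -(dH_fus/R) * (1/T - 1/T_m)."""
    return 1.0 / (1.0 / T_m - R * math.log(x) / dH_fus)

def eutectic(T_m_A, dH_A, T_m_B, dH_B):
    """Bisect on x_A for the intersection of the two liquidus branches."""
    f = lambda x: liquidus_T(x, T_m_A, dH_A) - liquidus_T(1.0 - x, T_m_B, dH_B)
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    x_e = 0.5 * (lo + hi)
    return x_e, liquidus_T(x_e, T_m_A, dH_A)

# Illustrative (assumed) inputs: component A melts at 341 K with
# dH_fus = 16.2 kJ/mol, component B at 315 K with 19.6 kJ/mol.
x_e, T_e = eutectic(341.0, 16200.0, 315.0, 19600.0)
```

The eutectic temperature necessarily lies below both pure-component melting points, which is why such mixtures (as in the record above) can be liquid at room temperature even when both components are solids.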
Thermodynamic studies of mixtures for topical anesthesia: Lidocaine-salol binary phase diagram International Nuclear Information System (INIS) Lazerges, Mathieu; Rietveld, Ivo B.; Corvis, Yohann; Ceolin, Rene; Espeau, Philippe 2010-01-01 The lidocaine-salol binary system has been investigated by differential scanning calorimetry, direct visual observations, and X-ray powder diffraction, resulting in a temperature-composition phase diagram with a eutectic equilibrium. The eutectic mixture, found at 0.423 ± 0.007 lidocaine mole-fraction, melts at 18.2 ± 0.5 °C with an enthalpy of 17.3 ± 0.5 kJ mol⁻¹. This indicates that the liquid phase around the eutectic composition is stable at room temperature. Moreover, the undercooled liquid mixture does not easily crystallize. The present binary mixture exhibits eutectic behavior similar to the prilocaine-lidocaine mixture in the widely used EMLA topical anesthetic preparation. 17. Aggregation of flexible polyelectrolytes: Phase diagram and dynamics. Science.gov (United States) Tom, Anvy Moly; Rajesh, R; Vemparala, Satyavani 2017-10-14 Similarly charged polymers in solution, known as polyelectrolytes, are known to form aggregated structures in the presence of oppositely charged counterions. Understanding the dependence of the equilibrium phases and the dynamics of the process of aggregation on parameters such as backbone flexibility and charge density of such polymers is crucial for insights into various biological processes which involve biological polyelectrolytes such as protein, DNA, etc. Here, we use large-scale coarse-grained molecular dynamics simulations to obtain the phase diagram of the aggregated structures of flexible charged polymers and characterize the morphology of the aggregates as well as the aggregation dynamics, in the presence of trivalent counterions.
Three different phases are observed depending on the charge density: no aggregation, a finite bundle phase in which multiple small aggregates coexist with a large aggregate, and a fully phase-separated phase. We show that the flexibility of the polymer backbone causes strong entanglement between charged polymers, leading to additional time scales in the aggregation process. This slowing down of the aggregation dynamics causes the exponent characterizing the power-law decay of the number of aggregates with time to depend on the charge density of the polymers. These results are contrary to those obtained for rigid polyelectrolytes, emphasizing the role of backbone flexibility. 18. Phase diagram of spiking neural networks. Science.gov (United States) Seyed-Allaei, Hamed 2015-01-01 In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error, but here I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, dominant frequency of population activities, total duration of activities, maximum rate of population activity, and the occurrence time of the maximum rate. The results are organized in a phase diagram. This phase diagram gives an insight into the space of parameters - excitatory to inhibitory ratio, sparseness of connections and synaptic weights. This phase diagram can be used to decide the parameters of a model.
The phase diagrams show that networks which are configured according to the common values have a good dynamic range in response to an impulse, that their dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate in α or β frequencies, independent of external stimuli. 19. Phase Equilibrium and Austenite Decomposition in Advanced High-Strength Medium-Mn Bainitic Steels Directory of Open Access Journals (Sweden) 2016-10-01 Full Text Available The work addresses the phase equilibrium analysis and austenite decomposition of two Nb-microalloyed medium-Mn steels containing 3% and 5% Mn. The pseudobinary Fe-C diagrams of the steels were calculated using Thermo-Calc. Thermodynamic calculations of the volume fraction evolution of microstructural constituents vs. temperature were carried out. The study comprised the determination of the time-temperature-transformation (TTT) diagrams and continuous cooling transformation (CCT) diagrams of the investigated steels. The diagrams were used to determine continuous and isothermal cooling paths suitable for production of bainite-based steels. It was found that the varying Mn content strongly influences the hardenability of the steels and hence the austenite decomposition during cooling. The knowledge of the CCT diagrams and the analysis of experimental dilatometric curves made it possible to produce bainite-austenite mixtures in the thermomechanical simulator. Light microscopy (LM), scanning electron microscopy (SEM), and transmission electron microscopy (TEM) were used to assess the effect of heat treatment on morphological details of the produced multiphase microstructures. 20. The coupling of thermochemistry and phase diagrams for group III-V semiconductor systems. Final report Energy Technology Data Exchange (ETDEWEB) Anderson, T.J. 1998-07-21 The project was directed at linking the thermochemical properties of III-V compound semiconductor systems with the reported phase diagrams.
The solid-liquid phase equilibrium problem was formulated, and three approaches to calculating the reduced standard state chemical potential were identified and values were calculated. In addition, thermochemical values for critical properties were measured using solid state electrochemical techniques. These values, along with the standard state chemical potentials and other available thermochemical and phase diagram data, were combined with a critical assessment of selected III-V systems. This work culminated in a comprehensive assessment of all the III-V binary systems. A novel aspect of the experimental part of this project was the demonstration of the use of a liquid encapsulate to measure component activities by a solid state emf technique in liquid III-V systems that exhibit high vapor pressures at the measurement temperature. 1. The nuclear liquid-vapor phase transition: Equilibrium between phases or free decay in vacuum? International Nuclear Information System (INIS) Phair, L.; Moretto, L.G.; Elliott, J.B.; Wozniak, G.J. 2002-01-01 Recent analyses of multifragmentation in terms of Fisher's model and the related construction of a phase diagram bring forth the problem of the true existence of the vapor phase and the meaning of its associated pressure. Our analysis shows that a thermal emission picture is equivalent to a Fisher-like equilibrium description which avoids the problem of the vapor and explains the recently observed Boltzmann-like distribution of the emission times. In this picture a simple Fermi gas thermometric relation is naturally justified. Low energy compound nucleus emission of intermediate mass fragments is shown to scale according to Fisher's formula and can be simultaneously fit with the much higher energy ISiS multifragmentation data 2.
Computation of Phase Equilibrium and Phase Envelopes DEFF Research Database (Denmark) Ritschel, Tobias Kasper Skovborg; Jørgensen, John Bagterp In this technical report, we describe the computation of phase equilibrium and phase envelopes based on expressions for the fugacity coefficients. We derive those expressions from the residual Gibbs energy. We consider 1) ideal gases and liquids modeled with correlations from the DIPPR database and 2) nonideal gases and liquids modeled with cubic equations of state. Next, we derive the equilibrium conditions for an isothermal-isobaric (constant temperature, constant pressure) vapor-liquid equilibrium process (PT flash), and we present a method for the computation of phase envelopes. We formulate the involved equations in terms of the fugacity coefficients. We present expressions for the first-order derivatives. Such derivatives are necessary in computationally efficient gradient-based methods for solving the vapor-liquid equilibrium equations and for computing phase envelopes. 3. Using the Logarithmic Concentration Diagram, Log "C", to Teach Acid-Base Equilibrium Science.gov (United States) Kovac, Jeffrey 2012-01-01 Acid-base equilibrium is one of the most important and most challenging topics in a typical general chemistry course. This article introduces an alternative to the algebraic approach generally used in textbooks, the graphical log "C" method. Log "C" diagrams provide conceptual insight into the behavior of aqueous acid-base systems and allow… 4.
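The PT-flash computation described in the technical report above reduces, once the K-values are fixed, to solving the Rachford-Rice equation for the vapor fraction. A minimal sketch, assuming constant K-values for illustration (in the report they would come from fugacity coefficients and be updated iteratively):

```python
# Minimal PT-flash sketch: solve the Rachford-Rice equation for the vapor
# fraction beta, given feed composition z and K-values. The constant
# K-values here are an illustrative assumption; in a real flash they are
# computed from fugacity coefficients (e.g. via a cubic equation of state).
def rachford_rice(z, K, iters=100):
    """Solve sum_i z_i (K_i - 1) / (1 + beta (K_i - 1)) = 0 by bisection."""
    f = lambda b: sum(zi * (Ki - 1.0) / (1.0 + b * (Ki - 1.0))
                      for zi, Ki in zip(z, K))
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    x = [zi / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # liquid
    y = [Ki * xi for Ki, xi in zip(K, x)]                         # vapor
    return beta, x, y

# Symmetric two-component example: beta = 0.5 exactly.
beta, x, y = rachford_rice([0.5, 0.5], [2.0, 0.5])
```

Bisection is used here for robustness of the sketch; the report's point about first-order derivatives is that Newton-type methods on this equation (and on the full equilibrium system) converge much faster when gradients are available.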
Infrared thermography method for fast estimation of phase diagrams Energy Technology Data Exchange (ETDEWEB) Palomo Del Barrio, Elena [Université de Bordeaux, Institut de Mécanique et d'Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Cadoret, Régis [Centre National de la Recherche Scientifique, Institut de Mécanique et d'Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France); Daranlot, Julien [Solvay, Laboratoire du Futur, 178 Av du Dr Schweitzer, 33608 Pessac (France); Achchaq, Fouzia, E-mail: fouzia.achchaq@u-bordeaux.fr [Université de Bordeaux, Institut de Mécanique et d'Ingénierie, Esplanade des Arts et Métiers, 33405 Talence (France) 2016-02-10 Highlights: • Infrared thermography is proposed to determine phase diagrams in record time. • Phase boundaries are detected by means of emissivity changes during heating. • Transition lines are identified by using Singular Value Decomposition techniques. • Different binary systems have been used for validation purposes. - Abstract: Phase change materials (PCM) are widely used today in thermal energy storage applications. Pure PCMs are rarely used because of unsuitable melting points. Instead, mixtures are preferred. The search for suitable mixtures, preferably eutectics, is often a tedious and time-consuming task which requires the determination of phase diagrams. In order to accelerate this screening step, a new method for estimating phase diagrams in record time (1–3 h) has been established and validated. A sample composed of small droplets of mixtures with different compositions (as many as necessary to have a good coverage of the phase diagram) deposited on a flat substrate is first prepared and cooled down to ambient temperature so that all droplets crystallize. The plate is then heated at a constant heating rate up to a temperature sufficiently high to melt all the small crystals. The heating process is imaged using an infrared camera.
A method based on the singular value decomposition technique has been developed to analyze the recorded images and to determine the transition lines of the phase diagram. The method has been applied to determine several simple eutectic phase diagrams, and the results have been validated by comparison with the phase diagrams obtained by differential scanning calorimetry measurements and by thermodynamic modelling. 5. Metastable and equilibrium phase diagrams of unconjugated bilirubin IXα as functions of pH in model bile systems: Implications for pigment gallstone formation Science.gov (United States) Berman, Marvin D. 2014-01-01 Metastable and equilibrium phase diagrams for unconjugated bilirubin IXα (UCB) in bile are yet to be determined for understanding the physical chemistry of pigment gallstone formation. Also, UCB is a molecule of considerable biomedical importance because it is a potent antioxidant and an inhibitor of atherogenesis. We employed principally a titrimetric approach to obtain metastable and equilibrium UCB solubilities in model bile systems composed of taurine-conjugated bile salts, egg yolk lecithin (mixed long-chain phosphatidylcholines), and cholesterol as functions of total lipid concentration, biliary pH values, and CaCl2 plus NaCl concentrations. Metastable and equilibrium precipitation pH values were obtained, and average pKa values of the two carboxyl groups of UCB were calculated. Added lecithin and increased temperature decreased UCB solubility markedly, whereas increases in bile salt concentrations and molar levels of urea augmented solubility. A wide range of NaCl and cholesterol concentrations resulted in no specific effects, whereas added CaCl2 produced large decreases in UCB solubilities at alkaline pH values only. UV-visible absorption spectra were consistent with both hydrophobic and hydrophilic interactions between UCB and bile salts that were strongly influenced by pH.
Reliable literature values for UCB compositions of native gallbladder biles revealed that biles from hemolytic mice and humans with black pigment gallstones are markedly supersaturated with UCB and exhibit more acidic pH values, whereas biles from nonstone control animals and patients with cholesterol gallstones are unsaturated with UCB. PMID:25359538 6. A Three-dimensional Topological Model of Ternary Phase Diagram International Nuclear Information System (INIS) Mu, Yingxue; Bao, Hong 2017-01-01 In order to obtain a visualization of the complex internal structure of a ternary phase diagram, this paper presents a three-dimensional topological model of the ternary phase diagram, built with a dedicated data structure and an improved algorithm, under the guidance of relevant theories of computer graphics. The purpose of the model is mainly to analyze the relationships among the phase regions of a ternary phase diagram. The model not only obtains the isothermal section at any temperature, but also extracts a particular phase region of interest to the user. (paper) 7. Phase diagrams of ferroelectric nanocrystals strained by an elastic matrix Science.gov (United States) Nikitchenko, A. I.; Azovtsev, A. V.; Pertsev, N. A. 2018-01-01 Ferroelectric crystallites embedded into a dielectric matrix experience temperature-dependent elastic strains caused by differences in the thermal expansion of the crystallites and the matrix. Owing to the electrostriction, these lattice strains may affect polarization states of ferroelectric inclusions significantly, making them different from those of a stress-free bulk crystal. Here, using a nonlinear thermodynamic theory, we study the mechanical effect of the elastic matrix on the phase states of embedded single-domain ferroelectric nanocrystals. Their equilibrium polarization states are determined by minimizing a special thermodynamic potential that describes the energetics of an ellipsoidal ferroelectric inclusion surrounded by a linear elastic medium.
To demonstrate the stability ranges of such states for a given material combination, we construct a phase diagram, where the inclusion’s shape anisotropy and temperature are used as two parameters. The ‘shape-temperature’ phase diagrams are calculated numerically for PbTiO3 and BaTiO3 nanocrystals embedded into representative dielectric matrices generating tensile (silica glass) or compressive (potassium silicate glass) thermal stresses inside ferroelectric inclusions. The developed phase maps demonstrate that the joint effect of thermal stresses and matrix-induced elastic clamping of ferroelectric inclusions gives rise to several important features in the polarization behavior of PbTiO3 and BaTiO3 nanocrystals. In particular, the Curie temperature displays a nonmonotonic variation with the ellipsoid’s aspect ratio, being minimal for spherical inclusions. Furthermore, the diagrams show that the polarization orientation with respect to the ellipsoid’s symmetry axis is controlled by the shape anisotropy and the sign of thermal stresses. Under certain conditions, the mechanical inclusion-matrix interaction qualitatively alters the evolution of ferroelectric states on cooling, inducing a structural transition 8. Phase diagrams of high-order critical phenomena and high-temperature equilibria in the H2O-HgI2-PbI2 system International Nuclear Information System (INIS) Valyashko, V.M.; Urusova, M.A. 1996-01-01 The paper studies the principal schemes of the complete state diagram of a three-component system (one volatile and two non-volatile components) with a tricritical point, and the sequence of phase transformations under variation of temperature, pressure and mixture composition. The H2O-HgI2-PbI2 system, the solid-phase dissolution process, stratification of solutions, and critical phenomena at 200-400 °C are studied experimentally.
The general nature of the system's phase diagram and the parameters of the three-phase-equilibrium critical point (tricritical point), that is, gas-liquid1-liquid2, are determined. 17 refs., 8 figs., 3 tabs 9. Cosolutes effects on aqueous two-phase systems equilibrium formation studied by physical approaches. Science.gov (United States) Bertoluzzo, M Guadalupe; Rigatuso, Rubén; Farruggia, Beatriz; Nerli, Bibiana; Picó, Guillermo 2007-10-01 The effect of urea and sodium salts of monovalent halides on aqueous polyethyleneglycol solutions and on the binodal diagrams of polyethyleneglycol-potassium phosphate (polyethyleneglycol of molecular mass 1500, 4000, 6000 and 8000) was studied using different physical approaches. The effect of these solutes on the binodal diagram for polyethyleneglycol-potassium phosphate was also investigated. The cosolutes significantly affected the water structured around the ethylene chain of polyethyleneglycol, inducing a loss of this structuring. The equilibrium curves for the aqueous two-phase systems were fitted very well by a sigmoidal function with two parameters, which are closely related to the structure-making or structure-breaking capacity of the cosolute on the ordered water. 10. Phase Diagram and Electronic Structure of Praseodymium and Plutonium Directory of Open Access Journals (Sweden) Nicola Lanatà 2015-01-01 Full Text Available We develop a new implementation of the Gutzwiller approximation in combination with the local density approximation, which enables us to study complex 4f and 5f systems beyond the reach of previous approaches. We calculate from first principles the zero-temperature phase diagram and electronic structure of Pr and Pu, finding good agreement with the experiments. Our study of Pr indicates that its pressure-induced volume-collapse transition would not occur without change of lattice structure—contrary to Ce.
Our study of Pu shows that the most important effect underlying the differentiation between the equilibrium densities of its allotropes is the competition between the Peierls effect and the Madelung interaction, and not the dependence of the electron correlations on the lattice structure. 11. Random matrix models for phase diagrams International Nuclear Information System (INIS) Vanderheyden, B; Jackson, A D 2011-01-01 We describe a random matrix approach that can provide generic and readily soluble mean-field descriptions of the phase diagram for a variety of systems ranging from quantum chromodynamics to high-Tc materials. Instead of working from specific models, phase diagrams are constructed by averaging over the ensemble of theories that possesses the relevant symmetries of the problem. Although approximate in nature, this approach has a number of advantages. First, it can be useful in distinguishing generic features from model-dependent details. Second, it can help in understanding the 'minimal' number of symmetry constraints required to reproduce specific phase structures. Third, the robustness of predictions can be checked with respect to variations in the detailed description of the interactions. Finally, near critical points, random matrix models bear strong similarities to Ginzburg-Landau theories with the advantage of additional constraints inherited from the symmetries of the underlying interaction. These constraints can be helpful in ruling out certain topologies in the phase diagram. In this Key Issues Review, we illustrate the basic structure of random matrix models, discuss their strengths and weaknesses, and consider the kinds of system to which they can be applied. 12.
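As a minimal illustration of the random-matrix starting point described in the review above (not of the authors' specific phase-diagram models, which add symmetry-encoding blocks and external parameters), one can sample a real symmetric Gaussian matrix and check that its spectrum fills out the Wigner semicircle. numpy is assumed to be available:

```python
import numpy as np

# Sketch: a GOE-like random matrix whose eigenvalue density approaches the
# Wigner semicircle. Phase-diagram applications build on such ensembles by
# adding deterministic blocks for temperature, chemical potential, etc.
rng = np.random.default_rng(0)
N = 400
M = rng.standard_normal((N, N))
H = (M + M.T) / 2.0              # symmetrize: off-diagonal variance 1/2
eigs = np.linalg.eigvalsh(H)     # real spectrum of the symmetric matrix

# With off-diagonal variance 1/2, the semicircle edge sits at 2*sqrt(N/2).
radius = 2.0 * np.sqrt(N / 2.0)
```

For large N the extreme eigenvalues concentrate near ±radius, which is the "readily soluble" feature the review exploits: spectral densities, and hence mean-field free energies, are computable analytically for such ensembles.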
Towards construction of the quasi-binary UAl3-USi3 phase diagram International Nuclear Information System (INIS) Rafailov, Gennady; Uziel, Asaf; White, Avner; Meshi, Louisa; Dahan, Itzhak 2014-01-01 The ternary U-Al-Si system has been extensively investigated due to the high potential of uranium alloyed with silicon as low-enriched fuel. Another interest in the U-Al-Si ternary system originates from the use of aluminum alloy, in which silicon is a major alloying element, as U-fuel cladding. In this system, the UAl3 and USi3 phases are of special importance. Since UAl3 and USi3 are isostructural and follow the Hume-Rothery rules closely, it would be expected that their quasi-binary phase diagram will be isomorphous. However, previous studies have shown that this system does not display complete liquid and solid solubility. Moreover, conflicting results were reported regarding the phases found. In the current work, several compositions were cast and then heat-treated in order to reach equilibrium for subsequent characterization of the Si-rich part of the USi3-UAl3 quasi-binary phase diagram. The as-cast and heat-treated alloys were characterized by scanning and transmission electron microscopy and X-ray diffraction (XRD) methods. Quantitative results were obtained from Rietveld analysis performed on XRD data. The results show that the ordered U(Si,Al)3 phase, identified in an earlier study of the Al-rich region, is present also in the Si-rich region (studied in the present research). Furthermore, the ordered phase exhibited substantial stability over quite a large range of compositions and temperatures. Our results unambiguously point out that this quasi-binary system contains an order-disorder transformation and not a miscibility gap at low temperatures in the studied range of compositions 13. Kinetic attractor phase diagrams of active nematic suspensions: the dilute regime.
Science.gov (United States) Forest, M Gregory; Wang, Qi; Zhou, Ruhai 2015-08-28 Large-scale simulations by the authors of the kinetic-hydrodynamic equations for active polar nematics revealed a variety of spatio-temporal attractors, including steady and unsteady, banded (1d) and cellular (2d) spatial patterns. These particle scale activation-induced attractors arise at dilute nanorod volume fractions where the passive equilibrium phase is isotropic, whereas all previous model simulations have focused on the semi-dilute, nematic equilibrium regime and mostly on low-moment orientation tensor and polarity vector models. Here we extend our previous results to complete attractor phase diagrams for active nematics, with and without an explicit polar potential, to map out novel spatial and dynamic transitions, and to identify some new attractors, over the parameter space of dilute nanorod volume fraction and nanorod activation strength. The particle-scale activation parameter corresponds experimentally to a tunable force dipole strength (so-called pushers with propulsion from the rod tail) generated by active rod macromolecules, e.g., catalysis with the solvent phase, ATP-induced propulsion, or light-activated propulsion. The simulations allow 2d spatial variations in all flow and orientational variables and full spherical orientational degrees of freedom; the attractors correspond to numerical integration of a coupled system of 125 nonlinear PDEs in 2d plus time. The phase diagrams with and without the polar interaction potential are remarkably similar, implying that polar interactions among the rodlike particles are not essential to long-range spatial and temporal correlations in flow, polarity, and nematic order. As a general rule, above a threshold, low volume fractions induce 1d banded patterns, whereas higher yet still dilute volume fractions yield 2d patterns. 
Again as a general rule, varying activation strength at fixed volume fraction induces novel dynamic transitions. First, stationary patterns saturate the instability of the isotropic 14. Calculation of Fe–B–V ternary phase diagram International Nuclear Information System (INIS) Homolová, Viera; Kroupa, Aleš; Výrostková, Anna 2012-01-01 Highlights: ► The phase diagram of the Fe–B–V system was modelled by the CALPHAD method. ► A database for thermodynamic calculations for the Fe–B–V system was created. ► A new ternary phase was found in the 67Fe–18B–15V [in at.%] alloy. - Abstract: The phase equilibria of the Fe–B–V ternary system are studied experimentally and theoretically in this paper. The phase diagram of the system was modelled by the CALPHAD method. Boron was modelled as an interstitial element in the FCC and BCC solid solutions. The calculations of isothermal sections of the phase diagram are compared with our experimental results at 903 and 1353 K and with available literature experimental data. A new ternary phase (with chemical composition 28Fe32V40B in at.%) was found in the 67Fe–18B–15V alloy [in at.%]. Further experimental studies for the determination of the exact nature of the ternary phase, including crystallographic information, are necessary. 15. Metastable and equilibrium phase diagrams of unconjugated bilirubin IXα as functions of pH in model bile systems: Implications for pigment gallstone formation. Science.gov (United States) Berman, Marvin D; Carey, Martin C 2015-01-01 Metastable and equilibrium phase diagrams for unconjugated bilirubin IXα (UCB) in bile are yet to be determined for understanding the physical chemistry of pigment gallstone formation. Also, UCB is a molecule of considerable biomedical importance because it is a potent antioxidant and an inhibitor of atherogenesis.
We employed principally a titrimetric approach to obtain metastable and equilibrium UCB solubilities in model bile systems composed of taurine-conjugated bile salts, egg yolk lecithin (mixed long-chain phosphatidylcholines), and cholesterol as functions of total lipid concentration, biliary pH values, and CaCl2 plus NaCl concentrations. Metastable and equilibrium precipitation pH values were obtained, and average pKa values of the two carboxyl groups of UCB were calculated. Added lecithin and increased temperature decreased UCB solubility markedly, whereas increases in bile salt concentrations and molar levels of urea augmented solubility. A wide range of NaCl and cholesterol concentrations resulted in no specific effects, whereas added CaCl2 produced large decreases in UCB solubilities at alkaline pH values only. UV-visible absorption spectra were consistent with both hydrophobic and hydrophilic interactions between UCB and bile salts that were strongly influenced by pH. Reliable literature values for UCB compositions of native gallbladder biles revealed that biles from hemolytic mice and humans with black pigment gallstones are markedly supersaturated with UCB and exhibit more acidic pH values, whereas biles from nonstone control animals and patients with cholesterol gallstones are unsaturated with UCB. Copyright © 2015 the American Physiological Society. 16. Cation disorder and gas phase equilibrium in a YBa2Cu3O7-x superconducting thin film Science.gov (United States) Shin, Dong Chan; Ki Park, Yong; Park, Jong-Chul; Kang, Suk-Joong L.; Yong Yoon, Duk 1997-02-01 YBa2Cu3O7-x superconducting thin films have been grown by in situ off-axis rf sputtering with varying oxygen pressure, Ba/Y ratio in a target, and deposition temperature. With decreasing oxygen pressure, increasing Ba/Y ratio, and increasing deposition temperature, the critical temperature of the thin films decreased and the c-axis length increased.
The change in film properties with the deposition variables has been explained by the gas-phase equilibrium of the oxidation reactions of Ba and Y. Applying Le Chatelier's principle to the oxidation reaction, we were able to predict the relation between the deposition variables and the resulting properties of the thin films; the prediction was in good agreement with the experimental results. From the relation between the three deposition variables and the gas-phase equilibrium, a 3-dimensional processing diagram was introduced. This diagram has shown that the optimum deposition condition of YBa2Cu3O7-x thin films is not a fixed point but can be varied. The gas-phase equilibrium can also be applied to the explanation of previous results in which good quality films were obtained at low deposition temperature using active species, such as O, O3, and O2+. 17. Phase diagrams of diluted transverse Ising nanowire International Nuclear Information System (INIS) Bouhou, S.; Essaoudi, I.; Ainane, A.; Saber, M.; Ahuja, R.; Dujardin, F. 2013-01-01 In this paper, the phase diagrams of a diluted Ising nanowire consisting of a core and a surface shell coupled by the Jcs exchange interaction are studied using the effective field theory with a probability distribution technique, in the presence of transverse fields in the core and in the surface shell. We find a number of characteristic phenomena. In particular, the effects of the concentration c of magnetic atoms, the core/shell exchange interaction, the surface exchange, and the transverse fields in the core and in the surface shell on the phase diagrams are investigated. - Highlights: ► We use the EFT to investigate the phase diagrams of the transverse Ising nanowire. ► Ferrimagnetic and ferromagnetic cases are investigated. ► The effects of the dilution and the transverse fields in core and shell are studied. ► Behavior of the transition temperature with the exchange interaction is given 18.
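The role of dilution in the nanowire record above can be illustrated with a far simpler model than the effective-field theory used by the authors: in plain mean-field theory for a site-diluted Ising magnet, the magnetisation solves m = tanh(z·c·J·m/T), so the ordering temperature Tc = z·c·J scales linearly with the magnetic-atom concentration c. A sketch with assumed parameters:

```python
import math

# Mean-field sketch (NOT the paper's effective-field theory with a
# probability distribution): site-diluted Ising magnetisation from the
# self-consistency equation m = tanh(z*c*J*m / T), with coordination
# number z, magnetic-site concentration c, and exchange J (all assumed).
def magnetisation(T, z=4, c=0.8, J=1.0, iters=2000):
    """Fixed-point iteration of the mean-field self-consistency equation."""
    m = 0.5
    for _ in range(iters):
        m = math.tanh(z * c * J * m / T)
    return m

# With z=4, c=0.8, J=1 the mean-field Tc is z*c*J = 3.2:
m_below = magnetisation(3.0)   # T < Tc: ordered, m > 0
m_above = magnetisation(3.4)   # T > Tc: disordered, m -> 0
```

Lowering c to 0.4 halves Tc to 1.6, so the same T = 3.0 then lies in the disordered phase; this is the qualitative concentration effect the abstract's phase diagrams map out, with the core/shell coupling adding structure the mean-field sketch ignores.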
Stable and metastable equilibrium states of the Zr-O system International Nuclear Information System (INIS) Versaci, R.A.; Abriata, J.P.; Garces, J.; Comision Nacional de Energia Atomica, San Carlos de Bariloche 1987-01-01 Precise knowledge of phase diagrams is of fundamental importance for the comprehension of processes like soldering and thermal treatment. The Zr-O diagram has been widely studied, mainly in the zone corresponding to ZrO2. A critical analysis of the existing information about this diagram is presented. Furthermore, extensive information about the phase equilibria, metastable phases, crystal structures, thermodynamic properties, and a possible diagram for pressures higher than one atmosphere is presented. 19. Phase diagram of the disordered Bose-Hubbard model International Nuclear Information System (INIS) Gurarie, V.; Pollet, L.; Prokof'ev, N. V.; Svistunov, B. V.; Troyer, M. 2009-01-01 We establish the phase diagram of the disordered three-dimensional Bose-Hubbard model at unity filling, which has been controversial for many years. The theorem of inclusions, proven by Pollet et al. [Phys. Rev. Lett. 103, 140402 (2009)], states that the Bose-glass phase always intervenes between the Mott insulating and superfluid phases. Here, we note that assumptions on which the theorem is based exclude phase transitions between gapped (Mott insulator) and gapless phases (Bose glass). The apparent paradox is resolved through a unique mechanism: such transitions have to be of the Griffiths type when the vanishing of the gap at the critical point is due to a zero concentration of rare regions where extreme fluctuations of disorder mimic a regular gapless system. An exactly solvable random transverse field Ising model in one dimension is used to illustrate the point. A highly nontrivial overall shape of the phase diagram is revealed with the worm algorithm.
The phase diagram features a long superfluid finger at strong disorder and on-site interaction. Moreover, bosonic superfluidity is extremely robust against disorder in a broad range of interaction parameters; it persists in random potentials nearly 50 (!) times larger than the particle half-bandwidth. Finally, we comment on the feasibility of obtaining this phase diagram in cold-atom experiments, which work with trapped systems at finite temperature. 20. Phase Diagrams of Electrostatically Self-Assembled Amphiplexes Energy Technology Data Exchange (ETDEWEB) V Stanic; M Mancuso; W Wong; E DiMasi; H Strey 2011-12-31 We present the phase diagrams of electrostatically self-assembled amphiplexes (ESA) comprised of poly(acrylic acid) (PAA), cetyltrimethylammonium chloride (CTACl), dodecane, pentanol, and water at three different NaCl salt concentrations: 100, 300, and 500 mM. This is the first report of phase diagrams for these quinary complexes. Adding a cosurfactant, we were able to swell the unit cell size of all long-range ordered phases (lamellar, hexagonal, Pm3n, Ia3d) by almost a factor of 2. The added advantage of tuning the unit cell size makes such complexes (especially the bicontinuous phases) attractive for applications in bioseparation, drug delivery, and possibly in oil recovery. 1. Standard values of fugacity for sulfur which are self-consistent with the low-pressure phase diagram Energy Technology Data Exchange (ETDEWEB) Marriott, Robert A., E-mail: rob.marriott@ucalgary.ca [Alberta Sulphur Research Ltd., University of Calgary, Alberta (Canada); Wan, Herman H. [Alberta Sulphur Research Ltd., University of Calgary, Alberta (Canada) 2011-08-15 Highlights: > We have provided a method for calculating the fugacity for elemental sulfur. > Calculated sulfur fugacities can be used in sulfur equilibrium models. > The sulfur fugacities also can be used to locate the phase changes in the low-pressure phase diagram. 
> We have measured the 'natural' melting point of sulfur, and found it to be T = 388.5 ± 0.2 K. - Abstract: A method for calculating the fugacity of pure sulfur in the α-solid, β-solid and liquid phase regions has been reported for application to industrial equilibrium conditions, e.g., high-pressure solubility of sulfur in sour gas. The fugacity calculations are self-consistent with the low-pressure phase diagram. As recently discussed by Ferreira and Lobo, empirical fitting of the experimental data does not yield consistent behaviour for the low-pressure phase diagram of elemental sulfur. In particular, there is a discrepancy between the vapour pressure of β-solid (monoclinic) and liquid sulfur at the fusion temperature. We have provided an alternative semi-empirical approach which allows one to calculate values of the fugacity at conditions removed from the conditions of the pure sulfur phase transitions. For our approach, we have forced the liquid vapour pressure to equal the β-solid vapour pressure at the β-l-g triple point corresponding to the 'natural' fusion temperature for β-solid. Many studies show a higher 'observed' fusion temperature for elemental sulfur. The non-reversible 'observed' fusion conditions for elemental sulfur result from a kinetically hindered melt, which causes some thermodynamic measurements to be related to a metastable S8 liquid. We have measured the 'natural' fusion temperature, T_fus^β(exp.) = (388.5 ± 0.2) K at p = 89.9 kPa, which is consistent with literature fusion data at higher pressures. Using our semi-empirical approach, we have used or found the 2. Phase Stability Diagrams for High Temperature Corrosion Processes Directory of Open Access Journals (Sweden) J. J. Ramos-Hernandez 2013-01-01 Full Text Available Corrosion phenomena of metals by fused salts depend on the chemical composition of the melt and the environmental conditions of the system.
Detailed knowledge of the chemistry and thermodynamics of the aggressive species formed during the corrosion process is essential for a better understanding of the degradation of materials exposed to high temperature. When kinetic data for the corrosion processes are lacking, an alternative way to understand the thermodynamic behavior of the chemical species is to utilize phase stability diagrams. Nowadays, there are several specialized software programs to calculate phase stability diagrams. These programs are based on the thermodynamics of chemical reactions. Using a thermodynamic database allows the calculation of different types of phase diagrams. However, it is sometimes difficult to have access to such databases. In this work, an alternative way to calculate phase stability diagrams is presented. The approach is exemplified for the Na-V-S-O and Al-Na-V-S-O systems. These systems were chosen because vanadium salts form one of the most aggressive environments for all engineering alloys, especially in processes where fossil fuels are used. 3. Magnetic Phase Diagram of α-RuCl3 Science.gov (United States) Sears, Jennifer; Kim, Young-June; Zhao, Yang; Lynn, Jeffrey The layered honeycomb material α-RuCl3 is thought to possess unusual magnetic interactions, including a strong bond-dependent Kitaev term, offering a potential opportunity to study a material near a well-understood spin liquid phase. Although this material orders magnetically at low temperatures and is thus not a realization of a Kitaev spin liquid, it does show a broad continuum of magnetic excitations reminiscent of that expected for the spin liquid phase. It has also been proposed that a magnetic field could destabilize the magnetic order in this material and induce a transition into a spin liquid phase. Low-temperature magnetization and specific heat measurements in this material have suggested a complex magnetic phase diagram with multiple unidentified magnetic phases present at low temperature.
This has provided motivation for our work characterizing the magnetic transitions and phase diagram in α-RuCl3. I will present detailed bulk measurements combined with magnetic neutron diffraction measurements to map out the phase diagram and identify the various phases present. 4. Solid gas reaction phase diagram under high gas pressure International Nuclear Information System (INIS) Ishizaki, K. 1992-01-01 This paper reports that, to evaluate which phases are stable under high gas pressure conditions, a solid-gas reaction phase diagram under high gas pressure (HIP phase diagram) has been proposed by the author. The variables of the diagram are temperature, reactant gas partial pressure and total gas pressure. Up to the present time the diagrams have been constructed using isobaric conditions. In this work, the stable phases for a real HIP process were evaluated assuming an isochoric condition. Understanding the effect of the total gas pressure on stability is of primary importance. Two possibilities were considered and evaluated: either the total gas pressure acts as an independent variable, or it only affects the fugacity values. The results of this work indicate that the total gas pressure acts as an independent variable and, in turn, also affects the fugacity values. 5. Phase diagrams of diluted transverse Ising nanowire Energy Technology Data Exchange (ETDEWEB) Bouhou, S.; Essaoudi, I. [Laboratoire de Physique des Matériaux et Modélisation, des Systèmes, (LP2MS), Unité Associée au CNRST-URAC 08, University of Moulay Ismail, Physics Department, Faculty of Sciences, B.P. 11201 Meknes (Morocco); Ainane, A., E-mail: ainane@pks.mpg.de [Laboratoire de Physique des Matériaux et Modélisation, des Systèmes, (LP2MS), Unité Associée au CNRST-URAC 08, University of Moulay Ismail, Physics Department, Faculty of Sciences, B.P. 11201 Meknes (Morocco); Max-Planck-Institut für Physik Complexer Systeme, Nöthnitzer Str. 38 D-01187 Dresden (Germany); Saber, M.
[Laboratoire de Physique des Matériaux et Modélisation, des Systèmes, (LP2MS), Unité Associée au CNRST-URAC 08, University of Moulay Ismail, Physics Department, Faculty of Sciences, B.P. 11201 Meknes (Morocco); Max-Planck-Institut für Physik Complexer Systeme, Nöthnitzer Str. 38 D-01187 Dresden (Germany); Ahuja, R. [Condensed Matter Theory Group, Department of Physics and Astronomy, Uppsala University, 75120 Uppsala (Sweden); Dujardin, F. [Laboratoire de Chimie et Physique des Milieux Complexes (LCPMC), Institut de Chimie, Physique et Matériaux (ICPM), 1 Bd. Arago, 57070 Metz (France) 2013-06-15 In this paper, the phase diagrams of a diluted Ising nanowire consisting of a core and a surface shell coupled by the J{sub cs} exchange interaction are studied using the effective field theory with a probability distribution technique, in the presence of transverse fields in the core and in the surface shell. We find a number of characteristic phenomena. In particular, the effects of the concentration c of magnetic atoms, the core/shell exchange interaction, the surface exchange, and the transverse fields in the core and in the surface shell on the phase diagrams are investigated. - Highlights: ► We use the EFT to investigate the phase diagrams of a transverse Ising nanowire. ► Ferrimagnetic and ferromagnetic cases are investigated. ► The effects of the dilution and the transverse fields in core and shell are studied. ► The behavior of the transition temperature with the exchange interaction is given. 6. A new inorganic atmospheric aerosol phase equilibrium model (UHAERO) Directory of Open Access Journals (Sweden) N. R. Amundson 2006-01-01 Full Text Available A variety of thermodynamic models have been developed to predict inorganic gas-aerosol equilibrium. To achieve computational efficiency, a number of the models rely on a priori specification of the phases present in certain relative humidity regimes.
Presented here is a new computational model, named UHAERO, that is both efficient and rigorous, computing phase behavior without any a priori specification. The computational implementation is based on minimization of the Gibbs free energy using a primal-dual method coupled to a Newton iteration. The mathematical details of the solution are given elsewhere. The model computes deliquescence behavior without any a priori specification of the relative humidities of deliquescence. Also included in the model is a formulation, based on the classical theory of nucleation kinetics, that predicts crystallization behavior. Detailed phase diagrams of the sulfate/nitrate/ammonium/water system are presented as a function of relative humidity at 298.15 K over the complete space of composition. 7. Application of dual-anneal diffusion multiples to the effective study of phase diagrams and phase transformations in the Fe–Cr–Ni system International Nuclear Information System (INIS) Cao, Siwei; Zhao, Ji-Cheng 2015-01-01 A dual-anneal diffusion multiple (DADM) approach is developed for effective determination of intermediate-temperature phase diagrams that are critical to the establishment of reliable thermodynamic databases. A large amount of phase equilibrium data was obtained from DADMs to construct the Fe–Cr–Ni isothermal sections at 1200, 900, 800 and 700 °C. The DADM approach is also a systematic and effective way to study phase precipitation from wide ranges of compositions, thus generating rich atlases of microstructures induced by various transformations. The results from this study indicate that the body-centered cubic to sigma phase transformation in the Fe–Cr–Ni system initially took place through a massive transformation mechanism. 8. Unified Phase Diagram for Iron-Based Superconductors.
Science.gov (United States) Gu, Yanhong; Liu, Zhaoyu; Xie, Tao; Zhang, Wenliang; Gong, Dongliang; Hu, Ding; Ma, Xiaoyan; Li, Chunhong; Zhao, Lingxiao; Lin, Lifang; Xu, Zhuang; Tan, Guotai; Chen, Genfu; Meng, Zi Yang; Yang, Yi-Feng; Luo, Huiqian; Li, Shiliang 2017-10-13 High-temperature superconductivity is closely adjacent to a long-range antiferromagnet, which is called a parent compound. In cuprates, all parent compounds are alike and carrier doping leads to superconductivity, so a unified phase diagram can be drawn. However, the properties of parent compounds for iron-based superconductors show significant diversity and both carrier and isovalent dopings can cause superconductivity, which casts doubt on the idea that there exists a unified phase diagram for them. Here we show that the ordered moments in a variety of iron pnictides are inversely proportional to the effective Curie constants of their nematic susceptibility. This unexpected scaling behavior suggests that the magnetic ground states of iron pnictides can be achieved by tuning the strength of nematic fluctuations. Therefore, a unified phase diagram can be established where superconductivity emerges from a hypothetical parent compound with a large ordered moment but weak nematic fluctuations, which suggests that iron-based superconductors are strongly correlated electron systems. 9. Unified Phase Diagram for Iron-Based Superconductors Science.gov (United States) Gu, Yanhong; Liu, Zhaoyu; Xie, Tao; Zhang, Wenliang; Gong, Dongliang; Hu, Ding; Ma, Xiaoyan; Li, Chunhong; Zhao, Lingxiao; Lin, Lifang; Xu, Zhuang; Tan, Guotai; Chen, Genfu; Meng, Zi Yang; Yang, Yi-feng; Luo, Huiqian; Li, Shiliang 2017-10-01 High-temperature superconductivity is closely adjacent to a long-range antiferromagnet, which is called a parent compound. In cuprates, all parent compounds are alike and carrier doping leads to superconductivity, so a unified phase diagram can be drawn. 
However, the properties of parent compounds for iron-based superconductors show significant diversity, and both carrier and isovalent dopings can cause superconductivity, which casts doubt on the idea that there exists a unified phase diagram for them. Here we show that the ordered moments in a variety of iron pnictides are inversely proportional to the effective Curie constants of their nematic susceptibility. This unexpected scaling behavior suggests that the magnetic ground states of iron pnictides can be achieved by tuning the strength of nematic fluctuations. Therefore, a unified phase diagram can be established where superconductivity emerges from a hypothetical parent compound with a large ordered moment but weak nematic fluctuations, which suggests that iron-based superconductors are strongly correlated electron systems. 10. Theoretical Prediction of Melting Relations in the Deep Mantle: the Phase Diagram Approach Science.gov (United States) Belmonte, D.; Ottonello, G. A.; Vetuschi Zuccolini, M.; Attene, M. 2016-12-01 Despite the outstanding progress in computer technology and experimental facilities, understanding melting phase relations in the deep mantle is still an open challenge. In this work a novel computational scheme to predict melting relations at HP-HT by a combination of first-principles DFT calculations, polymer chemistry and equilibrium thermodynamics is presented and discussed. The adopted theoretical framework is physically consistent and allows one to compute multi-component phase diagrams relevant to the Earth's deep interior in a broad range of P-T conditions by a convex-hull algorithm for Gibbs free energy minimisation purposely developed for high-rank simplexes. The calculated phase diagrams are in turn used as a source of information to gain new insights into the P-T-X evolution of magmas in the deep mantle, providing some thermodynamic constraints on both present-day and early Earth melting processes.
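The convex-hull Gibbs energy minimisation mentioned in this abstract can be illustrated in one composition dimension: the stable states lie on the lower convex hull of the free-energy curve, and any hull segment that skips over sampled points marks a two-phase region. The following is only a minimal pure-Python sketch under a regular-solution free energy with a hypothetical interaction parameter, not the authors' high-rank-simplex code:

```python
import math

def g_mix(x, chi=3.0):
    """Dimensionless regular-solution Gibbs energy of mixing, G/(RT).
    chi = 3.0 is a hypothetical interaction parameter (chi > 2 opens a miscibility gap)."""
    return x * math.log(x) + (1 - x) * math.log(1 - x) + chi * x * (1 - x)

def lower_hull(points):
    """Lower convex hull of (x, G) points via Andrew's monotone chain."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # Pop the last vertex while it fails to make a strict left turn,
        # i.e. while it lies on or above the chord to the new point.
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            if (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0]) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

xs = [i / 200 for i in range(1, 200)]  # avoid x = 0, 1 (logarithm is singular there)
hull_x = [x for x, _ in lower_hull([(x, g_mix(x)) for x in xs])]
# The widest jump between consecutive hull vertices brackets the two-phase region.
a, b = max(zip(hull_x, hull_x[1:]), key=lambda ab: ab[1] - ab[0])
print("two-phase region: %.3f < x < %.3f" % (a, b))
```

For chi = 3 the hull endpoints of the jump land near the analytic binodal compositions x ≈ 0.07 and 0.93; the multi-component case replaces this 1-D hull by a hull over composition simplexes, but the principle is the same.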
High-pressure melting curves of mantle silicates are also obtained as a by-product of the phase diagram calculation. Application of the above method to the MgO-Al2O3-SiO2 (MAS) ternary system highlights that pressure effects are not only able to change the nature of melting of some minerals (such as olivine and pyroxene) from eutectic to peritectic (and vice versa), but also simplify melting relations by drastically reducing the number of phases with a primary phase field at HP-HT conditions. It turns out that mineral phases like Majorite-Pyrope garnet and Anhydrous Phase B (Mg14Si5O24), which are often disregarded in modelling melting processes of mantle assemblages, are stable phases at solidus or liquidus conditions in a P-T range compatible with the mantle transition zone (i.e. P = 16 - 23 GPa and T = 2200 - 2700 °C) when their thermodynamic and thermophysical properties are properly assessed. Financial support to the Senior Author (D.B.) during his stay as Invited Scientist at the Institut de Physique du Globe de Paris (IPGP, Paris) is warmly acknowledged. 11. Phase shifts of the paired wings of butterfly diagrams International Nuclear Information System (INIS) Li Kejun; Liang Hongfei; Feng Wen 2010-01-01 Sunspot groups observed by the Royal Greenwich Observatory/US Air Force/NOAA from 1874 May to 2008 November and the Carte Synoptique solar filaments from 1919 March to 1989 December are used to investigate the relative phase shift of the paired wings of butterfly diagrams of sunspot and filament activities. Latitudinal migration of sunspot groups (or filaments) occurs asynchronously in the northern and southern hemispheres, and there is a relative phase shift between the paired wings of their butterfly diagrams in a cycle, making the paired wings spatially asymmetrical on the solar equator.
It is inferred that the hemispheric solar activity strength should evolve in a similar way within the paired wings of a butterfly diagram in a cycle; the relative phase shift between the paired wings of a butterfly diagram should therefore bring about almost the same relative phase shift between the northern and southern hemispheric solar activity strengths. (research papers) 12. Vortex phase diagram and vortex dynamics at low temperature in a thick a-MgxB1-x film Energy Technology Data Exchange (ETDEWEB) Okuma, S. [Research Center for Low Temperature Physics, Tokyo Institute of Technology, 2-12-1, Ohokayama, Meguro-ku, Tokyo 152-8551 (Japan)], E-mail: sokuma@o.cc.titech.ac.jp; Kohara, M. [Research Center for Low Temperature Physics, Tokyo Institute of Technology, 2-12-1, Ohokayama, Meguro-ku, Tokyo 152-8551 (Japan) 2007-09-01 We report on the equilibrium vortex phase diagram and vortex dynamics at low temperature T in a thick amorphous (a-)MgxB1-x film based on measurements of the dc resistivity ρ and the time (t)-dependent component of the flux-flow voltage, δV(t), respectively. Both ρ(T) in perpendicular fields and the vortex phase diagram are qualitatively similar to those for the a-MoxSi1-x films, in which evidence for the quantum-vortex-liquid (QVL) phase has been obtained. In either material system we observe anomalous vortex flow with an asymmetric distribution of δV(t) in the QVL phase, suggesting that the anomalous flow is a universal phenomenon commonly observed in disordered amorphous films, independent of material. 13.
Binary and ternary solid-liquid phase equilibrium for the systems formed by succinic acid, urea and diethylene glycol: Determination and modelling International Nuclear Information System (INIS) Li, Yanxun; Li, Congcong; Han, Shuo; Zhao, Hongkun 2017-01-01 Highlights: • Solubility of succinic acid in diethylene glycol was determined. • Solubility of succinic acid + urea + diethylene glycol was determined. • Three ternary phase diagrams were constructed for the ternary system. • The ternary phase diagrams were correlated using the NRTL model. - Abstract: In this work, the solid-liquid phase equilibrium for the binary system succinic acid + diethylene glycol at temperatures ranging from (298.15 to 333.15) K and the ternary system (succinic acid + urea + diethylene glycol) at 298.15 K, 313.15 K and 333.15 K was established by using the isothermal saturation method under atmospheric pressure (101.2 kPa), and the solubilities were determined by high-performance liquid chromatography. The solid phases formed in the ternary system (succinic acid + urea + diethylene glycol) were confirmed by Schreinemakers' method of wet residue, and corresponded to urea, succinic acid, and a 2:1 (mole ratio) urea-succinic acid adduct. Three isothermal phase diagrams for the ternary system were constructed based on the measured mutual solubility. Each isothermal phase diagram included six crystallization fields, three invariant curves, two invariant points and two co-saturated points. The crystalline region of the 2:1 urea-succinic acid adduct is larger than those of the other two solids. The solubility of succinic acid in diethylene glycol was correlated with the modified Apelblat equation, the λh equation and the NRTL model; and the mutual solubility of the ternary (succinic acid + urea + diethylene glycol) system was correlated and calculated with the NRTL model. The values of the succinic acid-urea interaction parameters were obtained. The value of RMSD was 7.11 × 10⁻³ for the ternary system.
The calculated results agreed well with the experimental values. Furthermore, the densities of the equilibrium liquid phases were obtained. The phase diagrams and the thermodynamic model of the ternary system could provide the basis for design of 14. Magnetic phase diagrams of UNiGe International Nuclear Information System (INIS) Nakotte, H.; Hagmusa, I.H.; Klaasse, J.C.P.; Hagmusa, I.H.; Klaasse, J.C.P. 1997-01-01 UNiGe undergoes two magnetic transitions in zero field. Here, the magnetic phase diagrams of UNiGe for B parallel b and B parallel c are reported. We performed temperature scans of the magnetization in static magnetic fields up to 19.5 T applied along the b and c axes. For both orientations three magnetic phases have been identified in the B-T diagrams. We confirmed the previously reported phase boundaries for B parallel c, and in addition we determined the location of the phase boundaries for B parallel b. We discuss a possible relationship between the two zero-field antiferromagnetic phases (commensurate: T < 42 K; incommensurate: 42 K < T < 50 K) and the field-induced phase, which, at low temperatures, occurs between 18 and 25 T or 4 and 10 T for B parallel b or B parallel c, respectively. Finally, we discuss the field dependence of the electronic contribution γ to the specific heat for B parallel c up to 17.5 T, and we find that its field dependence is similar to the one found in more itinerant uranium compounds. 15. On the phase diagram of non-spherical nanoparticles CERN Document Server Wautelet, M; Hecq, M 2003-01-01 The phase diagram of nanoparticles is known to be a function of their size. In the literature, this is generally demonstrated for cases where their shape is spherical. Here, it is shown theoretically that the phase diagram of non-spherical particles may be calculated from the spherical case, at the same surface area/volume ratio, both with and without surface segregation, provided the surface tension is considered to be isotropic. 16.
Calculation of liquid-liquid equilibrium of aqueous two-phase systems using a chemical-theory-based excess Gibbs energy model Directory of Open Access Journals (Sweden) Pessôa Filho P. A. 2004-01-01 Full Text Available Mixtures containing compounds that undergo hydrogen bonding show large deviations from ideal behavior. These deviations can be accounted for through chemical theory, according to which the formation of a hydrogen bond can be treated as a chemical reaction. This chemical equilibrium needs to be taken into account when applying stability criteria and carrying out phase equilibrium calculations. In this work, we illustrate the application of the stability criteria to establish the conditions under which a liquid-phase split may occur, and the subsequent calculation of liquid-liquid equilibrium using a chemical-theory-modified Flory-Huggins equation to describe the non-ideality of aqueous two-phase systems composed of poly(ethylene glycol) and dextran. The model was found to be able to correlate ternary liquid-liquid diagrams reasonably well by simple adjustment of the polymer-polymer binary interaction parameter. 17. Phase diagram of Ti-B-C system in the temperature range of 300-3500 K International Nuclear Information System (INIS) Gusev, A.I. 1996-01-01 Calculation of phase equilibria in the ternary system Ti-B-C in the regions of the TiCy-TiB2 and B4Cy-TiB2 cross sections, as well as partial construction of the three-dimensional (spatial) diagram of the Ti-B-C system within the temperature range of 300-3500 K, is carried out. The form of the isothermal cross section of the ternary system remains almost unchanged up to 1900 K. The most essential change is related to disordering of the low-temperature ordered phases Ti2C, Ti3C and Ti6C5 of titanium carbide at T > 950 K [ru 18.
Dynamic phase transitions and dynamic phase diagrams of the Ising model on the Shastry-Sutherland lattice Energy Technology Data Exchange (ETDEWEB) Deviren, Şeyma Akkaya, E-mail: sadeviren@nevsehir.edu.tr [Department of Science Education, Education Faculty, Nevsehir Hacı Bektaş Veli University, 50300 Nevşehir (Turkey); Deviren, Bayram [Department of Physics, Nevsehir Hacı Bektaş Veli University, 50300 Nevsehir (Turkey) 2016-03-15 The dynamic phase transitions and dynamic phase diagrams are studied, within a mean-field approach, in the kinetic Ising model on the Shastry-Sutherland lattice under a time-varying (sinusoidal) magnetic field by using Glauber-type stochastic dynamics. The time dependence of the order parameters and the behavior of the average order parameters over a period (the dynamic order parameters) as a function of temperature are investigated. The temperature dependence of the dynamic magnetizations, hysteresis loop areas and correlations is investigated in order to characterize the nature (first- or second-order) of the dynamic phase transitions as well as to obtain the dynamic phase transition temperatures. We present the dynamic phase diagrams in the magnetic-field-amplitude versus temperature plane. The phase diagrams exhibit a dynamic tricritical point and reentrant phenomena. The phase diagrams also contain paramagnetic (P), Néel (N) and collinear (C) phases, and two coexistence or mixed regions, (N+C) and (N+P), which strongly depend on the interaction parameters. - Highlights: • Dynamic magnetization properties of the spin-1/2 Ising model on the SSL are investigated. • Dynamic magnetization, hysteresis loop area, and correlation have been calculated. • The dynamic phase diagrams are constructed in the (T/|J|, h/|J|) plane. • The phase diagrams exhibit a dynamic tricritical point and reentrant phenomena. 19.
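The dynamic order parameter studied in this abstract can be illustrated with the simplest single-sublattice mean-field kinetic Ising equation, dm/dt = -m + tanh[(J z m + h0 cos ωt)/T], with Q defined as the average of m over one field period. This is a generic textbook sketch, not the Shastry-Sutherland model itself, and all parameter values below are illustrative:

```python
import math

def dynamic_order_parameter(T, h0, jz=1.0, omega=2 * math.pi / 100, dt=0.01, steps=200_000):
    """Forward-Euler integration of the single-sublattice mean-field kinetic
    Ising equation  dm/dt = -m + tanh((jz*m + h0*cos(omega*t)) / T).
    Returns Q, the average of m over the final full field period."""
    m, t = 1.0, 0.0                       # start from an ordered state
    total = steps * dt
    period = 2 * math.pi / omega
    acc, n = 0.0, 0
    for _ in range(steps):
        m += dt * (-m + math.tanh((jz * m + h0 * math.cos(omega * t)) / T))
        t += dt
        if t > total - period:            # accumulate over the last period only
            acc += m
            n += 1
    return acc / n

# Low T, weak field: the symmetry stays broken (dynamic ordered phase, Q != 0);
# high T: m simply follows the drive and averages to ~0 (dynamic disordered phase).
q_low = dynamic_order_parameter(T=0.5, h0=0.1)
q_high = dynamic_order_parameter(T=2.0, h0=0.1)
```

Sweeping T and h0 in this way and marking where Q vanishes traces out a dynamic phase boundary in the (T, h0) plane, which is the mean-field analogue of the diagrams constructed in the paper.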
Modelling of diffusion from equilibrium diffraction fluctuations in ordered phases International Nuclear Information System (INIS) Arapaki, E.; Argyrakis, P.; Tringides, M.C. 2008-01-01 Measurements of the collective diffusion coefficient Dc at equilibrium are difficult because they are based on monitoring low-amplitude concentration fluctuations generated spontaneously, which are difficult to measure experimentally. A new experimental method has recently been used to measure time-dependent correlation functions from diffraction intensity fluctuations and was applied to measure thermal step fluctuations. The method has not yet been applied to measuring superstructure intensity fluctuations in surface overlayers and to extracting Dc. With Monte Carlo simulations we study equilibrium fluctuations in Ising lattice gas models with nearest-neighbor attractive and repulsive interactions. The extracted diffusion coefficients are compared to the ones obtained from equilibrium methods. The new results are in good agreement with the results from the other methods, i.e., Dc decreases monotonically with coverage Θ for attractive interactions and increases monotonically with Θ for repulsive interactions. Even the absolute value of Dc agrees well with the results obtained with the probe area method. These results confirm that this diffraction-based method is a novel, reliable way to measure Dc, especially within the ordered region of the phase diagram when the superstructure spot has large intensity. 20. Exploring the QCD phase diagram through relativistic heavy ion collisions Directory of Open Access Journals (Sweden) 2014-03-01 Full Text Available We present a review of the studies related to establishing the QCD phase diagram through high-energy nucleus-nucleus collisions. We particularly focus on the experimental results related to the formation of a quark-gluon phase, the crossover transition and the search for a critical point in the QCD phase diagram. 1.
Phase equilibrium condition of marine carbon dioxide hydrate International Nuclear Information System (INIS) Sun, Shi-Cai; Liu, Chang-Ling; Ye, Yu-Guang 2013-01-01 Highlights: ► CO2 hydrate phase equilibrium was studied in simulated marine sediments. ► The CO2 hydrate equilibrium temperature in NaCl solution and submarine pore water was depressed. ► Coarse-grained silica sand does not affect CO2 hydrate phase equilibrium. ► The relationship between equilibrium temperature and freezing point was discussed. - Abstract: The phase equilibrium of ocean carbon dioxide hydrate should be understood for ocean storage of carbon dioxide. In this paper, the isochoric multi-step heating dissociation method was employed to investigate the phase equilibrium of carbon dioxide hydrate in a variety of systems (NaCl solution, submarine pore water, silica sand + NaCl solution mixture). The experimental results show that the depression of the phase equilibrium temperature of carbon dioxide hydrate in NaCl solution is caused mainly by the Cl− ion. The relationship between the equilibrium temperature and the freezing point in NaCl solution was discussed. The phase equilibrium temperature of carbon dioxide hydrate in submarine pore water is shifted 1.1 K lower than that in pure water. However, the phase equilibrium temperature of carbon dioxide hydrate in mixed samples of coarse-grained silica sand and NaCl solution agrees with that in NaCl solution of the corresponding concentration. The relationship between the equilibrium temperature and the freezing point in the mixed samples was also discussed. 2. Constituent phase diagrams of the Al-Cu-Fe-Mg-Ni-Si system and their application to the analysis of aluminium piston alloys Energy Technology Data Exchange (ETDEWEB) Belov, N.A. [Moscow Institute of Steel and Alloys, Leninsky prosp. 4, Moscow 119049 (Russian Federation); Eskin, D.G. [Netherlands Institute for Metals Research, Rotterdamseweg 137, 2628AL Delft (Netherlands)].
E-mail: deskin@nimr.nl; Avxentieva, N.N. [Moscow Institute of Steel and Alloys, Leninsky prosp. 4, Moscow 119049 (Russian Federation) 2005-10-15 The evaluation of phase equilibria in the quinary systems that constitute the commercially important Al-Cu-Fe-Mg-Ni-Si alloying system is performed in the compositional range of casting alloys by means of metallography, electron probe microanalysis, X-ray diffractometry, differential scanning calorimetry, and the analysis of phase equilibria in the constituent systems of lesser dimensionality. The suggested phase equilibria are illustrated by bi-, mono- and invariant solidification reactions, polythermal diagrams of solidification, distributions of phase fields in the solid state, and isothermal and polythermal sections. The phase composition of as-cast alloys is analyzed in terms of non-equilibrium solidification. It is shown that an increase in copper concentration in piston Al-Si alloys results in a decrease in the equilibrium solidus from 540 to 505 °C. Under non-equilibrium solidification conditions, piston alloys finish solidification at ≈505 °C. Iron is bound in the quaternary Al8FeMg3Si6 phase in low-iron alloys and in the ternary Al9FeNi and Al5FeSi phases in high-iron alloys. 3. Re-determination of succinonitrile (SCN)-camphor phase diagram Science.gov (United States) Teng, Jing; Liu, Shan 2006-04-01 Low-melting-temperature transparent organic materials have been extensively used to study pattern formation and microstructure evolution. It proves to be very challenging to accurately determine the phase diagram since there is no viable way to measure the composition microscopically. In this paper, we present the detailed experimental characterization of the phase diagram of the succinonitrile (SCN)-camphor binary system.
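For a simple binary eutectic such as SCN-camphor, the two liquidus branches can be estimated, to a first approximation, from the ideal-solution Schröder-van Laar equation ln x_i = -(ΔH_fus,i/R)(1/T - 1/T_fus,i); the eutectic lies where the branches cross, i.e. where x_A + x_B = 1. The sketch below applies this textbook relation with hypothetical property values, not the data measured in the paper:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def liquidus_x(dH, T_fus, T):
    """Ideal-solution (Schroeder-van Laar) liquidus: mole fraction of component i
    in a liquid saturated with pure solid i at temperature T <= T_fus."""
    return math.exp(-dH / R * (1.0 / T - 1.0 / T_fus))

def eutectic(dH_a, Tf_a, dH_b, Tf_b, T_lo=150.0, tol=1e-6):
    """Bisect for the temperature where the two liquidus branches cross,
    i.e. x_A(T) + x_B(T) = 1 (the sum increases monotonically with T)."""
    lo, hi = T_lo, min(Tf_a, Tf_b)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if liquidus_x(dH_a, Tf_a, mid) + liquidus_x(dH_b, Tf_b, mid) > 1.0:
            hi = mid          # sum too large: eutectic lies below mid
        else:
            lo = mid
    T_e = 0.5 * (lo + hi)
    return T_e, liquidus_x(dH_a, Tf_a, T_e)

# Hypothetical property values (NOT the fitted SCN-camphor data):
T_e, x_a = eutectic(dH_a=3700.0, Tf_a=331.0, dH_b=6000.0, Tf_b=450.0)
print("eutectic: T = %.1f K, x_A = %.3f" % (T_e, x_a))
```

Real systems with solid solubility, such as the SCN-based solid solution characterized in the paper, deviate from this ideal picture, which is why the careful experimental re-determination is needed.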
Differential scanning calorimetry, a ring-heater, and the directional solidification technique have been combined to determine the details of the phase diagram by using the purified materials. The advantages and disadvantages of the different experimental techniques have been discussed. SCN and camphor constitute a simple binary eutectic system with the eutectic composition at 23.6 wt% camphor and the eutectic temperature at 37.65 °C. The solidus and the solubility of the SCN-based solid solution have been precisely determined for the first time in this binary system.

4. The phase diagram of KNO₃-KClO₃ International Nuclear Information System (INIS) Zhang Xuejun; Tian Jun; Xu Kangcheng; Gao Yici 2004-01-01 The binary phase diagram of KNO₃-KClO₃ is studied by means of differential scanning calorimetry (DSC) and high-temperature X-ray diffraction. The limited solid solutions K(NO₃)₁₋ₓ(ClO₃)ₓ (0 < x < …) and K(NO₃)₁₋ₓ(ClO₃)ₓ (0.90 < x < …) are the KNO₃-based and KClO₃-based solid-solution phases, respectively. For the KNO₃-based solid solutions, the KNO₃ ferroelectric phase can be stable from 423 down to 223 K as a result of the substitution of NO₃⁻ by ClO₃⁻ radicals. The temperatures of the solidus and liquidus have been determined based on the limited solid solutions. Two models, a Henrian solution for the KNO₃-based (α) phase and regular solution theory for the KClO₃-based (β) phase, are employed to reproduce the solidus and liquidus of the phase diagram. The results are in good agreement with the DSC data. The thermodynamic properties of the α and β solid solutions have been derived from an optimization procedure using the experimental data. The calculated phase diagram and optimized thermodynamic parameters are thermodynamically self-consistent. 5.
Measurement and calculation of solid–liquid equilibrium for ternary systems of 3,4-dichloronitrobenzene + 2,3-dichloronitrobenzene + ethanol/n-propanol International Nuclear Information System (INIS) Li, Rongrong; Han, Shuo; Du, Cunbin; Meng, Long; Wang, Jian; Zhao, Hongkun 2016-01-01 Highlights: • Solid–liquid phase equilibrium for two ternary systems was determined. • Six ternary phase diagrams were constructed for the two ternary systems. • The ternary phase diagrams were calculated by the NRTL model. - Abstract: The stable (solid + liquid) phase equilibrium in the ternary systems of 3,4-dichloronitrobenzene + 2,3-dichloronitrobenzene + ethanol and 3,4-dichloronitrobenzene + 2,3-dichloronitrobenzene + n-propanol at three temperatures was determined by means of an isothermal solution saturation method under pressure p = 101.2 kPa. The isothermal phase diagrams of the two ternary systems were plotted on the basis of the experimental mutual solubility data. The equilibrium solids formed in the two systems were identified by Schreinemakers’ method of wet residues. It was found that each phase diagram included one co-saturated point, two co-saturated curves and three crystallization zones in the ternary systems of 3,4-dichloronitrobenzene + 2,3-dichloronitrobenzene + ethanol and 3,4-dichloronitrobenzene + 2,3-dichloronitrobenzene + n-propanol. Two pure solid phases were formed in the studied systems: pure 2,3-dichloronitrobenzene and pure 3,4-dichloronitrobenzene. The crystallization zone of 3,4-dichloronitrobenzene was smaller than that of 2,3-dichloronitrobenzene at a given temperature, which showed that 2,3-dichloronitrobenzene could be easily separated from the solution. Furthermore, the solid–liquid phase diagrams were calculated by using the NRTL model. The calculated phase diagrams agreed well with the experimental ones.
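The NRTL correlation referred to here computes liquid-phase activity coefficients from binary interaction parameters. For a single binary pair the working equations can be sketched as follows; the τ values below are hypothetical, not the fitted parameters of the record:

```python
import math

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) of a binary liquid mixture
    from the NRTL model; tau12, tau21 are the dimensionless interaction
    parameters and alpha is the non-randomness factor."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Hypothetical parameter values for illustration:
gamma1, gamma2 = nrtl_gamma(0.3, 1.2, 0.8)
```

In an SLE calculation these coefficients enter the solubility equation for each solute, and the co-saturated curves are the loci where two solids satisfy their solubility equations simultaneously.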
Knowledge of the (solid + liquid) phase equilibrium and ternary phase diagrams would be very valuable for designing and optimizing the solvent crystallization process of 2,3-dichloronitrobenzene and other crystallization processes involving the two ternary systems.

6. Conformational properties of rigid-chain amphiphilic macromolecules: The phase diagram NARCIS (Netherlands) Markov, V. A.; Vasilevskaya, V. V.; Khalatur, P. G.; ten Brinke, G.; Khokhlov, A. R. The coil-globule transition in rigid-chain amphiphilic macromolecules was studied by means of computer simulation, and the phase diagrams for such molecules in the solvent quality-persistence length coordinates were constructed. It was shown that the type of phase diagram depends to a substantial …

7. Unexpectedly normal phase behavior of single homopolymer chains International Nuclear Information System (INIS) Paul, W.; Strauch, T.; Rampf, F.; Binder, K. 2007-01-01 Employing Monte Carlo simulations, we show that the topology of the phase diagram of a single flexible homopolymer chain changes depending on the range of an attractive square-well interaction between the monomers. For a range of attraction larger than a critical value, the equilibrium phase diagram of the single polymer chain and the corresponding polymer solution phase diagram exhibit vapor (swollen coil, dilute solution), liquid (collapsed globule, dense solution), and solid phases. Otherwise, the liquid-vapor transition vanishes from the equilibrium phase diagram for both the single chain and the polymer solution. This change in topology of the phase diagram resembles the behavior known for colloidal dispersions. The interplay of enthalpy and conformational entropy in the polymer case thus can lead to the same topology of phase diagrams as the interplay of enthalpy and translational entropy in simple liquids.

8. Analyzing phase diagrams and phase transitions in networked competing populations Science.gov (United States) Ni, Y.-C.; Yin, H.
P.; Xu, C.; Hui, P. M. 2011-03-01 Phase diagrams exhibiting the extent of cooperation in an evolutionary snowdrift game implemented in different networks are studied in detail. We invoke two independent payoff parameters, unlike the single payoff often used in most previous works, which restricts the two payoffs to vary in a correlated way. In addition to the phase transition points found when a single payoff parameter is used, phase boundaries are found that separate homogeneous phases, consisting of agents using the same strategy, from a mixed phase, consisting of agents using different strategies. Analytic expressions for the phase boundaries are obtained by invoking the ideas of the last surviving patterns and the relative alignments of the spectra of payoff values for agents using different strategies. In a Watts-Strogatz regular network, there exists a re-entrant phenomenon in which the system goes from a homogeneous phase into a mixed phase and re-enters the homogeneous phase as one of the two payoff parameters is varied. The non-trivial phase diagram accompanying this re-entrant phenomenon is quantitatively analyzed. The effects of noise and cooperation in randomly rewired Watts-Strogatz networks are also studied. The transition between a mixed phase and a homogeneous phase is identified as belonging to the directed percolation universality class. The methods used in the present work are applicable to a wide range of problems in competing populations of networked agents.

9. Phase diagram of supercooled water confined to hydrophilic nanopores Science.gov (United States) Limmer, David T.; Chandler, David 2012-07-01 We present a phase diagram for water confined to cylindrical silica nanopores in terms of pressure, temperature, and pore radius. The confining cylindrical wall is hydrophilic and disordered, which has a destabilizing effect on ordered water structure.
The phase diagram for this class of systems is derived from general arguments, with parameters taken from experimental observations and computer simulations, and with assumptions tested by computer simulation. Phase space divides into three regions: a single liquid, a crystal-like solid, and glass. For large pores, radii exceeding 1 nm, water exhibits liquid and crystal-like behaviors, with abrupt crossovers between these regimes. For small pore radii, crystal-like behavior is unstable and water remains amorphous at all non-zero temperatures. At low enough temperatures, these states are glasses. Several experimental results for supercooled water can be understood in terms of the phase diagram we present.

10. Phase diagram distortion from traffic parameter averaging. NARCIS (Netherlands) Stipdonk, H. Toorenburg, J. van & Postema, M. 2010-01-01 Motorway traffic congestion is a major bottleneck for economic growth. Therefore, research on traffic behaviour is carried out in many countries. Although the undersaturated free-flow phase is well described as an almost straight line in a (k,q) phase diagram, congested traffic observations and …

11. Phase diagram of nuclear 'pasta' and its uncertainties in supernova cores International Nuclear Information System (INIS) Sonoda, Hidetaka; Watanabe, Gentaro; Sato, Katsuhiko; Yasuoka, Kenji; Ebisuzaki, Toshikazu 2008-01-01 We examine the model dependence of the phase diagram of inhomogeneous nuclear matter in supernova cores using quantum molecular dynamics (QMD). Inhomogeneous matter includes crystallized matter with nonspherical nuclei ("pasta" phases) and liquid-gas phase-separating nuclear matter. Major differences between the phase diagrams of the QMD models can be explained by the energy of pure neutron matter at low densities and the saturation density of asymmetric nuclear matter. We show that the density dependence of the symmetry energy is also useful for understanding uncertainties of the phase diagram.
We point out that, for typical nuclear models, the mass fraction of the pasta phases in the later stage of the collapsing cores is higher than 10-20%.

12. Determination and thermodynamic modeling of solid–liquid phase equilibrium for 3,5-dichloroaniline in pure solvents and the ternary 3,5-dichloroaniline + 1,3,5-trichlorobenzene + toluene system International Nuclear Information System (INIS) Li, Rongrong; Du, Cunbin; Meng, Long; Han, Shuo; Wang, Jian; Zhao, Hongkun 2016-01-01 Highlights: • Solubility of 3,5-dichloroaniline in seven organic solvents was determined. • Solid–liquid phase equilibrium for the ternary system was measured. • The binary and ternary phase diagrams were constructed. • The phase diagrams were correlated with thermodynamic models. - Abstract: The solid–liquid phase equilibrium data for 3,5-dichloroaniline in n-propanol, isopropanol, n-butanol, isobutanol, toluene, ethyl acetate and acetone at (283.15 to 308.15) K were determined experimentally by gas chromatography under 101.3 kPa. The solubility of 3,5-dichloroaniline decreased in the order ethyl acetate > (acetone, toluene) among ethyl acetate, acetone and toluene, and in the order (isopropanol, n-butanol) > n-propanol > isobutanol among the other solvents. Building on the solubility of 3,5-dichloroaniline in the pure solvents, the solid–liquid phase equilibrium for the ternary mixture of 3,5-dichloroaniline + 1,3,5-trichlorobenzene + toluene was measured by using an isothermal saturation method at three temperatures of 283.15, 293.15, and 303.15 K under 101.3 kPa, and the corresponding isothermal phase diagrams were constructed. Two pure solids were formed in the ternary system at a fixed temperature, pure 3,5-dichloroaniline and pure 1,3,5-trichlorobenzene, and were identified by Schreinemakers’ method of wet residue.
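Solubility-temperature data of this kind are commonly correlated with the modified Apelblat equation, ln x = A + B/T + C·ln T, which is linear in its coefficients and can be fitted by least squares. A minimal sketch on synthetic data follows; the coefficient values are hypothetical, not the published fit:

```python
import numpy as np

def apelblat_x(T, A, B, C):
    """Modified Apelblat equation: ln x = A + B/T + C*ln T (T in K)."""
    return np.exp(A + B / T + C * np.log(T))

def fit_apelblat(T, x):
    """Linear least-squares estimate of the Apelblat coefficients A, B, C
    from solubility data (T in K, x mole fraction)."""
    M = np.column_stack([np.ones_like(T), 1.0 / T, np.log(T)])
    coeffs, *_ = np.linalg.lstsq(M, np.log(x), rcond=None)
    return coeffs

# Synthetic demonstration data over the temperature range of the record
# (hypothetical coefficients, for illustration only):
T = np.linspace(283.15, 308.15, 6)
x = apelblat_x(T, -50.0, 1500.0, 7.5)
A, B, C = fit_apelblat(T, x)
```

Note that 1/T and ln T are nearly collinear over a narrow temperature range, so the individual coefficients are poorly determined even when the fitted curve reproduces the data closely.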
The temperature dependence of the 3,5-dichloroaniline solubility in the pure solvents was correlated by the modified Apelblat equation, the λh equation, the Wilson model and the NRTL model; the ternary solid–liquid phase equilibrium of 3,5-dichloroaniline + 1,3,5-trichlorobenzene + toluene was described by the Wilson and NRTL models. Results showed that the solubility values calculated with these models agreed well with the experimental ones for the studied binary and ternary systems. The solid–liquid equilibrium and the thermodynamic models for the binary and ternary systems can offer the …

13. Clapeyron equation and phase equilibrium properties in higher dimensional charged topological dilaton AdS black holes with a nonlinear source Energy Technology Data Exchange (ETDEWEB) Li, Huai-Fan; Zhao, Hui-Hua; Zhang, Li-Chun; Zhao, Ren [Shanxi Datong University, Institute of Theoretical Physics, Datong (China); Shanxi Datong University, Department of Physics, Datong (China)] 2017-05-15 Using Maxwell's equal area law, we discuss the phase transition of a higher dimensional charged topological dilaton AdS black hole with a nonlinear source. The coexisting region of the two phases is found and we depict the coexistence region in the P-v diagrams. The two-phase equilibrium curves in the P-T diagrams are plotted, and we take the first-order approximation of the volume v in the calculation. To better compare with a general thermodynamic system, the Clapeyron equation is derived for a higher dimensional charged topological black hole with a nonlinear source. The latent heat of the isothermal phase transition is investigated. We also study the effect of the parameters of the black hole on the region of two-phase coexistence. The results show that the black hole may go through a small-large phase transition similar to those of usual non-gravity thermodynamic systems. (orig.) 14.
Experimental determination of the Ta–Ge phase diagram Energy Technology Data Exchange (ETDEWEB) Araújo Pinto da Silva, Antonio Augusto, E-mail: aaaps@ppgem.eel.usp.br [EEL/USP – Escola de Engenharia de Lorena (EEL), Universidade de São Paulo (USP), Pólo Urbo-Industrial Gleba AI-6, 12602-810 Lorena, SP (Brazil); Coelho, Gilberto Carvalho [EEL/USP – Escola de Engenharia de Lorena (EEL), Universidade de São Paulo (USP), Pólo Urbo-Industrial Gleba AI-6, 12602-810 Lorena, SP (Brazil); UniFoa – Centro Universitário de Volta Redonda, Núcleo de Pesquisa, Campus Três Poços, Avenida Paulo Erlei Alves Abrantes, 1325, Bairro Três Poços, 27240-560 Volta Redonda, RJ (Brazil); Nunes, Carlos Angelo; Suzuki, Paulo Atsushi [EEL/USP – Escola de Engenharia de Lorena (EEL), Universidade de São Paulo (USP), Pólo Urbo-Industrial Gleba AI-6, 12602-810 Lorena, SP (Brazil); Fiorani, Jean Marc; David, Nicolas; Vilasi, Michel [Université de Lorraine, Institut Jean Lamour, Faculté des Sciences et Technologies, BP 70239, F-54506 Vandoeuvre-lès-Nancy (France)] 2013-11-05 Highlights: • Ta–Ge phase diagram proposed for the first time. • The phase αTa₅Ge₃ was not observed in the samples investigated in this work. • Three eutectic reactions were determined, with liquid compositions at 20.5, 28.0 and 97.0 at.% Ge. -- Abstract: In the present work, the Ta–Ge phase diagram has been studied experimentally, since no Ta–Ge phase diagram was previously available in the literature. The samples were prepared via arc melting and characterized by Scanning Electron Microscopy (SEM), Energy Dispersive Spectroscopy (EDS) and X-ray Diffraction (XRD). The intermetallic phases βTa₃Ge, αTa₃Ge, βTa₅Ge₃ and TaGe₂ were confirmed in this system. Three eutectic reactions were determined, with liquid compositions at 20.5, 28.0 and 97.0 at.% Ge. The phases βTa₃Ge and βTa₅Ge₃ solidify congruently, while TaGe₂ is formed through a peritectic transformation.
The temperature of the Ta-rich eutectic (L ↔ Ta(ss) + βTa₃Ge) was measured by the Pirani-Alterthum method at 2440 °C, and that of the Ge-rich eutectic (L ↔ TaGe₂ + Ge(ss)) by DTA at 937 °C.

15. Phase equilibrium of the Gd-Fe-Co system at 873 K International Nuclear Information System (INIS) Huang Jinli; Zhong Haichang; Xia Xiuwen; He Wei; Zhu Jinming; Deng Jianqiu; Zhuang Yinghong 2009-01-01 Phase equilibrium of the ternary Gd-Fe-Co system at 873 K was investigated by using the X-ray diffraction technique, electron probe microanalysis, metallographic analysis and differential thermal analysis. The 873 K isothermal section of the phase diagram of the Gd-Fe-Co ternary system consists of 11 single-phase regions, 16 two-phase regions and 6 three-phase regions. Three pairs of corresponding compounds of Gd-Co and Gd-Fe, i.e., Gd₂Co₁₇ and Gd₂Fe₁₇, GdCo₃ and GdFe₃, GdCo₂ and GdFe₂, form continuous series of solid solutions. The compound Gd₂Co₇₋ₓFeₓ was found to have a broad solubility range from 0 to 31 at.% Fe. The maximum solubility of Co in Gd₆Fe₂₃ is about 7 at.% Co. At 873 K, the maximum solubilities of Fe in Gd₃Co and Gd₄Co₃ are about 3 and 1 at.% Fe, respectively. No ternary compound was found in any of the ternary alloy samples.

16. Theoretical Studies of Aqueous Systems above 25 °C. 1. Fundamental Concepts for Equilibrium Diagrams and some General Features of the Water System Energy Technology Data Exchange (ETDEWEB) Lewis, Derek 1971-09-15 The illustration of thermodynamic data on aqueous systems is discussed, and diagrams are described that are useful for bringing together the large numbers of data that are relevant to technological problems such as corrosion, mass-transport and deposition.
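Equilibrium diagrams of the pe-pH (Pourbaix) type described in this record are built from Nernst-law boundary lines. As a minimal illustration, using the standard 25 °C value pe° = 20.75 for the O₂/H₂O couple (not the elevated-temperature constants estimated in the report), the stability field of water itself can be sketched as:

```python
def water_stability_window(pH, pe0_O2=20.75):
    """pe interval over which water is thermodynamically stable at 25 C
    and 1 atm: the lower bound is set by reduction to H2 (pe = -pH), the
    upper bound by oxidation to O2 (pe = pe0_O2 - pH)."""
    return (-pH, pe0_O2 - pH)

# In neutral solution the window runs from pe = -7 to pe = 13.75:
lo, hi = water_stability_window(7.0)
```

Both boundaries have slope -1 in pe-pH coordinates, so the window keeps a constant width of 20.75 pe units at 25 °C; at elevated temperatures the constants, and hence the window, change.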
Two kinds of logarithmic equilibrium diagram are particularly useful, namely, diagrams depicting the variation with pH or pe of the concentrations of ionic species relative to that of a chosen reference ion, and diagrams depicting the fields of pH and pe conditions in which the various species in any given system predominate or are stable. Such diagrams clearly and concisely illustrate the data and greatly simplify the comparison of the states of a system at different temperatures. Estimates of the equilibrium constants for the redox and the acid-base dissociation of water up to 375 °C are reported, and some general features of aqueous systems at elevated temperatures are discussed.

17. Phase stabilities at a glance: Stability diagrams of nickel dipnictides International Nuclear Information System (INIS) Bachhuber, F.; Rothballer, J.; Weihrich, R.; Söhnel, T. 2013-01-01 In the course of the recent advances in chemical structure prediction, a straightforward type of diagram to evaluate phase stabilities is presented based on an expedient example. Crystal structures and energetic stabilities of the dipnictides NiPn₂ (Pn = N, P, As, Sb, Bi) are systematically investigated by first-principles calculations within the framework of density functional theory, using the generalized gradient approximation to treat exchange and correlation. These dipnictides show remarkable polymorphism that is not yet understood systematically and offers room for the discovery of new phases. Relationships between the structures concerned, including the marcasite, the pyrite, the arsenopyrite/CoSb₂, and the NiAs₂ types, are highlighted by means of common structural fragments. Electronic stabilities of experimentally known and related AB₂ structure types are presented graphically in so-called stability diagrams. Additionally, competing binary phases are taken into consideration in the diagrams to evaluate the stabilities of the title compounds with respect to decomposition.
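The stability evaluation described here amounts to two comparisons: picking the lowest-energy polymorph among candidate AB₂ structure types, and checking that it lies below the summed energy of competing phases. A toy version with hypothetical energies (not the published DFT values) can be sketched as:

```python
def most_stable(energies):
    """Structure type with the lowest total energy."""
    return min(energies, key=energies.get)

def stable_against_decomposition(e_compound, e_products):
    """True if the compound lies below the summed energy of a candidate
    decomposition with the same overall stoichiometry."""
    return e_compound < sum(e_products)

# Hypothetical total energies per AB2 formula unit (eV); illustration only:
polymorphs = {"marcasite": -12.41, "pyrite": -12.38, "arsenopyrite": -12.35}
ground_state = most_stable(polymorphs)
# Candidate decomposition AB2 -> A + 2 B, with hypothetical reference energies:
is_stable = stable_against_decomposition(polymorphs[ground_state], [-5.0, -3.5, -3.5])
```

A full treatment compares against every linear combination of competing phases (a convex-hull construction over composition), but the pairwise check above conveys the idea behind the stability diagrams.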
The main purpose of the stability diagrams is the introduction of an image that enables the estimation of phase stabilities at a single glance. Beyond that, some of the energetically favored structure types can be identified as potential new phases.

18. Matrix model approximations of fuzzy scalar field theories and their phase diagrams Energy Technology Data Exchange (ETDEWEB) Tekel, Juraj [Department of Theoretical Physics, Faculty of Mathematics, Physics and Informatics, Comenius University, Mlynska Dolina, Bratislava, 842 48 (Slovakia)] 2015-12-29 We present an analysis of two different approximations to the scalar field theory on the fuzzy sphere, a nonperturbative and a perturbative one, both of which are multitrace matrix models. We show that the former reproduces a phase diagram with correct features, in qualitative agreement with previous numerical studies, and that the latter gives a phase diagram with features not expected in the phase diagram of the field theory.

19. Equations of State and Phase Diagrams of Ammonia Science.gov (United States) Glasser, Leslie 2009-01-01 We present equations of state relating the phases and a three-dimensional phase diagram for ammonia with its solid, liquid, and vapor phases, based on fitted authentic experimental data and including recent information on the high-pressure solid phases. This presentation follows similar articles on carbon dioxide and water published in this…

20. Phase diagrams for surface alloys DEFF Research Database (Denmark) Christensen, Asbjørn; Ruban, Andrei; Stoltze, Per 1997-01-01 We discuss surface alloy phases and their stability based on surface phase diagrams constructed from the surface energy as a function of the surface composition. We show that in the simplest cases of pseudomorphic overlayers there are four generic classes of systems, characterized by the sign … is based on density-functional calculations using the coherent-potential approximation and on effective-medium theory.
We give self-consistent density-functional results for the segregation energy and surface mixing energy for all combinations of the transition and noble metals. Finally, we discuss …

1. Magnetic phase diagram of a nanocone International Nuclear Information System (INIS) Suarez, O; Vargas, P; Escrig, J; Landeros, P; Albir, D; Laroze, D 2008-01-01 In this work we analyze the magnetic properties of truncated conical nanoparticles. Based on the continuous magnetic model, we find expressions for the total energy in three different magnetic configurations. Finally, we calculate the magnetic phase diagram as a function of the geometrical parameters.

2. Magnetic phase diagram of a nanocone Energy Technology Data Exchange (ETDEWEB) Suarez, O; Vargas, P [Departamento de Fisica, Universidad Tecnica Federico Santa María, P. O. Box 110-V, Valparaiso (Chile)]; Escrig, J; Landeros, P; Albir, D [Universidad de Santiago de Chile, Departamento de Fisica, Casilla 307, Correo 2, Santiago (Chile)]; Laroze, D [Instituto de Fisica, Pontificia Universidad Catolica de Valparaiso, P. O. Box 4059, Valparaiso (Chile)], E-mail: omar.suarez@postgrado.usm.cl 2008-11-01 In this work we analyze the magnetic properties of truncated conical nanoparticles. Based on the continuous magnetic model, we find expressions for the total energy in three different magnetic configurations. Finally, we calculate the magnetic phase diagram as a function of the geometrical parameters.

3. Phase diagram of the SnTe-CdSe cross-section of the SnTe + CdSe ⇌ SnSe + CdTe ternary reciprocal system International Nuclear Information System (INIS) Dubrovin, I.V.; Budennaya, L.D.; Mizetskaya, I.B.; Sharkina, Eh.V. 1986-01-01 The phase equilibrium diagram of the SnTe-CdSe cross-section of the Sn,Cd‖Te,Se ternary reciprocal system is investigated using the methods of differential thermal, X-ray phase, and microstructural analyses.
The maximum extent of the SnTe-based solid solutions corresponds to approximately 14 mol.% CdSe at 1050 K and approximately 3 mol.% CdSe at 670 K. The region of CdSe-based solid solutions corresponds to less than 1 mol.% SnTe at room temperature. The SnTe-CdSe cross-section is not a quasibinary one. Equilibrium is shifted to the left in the SnTe + CdSe ⇌ SnSe + CdTe reciprocal system.

4. The TbBr₃–LiBr binary system: Experimental thermodynamic investigation and assessment of phase diagram International Nuclear Information System (INIS) Rycerz, L.; Gong, W.; Gaune-Escard, M. 2013-01-01 Highlights: ► DSC measurements for the (LiBr + TbBr₃) system. ► Congruently melting Li₃TbBr₆ and incongruently melting Li₅TbBr₈ compounds. ► Thermodynamic description of the liquid phase in the (LiBr + TbBr₃) system. ► Assessment with a two-sublattice ionic solution model. - Abstract: DSC was used to study the phase equilibrium in the TbBr₃–LiBr binary system. The results obtained provided a basis for constructing the phase diagram of this system. It exhibits two compounds: Li₅TbBr₈, which decomposes in the solid state at 611 K, and Li₃TbBr₆, which melts congruently at 785 K with the related enthalpy 59.1 kJ·mol⁻¹. The binary LiBr–TbBr₃ system was then optimized using the available experimental information on the phase diagram and thermodynamic properties. A two-sublattice ionic solution model, (Li⁺)P:(Br⁻, TbBr₆³⁻, TbBr₃)Q, was adopted to describe the liquid phase. The present assessment of the binary LiBr–TbBr₃ system was in good agreement with the corresponding experimental data and confirmed their consistency. 5.
Separating NaCl and AlCl₃·6H₂O Crystals from Acidic Solution Assisted by the Non-Equilibrium Phase Diagram of the AlCl₃-NaCl-H₂O(-HCl) Salt-Water System at 353.15 K Directory of Open Access Journals (Sweden) Huaigang Cheng 2017-08-01 Full Text Available Extracting AlCl₃·6H₂O from acid leaching solution through crystallization is one of the key processes in extracting aluminum from fly ash, coal gangue and other industrial solid wastes. However, the products obtained usually have low purity, and a key problem is the lack of accurate phase equilibrium data. This paper presents the non-equilibrium phase diagrams of the AlCl₃-NaCl-H₂O(-HCl) salt-water system, the main components of the acid leaching solution obtained through a sodium-assisted activation hydrochloric acid leaching process, under continuous heating and evaporation conditions. The ternary system was of a simple eutonic type under different acidities. There were three crystallization regions: AlCl₃·6H₂O, NaCl, and the AlCl₃·6H₂O/NaCl mixture. The phase diagram was used to optimize the crystallization process of AlCl₃·6H₂O and NaCl. A process was designed to evaporate and remove NaCl in the first stage of the evaporation process, and then to continue evaporating and crystallizing AlCl₃·6H₂O after solid-liquid separation. The purities of the final salt products were 99.12% for NaCl and up to 97.35% for AlCl₃·6H₂O.

6. Nonequilibrium phase diagram of a one-dimensional quasiperiodic system with a single-particle mobility edge Science.gov (United States) Purkayastha, Archak; Dhar, Abhishek; Kulkarni, Manas 2017-11-01 We investigate and map out the nonequilibrium phase diagram of a generalization of the well-known Aubry-André-Harper (AAH) model. This generalized AAH (GAAH) model is known to have a single-particle mobility edge, which also has an additional self-dual property akin to that of the critical point of the AAH model.
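The localization physics underlying this record can be probed numerically with a few lines of diagonalization. The sketch below uses the standard AAH chain (not the GAAH generalization of the paper) and the inverse participation ratio (IPR), which scales as 1/N for extended states and stays O(1) for localized ones:

```python
import numpy as np

def aah_mean_ipr(n_sites, lam, b=(np.sqrt(5.0) - 1.0) / 2.0, phase=0.0):
    """Mean inverse participation ratio over all eigenstates of the
    Aubry-Andre-Harper chain with unit hopping and quasiperiodic on-site
    potential 2*lam*cos(2*pi*b*n + phase), open boundaries."""
    n = np.arange(n_sites)
    H = np.diag(2.0 * lam * np.cos(2.0 * np.pi * b * n + phase))
    hop = -np.ones(n_sites - 1)
    H += np.diag(hop, 1) + np.diag(hop, -1)
    _, vecs = np.linalg.eigh(H)          # columns are normalized eigenstates
    return float(np.mean(np.sum(vecs**4, axis=0)))

# The AAH self-dual point is lam = 1: all states are extended below it and
# localized above it (the GAAH model replaces this by a mobility edge):
ipr_extended = aah_mean_ipr(233, 0.5)
ipr_localized = aah_mean_ipr(233, 2.0)
```

Repeating the scan at fixed system sizes and varying lam reproduces the sharp change in IPR scaling at the transition that the transport calculations of the record resolve in far greater detail.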
By calculating the population imbalance, we get hints of a rich phase diagram. We also find a fascinating connection between single-particle wave functions near the mobility edge of the GAAH model and the wave functions of the critical AAH model. By placing this model far from equilibrium with the aid of two baths, we investigate the open-system transport via system-size scaling of the nonequilibrium steady-state (NESS) current, calculated by the fully exact nonequilibrium Green's function (NEGF) formalism. The critical point of the AAH model now generalizes to a 'critical' line separating regions of ballistic and localized transport. Like at the critical point of the AAH model, the current scales subdiffusively with system size on the 'critical' line (I ~ N^(−2±0.1)). However, remarkably, the scaling exponent on this line is distinctly different from that obtained for the critical AAH model (where I ~ N^(−1.4±0.05)). All these results can be understood from the above-mentioned connection between states near the mobility edge of the GAAH model and those of the critical AAH model. A very interesting high-temperature nonequilibrium phase diagram of the GAAH model emerges from our calculations.

7. On the question of calculation methods of phase diagrams International Nuclear Information System (INIS) Vasil'ev, M.V. 1983-01-01 A technique for determining the interaction parameters of the components of binary alloys is suggested. The U-Mo and Cu-Al systems are used as examples, with the aid of experimental state diagrams. It is shown that a search for new regularities is necessary for the analytical description of state diagrams and for forecasting the shape of phase-equilibrium curves in real systems.
Combining experimental investigations, aimed at the reliable determination of supporting points, with the forecasting capability of typical equations can considerably decrease the volume of experimental work when constructing state diagrams, or when re-determining existing diagrams more reliably with more advanced methods of investigation. The translation of state diagrams from geometric to analytical language with the use of typical equations opens up new possibilities for establishing a compact information bank for state diagrams.

8. Phase Diagram of the Ethylene Glycol-Dimethylsulfoxide System Science.gov (United States) Solonina, I. A.; Rodnikova, M. N.; Kiselev, M. R.; Khoroshilov, A. V.; Shirokova, E. V. 2018-05-01 The phase diagram of the ethylene glycol (EG)-dimethylsulfoxide (DMSO) system is studied in the temperature range of +25 to -140°C via differential scanning calorimetry. It is established that the EG-DMSO system is characterized by strong overcooling of the liquid phase, a glass transition at -125°C, and the formation of a compound with the composition DMSO · 2EG. This compound has a melting temperature of -60°C, which is close to those of the neighboring eutectics (-75 and -70°C). A drop in the baseline was observed in the temperature range of 8 to -5°C at DMSO concentrations of 5-50 mol %, indicating the existence of a phase-separation area in the investigated system. The data obtained are compared to the literature data on the H₂O-DMSO phase diagram. 9.
Multicritical phase diagrams of the ferromagnetic spin-3/2 Blume-Emery-Griffiths model with repulsive biquadratic coupling including metastable phases: The cluster variation method and the path probability method with the point distribution Energy Technology Data Exchange (ETDEWEB) Keskin, Mustafa [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)], E-mail: keskin@erciyes.edu.tr; Canko, Osman [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)] 2008-01-15 We study the thermal variations of the ferromagnetic spin-3/2 Blume-Emery-Griffiths (BEG) model with repulsive biquadratic coupling by using the lowest approximation of the cluster variation method (LACVM) in the absence and presence of an external magnetic field. We obtain metastable and unstable branches of the order parameters besides the stable branches, and the phase transitions of these branches are investigated extensively. The classification of the stable, metastable and unstable states is made by comparing the free energy values of these states. We also study the dynamics of the model by using the path probability method (PPM) with the point distribution, in order to make sure that we find and define the metastable and unstable branches of the order parameters completely and correctly. We present the metastable phase diagrams in addition to the equilibrium phase diagrams in the (kT/J, K/J) and (kT/J, D/J) planes. It is found that the metastable phase diagrams always exist at low temperatures, which is consistent with experimental and theoretical works. 10.
Multicritical phase diagrams of the ferromagnetic spin-3/2 Blume-Emery-Griffiths model with repulsive biquadratic coupling including metastable phases: The cluster variation method and the path probability method with the point distribution International Nuclear Information System (INIS) Keskin, Mustafa; Canko, Osman 2008-01-01 We study the thermal variations of the ferromagnetic spin-3/2 Blume-Emery-Griffiths (BEG) model with repulsive biquadratic coupling by using the lowest approximation of the cluster variation method (LACVM) in the absence and presence of an external magnetic field. We obtain metastable and unstable branches of the order parameters besides the stable branches, and the phase transitions of these branches are investigated extensively. The classification of the stable, metastable and unstable states is made by comparing the free energy values of these states. We also study the dynamics of the model by using the path probability method (PPM) with the point distribution, in order to make sure that we find and define the metastable and unstable branches of the order parameters completely and correctly. We present the metastable phase diagrams in addition to the equilibrium phase diagrams in the (kT/J, K/J) and (kT/J, D/J) planes. It is found that the metastable phase diagrams always exist at low temperatures, which is consistent with experimental and theoretical works.

11. Liquid-liquid equilibrium of water + PEG 8000 + magnesium sulfate or sodium sulfate aqueous two-phase systems at 35°C: experimental determination and thermodynamic modeling Directory of Open Access Journals (Sweden) B. D. Castro 2005-09-01 Full Text Available Liquid-liquid extraction using aqueous two-phase systems is a highly efficient technique for the separation and purification of biomolecules, due to the mild properties of both liquid phases.
Reliable data on the phase behavior of these systems are essential for the design and operation of new separation processes; several authors have reported phase diagrams for polymer-polymer systems, but data on polymer-salt systems are still relatively scarce. In this work, experimental liquid-liquid equilibrium data on water + polyethylene glycol 8000 + magnesium sulfate and water + polyethylene glycol 8000 + sodium sulfate aqueous two-phase systems were obtained at 35°C. Both equilibrium phases were analyzed by lyophilization and ashing. Experimental results were correlated with a mass-fraction-based NRTL activity coefficient model. New interaction parameters were estimated with the Simplex method. The mean deviations between the experimental and calculated compositions in both equilibrium phases are about 2%. 12. Phase diagram of a Lennard-Jones solid International Nuclear Information System (INIS) Choi, Y.; Ree, T.; Ree, F.H. 1993-01-01 A phase diagram of a Lennard-Jones solid at kT/ε≥0.8 is constructed using our recent perturbation theory. It shows the stability of the face-centered-cubic phase except within a small pressure and temperature domain, where the hexagonal close-packed phase may occur. The theory predicts anharmonic contributions to the Helmholtz free energy (important to the crystal stability) in good agreement with Monte Carlo data 13. Non-equilibrium phase transitions in complex plasma International Nuclear Information System (INIS) Suetterlin, K R; Raeth, C; Ivlev, A V; Thomas, H M; Khrapak, S; Zhdanov, S; Rubin-Zuzic, M; Morfill, G E; Wysocki, A; Loewen, H; Goedheer, W J; Fortov, V E; Lipaev, A M; Molotkov, V I; Petrov, O F 2010-01-01 Complex plasma, being the 'plasma state of soft matter', is especially suitable for investigations of non-equilibrium phase transitions. Non-equilibrium phase transitions can manifest in dissipative structures or self-organization. Two specific examples are lane formation and phase separation.
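In the Lennard-Jones entry above, the reduced temperature kT/ε measures thermal energy against the depth of the pair-potential well. As a minimal reminder of that interaction (not the perturbation theory of the paper), the potential and its minimum can be written down directly:

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
    The reduced temperature kT/epsilon quoted in the abstract is the thermal
    energy in units of the well depth epsilon."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential minimum sits at r_min = 2**(1/6)*sigma with depth -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
```

The potential crosses zero at r = σ, is repulsive below it, and has its minimum −ε at r_min ≈ 1.122 σ.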
Using the permanent microgravity laboratory PK-3 Plus, operating onboard the International Space Station, we performed unique experiments with binary mixtures of complex plasmas that showed both lane formation and phase separation. These observations have been augmented by comprehensive numerical and theoretical studies. In this paper we present an overview of our most important results. In addition, we put our results in the context of research on complex plasmas, binary systems, and non-equilibrium phase transitions. Necessary and promising future complex plasma experiments on phase separation and lane formation are briefly discussed. 14. Color superconductivity. Phase diagrams and Goldstone bosons in the color-flavor locked phase Energy Technology Data Exchange (ETDEWEB) Kleinhaus, Verena 2009-04-29 The phase diagram of strongly interacting matter is studied with great experimental and theoretical effort and is one of the most fascinating research areas in modern particle physics. It is believed that color superconducting phases, in which quarks form Cooper pairs, appear at very high densities and low temperatures. Such phases could appear in the cores of neutron stars. In this work color superconducting phases are studied within the Nambu-Jona-Lasinio model. First of all, the phase diagram of neutral matter in beta equilibrium is calculated for two different diquark couplings. To this end, we determine the dynamical quark masses self-consistently together with the order parameters of color superconductivity. The interplay between neutrality and quark masses results in an interesting phase structure, in particular for the smaller diquark coupling. In the following, we additionally include a conserved lepton number to map the situation in the first few seconds of the evolution of a protoneutron star, when neutrinos are trapped. This has a large influence on the phase structure and favors the 2SC phase over the CFL phase.
In the second part of this work we concentrate on the CFL phase, which is characterized by a special symmetry breaking pattern. The properties of the resulting nine pseudoscalar Goldstone bosons (GB) are studied by solving the Bethe-Salpeter equation for quark-quark scattering. The GB are the lowest-lying excitations in the CFL phase and therefore play an important role in the thermodynamics of the system. The properties of the GB can also be described by the low-energy effective theory (LEET) for the CFL phase, where the respective low-energy constants are derived for asymptotically high densities at which the strong force is weak and can be treated perturbatively. Our aim is to compare our results with these predictions, on the one hand to check our model in the weak-coupling limit and on the other hand to derive information about 15. Color superconductivity: Phase diagrams and Goldstone bosons in the color-flavor locked phase International Nuclear Information System (INIS) Kleinhaus, Verena 2009-01-01 The phase diagram of strongly interacting matter is studied with great experimental and theoretical effort and is one of the most fascinating research areas in modern particle physics. It is believed that color superconducting phases, in which quarks form Cooper pairs, appear at very high densities and low temperatures. Such phases could appear in the cores of neutron stars. In this work color superconducting phases are studied within the Nambu-Jona-Lasinio model. First of all, the phase diagram of neutral matter in beta equilibrium is calculated for two different diquark couplings. To this end, we determine the dynamical quark masses self-consistently together with the order parameters of color superconductivity. The interplay between neutrality and quark masses results in an interesting phase structure, in particular for the smaller diquark coupling.
In the following, we additionally include a conserved lepton number to map the situation in the first few seconds of the evolution of a protoneutron star, when neutrinos are trapped. This has a large influence on the phase structure and favors the 2SC phase over the CFL phase. In the second part of this work we concentrate on the CFL phase, which is characterized by a special symmetry breaking pattern. The properties of the resulting nine pseudoscalar Goldstone bosons (GB) are studied by solving the Bethe-Salpeter equation for quark-quark scattering. The GB are the lowest-lying excitations in the CFL phase and therefore play an important role in the thermodynamics of the system. The properties of the GB can also be described by the low-energy effective theory (LEET) for the CFL phase, where the respective low-energy constants are derived for asymptotically high densities at which the strong force is weak and can be treated perturbatively. Our aim is to compare our results with these predictions, on the one hand to check our model in the weak-coupling limit and on the other hand to derive information about 16. The phase diagram of water at negative pressures: virtual ices. Science.gov (United States) Conde, M M; Vega, C; Tribello, G A; Slater, B 2009-07-21 The phase diagram of water at negative pressures, as obtained from computer simulations for two models of water, TIP4P/2005 and TIP5P, is presented. Several solid structures with lower densities than ice Ih, so-called virtual ices, were considered as possible candidates to occupy the negative-pressure region of the phase diagram of water, in particular the empty hydrate structures sI, sII, and sH and another recently proposed low-density ice structure. The relative stabilities of these structures at 0 K were determined using empirical water potentials and density functional theory calculations.
By performing free energy calculations and Gibbs-Duhem integration, the phase diagram of TIP4P/2005 was determined at negative pressures. The empty hydrates sII and sH appear to be the stable solid phases of water at negative pressures. The phase boundary between ice Ih and the sII clathrate occurs at moderate negative pressures, while at large negative pressures sH becomes the most stable phase. This behavior is in reasonable agreement with what is observed in density functional theory calculations. 17. Phase Diagrams of Strongly Interacting Theories DEFF Research Database (Denmark) Sannino, Francesco 2010-01-01 We summarize the phase diagrams of SU, SO and Sp gauge theories as functions of the number of flavors, colors, and matter representation, as well as those of phenomenologically relevant chiral gauge theories such as the Bars-Yankielowicz and the generalized Georgi-Glashow models. We finally report... 18. Determination of the quaternary phase diagram of the water-ethylene glycol-sucrose-NaCl system and a comparison between two theoretical methods for synthetic phase diagrams. Science.gov (United States) Han, Xu; Liu, Yang; Critser, John K 2010-08-01 Characterization of the thermodynamic properties of multi-solute aqueous solutions is of critical importance for biological and biochemical research. For example, the phase diagrams of aqueous systems containing salts, saccharides, and plasma-membrane-permeating solutes are indispensable in the fields of cryobiology and pharmacology. However, only a few ternary phase diagrams are currently available for these systems. In this study, an auto-sampler differential scanning calorimeter (DSC) was used to determine the quaternary phase diagram of the water-ethylene glycol-sucrose-NaCl system. To improve the accuracy of melting point measurement, a "mass-redemption" method was also applied for the DSC technique.
Based on the analyses of these experimental data, a comparison was made between two practical approaches for generating phase diagrams of multi-solute solutions from those of single-solute solutions: the summation of cubic polynomial melting point equations versus the use of osmotic virial equations with cross coefficients. The calculated values of the model standard deviations suggested that both methods are satisfactory for characterizing this quaternary system. (c) 2010 Elsevier Inc. All rights reserved. 19. Phase diagram Fe-Sn-Zr. New experimental results International Nuclear Information System (INIS) Nieva, N; Jimenez, M.J; Gomez, A; Corvalan Moya, C; Arias, D 2012-01-01 Zr-based alloys are widely used in the nuclear industry due to their specific characteristics. Information on the phase diagrams of the ternary Fe-Zr-Sn system is scarce. In this work we experimentally investigate the central region and the adjacent Fe-Sn binary region of the Fe-Sn-Zr Gibbs triangle at a temperature of 800 °C. For the experimental work, a set of seven ternary alloys was designed, produced, and examined by different complementary techniques. Two types of heat treatments were performed: one of medium duration and another of long duration. We present a new proposal for the 800 °C isothermal section. The boundaries of the identified phases and the fields of one, two, and three phases are indicated in the diagram 20. T-P Phase Diagram of Nitrogen at High Pressures Science.gov (United States) Algul, G.; Enginer, Y.; Yurtseven, H. 2018-05-01 Employing a mean field model, the T-P phase diagram of molecular nitrogen is calculated at high pressures up to 200 GPa. Experimental data from the literature are used to fit a quadratic function in T and P, describing the phase line equations which have been derived using the mean field model studied here for N2, and the fitted parameters are determined.
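The nitrogen entry above fits quadratic phase-line equations P(T) to coexistence data. As a minimal illustration of such a quadratic phase boundary, the sketch below determines the coefficients exactly from three synthetic (T, P) points by Cramer's rule, rather than by the least-squares fit to experimental data used in the paper; the numbers are illustrative, not nitrogen data.

```python
def quadratic_phase_line(points):
    """Fit P(T) = a + b*T + c*T**2 exactly through three (T, P) points on a
    phase boundary, returning (a, b, c).  Solves the 3x3 Vandermonde system
    by Cramer's rule; purely illustrative of a quadratic phase-line equation."""
    (t1, p1), (t2, p2), (t3, p3) = points

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[1.0, t1, t1 * t1], [1.0, t2, t2 * t2], [1.0, t3, t3 * t3]]
    d = det3(A)
    coeffs = []
    for j in range(3):                     # replace column j with the P values
        M = [row[:] for row in A]
        for i, p in zip(range(3), (p1, p2, p3)):
            M[i][j] = p
        coeffs.append(det3(M) / d)
    return tuple(coeffs)                   # (a, b, c)
```

With more than three data points one would instead minimize the squared residuals, but the recovered coefficients play the same role as the fitted parameters mentioned in the abstract.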
Our model study shows that the observed T-P phase diagram can be described satisfactorily for the first-order transitions between the phases at low as well as high pressures in nitrogen. Some thermodynamic quantities can also be predicted as functions of temperature and pressure from the mean field model studied here, and they can be compared with the experimental data. 1. Exact ground-state phase diagrams for the spin-3/2 Blume-Emery-Griffiths model Energy Technology Data Exchange (ETDEWEB) Canko, Osman; Keskin, Mustafa [Department of Physics, Erciyes University, 38039 Kayseri (Turkey); Deviren, Bayram [Institute of Science, Erciyes University, 38039 Kayseri (Turkey)], E-mail: keskin@erciyes.edu.tr 2008-05-15 We have calculated the exact ground-state phase diagrams of the spin-3/2 Ising model using the method that was proposed and applied to the spin-1 Ising model by Dublenych (2005 Phys. Rev. B 71 012411). The calculated exact ground-state phase diagrams on the diatomic and triangular lattices with the nearest-neighbor (NN) interaction are presented in this paper. We have obtained seven and 15 topologically different ground-state phase diagrams for J>0 and J<0, respectively, on the diatomic lattice, and have found the conditions for the existence of uniform and intermediate or non-uniform phases. We have also constructed the exact ground-state phase diagrams of the model on the triangular lattice and found 20 and 59 fundamental phase diagrams for J>0 and J<0, respectively; the conditions for the existence of uniform and intermediate phases have also been found. 2. Non-equilibrium phase transition International Nuclear Information System (INIS) Mottola, E.; Cooper, F.M.; Bishop, A.R.; Habib, S.; Kluger, Y.; Jensen, N.G. 1998-01-01 This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL).
Non-equilibrium phase transitions play a central role in a very broad range of scientific areas, ranging from nuclear, particle, and astrophysics to condensed matter physics and the material and biological sciences. The aim of this project was to explore the path to a deeper and more fundamental understanding of the common physical principles underlying the complex real-time dynamics of phase transitions. The main emphasis was on the development of general theoretical tools to deal with non-equilibrium processes, and of numerical methods robust enough to capture the time-evolving structures that occur in actual experimental situations. Specific applications to Laboratory multidivisional efforts in relativistic heavy-ion physics (transition to a new phase of nuclear matter consisting of a quark-gluon plasma) and layered high-temperature superconductors (critical currents and flux flow at the National High Magnetic Field Laboratory) were undertaken 3. The effective QCD phase diagram and the critical end point Directory of Open Access Journals (Sweden) Alejandro Ayala 2015-08-01 Full Text Available We study the QCD phase diagram on the temperature T and quark chemical potential μ plane, modeling the strong interactions with the linear sigma model coupled to quarks. The phase transition line is found from the effective potential at finite T and μ, taking into account the plasma screening effects. We find the location of the critical end point (CEP) to be (μCEP/Tc, TCEP/Tc) ∼ (1.2, 0.8), where Tc is the (pseudo)critical temperature for the crossover phase transition at vanishing μ. This location lies within the region found by lattice-inspired calculations. The results show that in the linear sigma model, the CEP's location in the phase diagram is, as expected, determined solely through chiral symmetry breaking.
The same is likely to be true for all other models which do not exhibit confinement, provided the proper treatment of the plasma infrared properties for the description of chiral symmetry restoration is implemented. Similarly, we also expect these corrections to be substantially relevant in the QCD phase diagram. 4. Experimental determination and thermodynamic modeling of phase equilibrium and protein partitioning in aqueous two-phase systems containing biodegradable salts International Nuclear Information System (INIS) Perez, Brenda; Malpiedi, Luciana Pellegrini; Tubío, Gisela; Nerli, Bibiana; Alcântara Pessôa Filho, Pedro de 2013-01-01 Highlights: ► Binodal data of (water + polyethylene glycol + sodium succinate) systems are reported. ► The Pitzer model describes the phase equilibrium of systems formed by polyethylene glycol and biodegradable salts satisfactorily. ► This simple thermodynamic framework was able to predict the partitioning behaviour of model proteins acceptably well. - Abstract: Phase diagrams of sustainable aqueous two-phase systems (ATPSs) formed by polyethylene glycols (PEGs) of different average molar masses (4000, 6000, and 8000) and sodium succinate are reported in this work. Partition coefficients (Kp) of eight model proteins (bovine serum albumin, catalase, beta-lactoglobulin, alpha-amylase, lysozyme, pepsin, urease, and trypsin) were experimentally determined in these systems and in ATPSs formed by the former PEGs and other biodegradable sodium salts: citrate and tartrate. An extension of the Pitzer model comprising long- and short-range contributions to the excess Gibbs free energy was used to describe the (liquid + liquid) equilibrium. Comparison between experimental and calculated tie-line data showed mean deviations always lower than 3%, indicating a good correlation. The partition coefficients were modeled using the same thermodynamic approach. Predicted and experimental partition coefficients correlated quite successfully.
Mean deviations were found to be lower than the experimental uncertainty for most of the assayed proteins. 5. Critical point analysis of phase envelope diagram International Nuclear Information System (INIS) Soetikno, Darmadi; Siagian, Ucok W. R.; Kusdiantara, Rudy; Puspita, Dila; Sidarto, Kuntjoro A.; Soewono, Edy; Gunawan, Agus Y. 2014-01-01 A phase diagram or phase envelope is a relation between temperature and pressure that shows the conditions of equilibrium between the different phases of chemical compounds, mixtures of compounds, and solutions. The phase diagram is an important issue in chemical thermodynamics and hydrocarbon reservoirs. It is very useful for process simulation, hydrocarbon reactor design, and petroleum engineering studies. It is constructed from the bubble line, the dew line, and the critical point. The bubble line and dew line are composed of bubble points and dew points, respectively. The bubble point is the first point at which gas is formed when a liquid is heated. Meanwhile, the dew point is the first point at which liquid is formed when a gas is cooled. The critical point is the point at which all of the properties of the gas and liquid phases, such as temperature, pressure, amount of substance, and others, become equal. The critical point is very useful in fuel processing and the dissolution of certain chemicals. In this paper, we derive the critical point analytically. It is then compared with numerical calculations for the Peng-Robinson equation using the Newton-Raphson method. As case studies, several hydrocarbon mixtures are simulated using Matlab 6. Critical point analysis of phase envelope diagram Energy Technology Data Exchange (ETDEWEB) Soetikno, Darmadi; Siagian, Ucok W. R. [Department of Petroleum Engineering, Institut Teknologi Bandung, Jl.
Ganesha 10, Bandung 40132 (Indonesia); Kusdiantara, Rudy, E-mail: rkusdiantara@s.itb.ac.id; Puspita, Dila, E-mail: rkusdiantara@s.itb.ac.id; Sidarto, Kuntjoro A., E-mail: rkusdiantara@s.itb.ac.id; Soewono, Edy; Gunawan, Agus Y. [Department of Mathematics, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia) 2014-03-24 A phase diagram or phase envelope is a relation between temperature and pressure that shows the conditions of equilibrium between the different phases of chemical compounds, mixtures of compounds, and solutions. The phase diagram is an important issue in chemical thermodynamics and hydrocarbon reservoirs. It is very useful for process simulation, hydrocarbon reactor design, and petroleum engineering studies. It is constructed from the bubble line, the dew line, and the critical point. The bubble line and dew line are composed of bubble points and dew points, respectively. The bubble point is the first point at which gas is formed when a liquid is heated. Meanwhile, the dew point is the first point at which liquid is formed when a gas is cooled. The critical point is the point at which all of the properties of the gas and liquid phases, such as temperature, pressure, amount of substance, and others, become equal. The critical point is very useful in fuel processing and the dissolution of certain chemicals. In this paper, we derive the critical point analytically. It is then compared with numerical calculations for the Peng-Robinson equation using the Newton-Raphson method. As case studies, several hydrocarbon mixtures are simulated using Matlab. 7. Extraction and ion exchange equilibrium. A study by means of logarithmic diagrams International Nuclear Information System (INIS) Vicente Perez, S.; Alvarez, M.D.; Durand, S. 1990-01-01 A general logarithmic mole fraction diagram for the study of the distribution equilibria of (a) a neutral chemical species between two immiscible solvents and (b) an ionic species between an aqueous phase and an ion-exchange resin is proposed. (Author) 8.
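The critical-point entries above compare analytic results with Newton-Raphson solutions of the Peng-Robinson equation of state. The sketch below shows that numerical step for a single pure component, solving the standard cubic form of the Peng-Robinson equation for the vapour compressibility factor Z; the methane constants in the usage example are common literature values, assumed here and not taken from the abstract.

```python
import math

def pr_z_factor(T, P, Tc, Pc, omega, R=8.314462618):
    """Solve the Peng-Robinson equation of state for the compressibility
    factor Z by Newton-Raphson iteration, starting from the ideal-gas
    guess Z = 1 (vapour root).  Cubic form:
    Z^3 - (1-B)Z^2 + (A - 3B^2 - 2B)Z - (A*B - B^2 - B^3) = 0."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    a = 0.45724 * (R * Tc) ** 2 / Pc * alpha     # attraction parameter
    b = 0.07780 * R * Tc / Pc                    # covolume
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)

    def f(Z):
        return (Z ** 3 - (1 - B) * Z ** 2 + (A - 3 * B ** 2 - 2 * B) * Z
                - (A * B - B ** 2 - B ** 3))

    def df(Z):
        return 3 * Z ** 2 - 2 * (1 - B) * Z + (A - 3 * B ** 2 - 2 * B)

    Z = 1.0
    for _ in range(50):                          # Newton-Raphson iterations
        Z -= f(Z) / df(Z)
    return Z
```

With methane-like constants (Tc ≈ 190.56 K, Pc ≈ 45.99 bar, ω ≈ 0.011) at 300 K and 1 bar, the routine returns Z slightly below 1, as expected for a nearly ideal gas; mapping the full phase envelope of a mixture, as in the paper, additionally requires mixing rules and fugacity matching.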
Phase diagrams of laser-processed nanoparticles of brass Energy Technology Data Exchange (ETDEWEB) Kazakevich, P.V. [Wave Research Center of A.M. Prokhorov General Physics Institute of the Russian Academy of Sciences 38, Vavilov Street, 119991 Moscow (Russian Federation); Simakin, A.V. [Wave Research Center of A.M. Prokhorov General Physics Institute of the Russian Academy of Sciences 38, Vavilov Street, 119991 Moscow (Russian Federation); Shafeev, G.A. [Wave Research Center of A.M. Prokhorov General Physics Institute of the Russian Academy of Sciences 38, Vavilov Street, 119991 Moscow (Russian Federation); Monteverde, F. [Electron Microscopy Unit, Materia Nova, Avenue Copernic 1, B-7000 Mons (Belgium)]; Wautelet, M. [Condensed Matter Physics, University of Mons-Hainaut, 23, Avenue Maistriau, B-7000 Mons (Belgium)]. E-mail: michel.wautelet@umh.ac.be 2007-07-31 Nanoparticles of brass are prepared by ablation of a brass target in ethanol using radiation of a copper-vapor laser at various laser fluences. The nanoparticles are characterized by TEM and optical spectroscopy. Multipulse laser irradiation leads to the formation of both nanoparticles in the liquid and well-ordered microstructures on the surface of the target. It is revealed that both the morphology and the absorption spectra of the brass nanoparticles depend on the presence of the microstructures. Nanoparticles with various phase diagrams are formed from a flat brass surface and from the same surface with microstructures. The results are compared with a model of phase diagrams in which size and composition effects are taken into account. 9. Low-pressure phase diagram of crystalline benzene from quantum Monte Carlo Energy Technology Data Exchange (ETDEWEB) Azadi, Sam, E-mail: s.azadi@ucl.ac.uk [Departments of Physics and Astronomy, University College London, Thomas Young Center, London Centre for Nanotechnology, London WC1E 6BT (United Kingdom); Cohen, R. E.
[Extreme Materials Initiative, Geophysical Laboratory, Carnegie Institution for Science, Washington, DC 20015 (United States); Department of Earth- and Environmental Sciences, Ludwig Maximilians Universität, Munich 80333 (Germany); Department of Physics and Astronomy, University College London, London WC1E 6BT (United Kingdom) 2016-08-14 We studied the low-pressure (0–10 GPa) phase diagram of crystalline benzene using quantum Monte Carlo and density functional theory (DFT) methods. We performed diffusion quantum Monte Carlo (DMC) calculations to obtain accurate static phase diagrams as benchmarks for modern van der Waals density functionals. Using density functional perturbation theory, we computed the phonon contributions to the free energies. Our DFT enthalpy-pressure phase diagrams indicate that the Pbca and P2₁/c structures are the most stable phases within the studied pressure range. The DMC Gibbs free-energy calculations predict that the room-temperature Pbca to P2₁/c phase transition occurs at 2.1(1) GPa. This prediction is consistent with available experimental results at room temperature. Our DMC calculations give 50.6 ± 0.5 kJ/mol for the lattice energy of crystalline benzene. 10. Na-Si binary phase diagram and solution growth of silicon crystals International Nuclear Information System (INIS) Morito, H.; Yamada, T.; Ikeda, T.; Yamane, H. 2009-01-01 In the present study, a Na-Si binary phase diagram was first constructed from the results of differential thermal analysis and X-ray diffraction. Based on the phase diagram, we performed low-temperature formation of single crystals, films, and porous bulk of Si by vaporizing Na from a Na-Si melt at 800 or 900 °C. 11.
Phase Diagram of Spiking Neural Networks Directory of Open Access Journals (Sweden) Hamed Seyed-Allaei 2015-03-01 Full Text Available In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of the neurons are inhibitory and 80% are excitatory. These common values are based on experiments and observations, but here I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable by nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks of best dynamic range. I present a phase diagram that shows the dynamic ranges of different networks with different parameters. This phase diagram gives an insight into the space of parameters -- excitatory-to-inhibitory ratio, sparseness of connections, and synaptic weights. It may serve as a guideline for deciding the values of parameters in a simulation of a spiking neural network. 12. Confinement in Polyakov gauge and the QCD phase diagram Energy Technology Data Exchange (ETDEWEB) Marhauser, Marc Florian 2009-10-14 We investigate Quantum Chromodynamics (QCD) in the framework of the functional renormalisation group (fRG), thereby describing the phase transition from the phase with confined quarks into the quark-gluon-plasma phase. We focus on a physical gauge in which the mechanism driving the phase transition is discernible. We find results compatible with lattice QCD data, as well as with functional methods applied in different gauges. The phase transition is of the expected order, and we computed critical exponents.
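The connectivity assumptions quoted in the spiking-network entry above (a 2% connection probability, with 80% excitatory and 20% inhibitory neurons) can be sketched as a random network generator. This is only the wiring step, not the spiking simulation of the paper, and the synaptic weight values are illustrative assumptions.

```python
import random

def build_network(n=1000, p_connect=0.02, frac_excitatory=0.8,
                  w_exc=1.0, w_inh=-4.0, seed=0):
    """Random connectivity in the spirit of the spiking-network entry:
    each ordered pair of distinct neurons is wired with probability
    p_connect; the first 80% of neurons are excitatory, the rest
    inhibitory (the weight magnitudes here are assumed, not from the paper)."""
    rng = random.Random(seed)
    n_exc = int(n * frac_excitatory)
    weights = {}                        # (pre, post) -> synaptic weight
    for pre in range(n):
        w = w_exc if pre < n_exc else w_inh
        for post in range(n):
            if pre != post and rng.random() < p_connect:
                weights[(pre, post)] = w
    return weights
```

Sweeping p_connect, frac_excitatory, and the weights over a grid of such networks is exactly the kind of parameter-space scan from which the paper's phase diagram of dynamic ranges is built.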
When investigating the QCD phase diagram, we compute the effects of dynamical quarks at finite density on the running of the gauge coupling. Additionally, we calculate how these affect the deconfinement phase transition; dynamical quarks also allow for the inclusion of a finite chemical potential. Concluding the investigation of the phase diagram, we establish a relation between confinement and chiral symmetry breaking, which is tied to the dynamical generation of hadron masses. In the investigations, we often encounter scale-dependent fields. We investigate a footing on which these can be dealt with in a uniform way. (orig.) 13. Phase diagram measurements by heat-flux DSC and thermodynamic calculations of the mixture of the esters Ethyl undecanoate (C13H26O2) and Ethyl tridecanoate (C15H30O2) NARCIS (Netherlands) Schaftenaar, H.P.C. 2006-01-01 In this report a phase diagram of the binary mixture Ethyl undecanoate and Ethyl tridecanoate is determined by heat-flux DSC. Our hypothesis for the equilibrium phase behaviour is that the components Ethyl undecanoate and Ethyl tridecanoate do have the same crystal form and they have restricted 14. Phase diagram and quench dynamics of the cluster-XY spin chain. Science.gov (United States) Montes, Sebastián; Hamma, Alioscia 2012-08-01 We study the complete phase space and the quench dynamics of an exactly solvable spin chain, the cluster-XY model. In this chain, the cluster term and the XY couplings compete to give a rich phase diagram. The phase diagram is studied by means of the quantum geometric tensor. We study the time evolution of the system after a critical quantum quench using the Loschmidt echo. The structure of the revivals after critical quantum quenches presents a nontrivial behavior depending on the phase of the initial state and the critical point. 15.
Thermochemical measurements and assessment of the phase diagrams in the system Y-Ba-Cu-O International Nuclear Information System (INIS) 1996-01-01 The aim of this project was to provide a self-consistent set of Gibbs energy data for all phases in the system Y-Ba-Cu-O. Experimental thermochemical investigations by differential thermal analysis (DTA), thermogravimetry (TG), electromotive force measurements (EMF), oxygen coulometric titration (OCT), drop and solution calorimetry, and conventional phase analysis (annealing, quenching, and X-ray diffraction [XRD]), as well as ab initio calculations of interaction energies for the 123 phase, have been carried out. The experimental information (phase equilibria, heat capacity, enthalpies of formation, oxygen partial pressures, and so forth) has been used in computer-based assessments of the Gibbs energies. These data have been employed to generate phase diagrams by way of equilibrium computations. All binary and ternary subsystems have been fully assessed. For the quaternary system a dataset covering the subsolidus range has been derived. Applications of the data to practical questions, such as the production of 123 superconductors by an MOCVD process, the producibility of metallic precursors, and the oxidation of a copper-enriched stoichiometric oxide precursor, are demonstrated 16. Low temperature thermodynamic investigation of the phase diagram of Sr3Ru2O7 Science.gov (United States) Sun, D.; Rost, A. W.; Perry, R. S.; Mackenzie, A. P.; Brando, M. 2018-03-01 We studied the phase diagram of Sr3Ru2O7 by means of heat capacity and magnetocaloric effect measurements at temperatures as low as 0.06 K and fields up to 12 T. We confirm the presence of a new quantum critical point at 7.5 T, which is characterized by a strong non-Fermi-liquid behavior of the electronic specific heat coefficient ΔC/T ∼ −log T over more than a decade in temperature, placing strong constraints on theories of its criticality.
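The −log T divergence of the specific-heat coefficient reported for Sr3Ru2O7 above amounts to a straight line when C/T is plotted against log T, so it can be checked with an ordinary linear least-squares fit. A sketch on synthetic data follows; the coefficient values are illustrative, not the measured ones.

```python
import math

def fit_log_divergence(temps, gammas):
    """Least-squares fit of gamma(T) = g0 - b*log(T) to specific-heat
    data C/T, returning (g0, b); b > 0 signals the -log T divergence
    of the kind reported near the 7.5 T critical point."""
    xs = [math.log(t) for t in temps]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(gammas) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, gammas))
    slope = sxy / sxx              # equals -b for a log-divergent gamma
    return my - slope * mx, -slope
```

Applied to data spanning more than a decade in temperature, as in the experiment, a clearly positive b with small residuals is the signature of the logarithmic non-Fermi-liquid behavior.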
In particular, logarithmic corrections are found when the dimension d is equal to the dynamic critical exponent z, in contrast to the recently proposed conclusion of a two-dimensional metamagnetic quantum critical end point. Moreover, we achieved a clear determination of the new second thermodynamic phase adjoining the first one at lower temperatures. Its thermodynamic features differ significantly from those of the dominant phase, and characteristics expected of classical equilibrium phase transitions are not observed, indicating fundamental differences in the phase formation. 17. Quantum corrections for the phase diagram of systems with competing order Science.gov (United States) Silva, N. L., Jr.; Continentino, Mucio A.; Barci, Daniel G. 2018-06-01 We use the effective potential method of quantum field theory to obtain the quantum corrections to the zero-temperature phase diagram of systems with competing order parameters. We are particularly interested in two different scenarios: regions of the phase diagram where there is a bicritical point, at which both phases vanish continuously, and the case where both phases coexist homogeneously. We consider different types of couplings between the order parameters, including a bilinear one. This kind of coupling breaks time-reversal symmetry and is only allowed if both order parameters transform according to the same irreducible representation. This occurs in many physical systems of actual interest, like competing spin density waves, different types of orbital antiferromagnetism, elastic instabilities of crystal lattices, and vortices in a multigap superconductor, and also applies to the unusual magnetism of the heavy fermion compound URu2Si2. Our results show that quantum corrections have an important effect on the phase diagram of systems with competing orders. 18. Quantum corrections for the phase diagram of systems with competing order.
Science.gov (United States) Silva, N L; Continentino, Mucio A; Barci, Daniel G 2018-06-06 We use the effective potential method of quantum field theory to obtain the quantum corrections to the zero temperature phase diagram of systems with competing order parameters. We are particularly interested in two different scenarios: regions of the phase diagram where there is a bicritical point, at which both phases vanish continuously, and the case where both phases coexist homogeneously. We consider different types of couplings between the order parameters, including a bilinear one. This kind of coupling breaks time-reversal symmetry and it is only allowed if both order parameters transform according to the same irreducible representation. This occurs in many physical systems of current interest, such as competing spin density waves, different types of orbital antiferromagnetism, elastic instabilities of crystal lattices, and vortices in a multigap superconductor, and it also applies to the unusual magnetism of the heavy fermion compound URu2Si2. Our results show that quantum corrections have an important effect on the phase diagram of systems with competing orders. 19. Phase diagram of the ternary Zr-Ti-Sn system International Nuclear Information System (INIS) Arias, D.; Gonzalez Camus, M. 1987-01-01 It is well known that Ti stabilizes the high temperature cubic phase of Zr and that Sn stabilizes the low temperature hexagonal phase of Zr. The effect of Sn on the Zr-Ti diagram has been studied in the present paper. Using high purity metals, nine different alloys have been prepared, with 4-32 at.% Ti, 0.7-2.2 at.% Sn, and the balance Zr. Resistivity and optical and SEM metallography techniques have been employed. The effect of some impurities has been analyzed. The results are discussed and different isothermal sections of the ternary Zr-Ti-Sn diagram are presented. (Author) [es 20.
Phase diagrams and physicochemical properties of Li+,K+(Rb+)//borate-H2O systems at 323 K Science.gov (United States) Feng, Shan; Yu, Xudong; Cheng, Xinglong; Zeng, Ying 2017-11-01 The phase diagrams and physicochemical-property diagrams of Li+,K+(Rb+)//borate-H2O systems at 323 K were constructed from experimentally measured solubilities, densities, and refractive indices. The Schreinemakers' wet-residue method and X-ray diffraction were used to determine the compositions of the solid phases. Results show that these two systems belong to the hydrate I type, with no solid solution or double salt formation. The borate phases formed in our experiments are RbB5O6(OH)4 · 2H2O, Li2B4O5(OH)4 · H2O, and K2B4O5(OH)4 · 2H2O. Comparison of the stable phase diagrams of the studied systems at 288, 323, and 348 K shows that the crystallization forms of the salts do not change over this temperature range. With increasing temperature, the crystallization field of the Li2B4O5(OH)4 · H2O salt grows, and at 348 K it is markedly larger than at 288 K. In the Li+,K+(Rb+)//borate-H2O systems, the densities and refractive indices of the solutions (at equilibrium) increase with the mass fraction of K2B4O7 (Rb2B4O7), and reach their maximum values at the invariant point E. 1. The 480 °C and 405 °C isothermal sections of the phase diagram of the Fe-Zn-Si ternary system Energy Technology Data Exchange (ETDEWEB) Wang Jianhua [Institute of Materials Research, School of Mechanical Engineering, Xiangtan University, Hunan 411105 (China)].
E-mail: super_wang111@hotmail.com; Su Xuping [Institute of Materials Research, School of Mechanical Engineering, Xiangtan University, Hunan 411105 (China); Yin Fucheng [Institute of Materials Research, School of Mechanical Engineering, Xiangtan University, Hunan 411105 (China); Li Zhi [Institute of Materials Research, School of Mechanical Engineering, Xiangtan University, Hunan 411105 (China); Zhao Manxiu [Institute of Materials Research, School of Mechanical Engineering, Xiangtan University, Hunan 411105 (China) 2005-08-16 The 480 °C and 405 °C isothermal sections of the Fe-Zn-Si ternary phase diagram have been determined experimentally using scanning electron microscopy (SEM) coupled with energy dispersive X-ray spectroscopy (EDS) and X-ray diffractometry. The work concentrated on the Zn-rich corner, which is relevant to general galvanizing. The present studies have confirmed equilibrium between the liquid, the ζ phase, and the FeSi phase in the 480 °C isothermal section. The phase relationships show some changes compared with the isothermal section at 450 °C. Experimental results indicate that the Si solubility in all four Zn-Fe compounds is also limited at 480 °C and 405 °C. 2. Experimental Phase Equilibria Studies of the Pb-Fe-O System in Air, in Equilibrium with Metallic Lead and at Intermediate Oxygen Potentials Science.gov (United States) Shevchenko, M.; Jak, E. 2017-12-01 The phase equilibria information on the Pb-Fe-O system is of practical importance for the improvement of the existing thermodynamic database of lead-containing slag systems (Pb-Zn-Fe-Cu-Si-Ca-Al-Mg-O).
Phase equilibria of the Pb-Fe-O system have been investigated: (a) in air at temperatures between 1053 K and 1373 K (780 °C and 1100 °C); (b) in equilibrium with metallic lead at temperatures between 1053 K and 1373 K (780 °C and 1100 °C); and (c) at intermediate oxidation conditions for the liquid slag in equilibrium with two solids (spinel + magnetoplumbite), at temperatures between 1093 K and 1373 K (820 °C and 1100 °C). The high-temperature equilibration/quenching/electron probe X-ray microanalysis technique has been used to accurately determine the compositions of the phases in equilibrium in the system. The Pb and Fe concentrations in the phases were determined directly; preliminary thermodynamic modeling with FactSage was used to estimate the ferrous-to-ferric ratios and to present the results in the ternary diagram. 3. Reaction paths and equilibrium end-points in solid-solution aqueous-solution systems Science.gov (United States) Glynn, P.D.; Reardon, E.J.; Plummer, Niel; Busenberg, E. 1990-01-01 Equations are presented describing equilibrium in binary solid-solution aqueous-solution (SSAS) systems after a dissolution, precipitation, or recrystallization process, as a function of the composition and relative proportion of the initial phases. Equilibrium phase diagrams incorporating the concept of stoichiometric saturation are used to interpret possible reaction paths and to demonstrate relations between stoichiometric saturation, primary saturation, and thermodynamic equilibrium states. The concept of stoichiometric saturation is found useful in interpreting and putting limits on dissolution pathways, but there currently is no basis for possible application of this concept to the prediction and/or understanding of precipitation processes. Previously published dissolution experiments for (Ba,Sr)SO4 and (Sr,Ca)CO3 (orthorhombic) solids are interpreted using equilibrium phase diagrams.
These studies show that stoichiometric saturation can control, or at least influence, initial congruent dissolution pathways. The results for (Sr,Ca)CO3 (orthorhombic) solids reveal that stoichiometric saturation can also control the initial stages of incongruent dissolution, despite the intrinsic instability of some of the initial solids. In contrast, recrystallisation experiments in the highly soluble KCl-KBr-H2O system demonstrate equilibrium. The excess free energy of mixing calculated for K(Cl,Br) solids is closely modeled by the relation GE = χKBrχKClRT[a0 + a1(2χKBr − 1)], where a0 = 1.40 ± 0.02 and a1 = −0.08 ± 0.03 at 25 °C, and χKBr and χKCl are the mole fractions of KBr and KCl in the solids. The phase diagram constructed using this fit reveals an alyotropic maximum located at χKBr = 0.676 and at a total solubility product ΣΠ = [K+]([Cl−] + [Br−]) = 15.35. © 1990. 4. Phase diagram of strongly correlated Fermi systems International Nuclear Information System (INIS) Zverev, M.V.; Khodel', V.A.; Baldo, M. 2000-01-01 Phase transitions in uniform Fermi systems with repulsive forces between the particles, caused by restructuring of the quasiparticle filling n(p), are analyzed. It is found that, in terms of the variables density ρ and dimensionless coupling constant η, the phase diagram of a strongly correlated Fermi system for a rather wide class of interactions resembles a puff-pastry pie: its upper part is filled with the fermion condensate, the lower one with a normal Fermi liquid, and they are separated by a narrow interlayer, the Lifshitz phase, characterized by a multiply connected Fermi surface [ru 5. (α,η) phase diagrams in tilted chiral smectics International Nuclear Information System (INIS) Rjili, M.; Marcerou, J.P.; Gharbi, A.; Othman, T. 2013-01-01 The polymorphism of tilted chiral smectic liquid crystals is remarkably rich and encompasses many subphases such as SmCA*; SmCFi1*; SmCFi2*; SmC*; SmCα*.
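The subregular-solution fit quoted in the Glynn et al. entry above can be evaluated directly. A minimal sketch, assuming the quoted parameters a0 = 1.40 and a1 = −0.08 at 25 °C, with x denoting the mole fraction of KBr in the solid (variable names here are illustrative, not from the paper):

```python
# Minimal sketch: subregular (Guggenheim) excess free energy of mixing for
# K(Cl,Br) solid solutions,
#   G_E = x_KBr * x_KCl * R * T * (a0 + a1 * (2*x_KBr - 1)),
# using the fitted parameters quoted above (a0 = 1.40, a1 = -0.08 at 25 C).
R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # 25 degrees C in kelvin
A0, A1 = 1.40, -0.08

def excess_g(x_kbr):
    """Excess Gibbs free energy of mixing (J/mol) at KBr mole fraction x_kbr."""
    x_kcl = 1.0 - x_kbr
    return x_kbr * x_kcl * R * T * (A0 + A1 * (2.0 * x_kbr - 1.0))

if __name__ == "__main__":
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"x_KBr = {x:.2f}:  G_E = {excess_g(x):7.1f} J/mol")
```

The fit is positive across the composition range (positive deviation from ideality), and vanishes at the pure end members, as any excess function must.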
The continuum theory established by Marcerou (2010) is used to derive an expression for the free energy density of those subphases. The minimization of this free energy is obtained through a combination of analytical and numerical methods. It leads to a phase diagram built in the (α,η) plane, where α is a local angular parameter and η describes the variation of temperature. From this graphical representation, many experimentally observed phase sequences of ferroelectric liquid crystals can be explained, including those containing recently observed subphases such as the SmC5* and SmC6* ones. However, it should be emphasized that the details of the predicted phase diagram are strongly dependent on the compound studied. 6. "Phase diagrams of Lecithin-based microemulsions containing Sodium Salicylate" Directory of Open Access Journals (Sweden) "Aboofazeli R 2000-08-01 Partial phase diagrams were constructed at 25°C to investigate the phase behaviour of systems composed of soybean lecithin, water, sodium salicylate, alcohol and isopropyl myristate. The lecithins used were the commercially available soy bean lecithins, namely E200 and E170 (phosphatidylcholine purities greater than 95% and 68-72%, respectively). The cosurfactants employed were n-propanol, 2-propanol and n-butanol, and these were used at lecithin/alcohol weight ratios (Km) of 1:1 and 1.5:1. At a given Km, the aqueous phase consisted of a 2% w/w sodium salicylate solution. Phase diagrams showed the area of existence of a stable isotropic region along the surfactant/oil axis (i.e., the reverse microemulsion area). The extent of the microemulsion domain was influenced by the purity of the surfactant, the lecithin/alcohol weight ratio, and the kind of alcohol. 7. Magnetic phase diagram of HoxTm1-x alloys DEFF Research Database (Denmark) Sarthour, R.S.; Cowley, R.A.; Ward, R.C.C.
2000-01-01 The magnetic phase diagram of the competing-anisotropy system Ho/Tm has been determined by neutron-scattering techniques and the results compared with calculations based on a mean-field model. The crystal-field interactions in Ho favor alignment of the magnetic moments in the basal plane, whereas in Tm they favor alignment along the c axis. Single-crystal alloys were grown with molecular-beam epitaxy techniques in Oxford. The components of the magnetic moment along the c direction and in the basal plane were determined from the neutron-scattering measurements. Five distinct magnetic phases, with long-range order, were identified and the magnetic phase diagram, including a pentacritical point, determined. A mean-field model was used to explain the results and is in good agreement with the experimental results. 8. Thermodynamic study of CVD-ZrO2 phase diagrams International Nuclear Information System (INIS) Torres-Huerta, A.M.; Vargas-Garcia, J.R.; Dominguez-Crespo, M.A.; Romero-Serrano, J.A. 2009-01-01 Chemical vapor deposition (CVD) of zirconium oxide (ZrO2) from zirconium acetylacetonate Zr(acac)4 has been thermodynamically investigated using the Gibbs free energy minimization method and the FACTSAGE program. Thermodynamic data Cp°, ΔH°, and S° for Zr(acac)4 have been estimated using the Meghreblian-Crawford-Parr and Benson methods because they are not available in the literature. The effect of deposition parameters, such as temperature and pressure, on the extent of the region where pure ZrO2 can be deposited was analyzed. The results are presented as calculated CVD stability diagrams. The phase diagrams show two zones: one corresponds to the pure monoclinic phase of ZrO2 and the other to a mixture of monoclinic ZrO2 and graphitic carbon. 9.
Exploring the Nuclear Phase Diagram with Beam Energy Scans International Nuclear Information System (INIS) Horvat, Stephen 2017-01-01 The nuclear phase diagram is mapped using beam energy scans of relativistic heavy-ion collisions. This mapping is possible because different collision energies develop along different trajectories through the phase diagram. High energy collisions will evolve through a crossover phase transition according to lattice QCD, but lower collision energies may traverse a first-order phase transition. There are hints for this first-order phase transition and its critical endpoint, but further measurements and theoretical guidance are needed. In addition to mapping the phase transition, beam energy scans allow us to see if we can turn off the signatures of deconfinement. If an observable is a real signature for the formation of the deconfined state called quark-gluon plasma, then it should turn off at sufficiently low collision energies. In this summary talk I will show the current state of the field using beam energy scan results from RHIC and the SPS, show where precise theoretical guidance is needed for understanding recent measurements, and motivate the need for more data and new measurements from FAIR, NICA, RHIC, and the SPS. (paper) 10. Identifying Liquid-Gas System Misconceptions and Addressing Them Using a Laboratory Exercise on Pressure-Temperature Diagrams of a Mixed Gas Involving Liquid-Vapor Equilibrium Science.gov (United States) 2016-01-01 This study focuses on students' understanding of a liquid-gas system with liquid-vapor equilibrium in a closed system using a pressure-temperature ("P-T") diagram. By administering three assessment questions concerning the "P-T" diagrams of liquid-gas systems to students at the beginning of undergraduate general chemistry… 11.
Evaluated phase diagrams of binary metal-tellurium systems of the D-block transition elements International Nuclear Information System (INIS) 1989-01-01 The binary phase diagrams of metal-tellurium systems for twenty-seven d-block transition elements have been critically evaluated. Complete phase diagrams are presented for the elements chromium, manganese, iron, cobalt, nickel, copper, molybdenum, palladium, silver, lanthanum, platinum, and gold, whereas for scandium, titanium, vanadium, yttrium, zirconium, niobium, technetium, ruthenium, rhodium, hafnium, tantalum, tungsten, rhenium, osmium, and iridium, the phase diagrams are incomplete and tentative. (author). 20 refs., 27 tabs., 27 figs 12. HgTe-CdTe phase diagrams calculation by RAS model International Nuclear Information System (INIS) 1986-11-01 The Regular Associated Solutions (RAS) model for binary solutions, extended to ternary solutions, was used for Mercury-Cadmium-Tellurium phase diagram calculations. The dissociation parameters are treated as functions of temperature, independent of composition. The ratio of mole fractions depends weakly on temperature and is not neglected. The calculated binary liquidus temperatures are fitted to the experimental ones to give the best values of the parameters used to calculate the HgTe-CdTe phase diagrams. (author) 13. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography DEFF Research Database (Denmark) Jørgensen, Jakob Sauer; Sidky, E. Y. 2015-01-01 We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling.
We adapt phase-diagram analysis for empirical use in X-ray CT, for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers… measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means… 14. Simulation analysis and ternary diagram of municipal solid waste pyrolysis and gasification based on the equilibrium model. Science.gov (United States) Deng, Na; Zhang, Awen; Zhang, Qiang; He, Guansong; Cui, Wenqian; Chen, Guanyi; Song, Chengcai 2017-07-01 A self-sustained municipal solid waste (MSW) pyrolysis-gasification process with self-produced syngas as the heat source was proposed, and an equilibrium model was established to predict the syngas reuse rate for variable MSW components. Simulation results indicated that for constant moisture (ash) content, the syngas reuse rate gradually increased with increasing ash (moisture) content, reaching the maximum of 100% when the ash (moisture) content was 73.9% (60.4%). Novel ternary diagrams with moisture, ash, and combustible content as axes were proposed to predict the adaptability of the self-sustained process and the syngas reuse rate for a waste. For a waste of given components, its position in the ternary diagram can be determined and the syngas reuse rate can be obtained, which provides guidance for system design. Assuming that the MSW was composed of 100% combustible content, the ternary diagram shows that there is a minimum limiting value of 43.8% for the syngas reuse rate in the process. Copyright © 2017. Published by Elsevier Ltd. 15.
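The ternary-diagram placement described in the MSW entry above amounts to a coordinate mapping from a normalized three-component composition to a point in a triangle. A minimal sketch; the vertex convention (moisture at the origin, ash at the right vertex, combustible at the top) is an illustrative assumption, not taken from the paper:

```python
import math

# Minimal sketch: map a waste composition (moisture, ash, combustible mass
# fractions) to 2-D Cartesian coordinates inside a unit-edge ternary
# diagram. Vertex placement is an assumed convention for illustration.
def ternary_xy(moisture, ash, combustible):
    """Return (x, y) plot coordinates for a normalized ternary composition."""
    total = moisture + ash + combustible
    a = ash / total          # normalized ash fraction
    c = combustible / total  # normalized combustible fraction
    x = a + 0.5 * c                  # ash pulls right, combustible half-right
    y = (math.sqrt(3.0) / 2.0) * c   # height grows with combustible fraction
    return x, y
```

Pure moisture, pure ash, and pure combustible then land on the triangle's three corners, and any mixture lands inside it, which is what lets a single point encode the waste composition.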
Phase diagram of ammonium nitrate Energy Technology Data Exchange (ETDEWEB) Dunuwille, Mihindra; Yoo, Choong-Shik, E-mail: csyoo@wsu.edu [Department of Chemistry and Institute for Shock Physics, Washington State University, Pullman, Washington 99164 (United States) 2013-12-07 Ammonium Nitrate (AN) is a fertilizer, yet becomes an explosive upon a small addition of chemical impurities. The origin of enhanced chemical sensitivity in impure AN (or AN mixtures) is not well understood, posing significant safety issues in using AN even today. To remedy the situation, we have carried out an extensive study to investigate the phase stability of AN and its mixtures with hexane (ANFO–AN mixed with fuel oil) and aluminum (Ammonal) at high pressures and temperatures, using diamond anvil cells (DAC) and micro-Raman spectroscopy. The results indicate that pure AN decomposes to N2, N2O, and H2O at the onset of the melt, whereas the mixtures, ANFO and Ammonal, decompose at substantially lower temperatures. The present results also confirm the recently proposed phase IV-IV′ transition above 17 GPa and provide new constraints for the melting and phase diagram of AN to 40 GPa and 400°C. 16. Phase diagram of ammonium nitrate International Nuclear Information System (INIS) Dunuwille, Mihindra; Yoo, Choong-Shik 2013-01-01 Ammonium Nitrate (AN) is a fertilizer, yet becomes an explosive upon a small addition of chemical impurities. The origin of enhanced chemical sensitivity in impure AN (or AN mixtures) is not well understood, posing significant safety issues in using AN even today. To remedy the situation, we have carried out an extensive study to investigate the phase stability of AN and its mixtures with hexane (ANFO–AN mixed with fuel oil) and aluminum (Ammonal) at high pressures and temperatures, using diamond anvil cells (DAC) and micro-Raman spectroscopy.
The results indicate that pure AN decomposes to N2, N2O, and H2O at the onset of the melt, whereas the mixtures, ANFO and Ammonal, decompose at substantially lower temperatures. The present results also confirm the recently proposed phase IV-IV′ transition above 17 GPa and provide new constraints for the melting and phase diagram of AN to 40 GPa and 400°C. 17. Phase diagram of the mean field model of simplicial gravity International Nuclear Information System (INIS) Bialas, P.; Burda, Z.; Johnston, D. 1999-01-01 We discuss the phase diagram of the balls-in-boxes model, with a varying number of boxes. The model can be regarded as a mean-field model of simplicial gravity. We analyse in detail the case of weights of the form p(q) = q^−β, which corresponds to the measure term introduced in simplicial quantum gravity simulations. The system has two phases: elongated (fluid) and crumpled. For β ∈ (2, ∞) the transition between these two phases is first-order, while for β ∈ (1, 2) it is continuous. The transition becomes softer when β approaches unity and eventually disappears at β = 1. We then generalise the discussion to an arbitrary set of weights. Finally, we show that if one introduces an additional kinematic bound on the average density of balls per box, then a new condensed phase appears in the phase diagram. It bears some similarity to the crinkled phase of simplicial gravity discussed recently in models of gravity interacting with matter fields. 18. Phase diagram of two-component bosons on an optical lattice International Nuclear Information System (INIS) Altman, Ehud; Hofstetter, Walter; Demler, Eugene; Lukin, Mikhail D 2003-01-01 We present a theoretical analysis of the phase diagram of two-component bosons on an optical lattice. A new formalism is developed which treats the effective spin interactions in the Mott and superfluid phases on the same footing.
Using this new approach we chart the phase boundaries of the broken-spin-symmetry states up to the Mott-to-superfluid transition and beyond. Near the transition point, the magnitude of spin exchange can be very large, which facilitates the experimental realization of spin-ordered states. We find that spin and quantum fluctuations have a dramatic effect on the transition, making it first order in extended regions of the phase diagram. When each species is at integer filling, an additional phase transition may occur, from a spin-ordered insulator to a Mott insulator with no broken symmetries. We determine the phase boundaries in this regime and show that this is essentially a Mott transition in the spin sector. 19. Automated calculation of complete Pxy and Txy diagrams for binary systems DEFF Research Database (Denmark) Cismondi, Martin; Michelsen, Michael Locht 2007-01-01 phase equilibrium calculations in binary systems, in: Proceedings of the CD-ROM EQUIFASE 2006, Morelia, Michoacan, Mexico, October 21-25, 2006; www.gpec.plapiqui.edu.ar]. In this work we present the methods and computational strategy for the automated calculation of complete Pxy and Txy diagrams… 20. Exact ground-state phase diagrams for the spin-3/2 Blume-Emery-Griffiths model International Nuclear Information System (INIS) Canko, Osman; Keskin, Mustafa; Deviren, Bayram 2008-01-01 We have calculated the exact ground-state phase diagrams of the spin-3/2 Ising model using the method that was proposed and applied to the spin-1 Ising model by Dublenych (2005 Phys. Rev. B 71 012411). The calculated, exact ground-state phase diagrams on the diatomic and triangular lattices with the nearest-neighbor (NN) interaction are presented in this paper. We have obtained seven and 15 topologically different ground-state phase diagrams for J > 0 and J < 0, respectively; the conditions for the existence of uniform and intermediate phases have also been found. 1.
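As background to the Pxy diagrams in the Cismondi–Michelsen entry above: for an ideal binary mixture, Raoult's law gives the bubble-point branch of a Pxy diagram in closed form. A minimal sketch, not the authors' algorithm (their work targets general, non-ideal models); the pure-component vapor pressures below are illustrative values:

```python
# Minimal sketch: bubble-point branch of a Pxy diagram for an ideal binary
# mixture via Raoult's law. P1_SAT and P2_SAT are illustrative pure-component
# vapor pressures (kPa), not values from the paper above.
P1_SAT, P2_SAT = 100.0, 40.0

def bubble_point(x1):
    """Return (P, y1): bubble pressure and vapor composition for liquid x1."""
    p = x1 * P1_SAT + (1.0 - x1) * P2_SAT   # total pressure (Raoult's law)
    y1 = x1 * P1_SAT / p                    # vapor mole fraction of component 1
    return p, y1

if __name__ == "__main__":
    for x1 in (0.0, 0.25, 0.5, 0.75, 1.0):
        p, y1 = bubble_point(x1)
        print(f"x1 = {x1:.2f}  P = {p:6.1f} kPa  y1 = {y1:.3f}")
```

Plotting P against both x1 (bubble curve) and y1 (dew curve) over the composition range yields the familiar lens-shaped Pxy diagram; the automated strategies in the entry above address the much harder non-ideal case, where azeotropes and multiple branches can appear.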
(α,η) phase diagrams in tilted chiral smectics Energy Technology Data Exchange (ETDEWEB) Rjili, M., E-mail: medrjili@yahoo.fr [Laboratoire de Physique de la Matiere Molle et de la Modelisation Electromagnetique, Faculte des Sciences de Tunis, Universite Tunis El Manar, 2092 El Manar Tunis (Tunisia); Marcerou, J.P., E-mail: marcerou@crpp-bordeaux.cnrs.fr [Centre de Recherches Paul Pascal, 115, Av. Albert-Schweitzer, 33600 Pessac (France); Gharbi, A.; Othman, T. [Laboratoire de Physique de la Matiere Molle et de la Modelisation Electromagnetique, Faculte des Sciences de Tunis, Universite Tunis El Manar, 2092 El Manar Tunis (Tunisia) 2013-02-01 The polymorphism of tilted chiral smectic liquid crystals is remarkably rich and encompasses many subphases such as SmCA*; SmCFi1*; SmCFi2*; SmC*; SmCα*. The continuum theory established by Marcerou (2010) is used to derive an expression for the free energy density of those subphases. The minimization of this free energy is obtained through a combination of analytical and numerical methods. It leads to a phase diagram built in the (α,η) plane, where α is a local angular parameter and η describes the variation of temperature. From this graphical representation, many experimentally observed phase sequences of ferroelectric liquid crystals can be explained, including those containing recently observed subphases such as the SmC5* and SmC6* ones. However, it should be emphasized that the details of the predicted phase diagram are strongly dependent on the compound studied. 2. Calculation of Fe-B-V ternary phase diagram Czech Academy of Sciences Publication Activity Database Homolová, V.; Kroupa, Aleš; Výrostková, A. 2012-01-01 Roč. 520, APR (2012), s.
30-35 ISSN 0925-8388 R&D Projects: GA ČR(CZ) GAP108/10/1908 Institutional support: RVO:68081723 Keywords: phase diagram * thermodynamic modelling Subject RIV: BJ - Thermodynamics Impact factor: 2.390, year: 2012 3. Calculation of binary phase diagrams between the actinide elements, rare earth elements, and transition metal elements International Nuclear Information System (INIS) Selle, J.E. 1992-01-01 Attempts were made to apply the Kaufman method of calculating binary phase diagrams to the calculation of binary phase diagrams between the rare earths, actinides, and the refractory transition metals. Difficulties were encountered in applying the method to the rare earths and actinides, and modifications were necessary to provide accurate representation of known diagrams. To calculate the interaction parameters for rare earth-rare earth diagrams, it was necessary to use the atomic volumes for each of the phases: liquid, body-centered cubic, hexagonal close-packed, and face-centered cubic. Determination of the atomic volumes of each of these phases for each element is discussed in detail. In some cases, empirical means were necessary. Results are presented on the calculation of rare earth-rare earth, rare earth-actinide, and actinide-actinide diagrams. For rare earth-refractory transition metal diagrams and actinide-refractory transition metal diagrams, empirical means were required to develop values for the enthalpy of vaporization for rare earth elements and values for the constant (C) required when intermediate phases are present. Results of using the values determined for each element are presented. 4. Regularization dependence on phase diagram in Nambu–Jona-Lasinio model International Nuclear Information System (INIS) Kohyama, H.; Kimura, D.; Inagaki, T. 2015-01-01 We study the regularization dependence of meson properties and the phase diagram of quark matter using the two-flavor Nambu–Jona-Lasinio model.
The model is also parameter dependent within each regularization, so we explicitly give the model parameters for several sets of input observables and then investigate their effect on the phase diagram. We find that the location, and even the existence, of the critical end point depends strongly on the regularization method and the model parameters. The regularization and parameters must therefore be chosen carefully when one investigates the QCD critical end point in effective-model studies. 5. Temperature-field phase diagram of extreme magnetoresistance. Science.gov (United States) Fallah Tafti, Fazel; Gibson, Quinn; Kushwaha, Satya; Krizan, Jason W; Haldolaarachchige, Neel; Cava, Robert Joseph 2016-06-21 The recent discovery of extreme magnetoresistance (XMR) in LaSb introduced lanthanum monopnictides as a new platform to study this effect in the absence of broken inversion symmetry or protected linear band crossing. In this work, we report XMR in LaBi. Through a comparative study of magnetotransport effects in LaBi and LaSb, we construct a temperature-field phase diagram with triangular shape that illustrates how a magnetic field tunes the electronic behavior in these materials. We show that the triangular phase diagram can be generalized to other topological semimetals with different crystal structures and different chemical compositions. By comparing our experimental results to band structure calculations, we suggest that XMR in LaBi and LaSb originates from a combination of compensated electron-hole pockets and a particular orbital texture on the electron pocket. Such orbital texture is likely to be a generic feature of various topological semimetals, giving rise to their small residual resistivity at zero field and subject to strong scattering induced by a magnetic field. 6.
Electron Number-Based Phase Diagram of Pr1−xLaCexCuO4−δ and Possible Absence of Disparity between Electron- and Hole-Doped Cuprate Phase Diagrams Science.gov (United States) Song, Dongjoon; Han, Garam; Kyung, Wonshik; Seo, Jeongjin; Cho, Soohyun; Kim, Beom Seo; Arita, Masashi; Shimada, Kenya; Namatame, Hirofumi; Taniguchi, Masaki; Yoshida, Y.; Eisaki, H.; Park, Seung Ryong; Kim, C. 2017-03-01 We performed annealing and angle-resolved photoemission spectroscopy studies on the electron-doped cuprate Pr1−xLaCexCuO4−δ (PLCCO). It is found that the optimal annealing condition depends on the Ce content x. The electron number (n) is estimated from the experimentally obtained Fermi surface volume for x = 0.10, 0.15, and 0.18 samples. It clearly shows a significant, annealing-dependent deviation from the nominal x. In addition, we observe that the pseudogap at the hot spots is also closely correlated with n; the pseudogap gradually closes as n increases. We establish a new phase diagram of PLCCO as a function of n. Unlike the x-based one, the new phase diagram shows antiferromagnetic and superconducting phases similar to those of hole-doped cuprates. Our results raise the possibility that there is no disparity between the phase diagrams of electron- and hole-doped cuprates. 7. Phase diagram of dilute nuclear matter: Unconventional pairing and the BCS-BEC crossover Energy Technology Data Exchange (ETDEWEB) Stein, Martin; Sedrakian, Armen [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik 2013-07-01 We report on a comprehensive study of the phase structure of cold, dilute nuclear matter featuring a 3S1-3D1 condensate at nonzero isospin asymmetry, within wide ranges of temperatures and densities.
We find a rich phase diagram comprising three superfluid phases, namely a LOFF phase, the ordinary BCS phase, and a heterogeneous, phase-separated BCS phase, with associated crossovers from the latter two phases to a homogeneous or phase-separated Bose-Einstein condensate of deuterons. The phase diagram contains two tri-critical points (one a Lifshitz point), which may degenerate into a single tetra-critical point for some degree of isospin asymmetry. 8. Use of isoconcentrational phase diagrams for prediction of amorphization of binary systems International Nuclear Information System (INIS) Lazarev, A.I.; Belashchenko, D.K. 1992-01-01 A thermodynamic method for predicting the concentration ranges of amorphization in binary systems is considered, based on isoconcentrational diagrams of phase equilibria between the liquid and solid solutions of various crystal structures. To confirm the applicability of the thermodynamic criterion in practice, calculations of phase diagrams were carried out for complex binary eutectic systems (Hf-Be, Zr-Be) with known concentration ranges of amorphization. 9. Phase diagram for interacting Bose gases International Nuclear Information System (INIS) Morawetz, K.; Maennel, M.; Schreiber, M. 2007-01-01 We propose a modified form of the inversion method in terms of a self-energy expansion to access the phase diagram of the Bose-Einstein transition. The dependence of the critical temperature on the interaction parameter is calculated. This is discussed with the help of a condition for Bose-Einstein condensation in interacting systems which follows from the pole of the T matrix in the same way as from the divergence of the medium-dependent scattering length. A many-body approximation consisting of screened ladder diagrams is proposed, which describes the Monte Carlo data more appropriately.
The specific results are that a non-self-consistent T matrix leads to a leading-order linear coefficient of 4.7, the screened ladder approximation to 2.3, and the self-consistent T matrix, due to the effective mass, to a coefficient of 1.3, close to the Monte Carlo data. 10. Using reweighting and free energy surface interpolation to predict solid-solid phase diagrams Science.gov (United States) Schieber, Natalie P.; Dybeck, Eric C.; Shirts, Michael R. 2018-04-01 Many physical properties of small organic molecules depend on the current crystal packing, or polymorph, of the material, including the bioavailability of pharmaceuticals, the optical properties of dyes, and the charge transport properties of semiconductors. Predicting the most stable crystalline form at a given temperature and pressure requires determining the crystalline form with the lowest relative Gibbs free energy. Effective computational prediction of the most stable polymorph could save significant time and effort in the design of novel molecular crystalline solids or predict their behavior under new conditions. In this study, we introduce a new approach using multistate reweighting to address the problem of determining solid-solid phase diagrams and apply this approach to the phase diagram of solid benzene. For this approach, we perform sampling at a selection of temperature and pressure states in the region of interest. We use multistate reweighting methods to determine the reduced free energy differences between T and P states within a given polymorph and validate the resulting phase diagram using several measures. The relative stability of the polymorphs at the sampled states can then be interpolated from these points to create the phase diagram by combining the reduced free energy differences with a reference Gibbs free energy difference between polymorphs. The method also allows for straightforward estimation of uncertainties in the phase boundary.
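At fixed pressure, the phase-boundary construction described above reduces to locating the temperature at which the Gibbs free energy difference between two polymorphs changes sign. A toy sketch of that final step, with synthetic ΔG values standing in for the reweighted estimates (the linear ΔG model and all numbers are invented for illustration):

```python
def boundary_temperature(T, dG):
    """Return the temperature at which dG = G_A - G_B crosses zero
    (the A/B coexistence point), by linear interpolation between the
    first pair of grid points that bracket the sign change."""
    for i in range(len(T) - 1):
        if dG[i] == 0.0:
            return T[i]
        if dG[i] * dG[i + 1] < 0.0:
            return T[i] - dG[i] * (T[i + 1] - T[i]) / (dG[i + 1] - dG[i])
    return None  # no transition inside the sampled window

# Hypothetical free-energy differences on a temperature grid (made-up numbers)
T = [100.0 + 10.0 * k for k in range(21)]
dG = [0.01 * (t - 215.0) for t in T]   # stand-in for reweighted estimates
print(boundary_temperature(T, dG))     # ≈ 215.0
```

Repeating this scan over a grid of pressures traces out the T(P) coexistence line; the real method additionally propagates the statistical uncertainty of each ΔG estimate into the boundary.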
We also find that, when properly implemented, multistate reweighting for phase diagram determination scales better with the size of the system than previously estimated. 11. Models with short- and long-range interactions: the phase diagram and the reentrant phase International Nuclear Information System (INIS) Dauxois, Thierry; Lori, Leonardo; Ruffo, Stefano; De Buyl, Pierre 2010-01-01 We study the phase diagrams of two different Hamiltonians with competing local, nearest-neighbour, and mean-field couplings. The first example corresponds to the HMF Hamiltonian with an additional short-range interaction. The second example is a reduced Hamiltonian for dipolar layered spin structures, with a new feature with respect to the first example: the presence of anisotropies. The two examples are solved in both the canonical and the microcanonical ensemble using a combination of the min–max method with the transfer operator method. The phase diagrams present typical features of systems with long-range interactions: ensemble inequivalence, negative specific heat, and temperature jumps. Moreover, for a given range of parameters, we report the signature of phase reentrance. This can also be interpreted as the presence of azeotropy, with the creation of two first-order phase transitions with ensemble inequivalence, as one parameter is varied continuously. 12. Monte Carlo study of the phase diagram for the two-dimensional Z(4) model International Nuclear Information System (INIS) Carneiro, G.M.; Pol, M.E.; Zagury, N. 1982-05-01 The phase diagram of the two-dimensional Z(4) model on a square lattice is determined using a Monte Carlo method. The results of this simulation confirm the general features of the phase diagram predicted theoretically for the ferromagnetic case, and show the existence of a new phase with perpendicular order. (Author) 13.
Experimental measurement and prediction of (liquid + liquid + liquid) equilibrium for the system (n-hexadecane + water + triacetin) International Nuclear Information System (INIS) Revellame, Emmanuel D.; Holmes, William E.; Hernandez, Rafael; French, W. Todd; Forks, Allison; Ashe, Taylor; Estévez, L. Antonio 2016-01-01 Highlights: • The phase diagram for the system n-hexadecane + water + triacetin was established at T = 296.15 K and atmospheric pressure (0.1 MPa). • Both the NRTL and UNIQUAC activity coefficient models adequately predict the LLLE of the ternary system. • The phase equilibrium of the system is predominantly dictated by enthalpic contributions to the activity coefficient. - Abstract: The phase diagram for the ternary system containing (n-hexadecane + water + triacetin) was obtained experimentally at T = 296.15 K and ambient pressure. Results show that this system is of Type 3 according to the Treybal classification of ternary systems. NRTL and UNIQUAC interaction parameters were calculated from binary phase equilibrium values and were used to predict the (liquid + liquid + liquid) equilibrium (LLLE) region. Results indicated that both NRTL and UNIQUAC could predict the LLLE region of the system with similar precision, as indicated by comparable standard deviations. This indicates that the enthalpic contribution to the activity coefficient is predominant and that entropic contributions can be neglected. 14. Phase diagram of Fe1-xCox ultrathin film International Nuclear Information System (INIS) Fridman, Yu.A.; Klevets, Ph.N.; Voytenko, A.P. 2008-01-01 Concentration-driven reorientation phase transitions in ultrathin magnetic films of the FeCo alloy have been studied. It is established that, in addition to the easy-axis and easy-plane phases, a spatially inhomogeneous phase (domain structure), a canted phase, and an 'in-plane easy-axis' phase can exist in the system.
The realization of the last phase is associated with the competition between the single-ion anisotropy and the magnetoelastic interaction. The critical values of Co concentration corresponding to the phase transitions are evaluated, the types of phase transitions are determined, and the phase diagrams are constructed. 15. Cu–Ni nanoalloy phase diagram – Prediction and experiment Czech Academy of Sciences Publication Activity Database Sopoušek, J.; Vřešťál, J.; Pinkas, J.; Brož, P.; Buršík, Jiří; Stýskalík, A.; Škoda, D.; Zobač, O.; Lee, J. 2014-01-01 Roč. 45, June (2014), s. 33-39 ISSN 0364-5916 Institutional support: RVO:68081723 Keywords: nanoalloy * phase diagram * thermodynamic modeling Subject RIV: BJ - Thermodynamics Impact factor: 1.370, year: 2014 16. The QCD Phase Diagram: Large Nc, Quarkyonic Matter and the Triple Point International Nuclear Information System (INIS) McLerran, L. 2010-01-01 I discuss the phase diagram of QCD in the large Nc limit. Quarkyonic matter is described. The properties of QCD matter as measured in the abundance of produced particles are shown to be consistent with this phase diagram. A possible triple point of hadronic matter, deconfined matter, and quarkyonic matter is shown to explain various behaviors of ratios of particle abundances seen in CERN fixed-target experiments. (author) 17. Optimization of the thermodynamic properties and phase diagrams of P2O5-containing systems Science.gov (United States) Hudon, Pierre; Jung, In-Ho 2014-05-01 P2O5 is an important oxide component in the late-stage products of numerous igneous rocks such as granites and pegmatites. Typically, P2O5 combines with CaO and crystallizes in the form of apatite, while in volatile-free conditions, Ca-whitlockite is formed. In spite of their interest, the thermodynamic properties and phase diagrams of P2O5-containing systems are not yet well known.
In the case of pure P2O5, for example, no experimental thermodynamic data are available for the liquid and the O and O' solid phases. As a result, we re-evaluated all the thermodynamic and phase diagram data of the P2O5 unary system [1]. Optimization of the thermodynamic properties and phase diagrams of the binary P2O5 systems was then performed, including the Li2O-, Na2O-, MgO-, CaO-, BaO-, MnO-, FeO-, Fe2O3-, ZnO-, Al2O3-, and SiO2-P2O5 [2] systems. All available thermodynamic and phase equilibrium data were simultaneously reproduced in order to obtain a set of model equations for the Gibbs energies of all phases as functions of temperature and composition. In particular, the Gibbs energy of the liquid solution was described using the Modified Quasichemical Model [3-5] implemented in the FactSage software [6]. Thermodynamic modeling of the Li2O-Na2O-K2O-MgO-CaO-FeO-Fe2O3-Al2O3-SiO2 system, which includes many granite-forming minerals such as nepheline, leucite, pyroxene, melilite, feldspar, and spinel, is currently in progress. [1] Jung, I.-H., Hudon, P. (2012) Thermodynamic assessment of P2O5. J. Am. Ceram. Soc., 95 (11), 3665-3672. [2] Rahman, M., Hudon, P. and Jung, I.-H. (2013) A coupled experimental study and thermodynamic modeling of the SiO2-P2O5 system. Metall. Mater. Trans. B, 44 (4), 837-852. [3] Pelton, A.D. and Blander, M. (1984) Computer-assisted analysis of the thermodynamic properties and phase diagrams of slags. Proc. AIME Symp. Metall. Slags Fluxes, TMS-AIME, 281-294. [4] Pelton, A.D. and Blander, M. (1986) Thermodynamic analysis of ordered liquid solutions by a modified 18. Pseudo-ternary phase diagram in the Na2O-Na2O2-NaOH system International Nuclear Information System (INIS) Saito, Jun-ichi; Tendo, Masayuki; Aoto, Kazumi 1997-10-01 Phase diagrams are generally used to understand the state of compounds at a given temperature.
In order to understand the corrosion behavior of structural materials for FBRs caused by the main sodium compounds (Na2O, Na2O2, and NaOH), it is very important to comprehend the phase diagrams of these compounds. However, only the Na2O-NaOH pseudo-binary phase diagram had been investigated previously in this system. There are no studies of the other pseudo-binary or ternary phase diagrams in the Na2O-Na2O2-NaOH system. In this study, in order to clarify the states of these compounds at given temperatures, the pseudo-binary and ternary phase diagrams in the Na2O-Na2O2-NaOH system were prepared. A series of thermal analyses of the binary and ternary component systems was carried out using differential scanning calorimetry (DSC). The liquidus temperature and ternary eutectic temperatures were confirmed by these measurements. Beneficial indications for constructing the phase diagrams were obtained from these experiments. On the basis of these results, the interaction parameters between compounds, which were utilized for the Thermo-Calc calculations, were optimized. Thermo-Calc is a thermodynamic calculation software package. Consequently, accurate pseudo-binary and ternary phase diagrams were obtained using the optimized parameters. (author) 19. Phase diagram and transport properties for hydrogen-helium fluid planets International Nuclear Information System (INIS) Stevenson, D.J.; Salpeter, E.E. 1977-01-01 Hydrogen and helium are the major constituents of Jupiter and Saturn, and phase transitions can have important effects on the planetary structure. In this paper, the relevant phase diagrams and microscopic transport properties are analyzed in detail. The following paper (Paper II) applies these results to the evolution and present dynamic structure of the Jovian planets. Pure hydrogen is first discussed, especially the nature of the molecular-metallic transition and the melting curves for the two phases.
It is concluded that at the temperatures and pressures of interest (T ≈ 10^4 K, P ≈ 1-10 Mbar), both phases are fluid, but the transition between them might nevertheless be first-order. The insulator-metal transition in helium occurs at a much higher pressure (≈ 70 Mbar) and is not of interest. The phase diagrams for both molecular and metallic hydrogen-helium mixtures are discussed. In the metallic mixture, calculations indicate a miscibility gap for T ≲ 10^4 K. Immiscibility in the molecular mixture is more difficult to predict but almost certainly occurs at much lower temperatures. A fluid-state model is constructed which predicts the likely topology of the three-dimensional phase diagram. The greater solubility of helium in the molecular phase leads to the prediction that the He/H mass ratio is typically twice as large in the molecular phase as in the coexisting metallic phase. Under these circumstances a "density inversion" is possible, in which the molecular phase becomes denser than the metallic phase. The partitioning of minor constituents is also considered: the deuterium/hydrogen mass ratio is essentially the same for all coexisting hydrogen-helium phases, at least for T ≳ 5000 K. The partitioning of H2O, CH4, and NH3 probably favors the molecular (or helium-rich) phase. Substances with high conduction electron density (e.g., Al) may partition into the metallic phase. 20. Lattice parameter values and phase diagram for the Cu2Zn1-zFezGeSe4 alloy system International Nuclear Information System (INIS) Caldera, D.; Quintero, M.; Morocoima, M.; Quintero, E.; Grima, P.; Marchan, N.; Moreno, E.; Bocaranda, P.; Delgado, G.E.; Mora, A.E.; Briceno, J.M.; Fernandez, J.L. 2008-01-01 X-ray powder diffraction and differential thermal analysis (DTA) measurements were made on polycrystalline samples of the Cu2Zn1-zFezGeSe4 alloy system.
The diffraction patterns were used to show the equilibrium conditions and to estimate lattice parameter values. It was found that, at room temperature, a single-phase solid solution with the tetragonal stannite α structure (I-42m) occurs across the whole composition range. The DTA thermograms were used to construct the phase diagram of the Cu2Zn1-zFezGeSe4 alloy system. It was confirmed that the Cu2ZnGeSe4 compound melts incongruently. It was observed that undercooling effects occur for samples with z > 0.9. 1. The phase diagram of ammonium nitrate Science.gov (United States) Chellappa, Raja S.; Dattelbaum, Dana M.; Velisavljevic, Nenad; Sheffield, Stephen 2012-08-01 The pressure-temperature (P-T) phase diagram of ammonium nitrate (AN) [NH4NO3] has been determined using synchrotron x-ray diffraction (XRD) and Raman spectroscopy measurements. Phase boundaries were established by characterizing the transitions to the high-temperature polymorphs during multiple P-T measurements using both XRD and Raman spectroscopy. At room temperature, the ambient-pressure orthorhombic (Pmmn) AN-IV phase was stable up to 45 GPa and no phase transitions were observed. The AN-IV phase was also observed to be stable over a large P-T phase space. The phase boundaries are steep, with a small stability regime for the high-temperature phases. A P-V-T equation of state based on a high-temperature Birch-Murnaghan formalism was obtained by simultaneously fitting the P-V isotherms at 298, 325, 446, and 467 K, thermal expansion data at 1 bar, and volumes from P-T ramping experiments. Anomalous thermal expansion behavior of AN was observed at high pressure, with a modest negative thermal expansion in the 3-11 GPa range for temperatures up to 467 K. The role of vibrational anharmonicity in this anomalous thermal expansion behavior has been established using high P-T Raman spectroscopy. 2.
Pourbaix Diagrams at Elevated Temperatures: A Study of Zinc and Tin Science.gov (United States) Palazhchenko, Olga Metals in industrial settings such as power plants are often subjected to high-temperature, high-pressure aqueous environments, where failure to control corrosion compromises worker and environmental safety. For instance, Zircaloy (1.2-1.7 wt.% Sn) fuel rods are exposed to aqueous 250-310 °C coolant in CANDU reactors. The Pourbaix (EH-pH) diagram is a plot of electrochemical potential versus pH, which shows the domains of various metal species and, by inference, corrosion susceptibility. Elevated-temperature data for tin(II) and tin(IV) species were obtained using solid-aqueous phase equilibria with the respective oxides, in a batch vessel with in-situ pH measurement. Solubilities, determined via spectroscopic techniques, were used to calculate equilibrium constants and the Gibbs energies of Sn complexes for E-pH diagram construction. The SnOH3+ and Sn(OH)5- species were incorporated, for the first time, into the 298.15 K and 358.15 K diagrams, with novel G° values determined at 358.15 K. Key words: Pourbaix diagrams, EH-pH, elevated temperatures, solubility, equilibrium, metal oxides, hydrolysis, redox potential, pH, thermochemical data, tin, zinc, Zircaloy, corrosion, passivity. 3. From MIPS to Vicsek: A comprehensive phase diagram for self-propelled rods Science.gov (United States) Shi, Xiaqing Self-propelled rods interacting by volume exclusion constitute one of the simplest active matter systems. Despite years of effort, no comprehensive picture of their phase diagram is available. Furthermore, results on explicit rods are so far largely disconnected from those obtained for the relatively better understood cases of motility-induced phase separation (MIPS) of (usually) isotropic active particles, and from our current knowledge of Vicsek-style aligning point particles.
In this talk, I will present a complete phase diagram of a generic model of self-propelled rods and show how it is connected to both the MIPS and Vicsek worlds. 4. Dynamic phase transition in the kinetic spin-3/2 Blume-Capel model: Phase diagrams in the temperature and crystal-field interaction plane International Nuclear Information System (INIS) Keskin, Mustafa; Canko, Osman; Deviren, Bayram 2007-01-01 We analyze, within a mean-field approach, the stationary states of the kinetic spin-3/2 Blume-Capel (BC) model under Glauber-type stochastic dynamics, subject to a time-dependent oscillating external magnetic field. The dynamic phase transition (DPT) points are obtained by investigating the behavior of the dynamic magnetization as a function of temperature, as well as by calculating the Lyapunov exponent. Phase diagrams are constructed in the temperature and crystal-field interaction plane. We find five fundamental types of phase diagrams for different values of the reduced magnetic field amplitude parameter (h), which present a disordered phase, two ordered phases, and coexistence regions. The phase diagrams also exhibit a dynamic double-critical end point for 0 < h < 5.06. 5. Phase diagram study of a dimerized spin-S zig–zag ladder International Nuclear Information System (INIS) Matera, J M; Lamas, C A 2014-01-01 The phase diagram of a frustrated spin-S zig–zag ladder is studied through different numerical and analytical methods. We show that for arbitrary S, there is a family of Hamiltonians for which a fully-dimerized state is an exact ground state, the Majumdar–Ghosh point corresponding to a particular member of the family. We show that the system presents a transition between a dimerized phase and a Néel-like phase for S = 1/2, and that spiral phases can appear for large S. The phase diagram is characterized by means of a generalization of the usual mean-field approximation.
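As a schematic companion to the kinetic Blume-Capel record above: in the simplest mean-field, Ising-like (spin-1/2) reduction, the Glauber dynamics of the magnetization under an oscillating field obeys dm/dt = -m + tanh[(m + h0·cos ωt)/T], and the dynamic order parameter Q is the magnetization averaged over one driving period. The sketch below implements this generic reduction, not the spin-3/2 model of the paper; all parameter values are illustrative:

```python
import math

def dynamic_order_parameter(T, h0, omega=0.1, cycles=50, dt=0.01):
    """Integrate dm/dt = -m + tanh((m + h0*cos(omega*t)) / T) by forward Euler
    and return Q, the magnetization averaged over the final driving period."""
    m, t = 0.8, 0.0                 # ordered initial condition
    period = 2 * math.pi / omega
    t_end = cycles * period
    q_acc, n_acc = 0.0, 0
    while t < t_end:
        m += dt * (-m + math.tanh((m + h0 * math.cos(omega * t)) / T))
        t += dt
        if t > t_end - period:      # accumulate over the last cycle only
            q_acc += m
            n_acc += 1
    return q_acc / max(n_acc, 1)
```

At low T and weak drive, Q stays near the spontaneous magnetization (the dynamic ordered phase); at high T the magnetization simply follows the field and Q ≈ 0 (the dynamic disordered phase). Scanning T and h0 for the sign and magnitude of Q is the mean-field analogue of the DPT phase diagrams described above.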
The novelty in the present implementation is to consider the most strongly coupled sites as the unit cell. The gap and the excitation spectrum are analyzed through the random phase approximation. A perturbative treatment for obtaining the critical points is also discussed. Comparisons of the results with numerical methods such as the density matrix renormalization group are also presented. (paper) 6. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography. Science.gov (United States) Jørgensen, J S; Sidky, E Y 2015-06-13 We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT, for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. 7. P-T-x phase diagrams of MeF-UF4 (Me = Li-Cs) systems International Nuclear Information System (INIS) Korenev, Yu.M.; Rykov, A.N.; Varkov, M.V.; Novoselova, A.V.
1988-01-01 Vapor composition and total pressure at three-phase equilibria in the MeF-UF4 (Me = Li-Cs) systems are calculated using the values of independent-component activities obtained earlier, together with the data on fusibility diagrams. P-T and T-x projections of the phase diagrams of these systems are constructed. 8. Finite size and Coulomb corrections: from nuclei to the nuclear liquid-vapor phase diagram International Nuclear Information System (INIS) Moretto, L.G.; Elliott, J.B.; Phair, L. 2003-01-01 In this paper we consider the problem of obtaining the phase diagram of infinite symmetric uncharged nuclear matter from a thermal nuclear reaction. In the first part we consider the Coulomb interaction which, because of its long range, makes the definition of phases problematic. This Coulomb effect seems truly devastating, since it does not allow one to define nuclear phase transitions much above A ∼ 30. However, there may be a solution to this difficulty. If we consider the emission of particles with a sizable charge, we notice that a large Coulomb barrier Bc is present. For T << Bc these channels may be considered effectively closed. Consequently, the unbound channels may not play a role on a suitably short time scale. A phase transition may then still be definable in an approximate way. In the second part of the article we deal with the finite-size problem by means of a new method, the complement method, which permits a straightforward extrapolation to the infinite system. The complement approach consists of evaluating the change in free energy occurring when a particle or cluster is moved from one (finite) phase to another. In the case of a liquid drop in equilibrium with its vapor, this is done by extracting a vapor particle of any given size from the drop and evaluating the energy and entropy changes associated with both the vapor particle and the residual liquid drop (the complement). 9.
Phase Diagrams of the Aqueous Two-Phase Systems of Poly(ethylene glycol)/Sodium Polyacrylate/Salts Directory of Open Access Journals (Sweden) 2011-03-01 Full Text Available Aqueous two-phase systems consisting of polyethylene glycol (PEG), sodium polyacrylate (NaPAA), and a salt have been studied. The effects of the polymer size, salt type (NaCl, Na2SO4, sodium adipate, and sodium azelate), and salt concentration on the position of the binodal curve were investigated. The investigated PEG molecules had molar masses of 2,000 to 8,000 g/mol, while that of NaPAA was 8,000 g/mol. Experimental phase diagrams and tie lines, together with phase diagrams calculated from the Flory-Huggins theory of polymer solutions, are presented. Due to strong enthalpic and entropic balancing forces, the hydrophobicity of the added salt has a strong influence on the position of the binodal, which could be reproduced by model calculations. 10. Quarks and gluons in the phase diagram of quantum chromodynamics Energy Technology Data Exchange (ETDEWEB) Welzbacher, Christian Andreas 2016-07-14 In this dissertation we study the phase diagram of strongly interacting matter by approaching the theory of quantum chromodynamics with the functional approach of Dyson-Schwinger equations. With these quantum (field) equations of motion we calculate the non-perturbative quark propagator within the Matsubara formalism. We build on previous works and extend the so-called truncation scheme, which is necessary to render the infinite tower of Dyson-Schwinger equations finite, and study phase transitions of chiral symmetry and the confinement/deconfinement transition. In the first part of this thesis we discuss general aspects of quantum chromodynamics, introduce the Dyson-Schwinger equations in general, and present the quark Dyson-Schwinger equation together with its counterpart for the gluon. The Bethe-Salpeter equation, which is necessary to perform two-body bound state calculations, is also introduced.
A view on the phase diagram of quantum chromodynamics is given, including a discussion of order parameters for chiral symmetry and for confinement. Here we also discuss the dependence of the phase structure on the masses of the quarks. In the following we present the truncation and our results for an unquenched Nf = 2+1 calculation and compare them to previous studies. We highlight some complementary details for the quark and gluon propagator and discuss the resulting phase diagram, which is in agreement with previous work. Results for an equivalent of the Columbia plot and for the critical surface are discussed. A systematically improved truncation, in which the charm quark is added as a dynamical quark flavour, is presented in Ch. 4. An important aspect of this investigation is the proper adjustment of the scales. This is done by matching vacuum properties of the relevant pseudoscalar mesons separately for Nf = 2+1 and Nf = 2+1+1 via a solution of the Bethe-Salpeter equation. A comparison of the resulting Nf = 2+1 and Nf = 2+1+1 phase diagram indicates 11. Phase diagram of the Pr-P system International Nuclear Information System (INIS) Mironov, K.E. 1981-01-01 The region of the Pr-P system adjoining the Pr ordinate is mapped by the DTA method. The presence of a solid solution of P in Pr is established. Data on the thermal stability of PrP, PrP2, PrP5, and PrP7 are summarized. The diagram of phase transformations in the Pr-P system is constructed from the whole complex of data presented. A supposition is made on the possible formation of solid solutions between the highest polyphosphide and phosphorus. 12. First-Order Transitions and the Magnetic Phase Diagram of CeSb DEFF Research Database (Denmark) Lebech, Bente; Clausen, Kurt Nørgaard; Vogt, O. 1980-01-01 The high-temperature (14-17 K), low-magnetic-field (0-0.8 T) region of the phase diagram of the anomalous antiferromagnet CeSb has been reinvestigated by neutron diffraction in an attempt to locate a possible tricritical point. Previous neutron diffraction studies indicated that a tricritical point might exist in the magnetic phase diagram of CeSb at 16 K for a field of approximately 0.3 T. The present study concludes that the transitions from the paramagnetic to the magnetically ordered states are of first order for fields below 0.8 T. Within the experimental accuracy no change has been observed... 13. Pseudo-critical point in anomalous phase diagrams of simple plasma models Science.gov (United States) Chigvintsev, A. Yu; Iosilevskiy, I. L.; Noginova, L. Yu 2016-11-01 Anomalous phase diagrams in a subclass of simplified ("non-associative") Coulomb models are under discussion. The common feature of this subclass is the absence, by definition, of individual correlations between charges of opposite sign. Examples are the modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons, etc. In contrast to the ordinary OCP model on a non-compressible ("rigid") background, OCP(#), two new phase transitions with upper critical points, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the phase diagram in OCP(∼) becomes anomalous at high enough values of the ionic charge number Z. Namely, only one unified crystal-fluid phase transition, without a critical point, exists in OCP(∼) as a continuous superposition of melting and sublimation in the interval Z1 < Z < Z2, with pseudo-critical points at both boundary values Z = Z1 ≈ 35.5 and Z = Z2 ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic at both these pseudo-critical points. In this study we have improved our previous calculations and utilized a more complicated equation of state for the model components, provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941).
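The cubic critical isotherm noted for the pseudo-critical points above reflects the textbook critical-point conditions (∂P/∂V)_T = (∂²P/∂V²)_T = 0. As a generic illustration of solving those conditions symbolically (using the van der Waals fluid as a stand-in, not the OCP(∼) plasma equation of state itself):

```python
import sympy as sp

V, T, a, b, R = sp.symbols('V T a b R', positive=True)
P = R * T / (V - b) - a / V**2   # van der Waals isotherm (illustrative stand-in)

# Critical point: first and second volume derivatives vanish on the critical isotherm
sol = sp.solve([sp.diff(P, V), sp.diff(P, V, 2)], [V, T], dict=True)[0]
Vc, Tc = sol[V], sol[T]
Pc = sp.simplify(P.subs({V: Vc, T: Tc}))
print(Vc, Tc, Pc)   # the classic result: Vc = 3b, Tc = 8a/(27*R*b), Pc = a/(27*b**2)
```

At such a point the isotherm has a horizontal inflection, so the leading behavior of P - Pc is cubic in V - Vc, which is the sense in which a critical isotherm is "cubic".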
14. Phase diagram of the Ge-rich region of the Ba–Ge system and characterisation of single-phase BaGe4 International Nuclear Information System (INIS) Prokofieva, Violetta K.; Pavlova, Lydia M. 2014-01-01 Highlights: • The Ba-Ge phase diagram for the range 50-100 at.% Ge was constructed. • Single-phase BaGe4 grown by the Czochralski method was characterised. • A phenomenological model for a liquid-liquid phase transition is proposed. - Abstract: The Ba–Ge binary system has been investigated by several authors, but some uncertainties remain regarding phases with Ba/Ge ⩽ 2. The goal of this work was to resolve the uncertainty about the current phase diagram of Ba–Ge by performing DTA, X-ray powder diffraction, metallographic and chemical analyses, and measurements of the electrical conductivity and viscosity. The experimental Ba–Ge phase diagram over the composition range of 50-100 at.% Ge was constructed from the cooling curves, and single-phase BaGe4 grown by the Czochralski crystal-pulling method was characterised. Semiconducting BaGe4 crystallised peritectically from the liquid phase near the eutectic. In the liquid state, caloric effects were observed in the DTA curves at 1050 °C, where there are no definite phase lines in the Ba–Ge phase diagram. These effects are confirmed by significant changes in the viscosity and electrical conductivity of a Ba–Ge alloy of eutectic composition at this temperature. A phenomenological model based on two different approaches, a phase approach and a chemical approach, is proposed to explain the isothermal liquid-liquid phase transition observed in the Ba–Ge system from the Ge side. Our results suggest that this transition is due to peritectic reactions in the liquid phase. This reversible phase transition results in the formation of precursors of various metastable clathrate phases and is associated with sudden changes in the structure of Ba–Ge liquid alloys.
Characteristics of both first- and second-order phase transitions are observed. Charge transfer appears to play an important role in this transition. 15. The phase diagram of solid hydrogen at high pressure: A challenge for first principles calculations Science.gov (United States) 2015-03-01 We present comprehensive results for the high-pressure phase diagram of solid hydrogen. We focus on the energetically most favorable molecular and atomic crystal structures. To obtain the ground-state static enthalpy and phase diagram, we use semi-local and hybrid density functional theory (DFT) as well as diffusion quantum Monte Carlo (DMC) methods. The closure of the band gap with increasing pressure is investigated utilizing quasi-particle many-body calculations within the GW approximation. The dynamical phase diagram is calculated by adding proton zero-point energies (ZPE) to the static enthalpies. Density functional perturbation theory is employed to calculate the proton ZPE and the infrared and Raman spectra. Our results clearly demonstrate the failure of DFT-based methods to provide an accurate static phase diagram, especially when comparing insulating and metallic phases. Our dynamical phase diagram, obtained using fully many-body DMC calculations, shows that the molecular-to-atomic phase transition happens at the experimentally accessible pressure of 374 GPa. We claim that going beyond mean-field schemes to obtain derivatives of the total energy and to optimize crystal structures at the many-body level is crucial. This work was supported by the UK Engineering and Physical Sciences Research Council under Grant EP/I030190/1, and made use of computing facilities provided by HECToR and by the Imperial College London high performance computing centre. 16. Ferromagnetic quantum criticality: New aspects from the phase diagram of LaCrGe3 Science.gov (United States) Taufour, Valentin; Kaluarachchi, Udhara S.; Bud'ko, Sergey L.; Canfield, Paul C.
2018-05-01 Recent theoretical and experimental studies have shown that ferromagnetic quantum criticality is always avoided in clean systems. Two possibilities have been identified. In the first scenario, the ferromagnetic transition becomes first order at a tricritical point before being suppressed. A wing-structure phase diagram is observed, indicating the possibility of a new type of quantum critical point under magnetic field. In the second scenario, a transition to a modulated magnetic phase occurs. Our recent studies on the compound LaCrGe3 illustrate a third scenario in which not only does a new magnetic phase occur, but the order of the transition also changes at a tricritical point, leading to a wing-structure phase diagram. Careful experimental study of the phase diagram near the tricritical point also illustrates new rules near this type of point. 17. Uhlenbeck-Ford model: Phase diagram and corresponding-states analysis Science.gov (United States) Paula Leite, Rodolfo; Santos-Flórez, Pedro Antonio; de Koning, Maurice 2017-09-01 Using molecular dynamics simulations and nonequilibrium thermodynamic-integration techniques we compute the Helmholtz free energies of the body-centered-cubic (bcc), face-centered-cubic (fcc), hexagonal close-packed, and fluid phases of the Uhlenbeck-Ford model (UFM) and use the results to construct its phase diagram. The pair interaction associated with the UFM is characterized by an ultrasoft, purely repulsive pair potential that diverges logarithmically at the origin. We find that the bcc and fcc are the only thermodynamically stable crystalline phases in the phase diagram. Furthermore, we report the existence of two reentrant transition sequences as a function of the number density, one featuring a fluid-bcc-fluid succession and another displaying a bcc-fcc-bcc sequence near the triple point.
We find strong resemblances to the phase behavior of other soft, purely repulsive systems such as the Gaussian-core model (GCM), inverse-power-law, and Yukawa potentials. In particular, we find that the fcc-bcc-fluid triple point and the phase boundaries in its vicinity are in good agreement with the prediction supplied by a recently proposed corresponding-states principle [J. Chem. Phys. 134, 241101 (2011), 10.1063/1.3605659; Europhys. Lett. 100, 66004 (2012), 10.1209/0295-5075/100/66004]. The particularly strong resemblance between the behavior of the UFM and GCM models is also discussed. 18. Highly Accurate Calculations of the Phase Diagram of Cold Lithium Science.gov (United States) Shulenburger, Luke; Baczewski, Andrew The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpies are nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT-based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S.
DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000. 19. Dynamical Symmetries and Causality in Non-Equilibrium Phase Transitions Directory of Open Access Journals (Sweden) Malte Henkel 2015-11-01 Dynamical symmetries are of considerable importance in elucidating the complex behaviour of strongly interacting systems with many degrees of freedom. Paradigmatic examples are cooperative phenomena as they arise in phase transitions, where conformal invariance has led to enormous progress in equilibrium phase transitions, especially in two dimensions. Non-equilibrium phase transitions can arise in much larger portions of the parameter space than equilibrium phase transitions. The state of the art of recent attempts to generalise conformal invariance to a new generic symmetry, taking into account the different scaling behaviour of space and time, will be reviewed. Particular attention will be given to the causality properties as they follow for covariant n-point functions. These are important for the physical identification of n-point functions as responses or correlators. 20. Phase diagrams of NaOCl-Na3PO4-H2O ternary system. Jiaensosan natrium-rinsan natrium-mizu sanseibun soheiko jotaizu [Phase equilibrium diagram of the sodium hypochlorite-sodium phosphate-water ternary system] Energy Technology Data Exchange (ETDEWEB) Hamano, A. (Sasebo College of Technology, Nagasaki (Japan)); Kito, K. (Nippon Steel Chemical Co. Ltd., Tokyo (Japan)) 1993-10-30 Sodium hypochlorite (NaOCl) aqueous solution is widely used as a bleach. Attempts have been made to extract crystals from this aqueous solution, but only with difficulty because of the instability of the generated crystals. Forming a solid solution with Na3PO4 has been attempted as a method to stabilize the crystals. The present study has prepared a phase equilibrium diagram for the three components NaOCl-Na3PO4-H2O at 5 °C and 10 °C to elucidate this behaviour theoretically.
In the prepared diagram, the conjugate lines linking the saturated solution with the solution containing the solid phase did not cross at one point but were scattered along the solid-phase line. This makes clear that no double salt was formed; instead, a solid solution was produced. The mole ratio of NaOCl in the solid solution was 0.071 to 0.447 at 15 °C and 0.026 to 0.6 at 5 °C, indicating that solid solutions with a wider composition range than previously reported in the literature were produced. 8 refs., 2 figs., 2 tabs. 1. Thermodynamic study of CVD-ZrO2 phase diagrams Energy Technology Data Exchange (ETDEWEB) Torres-Huerta, A.M., E-mail: atorresh@ipn.m [Research Center for Applied Science and Advanced Technology, Altamira-IPN, Altamira C.P.89600 Tamaulipas (Mexico); Vargas-Garcia, J.R. [Dept of Metallurgical Eng., ESIQIE-IPN, Mexico 07300 D.F. (Mexico); Dominguez-Crespo, M.A. [Research Center for Applied Science and Advanced Technology, Altamira-IPN, Altamira C.P.89600 Tamaulipas (Mexico); Romero-Serrano, J.A. [Dept of Metallurgical Eng., ESIQIE-IPN, Mexico 07300 D.F. (Mexico) 2009-08-26 Chemical vapor deposition (CVD) of zirconium oxide (ZrO2) from zirconium acetylacetonate, Zr(acac)4, has been thermodynamically investigated using the Gibbs free energy minimization method and the FACTSAGE program. The thermodynamic data Cp°, ΔH°, and S° for Zr(acac)4 were estimated using the Meghreblian-Crawford-Parr and Benson methods because they are not available in the literature. The effect of deposition parameters, such as temperature and pressure, on the extent of the region where pure ZrO2 can be deposited was analyzed. The results are presented as calculated CVD stability diagrams. The phase diagrams show two zones: one corresponds to the pure monoclinic phase of ZrO2, and the other to a mixture of monoclinic ZrO2 and graphitic carbon. 2.
Temperature-dependent pitch and phase diagram for incommensurate XY spins in a slab geometry International Nuclear Information System (INIS) Collins, M.; Saslow, W.M. 1996-01-01 Strain-engineered Heisenberg antiferromagnets have recently been produced by controlling the layer thickness of MnSe/ZnTe superlattices. Neutron-scattering studies reveal a spiral that tends to untwist with increasing temperature. To simulate this system, we employ an XY model with nearest- and second-nearest-neighbor antiferromagnetic interactions. The bulk mean-field phase diagram has four possible phases, for the full range of the exchange constants. Monte Carlo calculations are performed for a slab geometry, using an algorithm that allows the system to choose incommensurate boundary conditions. The phase diagram is constructed by monitoring the spiral pitch as a function of temperature for a range of exchange constants. For appropriate exchange constants, good agreement is obtained with experiment. From the mean-field phase diagram it appears that strain engineering an NaCl structure in a superlattice configuration might produce a type of spiral phase, and an associated antiferromagnetic-to-spiral phase transition. copyright 1996 The American Physical Society 3. Studies on the phase diagram of the Pb-Mo-O system International Nuclear Information System (INIS) Aiswarya, P.M.; Ganesan, Rajesh; Gnanasekaran, T. 2014-01-01 Liquid lead and lead-bismuth eutectic (LBE) alloy are considered as spallation target and coolant in accelerator-driven systems and as candidate coolants in advanced nuclear reactors. Corrosion of the structural steel components in these liquid metal coolants can be minimized by the in situ formation of a passive oxide layer on the steel surface under controlled oxygen concentration.
A detailed knowledge of the phase diagrams of the Pb-M-O and Bi-M-O (M = Fe, Cr, Mo) systems and data on the thermochemical properties of the ternary compounds of these systems are required for a better understanding of the composition and stability of these passive oxide films. In the present work, studies have been carried out to establish the ternary phase diagram of the Pb-Mo-O system. 4. Experimental determination of the phase diagram of the system sodium-sodium hydride up to 900 °C and hydrogen pressures up to 800 bar International Nuclear Information System (INIS) Klostermeier, W. 1978-01-01 In the present work part of the sodium-sodium hydride phase diagram has been studied at high temperatures (up to 900 °C) and high hydrogen pressures (up to 1000 bar). The absorption isotherms recorded at temperatures between 650 °C and 900 °C show an increase in hydride solubility in sodium from 5.5 mol% at 650 °C to 19 mol% at 900 °C. The melting point of sodium hydride has been measured, giving a value of 632 °C at a hydrogen equilibrium pressure of 106 bar. In the miscibility gap region, the plateau equilibrium pressure, which is independent of composition, and its temperature dependence have been obtained. The enthalpy and entropy of melting are determined. (GSCH) 5. Canonical phase diagrams of the 1D Falicov-Kimball model at T = 0 Science.gov (United States) Gajek, Z.; Jȩdrzejewski, J.; Lemański, R. 1996-02-01 The Falicov-Kimball model of spinless quantum electrons hopping on a one-dimensional lattice and of immobile classical ions occupying some lattice sites, with only intrasite coupling between those particles, has been studied at zero temperature by means of well-controlled numerical procedures. For selected values of the unique coupling parameter U, restricted phase diagrams (based on all the periodic configurations of the localized particles (ions) with period not greater than 16 lattice constants, typically) have been constructed in the grand-canonical ensemble.
Then these diagrams have been translated into the canonical ensemble. Compared to the diagrams obtained in other studies, ours contain more detail; in particular, they give better insight into the way mixtures of periodic phases are formed. Our study has revealed several families of new characteristic phases, like the generalized most homogeneous and the generalized crenel phases, a first example of a structural phase transition, and a tendency to build up an additional symmetry - the hole-particle symmetry with respect to the ions (electrons) only - as U decreases. 6. The phase diagrams of a ferromagnetic thin film in a random magnetic field Energy Technology Data Exchange (ETDEWEB) 2016-10-07 In this paper, the magnetic properties and the phase diagrams of a ferromagnetic thin film of thickness N in a random magnetic field (RMF) are investigated by using the Monte Carlo simulation technique based on the Metropolis algorithm. The effects of the RMF and the surface exchange interaction on the critical behavior are studied. A variety of multicritical points such as tricritical points, isolated critical points, and triple points are obtained. It is also found that the double reentrant phenomenon can appear for appropriate values of the system parameters. - Highlights: • Phase diagrams of a ferromagnetic thin film are examined by Monte Carlo simulation. • The effect of the random magnetic field on the magnetic properties is studied. • Different types of phase diagrams are obtained. • The dependence of the magnetization and susceptibility on temperature is investigated. 7. Bose-Einstein Condensation of Long-Lifetime Polaritons in Thermal Equilibrium.
Science.gov (United States) Sun, Yongbao; Wen, Patrick; Yoon, Yoseob; Liu, Gangqiang; Steger, Mark; Pfeiffer, Loren N; West, Ken; Snoke, David W; Nelson, Keith A 2017-01-06 The experimental realization of Bose-Einstein condensation (BEC) with atoms and quasiparticles has triggered wide exploration of macroscopic quantum effects. Microcavity polaritons are of particular interest because quantum phenomena such as BEC and superfluidity can be observed at elevated temperatures. However, polariton lifetimes are typically too short to permit thermal equilibration. This has led to debate about whether polariton condensation is intrinsically a nonequilibrium effect. Here we report the first unambiguous observation of BEC of optically trapped polaritons in thermal equilibrium in a high-Q microcavity, evidenced by equilibrium Bose-Einstein distributions over broad ranges of polariton densities and bath temperatures. With thermal equilibrium established, we verify that polariton condensation is a phase transition with a well-defined density-temperature phase diagram. The measured phase boundary agrees well with the predictions of basic quantum gas theory. 8. High-pressure phase diagrams of liquid CO2 and N2 Science.gov (United States) Boates, Brian; Bonev, Stanimir 2011-06-01 The phase diagrams of liquid CO2 and N2 have been investigated using first-principles theory. Both materials exhibit transitions to conducting liquids at high temperatures (T) and relatively modest pressures (P). Furthermore, both liquids undergo polymerization phase transitions at pressures comparable to their solid counterparts. The liquid phase diagrams have been divided into several regimes through a detailed analysis of changes in bonding, as well as structural and electronic properties, for pressures and temperatures up to 200 GPa and 10 000 K, respectively. Similarities and differences between the high-P and T behavior of these fluids will be discussed.
Calculations of the Hugoniot are in excellent agreement with available experimental data. Work supported by NSERC, LLNL, and the Killam Trusts. Prepared by LLNL under Contract DE-AC52-07NA27344. 9. Phase diagram and tricritical behavior of an Ising metamagnet in uniform and random fields International Nuclear Information System (INIS) Liang Yaqiu; Wei Guozhu; Xu Xiaojuan; Song Guoli 2010-01-01 A two-sublattice Ising metamagnet in both uniform and random fields is studied within the mean-field approach based on Bogoliubov's inequality for the Gibbs free energy. We show that the qualitative features of the phase diagrams depend on the parameters of the model and the uniform field values. A tricritical point and a reentrant phenomenon can be observed on the phase diagram. The reentrance is due to the competition between uniform and random interactions. 10. Quest for the QCD phase diagram in extreme environments Energy Technology Data Exchange (ETDEWEB) Fukushima, Kenji, E-mail: fuku@rk.phys.keio.ac.jp [Keio University, Department of Physics (Japan)] 2013-03-15 We review the state-of-the-art status of research on the phase diagram of QCD matter made of quarks and gluons. Our discussion particularly includes extreme environments such as high temperature, high baryon density, and strong magnetic fields. 11. Polariton condensation phase diagram in wide-band-gap planar microcavities: GaN versus ZnO Science.gov (United States) Jamadi, O.; Réveret, F.; Mallet, E.; Disseix, P.; Médard, F.; Mihailovic, M.; Solnyshkov, D.; Malpuech, G.; Leymarie, J.; Lafosse, X.; Bouchoule, S.; Li, F.; Leroux, M.; Semond, F.; Zuniga-Perez, J. 2016-03-01 The polariton condensation phase diagram is compared in GaN and ZnO microcavities grown on mesa-patterned silicon substrate.
Owing to a common platform, these microcavities share similar photonic properties, with large quality factors and low photonic disorder, which makes it possible to determine the optimal spot diameter and to realize a thorough phase diagram study. Both systems have been investigated under the same experimental conditions. The experimental results and the subsequent analysis clearly reveal that longitudinal optical phonons have no influence in the thermodynamic region of the condensation phase diagram, while they allow a strong (slight) decrease of the polariton lasing threshold in the trade-off zone (kinetic region). Phase diagrams are compared with numerical simulations using Boltzmann equations and are in satisfactory agreement. A lower polariton lasing threshold has been measured at low temperature in the ZnO microcavity, as is expected due to its larger Rabi splitting. This study highlights polariton relaxation mechanisms and their importance in polariton lasing. 12. Pseudo-critical point in anomalous phase diagrams of simple plasma models International Nuclear Information System (INIS) Chigvintsev, A Yu; Iosilevskiy, I L; Noginova, L Yu 2016-01-01 Anomalous phase diagrams in a subclass of simplified (“non-associative”) Coulomb models are discussed. The common feature of this subclass is the absence, by definition, of individual correlations between charges of opposite sign. Examples are the modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons, etc. In contrast to the ordinary OCP model on a non-compressible (“rigid”) background, OCP(#), two new phase transitions with upper critical points, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the phase diagram in OCP(∼) becomes anomalous at high enough values of the ionic charge number Z.
Namely, only one unified crystal-fluid phase transition without a critical point exists, as a continuous superposition of melting and sublimation, in OCP(∼) in the interval Z1 < Z < Z2. The most remarkable feature is the appearance of pseudo-critical points at both boundary values, Z = Z1 ≈ 35.5 and Z = Z2 ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic at both of these pseudo-critical points. In this study we have improved our previous calculations and utilized the more sophisticated equation of state for the model components provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941). (paper) 13. Multicritical phase diagrams of the antiferromagnetic spin-3/2 Blume-Capel model Energy Technology Data Exchange (ETDEWEB) Keskin, Mustafa [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)]. E-mail: keskin@erciyes.edu.tr; Ali Pinar, M. [Institute of Science, Erciyes University, 38039 Kayseri (Turkey); Erdinc, Ahmet [Department of Physics, Erciyes University, 38039 Kayseri (Turkey); Canko, Osman [Department of Physics, Erciyes University, 38039 Kayseri (Turkey) 2006-04-24 The antiferromagnetic spin-3/2 Blume-Capel model in an external magnetic field is investigated, and the phase diagrams are obtained in detail by using the cluster variation method. The model exhibits distinct critical regions, including first-order and second-order transitions and special points: two double critical points, a critical end point, a tricritical point, and a zero-temperature critical point. A new phase diagram topology, not obtained previously, is also found. Comparison of the results with those of other studies on this and closely related systems is made.
15. Measurement of vapor-liquid-liquid phase equilibrium-Equipment and results DEFF Research Database (Denmark) Frost, Michael Grynnerup; von Solms, Nicolas; Richon, Dominique 2015-01-01 There exists a need for new accurate and reliable experimental data, preferably with full characterization of all the phases present in equilibrium. The need for high-quality experimental phase equilibrium data is the case for the chemical industry in general. All areas deal with processes whose ... 16. A new experimental phase diagram investigation of Cu-Sb. Science.gov (United States) Fürtauer, Siegfried; Flandorfer, Hans The binary system Cu-Sb is a constituent system that is studied in investigations of technically important ternary and quaternary alloy systems (e.g., casting alloys and lead-free solders). Although this binary system has been thoroughly investigated over the last century, there are still some uncertainties regarding its high-temperature phases. Thus, parts of its phase diagram have been drawn with dashed lines in reviews published in the literature. The aim of this work was to resolve these uncertainties in the current phase diagram of Cu-Sb by performing XRD, SEM-EDX, EPMA, and DTA.
The results from thermal analysis agreed well with those given in the literature, although some modifications to the invariant reaction temperatures were necessary. In particular, reactions located on the Cu-rich side of the nonquenchable high-temperature β phase (BiF3-type) left considerable scope for interpretation. Generally, the structural descriptions of the various binary phases given in the literature were verified. The range of homogeneity of the ε phase (Cu3Ti type) was found to extend further on the Sb-rich side. Most of the reaction temperatures were verified, but a few had to be revised, such as the eutectoid reaction [Formula: see text] at 440 °C (found to occur at 427 °C in this work) and the eutectoid reaction [Formula: see text] at 400 °C (found to occur at 440 °C in this work). Further phase transformations that had previously only been estimated were confirmed, and their characteristic temperatures were determined. 17. PHASE DIAGRAM OF GELATINE-POLYURONATE COLLOIDS: ITS APPLICATION FOR MICROENCAPSULATION AND NOT ONLY Directory of Open Access Journals (Sweden) Alexei Baerle 2016-06-01 The phase state and the charge of colloidal particles in the gelatine-polyuronate system were studied. A method for the comparative evaluation of the molecular weight of colloids by means of viscosimetric measurements and electrophoresis was developed. It is shown that the diagram {Phase state = f(composition, pH)} contains six well-defined regions. The diagram explains and predicts the behaviour of protein-polysaccharide colloids, which are included in beverages or form the shells of oil-containing microcapsules. 18. Dynamic phase transition in the kinetic spin-3/2 Blume-Capel model: Phase diagrams in the temperature and crystal-field interaction plane Energy Technology Data Exchange (ETDEWEB) Keskin, Mustafa [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)].
E-mail: keskin@erciyes.edu.tr; Canko, Osman [Department of Physics, Erciyes University, 38039 Kayseri (Turkey); Deviren, Bayram [Department of Physics, Erciyes University, 38039 Kayseri (Turkey) 2007-06-15 We analyze, within a mean-field approach, the stationary states of the kinetic spin-3/2 Blume-Capel (BC) model under Glauber-type stochastic dynamics and subject to a time-dependent oscillating external magnetic field. The dynamic phase transition (DPT) points are obtained by investigating the behavior of the dynamic magnetization as a function of temperature, as well as by calculating the Lyapunov exponent. Phase diagrams are constructed in the temperature and crystal-field interaction plane. We find five fundamental types of phase diagrams for different values of the reduced magnetic field amplitude parameter (h), presenting a disordered phase, two ordered phases, and coexistence phase regions. The phase diagrams also exhibit a dynamic double critical end point for 05.06. 19. Magnetic phase diagram of Ce2Fe17 under high pressures in high magnetic fields International Nuclear Information System (INIS) Ishikawa, Fumihiro; Goto, Tsuneaki; Fujii, Hironobu 2003-01-01 The magnetization of Ce2Fe17 was precisely measured under high pressures up to 1.2 GPa in magnetic fields up to 18 T. The magnetic phase diagram in the B-T plane is determined at 0, 0.3, 0.4, 0.6, 0.9 and 1.2 GPa. At 0 GPa, five magnetic phases exist, and the application of high pressure produces two additional magnetic phases. The shape of the phase diagram changes drastically with increasing pressure. 20. Tight-binding calculation of the Ti-Rh-type phase diagram International Nuclear Information System (INIS) Sluiter, M.; Turchi, P.; Fu Zezhong; de Fontaine, D. 1988-01-01 Tight-binding electronic band-structure calculations were combined with a free-energy expression from a statistical mechanical method called the cluster-variation method.
The effective pair interactions used in the cluster-variation calculation were evaluated by the generalized perturbation method. Only d orbitals were included, and the numbers of d electrons per atom were taken to be three for the pure A element and eight for the pure B. A phase diagram was constructed incorporating, for the first time, both fcc and bcc lattices and their simple-ordered superstructures. The calculated diagram agreed reasonably well with those determined empirically for Ti-Rh or Ti-Ir. 1. Investigation of binary solid phases by calorimetry and kinetic modelling NARCIS (Netherlands) Matovic, M. 2007-01-01 The traditional methods for the determination of liquid-solid phase diagrams are based on the assumption that overall equilibrium is established between the phases. However, the result of the crystallization of a liquid mixture will typically be a non-equilibrium or metastable state of the 2. Phase diagrams of the ternary alloy with a single-ion anisotropy in the mean-field approximation International Nuclear Information System (INIS) Dely, J.; Bobak, A. 2006-01-01 The phase diagram of the ABpC1-p ternary alloy consisting of Ising spins SA = 3/2, SB = 2, and SC = 5/2 is investigated by the use of a mean-field theory based on the Bogoliubov inequality for the Gibbs free energy. The effect of the single-ion anisotropy on the phase diagrams is discussed by changing the values of the parameters in the model Hamiltonian, and comparison is made with the recently reported finite-temperature phase diagrams for the ternary alloy having spin SB = 1. 3. Topological phase diagram of superconducting carbon nanotubes Energy Technology Data Exchange (ETDEWEB) Milz, Lars; Marganska-Lyzniak, Magdalena; Grifoni, Milena [Institut I - Theoretische Physik, Universitaet Regensburg (Germany)] 2016-07-01 The topological phase diagram of superconducting carbon nanotubes is discussed.
Under the assumption of a short-ranged pairing potential, there are two spin-singlet states: an s-wave and an exotic p + ip-wave that are possible because of the special structure of the honeycomb lattice. The consequences for the possible presence of Majorana edge states in carbon nanotubes are addressed. In particular, regions in the magnetic field-chemical potential plane possibly hosting localized Majorana modes are discussed. 4. Gd5(SixGe1−x)4 system – updated phase diagram International Nuclear Information System (INIS) Melikhov, Yevgen; Hadimani, R.L.; Raghunathan, Arun 2015-01-01 Gd5(SixGe1−x)4 with 0.41 < x < 0.503 has monoclinic and orthorhombic phases and a first order phase transition between the two. In this range, the magnetic moment vs. magnetic field (MH) isotherms measured just above the first order transition temperature carry information about all magnetic and structural transitions. Here, the Curie–Weiss law was applied to the paramagnetic portions of the MH isotherms, which allowed identification of the second order magnetic phase transition temperature of the monoclinic phase in a region where the second order transition does not occur due to the existence of the first order transition. The calculated second order phase transition temperatures of the monoclinic phase were added to the existing phase diagram. The completed magnetic-structural phase diagram now carries all the information, including the magnetic transition temperatures of both the monoclinic and orthorhombic phases. It was also found that the magnetic transition temperature of the monoclinic phase and the first order transition temperature are interrelated. - Highlights: • Magnetocaloric Gd5(SixGe1−x)4 for 0.41 < x < 0.503 was studied. • The first order phase transition suppresses the second order transition of the monoclinic phase. • The Curie–Weiss law and the Arrott plot technique were used to analyse M vs. H isotherms. • Second order phase transition temperatures of the monoclinic phase were estimated. • The magnetic-structural phase diagram of Gd5(SixGe1−x)4 for 0.41 < x < 0.503 was completed. 5.
Non-equilibrium physics at a holographic chiral phase transition Energy Technology Data Exchange (ETDEWEB) Evans, Nick; Kim, Keun-young [Southampton Univ. (United Kingdom). School of Physics and Astronomy; Kavli Institute for Theoretical Physics China, Beijing (China); Kalaydzhyan, Tigran; Kirsch, Ingo [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany) 2010-11-15 The D3/D7 system holographically describes an N=2 gauge theory which spontaneously breaks a chiral symmetry by the formation of a quark condensate in the presence of a magnetic field. At finite temperature it displays a first order phase transition. We study out of equilibrium dynamics associated with this transition by placing probe D7 branes in a geometry describing a boost-invariant expanding or contracting plasma. We use an adiabatic approximation to track the evolution of the quark condensate in a heated system and reproduce the phase structure expected from equilibrium dynamics. We then study solutions of the full partial differential equation that describes the evolution of out of equilibrium configurations to provide a complete description of the phase transition including describing aspects of bubble formation. (orig.) 6. Experimental investigation of the Al-Y phase diagram International Nuclear Information System (INIS) Liu Shuhong; Du Yong; Xu Honghui; He Cuiyun; Schuster, Julius C. 2006-01-01 The Al-Y phase diagram has been reinvestigated with 16 key alloys over its whole composition range by means of differential thermal analysis, X-ray diffraction, optical microscopy, and scanning electron microscopy with energy dispersive X-ray techniques. The existence of five intermetallic phases, Al3Y, Al2Y, AlY, Al2Y3, and AlY2, has been confirmed. Al2Y and Al2Y3 melt congruently at 1490 ± 2 and 1105 ± 2 deg. C, respectively. Al3Y, AlY, and AlY2 are formed via the peritectic reactions L + Al2Y ↔ Al3Y at 980 ± 2 deg. C, L + Al2Y ↔ AlY at 1138 ± 2 deg. C, and L + Al2Y3 ↔ AlY2 at 977 ± 2 deg. C, respectively. Three eutectic reactions, L ↔ (Al) + Al3Y at 637 ± 2 deg. C, L ↔ AlY + Al2Y3 at 1081 ± 2 deg. C, and L ↔ AlY2 + (αY) at 955 ± 2 deg. C, are observed. The previously reported Al3Y5 and AlY3 compounds were not found. A revised Al-Y phase diagram is presented, mainly based on the present experimental results. 7. Exploring the Clapeyron Equation and the Phase Rule Using a Mechanical Drawing Toy Science.gov (United States) Darvesh, Katherine V. 2013-01-01 The equilibrium between phases is a key concept from the introductory physical chemistry curriculum. Phase diagrams display which phase is the most stable at a given temperature and pressure. If more than one phase has the lowest Gibbs energy, then those phases are in equilibrium under those conditions. An activity designed to demonstrate the… 8. Phase Equilibrium, Chemical Equilibrium, and a Test of the Third Law: Experiments for Physical Chemistry. Science.gov (United States) Dannhauser, Walter 1980-01-01 Described is an experiment designed to provide an experimental basis for a unifying point of view (utilizing theoretical framework and chemistry laboratory experiments) for physical chemistry students. Three experiments are described: phase equilibrium, chemical equilibrium, and a test of the third law of thermodynamics. (Author/DS) 9. Phase diagrams and heterogeneous equilibria: a practical introduction CERN Document Server Predel, Bruno; Pool, Monte 2004-01-01 This graduate-level textbook provides an introduction to the practical application of phase diagrams. It is intended for students and researchers in chemistry, metallurgy, mineralogy, and materials science as well as in engineering and physics. Heterogeneous equilibria are described by a minimum of theory illustrated by practical examples and realistic case discussions from the different fields of application.
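The Clapeyron relation at the heart of the teaching activity described in the mechanical-drawing-toy entry above can be illustrated numerically. In its integrated Clausius–Clapeyron form for vaporization, ln(P2/P1) = −(ΔHvap/R)(1/T2 − 1/T1). The numbers below are standard textbook values for water, not taken from the article:

```python
import math

R = 8.314  # J/(mol K), gas constant

def clausius_clapeyron_pressure(P1, T1, T2, dH_vap):
    """Vapour pressure at T2 given a known point (P1, T1), assuming a
    temperature-independent dH_vap and ideal-gas vapour:
    ln(P2/P1) = -(dH_vap/R) * (1/T2 - 1/T1)."""
    return P1 * math.exp(-(dH_vap / R) * (1.0 / T2 - 1.0 / T1))

# Water: dH_vap ~ 40.7 kJ/mol, P = 101.325 kPa at the normal boiling point
P2 = clausius_clapeyron_pressure(101325.0, 373.15, 363.15, 40700.0)
print(f"Estimated vapour pressure at 363.15 K: {P2 / 1000:.1f} kPa")
```

The estimate (about 70 kPa at 90 °C) is close to the measured vapour pressure of water, which is as good as the constant-ΔH, ideal-gas assumptions allow.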
The treatment of the physical and energetic background of phase equilibria leads to the discussion of the thermodynamics of mixtures and the correlation between energetics and composition. Thus, tools for the prediction of energetic, structural, and physical quantities are provided. The authors treat the nucleation of phase transitions, the production and stability of technologically important metastable phases, and metallic glasses. Furthermore, the text also concisely presents the thermodynamics and composition of polymer systems. 10. Construction of the Al-Ni-Si phase diagram over the whole composition and temperature ranges: thermodynamic modeling supported by key experiments and first-principles calculations Energy Technology Data Exchange (ETDEWEB) Xiong Wei; Du Yong; Wang Jiong; Zhang Wei-Wei [State Key Lab. of Powder Metallurgy, Central South Univ., Changsha (China); Hu Rong-Xiang; Nash, P. [Thermal Processing Technology Center, Illinois Inst. of Tech., Chicago (United States); Lu Xiao-Gang [Thermo-Calc AB, Stockholm Technology Park, Stockholm (Sweden) 2008-06-15 An extensive thermodynamic investigation of the Al-Ni-Si system is carried out via an integrated approach of calculation of phase diagrams, first-principles calculations, and key experiments. Eighteen decisive alloys are prepared in order to verify the existence of the previously reported ternary compounds and to provide new phase equilibrium data. Phase compositions, microstructure, and phase transition temperatures are determined using the combined techniques of X-ray diffraction, scanning electron microscopy, energy dispersion X-ray analysis, and differential thermal analysis. The order/disorder transition between the disordered bcc A2 and ordered bcc B2 phases, as well as that between the disordered fcc A1 and ordered L12 phases, are described using a two-sublattice model.
A self-consistent parameter set is finally obtained by considering the huge amount of experimental data, including 13 vertical sections and 5 isothermal sections, from both the literature and the present experiments. Almost all of the reliable phase diagram data can be well described by the present modeling. The reliability of the calculated thermodynamic properties for ternary phases is verified through enthalpy measurement employing drop calorimetry and first-principles calculations. The thermodynamic parameters obtained can also successfully predict most of the thermodynamic properties and describe the solidification path for the selected as-cast alloy Al6Ni55Si39. (orig.) 11. Phase diagram of a modified Lennard-Jones system International Nuclear Information System (INIS) Sakagami, Takahiro; Fuchizaki, Kazuhiro 2010-01-01 The well-known Lennard-Jones potential is modified in such a way that it smoothly vanishes at a certain distance. A system whose interparticle interaction is given by such a potential is referred to as a modified Lennard-Jones system, and serves as a standard system for describing simple solids and fluids. A phase diagram is determined based on the free energies obtained through thermodynamic integration. 12. Thermodynamics and phase diagrams of the plutonium-uranium, uranium-zirconium, plutonium-zirconium and plutonium-uranium-zirconium systems International Nuclear Information System (INIS) Agarwal, R.; Venugopal, V. 2004-05-01 Thermodynamic and phase diagram data reported in the literature for the binaries Pu-U, Pu-Zr and U-Zr were compiled and optimised to calculate the Gibbs energies of all the binary phases of these systems. The Lukas program was used to carry out these optimisations, in which the thermodynamic and phase diagram data of all the binary phases of a binary system were optimised simultaneously.
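A common way to make the Lennard-Jones potential "smoothly vanish at a certain distance", as in the modified Lennard-Jones entry above, is a shifted-force cutoff, in which both the potential and its first derivative go to zero at the cutoff radius. The sketch below uses that construction; it is one standard choice and not necessarily the specific modification adopted by the paper's authors:

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_shifted_force(r, eps=1.0, sigma=1.0, rc=2.5):
    """Shifted-force LJ: U_sf(r) = U(r) - U(rc) - (r - rc) U'(rc) for r < rc,
    and 0 beyond, so the potential and the force are both continuous at rc."""
    r = np.asarray(r, dtype=float)
    sr6c = (sigma / rc) ** 6
    u_c = 4.0 * eps * (sr6c * sr6c - sr6c)                 # U(rc)
    du_c = -24.0 * eps * (2.0 * sr6c * sr6c - sr6c) / rc   # U'(rc)
    u = lj(r, eps, sigma) - u_c - (r - rc) * du_c
    return np.where(r < rc, u, 0.0)

r = np.array([1.0, 2.0, 2.5, 3.0])
print(lj_shifted_force(r))  # zero at and beyond the cutoff rc
```

Free energies for such a system can then be obtained by thermodynamic integration from a reference state, as the abstract describes, since the smooth cutoff removes the impulsive force at rc that a plain truncation would introduce.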
Gibbs energy sets thus calculated were used to compare our results with the experimental and calculated phase diagram and thermodynamic data reported in the literature. The Gibbs energies of the binary systems were then compiled together to define the Pu-U-Zr ternary system. (author) 13. Magnetic phase diagram of a frustrated spin ladder Science.gov (United States) Sugimoto, Takanori; Mori, Michiyasu; Tohyama, Takami; Maekawa, Sadamichi 2018-04-01 Frustrated spin ladders show magnetization plateaux depending on the rung-exchange interaction and the frustration defined by the ratio of first and second neighbor exchange interactions in each chain. This paper reports on its magnetic phase diagram. Using the variational matrix-product state method, we accurately determine phase boundaries. Several kinds of magnetization plateaux are induced by the frustration and the strong correlation among quasiparticles on a lattice. The appropriate description of quasiparticles and their relevant interactions changes with the magnetic field. We find that the frustration differentiates the triplet quasiparticle from the singlet one in kinetic energy. 14. First-principles interatomic potentials for transition-metal aluminides. III. Extension to ternary phase diagrams Science.gov (United States) Widom, Mike; Al-Lehyani, Ibrahim; Moriarty, John A. 2000-08-01 Modeling structural and mechanical properties of intermetallic compounds and alloys requires detailed knowledge of their interatomic interactions. The first two papers of this series [Phys. Rev. B 56, 7905 (1997); 58, 8967 (1998)] derived first-principles interatomic potentials for transition-metal (TM) aluminides using generalized pseudopotential theory (GPT). Those papers focused on binary alloys of aluminum with first-row transition metals and assessed the ability of GPT potentials to reproduce and elucidate the alloy phase diagrams of Al-Co and Al-Ni.
This paper addresses the phase diagrams of the binary alloy Al-Cu and the ternary systems Al-Co-Cu and Al-Co-Ni, using GPT pair potentials calculated in the limit of vanishing transition-metal concentration. Despite this highly simplifying approximation, we find rough agreement with the known low-temperature phase diagrams up to 50% total TM concentration, provided the Co fraction is below 25%. Full composition-dependent potentials and many-body interactions would be required to correct deficiencies at higher Co concentration. Outside this troublesome region, the experimentally determined stable and metastable phases all lie on or near the convex hull of a scatter plot of energy versus composition. We verify, qualitatively, reported solubility ranges extending binary alloys into the ternary diagram in both Al-Co-Cu and Al-Co-Ni. Finally, we reproduce previously conjectured transition-metal positions in the decagonal quasicrystal phase. 15. Equilibrium and non-equilibrium extraction separation of rare earth metals in presence of diethylenetriaminepentaacetic acid in aqueous phase International Nuclear Information System (INIS) Azis, Abdul; Teramoto, Masaaki; Matsuyama, Hideto. 1995-01-01 Equilibrium and non-equilibrium extraction separations of rare earth metals were carried out in the presence of a chelating agent in the aqueous phase. The separation systems of the rare earth metal mixtures used were Y/Dy, Y/Ho, Y/Er and Y/Tm, and the chelating agent and the extractant were diethylenetriaminepentaacetic acid (DTPA) and bis(2,4,4-trimethylpentyl)phosphinic acid (CYANEX 272), respectively. For Y/Dy and Y/Ho systems, higher selectivities were obtained in equilibrium separation compared with those in non-equilibrium separation. On the other hand, the selectivities in non-equilibrium separation were higher for Y/Er and Y/Tm systems.
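The convex-hull criterion used in the interatomic-potentials entry above (stable phases lie on or near the hull of energy versus composition) can be reproduced in a few lines. The formation energies below are hypothetical values for a binary alloy, chosen only to illustrate the construction:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical formation energies (eV/atom) vs. composition x for a binary alloy
points = np.array([
    [0.00,  0.00],   # pure A
    [0.25, -0.30],
    [0.50, -0.45],
    [0.60, -0.20],   # sits above the lower hull -> unstable at T = 0
    [0.75, -0.35],
    [1.00,  0.00],   # pure B
])

hull = ConvexHull(points)
# Keep only the lower part of the hull: an edge belongs to the lower hull
# if its outward normal has a negative energy component (points downward).
lower = set()
for simplex, eq in zip(hull.simplices, hull.equations):
    if eq[1] < 0:
        lower.update(simplex)

stable = sorted(points[i][0] for i in lower)
print("compositions on the T=0 convex hull:", stable)
```

Here the x = 0.60 phase lies above the tie-line between its neighbors and is excluded, while the other five compositions define the zero-temperature hull; "near-hull" metastable phases would show up as small positive distances to the hull.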
Under the separation conditions suitable to each system, the addition of DTPA to the aqueous phase was found to be very effective for obtaining higher selectivities. The distribution ratios of the rare earth metals and the selectivities in the equilibrium separations obtained experimentally were thoroughly analyzed by considering various equilibria, such as the extraction equilibrium and the complex formation equilibrium between rare earth metals and DTPA in the aqueous phase. Moreover, the extraction rates and the selectivities in the non-equilibrium separations were also analyzed by the extraction model considering the dissociation reactions of the rare earth metal-DTPA complexes in the aqueous stagnant layer. Based on these analyses, we presented an index which is useful for selecting the optimum operation mode. Using this index, we can predict that the selectivities under equilibrium conditions are higher than those under non-equilibrium conditions for Y/Dy and Y/Ho systems, while for Y/Er and Y/Tm systems, higher selectivities are obtained under non-equilibrium conditions. The experimental results were in agreement with predictions by this index. Further, the selectivities in various systems including other chelating agents and extractants were discussed based on this index. (J.P.N.) 16. Phase diagrams for systems Cu2S-AIIS (AII = Mg, Ca, Sr, Ba) International Nuclear Information System (INIS) Andreev, O.V.; Sikerina, N.V.; Solov'eva, A.V. 2005-01-01 By the methods of physicochemical analysis, the phase diagrams of Cu2S-AIIS (AII = Mg, Ca, Sr, Ba) systems are studied. The Cu2S-SrS system is of eutectic type with eutectic coordinates 1095 K and 21.5 mol.% SrS. The solubility of SrS in Cu2S is 2 mol.% at 1095 K. Regularities of the phase diagram changes of the Cu2S-AIIS (AII = Mg, Ca, Sr, Ba) systems are determined. A thermodynamic analysis is also carried out. 17.
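The distribution ratios and selectivities discussed in the rare-earth extraction entry above reduce to two simple definitions: the distribution ratio D of each metal between the organic and aqueous phases, and the separation factor (selectivity) as the ratio of two distribution ratios. A minimal sketch with hypothetical concentrations (not data from the paper):

```python
def distribution_ratio(c_org, c_aq):
    """D = concentration of the metal in the organic phase / aqueous phase."""
    return c_org / c_aq

def separation_factor(D_a, D_b):
    """Selectivity between two metals, alpha = D_a / D_b (conventionally >= 1)."""
    return D_a / D_b

# Hypothetical equilibrium concentrations (mol/L) for a Y/Dy pair
D_Y = distribution_ratio(0.040, 0.010)    # D = 4.0
D_Dy = distribution_ratio(0.015, 0.010)   # D = 1.5
print(f"alpha(Y/Dy) = {separation_factor(D_Y, D_Dy):.2f}")
```

Adding a chelating agent such as DTPA lowers the aqueous free-metal concentration more for the more strongly complexed metal, which is how it shifts the individual D values and hence the separation factor.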
Phase equilibrium, crystallization behavior and thermodynamic studies of (m-dinitrobenzene + vanillin) eutectic system International Nuclear Information System (INIS) Singh, Jayram; Singh, N.B. 2015-01-01 Graphical abstract: The phase diagram of the (m-dinitrobenzene + vanillin) system. - Highlights: • The (thaw + melt) method has shown that the (m-dinitrobenzene + vanillin) system forms a simple eutectic-type phase diagram. • Excess thermodynamic functions showed that the eutectic mixture is non-ideal. • The flexural strength measurements have shown that in the eutectic mixture, crystallization occurs in an ordered way. - Abstract: The phase diagram of the (m-dinitrobenzene + vanillin) system has been studied by the thaw-melt method and a eutectic-type phase diagram was obtained. The linear velocities of crystallization of the parent components and the eutectic mixture were determined. The enthalpies of fusion of the components and the eutectic mixture were determined using the differential scanning calorimetric technique. The excess Gibbs energy, excess entropy, excess enthalpy of mixing, and interfacial energy have been calculated. FTIR spectroscopic studies and flexural strength measurements were also made. The results have shown that the eutectic is a non-ideal mixture of the two components. On the basis of Jackson’s roughness parameter, it is predicted that the eutectic has a faceted morphology. 18. Investigation of phase diagrams for cylindrical Ising nanotube using cellular automata Science.gov (United States) Astaraki, M.; Ghaemi, M.; Afzali, K. 2018-05-01 Recent developments in the field of applied nanoscience and nanotechnology have heightened the need for categorizing various characteristics of nanostructures. In this regard, this paper establishes a novel method to investigate the magnetic properties (phase diagram and spontaneous magnetization) of a cylindrical Ising nanotube. Using a two-layer Ising model and the core-shell concept, the interactions within the nanotube have been modelled.
In the model, both ferromagnetic and antiferromagnetic cases have been considered. Furthermore, the effect of the nanotube's length on the critical temperature is investigated. The model has been simulated using a cellular automata approach, and phase diagrams were constructed for different values of the inter- and intra-layer couplings. For the antiferromagnetic case, the possible existence of a compensation point is observed. 19. P-T phase diagram of a holographic s+p model from Gauss-Bonnet gravity International Nuclear Information System (INIS) Nie, Zhang-Yu; Zeng, Hui 2015-01-01 In this paper, we study the holographic s+p model in 5-dimensional bulk gravity with the Gauss-Bonnet term. We work in the probe limit and give the Δ-T phase diagrams at three different values of the Gauss-Bonnet coefficient to show the effect of the Gauss-Bonnet term. We also construct the P-T phase diagrams for the holographic system using two different definitions of the pressure and compare the results. 20. Phase diagram of a QED-cavity array coupled via a N-type level scheme Energy Technology Data Exchange (ETDEWEB) Jin, Jiasen; Rossini, Davide [CNR, NEST, Scuola Normale Superiore and Istituto di Nanoscienze, Pisa (Italy); Fazio, Rosario [CNR, NEST, Scuola Normale Superiore and Istituto di Nanoscienze, Pisa (Italy); National University of Singapore, Center for Quantum Technologies, Singapore (Singapore) 2015-01-01 We study the zero-temperature phase diagram of a one-dimensional array of QED cavities where, besides the single-photon hopping, an additional coupling between neighboring cavities is mediated by an N-type four-level system. By varying the relative strength of the various couplings, the array is shown to exhibit a variety of quantum phases, including a polaritonic Mott insulator, a density-wave and a superfluid phase. Our results have been obtained by means of numerical density-matrix renormalization group calculations.
The phase diagram was obtained by analyzing the energy gaps for the polaritons, as well as through a study of two-point correlation functions. (orig.) 1. Calculation of superalloy phase diagrams. IV International Nuclear Information System (INIS) Kaufman, L.; Nesor, H. 1975-01-01 Explicit descriptions of the Fe--Mo, Fe--W, Fe--Nb, W--Cr and Ti--W binary systems have been developed in line with lattice stability, thermochemical and phase diagram data. These descriptions, along with similar results derived previously, have been employed to calculate isothermal sections in the Cr--Al--Fe, Fe--Mo--Cr, Fe--W--Cr, Ni--Al--Co, Nb--Ti--W, Ti--W--Mo, Cr--W--Mo, Ni--Mo--W, and Ni--W--Ti systems for comparison with experimental results. The effects of carbon impurities on miscibility gap formation in the Ti--W, Nb--Ti--W, Ti--W--Mo and Cr--W--Mo systems are discussed 2. Collapsing cycloidal structures in the magnetic phase diagram of erbium DEFF Research Database (Denmark) Jehan, D.A.; McMorrow, D.F.; Simpson, J.A. 1994-01-01 The magnetic structure of Er with a magnetic field applied in the hexagonal basal plane has been studied using a combination of experimental techniques and mean-field modeling. From neutron-scattering and magnetization measurements, phase diagrams are constructed. At temperatures above approximately 20 K, the application of a field is found to favor cycloidal structures with modulation wave vectors of q(c) = (6/23)c*, (4/15)c*, and (2/7)c*. For fields above almost-equal-to 40 kOe, the (2/7) structure dominates the phase diagram. From a detailed study of this most stable cycloid, we determine how it distorts as the field is increased. In low fields, there is a spin reorientation, so that the plane of the cycloid becomes perpendicular to the applied field, while in larger fields, the cycloid collapses through a series of fanlike structures. At lower temperatures, as the field is increased... 4. Ground state phase diagram of extended attractive Hubbard model International Nuclear Information System (INIS) Robaszkiewicz, S.; Chao, K.A.; Micnas, R.
1980-08-01 The ground state phase diagram of the extended Hubbard model with intraatomic attraction has been derived in the Hartree-Fock approximation formulated in terms of the Bogoliubov variational approach. For a given value of electron density, the nature of the ordered ground state depends essentially on the sign and the strength of the nearest neighbor coupling. (author) 5. Automated discovery and construction of surface phase diagrams using machine learning International Nuclear Information System (INIS) Ulissi, Zachary W.; Singh, Aayush R.; Tsai, Charlie 2016-01-01 Surface phase diagrams are necessary for understanding surface chemistry in electrochemical catalysis, where a range of adsorbates and coverages exist at varying applied potentials. These diagrams are typically constructed using intuition, which risks missing complex coverages and configurations at potentials of interest. More accurate cluster expansion methods are often difficult to implement quickly for new surfaces. We adopt a machine learning approach to rectify both issues. Using a Gaussian process regression model, the free energy of all possible adsorbate coverages for surfaces is predicted for a finite number of adsorption sites. Our result demonstrates a rational, simple, and systematic approach for generating accurate free-energy diagrams with reduced computational resources. Finally, the Pourbaix diagram for the IrO2(110) surface (with nine coverages from fully hydrogenated to fully oxygenated surfaces) is reconstructed using just 20 electronic structure relaxations, compared to approximately 90 using typical search methods. Similar efficiency is demonstrated for the MoS2 surface. 6. Phase behavior, rheological characteristics and microstructure of sodium caseinate-Persian gum system. Science.gov (United States) 2018-01-01 In this study, the phase behavior of sodium caseinate-Persian gum mixtures was investigated.
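The machine-learning workflow in the surface-phase-diagram entry above rests on a Gaussian process regressor that predicts adsorption free energies with uncertainties, so that only promising coverages need full electronic-structure relaxations. A minimal sketch of that regression step, with a hypothetical one-dimensional coverage descriptor and a synthetic target in place of DFT data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training set: coverage descriptor -> free energy (eV).
# In the actual workflow the inputs would be fingerprints of adsorbate
# configurations and the targets DFT free energies.
X = rng.uniform(0.0, 1.0, size=(20, 1))
y = 1.5 * (X[:, 0] - 0.4) ** 2 + 0.05 * rng.normal(size=20)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# Predict with uncertainty for unseen coverages; low-energy, high-uncertainty
# candidates would then be relaxed with DFT and fed back into the model.
X_new = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new[:, 0], mean, std):
    print(f"coverage {x:.2f}: G = {m:+.2f} +/- {s:.2f} eV")
```

The predictive standard deviation is what makes the search "automated": it tells the loop where the surrogate model is still ignorant, which plain energy interpolation would not.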
The effect of thermodynamic incompatibility on the phase distribution of sodium caseinate fractions, as well as the flow behavior and microstructure of the biopolymer mixtures, was also studied. The phase diagram clearly demonstrated the dominant effect of Persian gum on the incompatibility of the two biopolymers. SDS-PAGE electrophoresis indicated no selective fractionation of sodium caseinate subunits between equilibrium phases upon de-mixing. The microstructure of the mixtures changed significantly depending on their position within the phase diagram. Fitting the viscometric data to the Cross and Bingham models revealed that the apparent viscosity, relaxation time and shear-thinning behavior of the mixtures are greatly influenced by the volume ratio and concentration of the equilibrium phases. There is a strong dependence of the flow behavior of sodium caseinate-Persian gum mixtures on the composition of the equilibrium phases and the corresponding microstructure of the system. Copyright © 2017. Published by Elsevier Ltd. 7. Thermodynamic analysis of 6xxx series Al alloys: Phase fraction diagrams OpenAIRE Cui S.; Mishra R.; Jung I.-H. 2018 Microstructural evolution of 6xxx Al alloys during various metallurgical processes was analyzed using an accurate thermodynamic database. Phase fractions of all the possible precipitate phases which can form in the as-cast and equilibrium states of the Al-Mg-Si-Cu-Fe-Mn-Cr alloys were calculated over the technically useful composition range. The influence of minor elements such as Cu, Fe, Mn, and Cr on the amount of each type of precipitate in the as-cast and equilibrium conditions was analyzed... 8.
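The Cross-model fit mentioned in the sodium caseinate-Persian gum entry above is a four-parameter nonlinear regression of apparent viscosity against shear rate. A minimal sketch on synthetic shear-thinning data (the parameter values are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def cross_model(shear_rate, eta0, eta_inf, lam, m):
    """Cross model: eta = eta_inf + (eta0 - eta_inf) / (1 + (lam * rate)^m).

    eta0 and eta_inf are the zero- and infinite-shear viscosities, lam is a
    characteristic (relaxation) time and m the shear-thinning exponent."""
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * shear_rate) ** m)

# Synthetic shear-thinning data with known parameters (illustrative values)
rate = np.logspace(-1, 3, 30)        # shear rate, 1/s
eta = cross_model(rate, eta0=5.0, eta_inf=0.01, lam=0.5, m=0.8)

popt, _ = curve_fit(cross_model, rate, eta,
                    p0=[1.0, 0.001, 1.0, 1.0], bounds=(0, np.inf))
print("fitted eta0, eta_inf, lam, m:", np.round(popt, 3))
```

On noiseless data the fit recovers the generating parameters; with real viscometry one would weight the fit or work in log space, since eta spans several decades across the measured shear-rate range.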
A general analytical equation for phase diagrams of an N-layer ferroelectric thin film with two surface layers Energy Technology Data Exchange (ETDEWEB) Lu, Z X; Teng, B H; Rong, Y H; Lu, X H; Yang, X [School of Physical Electronics, University of Electronic Science and Technology of China, Chengdu 610054 (China)], E-mail: phytbh@163.com 2010-03-15 Within the framework of effective-field theory with correlations, the phase diagrams of an N-layer ferroelectric thin film with two surface layers are studied by the differential operator technique based on the spin-1/2 transverse Ising model. A general analytical equation for the phase diagram of a ferroelectric thin film with arbitrary layer number as well as arbitrary exchange interactions and transverse fields is derived, and the effects of exchange interactions and transverse fields on the phase diagrams are then discussed for an arbitrary layer number N. Meanwhile, the crossover features, from the ferroelectric-dominant phase diagram (FPD) to the paraelectric-dominant phase diagram (PPD), for various parameters of an N-layer ferroelectric thin film with two surface layers are investigated. As a result, an N-independent common intersection point equation is obtained, and the three-dimensional curved surfaces for the crossover values are constructed. In comparison with the usual mean-field approximation, the differential operator technique with correlations reduces the ferroelectric features of a ferroelectric thin film to some extent. 9. The phase diagram of scalar field theory on the fuzzy disc Energy Technology Data Exchange (ETDEWEB) Rea, Simone; Sämann, Christian [Maxwell Institute for Mathematical Sciences, Department of Mathematics, Heriot-Watt University, Colin Maclaurin Building, Riccarton, Edinburgh EH14 4AS (United Kingdom) 2015-11-17 Using a recently developed bootstrapping method, we compute the phase diagram of scalar field theory on the fuzzy disc with a quartic even potential.
We find three distinct phases with second and third order phase transitions between them. In particular, we find that the second order phase transition happens approximately at a fixed ratio of the two coupling constants defining the potential. We compute this ratio analytically in the limit of large coupling constants. Our results qualitatively agree with previously obtained numerical results. 10. Phase equilibrium data for the ternary system (propane + chloroform + oryzanol) International Nuclear Information System (INIS) Correa, Fernanda V.; Comim, Sibele R.R.; Cesaro, Aline M. de; Rigo, Aline A.; Mazutti, Marcio A.; Hense, Haiko; Oliveira, J. Vladimir 2011-01-01 The compound oryzanol available in the rice bran (oriza sativa) is well known for its antioxidant activity. Phase equilibrium data involving oryzanol in compressed fluids, hardly found in the literature, are important to provide the basis for the extraction and fractionation processes. In this sense, the aim of this work is to report phase equilibrium measurements for the system (γ-oryzanol + chloroform) in compressed propane. Phase equilibrium experiments were performed using the static synthetic method (cloud points transition data) in a high-pressure variable-volume view cell in the temperature range of 303 K to 353 K, pressures up to 17 MPa, for oryzanol overall mass fractions of 2 wt%, 5 wt% and 10 wt% in (propane + chloroform) mixtures. A complex phase behaviour comprising vapour-liquid, liquid-liquid, vapour-liquid-liquid, solid-liquid, solid-liquid-liquid, solid-liquid-liquid-vapour transitions were visually observed for the system studied. 12. Dark energy in six nearby galaxy flows: Synthetic phase diagrams and self-similarity Science.gov (United States) Chernin, A. D.; Teerikorpi, P.; Dolgachev, V. P.; Kanter, A. A.; Domozhilova, L. M.; Valtonen, M. J.; Byrd, G. G. 2012-09-01 Outward flows of galaxies are observed around groups of galaxies on spatial scales of about 1 Mpc, and around galaxy clusters on scales of 10 Mpc.
Using recent data from the Hubble Space Telescope (HST), we have constructed two synthetic velocity-distance phase diagrams: one for four flows on galaxy-group scales and the other for two flows on cluster scales. It has been shown that, in both cases, the antigravity produced by the cosmic dark-energy background is stronger than the gravity produced by the matter in the outflow volume. The antigravity accelerates the flows and introduces a phase attractor that is common to all scales, corresponding to a linear velocity-distance relation (the local Hubble law). As a result, the bundle of outflow trajectories mostly follows the trajectory of the attractor. A comparison of the two diagrams reveals the universal self-similar nature of the outflows: their gross phase structure in dimensionless variables is essentially independent of their physical spatial scales, which differ by approximately a factor of 10 in the two diagrams. 13. Phase diagram of the Fe-Sn-Zr system at 800 °C International Nuclear Information System (INIS) Nieva, N.; Corvalán, C.; Jiménez, M.J.; Gómez, A.; Arreguez, C.; Joubert, J.-M.; Arias, D. 2017-01-01 New experimental results on the Fe-Sn-Zr phase diagram at 800 °C are presented, particularly in the central, Fe-rich and Sn-rich regions of the Gibbs triangle. Seven ternary alloys were designed, produced and examined by different techniques: optical and scanning electron microscopy, semi-quantitative microanalysis, quantitative microanalysis and X-ray diffraction. The results of this work and previous experimental data were used to determine the phase diagram section at 800 °C, which contains at least five ternary compounds: Fe6Sn6Zr, Y, X′, θ and C36. - Highlights: •A phase diagram of the Fe-Sn-Zr system at 800 °C is proposed. •The isothermal section of the Fe-Sn-Zr system at 800 °C and that at 900 °C determined previously allow reliable extrapolations at low temperatures.
•The study at different temperatures (900 °C and 800 °C in this case) is highly desirable because it allows the separation between enthalpic and entropic effects in a future Calphad modelling. 14. Phase diagram of the Fe-Sn-Zr system at 800 °C Energy Technology Data Exchange (ETDEWEB) Nieva, N. [Laboratorio de Física del Sólido, Departamento de Física, Facultad de Ciencias Exactas y Tecnología, Universidad Nacional de Tucumán (Argentina); Corvalán, C., E-mail: corvalan@cnea.gov.ar [Gerencia de Materiales, Comisión Nacional de Energía Atómica Argentina (CNEA), Universidad Nacional de Tres de Febrero, Argentina, CONICET, Consejo Nacional de Ciencia y Técnica (Argentina); Jiménez, M.J. [IFISUR, CONICET, Departamento de Física, Universidad Nacional del Sur, Bahía Blanca (Argentina); Gómez, A. [Grupo LMFAE – PPFAE, Centro Atómico Ezeiza, Comisión Nacional de Energía Atómica (Argentina); Arreguez, C. [Laboratorio de Física del Sólido, Departamento de Física, Facultad de Ciencias Exactas y Tecnología, Universidad Nacional de Tucumán (Argentina); Joubert, J.-M. [Chimie Métallurgique des Terres Rares (CMTR), Institut de Chimie et des Matériaux Paris-Est (ICMPE), CNRS, Université Paris-Est Créteil, 2-8 rue Henri Dunant, 94320 Thiais Cedex (France); Arias, D. [Instituto de Tecnología J. Sabato, Universidad Nacional de San Martín-CNEA (Argentina) 2017-04-15 New experimental results on the Fe-Sn-Zr phase diagram at 800 °C are presented, particularly in the central, Fe rich and Sn rich regions of the Gibbs triangle. Seven ternary alloys were designed, produced and examined by different techniques: optical and scanning electron microscopy, semi-quantitative microanalysis, quantitative microanalysis and X-ray diffraction. The results of this work and previous experimental data were used to determine the phase diagram section at 800 °C which contains at least five ternary compounds: Fe{sub 6}Sn{sub 6}Zr, Y, X′, θ and C36. 
- Highlights: •A phase diagram of Fe-Sn-Zr system at 800 °C is proposed. •The isothermal section of Fe-Sn-Zr system at 800 °C and that at 900 °C determined previously allow reliable extrapolations at low temperatures. •The study at different temperatures (900 °C and 800 °C in this case) is highly desirable because it allows the separation between enthalpic and entropic effects in a future Calphad modelling. 15. Effective-field theory for dynamic phase diagrams of the kinetic spin-3/2 Blume–Capel model under a time oscillating longitudinal field Energy Technology Data Exchange (ETDEWEB) Ertaş, Mehmet [Department of Physics, Erciyes University, 38039 Kayseri (Turkey); Kocakaplan, Yusuf [Institute of Science, Erciyes University, 38039 Kayseri (Turkey); Keskin, Mustafa, E-mail: keskin@erciyes.edu.tr [Department of Physics, Erciyes University, 38039 Kayseri (Turkey) 2013-12-15 Dynamic phase diagrams are presented for the kinetic spin-3/2 Blume–Capel model under a time oscillating longitudinal field by use of the effective-field theory with correlations. The dynamic equation of the average magnetization is obtained for the square lattice by utilizing the Glauber-type stochastic process. Dynamic phase diagrams are presented in the reduced temperature and the magnetic field amplitude plane. We also investigated the effect of the longitudinal field frequency. Finally, the discussion and comparison of the phase diagrams are given. - Highlights: • Dynamic behaviors in the spin-3/2 Blume–Capel system are investigated by the effective-field theory based on the Glauber-type stochastic dynamics. • The dynamic phase transitions and dynamic phase diagrams are obtained. • The effects of the longitudinal field frequency on the dynamic phase diagrams of the system are investigated. • Dynamic phase diagrams exhibit several ordered phases, coexistence phase regions and several critical points as well as a re-entrant behavior. 16.
Effective-field theory for dynamic phase diagrams of the kinetic spin-3/2 Blume–Capel model under a time oscillating longitudinal field International Nuclear Information System (INIS) Ertaş, Mehmet; Kocakaplan, Yusuf; Keskin, Mustafa 2013-01-01 Dynamic phase diagrams are presented for the kinetic spin-3/2 Blume–Capel model under a time oscillating longitudinal field by use of the effective-field theory with correlations. The dynamic equation of the average magnetization is obtained for the square lattice by utilizing the Glauber-type stochastic process. Dynamic phase diagrams are presented in the reduced temperature and the magnetic field amplitude plane. We also investigated the effect of the longitudinal field frequency. Finally, the discussion and comparison of the phase diagrams are given. - Highlights: • Dynamic behaviors in the spin-3/2 Blume–Capel system are investigated by the effective-field theory based on the Glauber-type stochastic dynamics. • The dynamic phase transitions and dynamic phase diagrams are obtained. • The effects of the longitudinal field frequency on the dynamic phase diagrams of the system are investigated. • Dynamic phase diagrams exhibit several ordered phases, coexistence phase regions and several critical points as well as a re-entrant behavior. 17. Speeding up compositional reservoir simulation through an efficient implementation of phase equilibrium calculation DEFF Research Database (Denmark) Belkadi, Abdelkrim; Yan, Wei; Moggia, Elsa 2013-01-01 Compositional reservoir simulations are widely used to simulate reservoir processes with strong compositional effects, such as gas injection. The equations of state (EoS) based phase equilibrium calculation is a time-consuming part in this type of simulations. The phase equilibrium problem can … Application of the shadow region method to skip stability analysis can further cut the phase equilibrium calculation time. Copyright 2013, Society of Petroleum Engineers. 18.
Low-temperature phase diagram of YbBiPt International Nuclear Information System (INIS) Movshovich, R.; Lacerda, A.; Canfield, P.C.; Thompson, J.D.; Fisk, Z. 1994-01-01 Resistivity measurements are reported on the cubic heavy-fermion compound YbBiPt at ambient and hydrostatic pressures to ∼19 kbar and in magnetic fields to 1 T. The phase transition at Tc = 0.4 K is identified by a sharp rise in resistivity. That feature is used to build low-temperature H-T and P-T phase diagrams. The phase boundary in the H-T plane follows the weak-coupling BCS expression remarkably well from Tc to Tc/4, while small hydrostatic pressure of ∼1 kbar suppresses the low-temperature phase entirely. These effects of hydrostatic pressure and magnetic field on the phase transition are consistent with a spin-density-wave (SDW) formation in a very heavy electron band at T = 0.4 K. Outside of the SDW phase at low temperature, hydrostatic pressure increases the T² coefficient of resistivity, signaling an increase in heavy-fermion correlations with hydrostatic pressure. The residual resistivity decreases with pressure, contrary to trends in other Yb heavy-fermion compounds. 19. E-T phase diagram of an antiferroelectric liquid crystal with re-entrant smectic C* phase Czech Academy of Sciences Publication Activity Database Na, Y.-H.; Naruse, Y.; Fukuda, N.; Orihara, H.; Fajar, A.; Hamplová, Věra; Kašpar, Miroslav; Glogarová, Milada 2008-01-01 Roč. 364, č. 1 (2008), s. 13-19 ISSN 0015-0193 Institutional research plan: CEZ:AV0Z10100520 Keywords : phase diagram * liquid crystals * dielectric measurements * electric field Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.562, year: 2008 20.
Green material composites from renewable resources: Polymorphic transitions and phase diagram of beeswax/rosin resin Energy Technology Data Exchange (ETDEWEB) Gaillard, Yves [Mines-ParisTech., CEMEF, UMR CNRS 7635, 1 rue Claude Daunesse 06904 Sophia Antipolis cedex (France); Mija, Alice [University of Nice-Sophia Antipolis, Thermokinetic Group, Laboratory of Chemistry of Organic and Metallic Materials C.M.O.M., 06108 Nice Cedex 2 (France); Burr, Alain; Darque-Ceretti, Evelyne; Felder, Eric [Mines-ParisTech., CEMEF, UMR CNRS 7635, 1 rue Claude Daunesse 06904 Sophia Antipolis cedex (France); Sbirrazzuoli, Nicolas, E-mail: sbirrazz@unice.fr [University of Nice-Sophia Antipolis, Thermokinetic Group, Laboratory of Chemistry of Organic and Metallic Materials C.M.O.M., 06108 Nice Cedex 2 (France) 2011-07-10 Highlights: • Blends of rosin and beeswax are studied by DSC, XRD, and optical microscopy. • The first beeswax/rosin phase diagram is established. • Polymorphic transitions are identified and appear to be highly related to rosin content. - Abstract: Rosin and beeswax are two complex natural materials presenting numerous applications in paints, adhesives, varnishes or inks. Melted, they are particularly interesting for their adhesion properties. This paper establishes the first phase diagram of beeswax/rosin blends. A systematic approach using X-ray diffraction (XRD), differential scanning calorimetry (DSC) and polarised optical microscopy (POM) has been performed in order to describe the crystallographic structure and the thermal properties of two materials, beeswax and rosin, and their blends. Indeed, melting, softening and crystallisation temperatures, polymorphic transitions and also the crystalline index have been investigated. The resulting phase diagram reveals a complex behaviour in terms of phase transformation and time-dependent phenomena mainly representative of the complex composition of beeswax. 1.
A strictly hyperbolic equilibrium phase transition model International Nuclear Information System (INIS) Allaire, G; Faccanoni, G; Kokh, S. 2007-01-01 This Note is concerned with the strict hyperbolicity of the compressible Euler equations equipped with an equation of state that describes the thermodynamical equilibrium between the liquid phase and the vapor phase of a fluid. The proof is valid for a very wide class of fluids. The argument only relies on smoothness assumptions and on the classical thermodynamical stability assumptions, which require a negative definite Hessian matrix for each phase entropy as a function of the specific volume and internal energy. (authors) 2. Determining the phase diagram of lithium via ab initio calculation and ramp compression Science.gov (United States) Shulenburger, Luke; Seagle, Chris; Haill, Thomas; Harding, Eric 2015-06-01 Diamond anvil cell experiments have shown elemental lithium to have an extraordinarily complex phase diagram under pressure exhibiting numerous solid phases at pressures below 1 Mbar, as well as a complicated melting behavior. We explore this phase diagram utilizing a combination of quantum mechanical calculations and ramp compression experiments performed on Sandia National Laboratories' Z-machine. We aim to extend our knowledge of the high pressure behavior to moderate temperatures at pressures above 50 GPa with a specific focus on the melt line above 70 GPa. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the US Dept of Energy's Natl. Nuclear Security Administration under Contract DE-AC04-94AL85000. 3. Conductivity, calorimetry and phase diagram of the NaHSO4–KHSO4 system DEFF Research Database (Denmark) Hind, Hamma-Cugny; Rasmussen, Søren Birk; Rogez, J.
2006-01-01 to polynomials of the form κ(X) = A(X) + B(X)(T − Tm) + C(X)(T − Tm)², where Tm is the intermediate temperature of the measured temperature range and X, the mole fraction of KHSO4. The possible role of this binary system as a catalyst solvent is also discussed. © 2005 Elsevier B.V. All rights reserved. … Physico-chemical properties of the binary system NaHSO4-KHSO4 were studied by calorimetry and conductivity. The enthalpy of mixing has been measured at 505 K in the full composition range and the phase diagram calculated. The phase diagram has also been constructed from phase transition… 4. Phase diagram studies for microencapsulation of pharmaceuticals using cellulose acetate trimellitate. Science.gov (United States) Sanghvi, S P; Nairn, J G 1991-04-01 Phase diagrams were prepared to indicate the region of microcapsule formation for the following system: cellulose acetate trimellitate, light mineral oil, and the solvent mixture (acetone:ethanol), using chloroform as the hardening agent. The effect of sorbitan monooleate, sorbitan monolaurate, and sorbitan trioleate on the region of the phase diagram for the formation of microcapsules was investigated. The results indicate that microcapsules are readily formed when the polymer concentration is in the 0.5-1.5% range and the solvent concentration is in the 5-10% range. Aggregation of microcapsules was minimized by using lower solvent concentration. Low concentrations of sorbitan monooleate in mineral oil (less than or equal to 1%) gave products that had smoother coats and more uniform particle size. Surfactants with low hydrophile:lipophile balance produced larger regions on the phase diagram for microencapsulation compared with a surfactant with higher hydrophile:lipophile balance. A mechanism for microencapsulation is described. Tartrazine microcapsules produced using different concentrations of surfactant were tested for dissolution characteristics in both acidic and neutral conditions.
Tartrazine-containing microcapsules prepared by using 3% sorbitan monooleate had the lowest release in acidic conditions. The effect of surfactant and formulation concentration on microcapsule size was studied by analyzing the particle size distribution for both blank and tartrazine-containing microcapsules. The smallest microcapsule size was obtained when the sorbitan monooleate concentration was 3%. It appears that there is an upper limit for the surfactant concentration that could be used to achieve successful microencapsulation. 5. Neutron-diffraction studies of the nuclear magnetic phase diagram of copper DEFF Research Database (Denmark) Annila, A.J.; Clausen, Kurt Nørgaard; Oja, A.S. 1992-01-01 We have studied the spontaneous antiferromagnetic (AF) order in the nuclear spin system of copper by use of neutron-diffraction experiments at nanokelvin temperatures. Copper is an ideal model system as a nearest-neighbor-dominated spin-3/2 fcc antiferromagnet. The phase diagram has been investigated by measuring the magnetic-field dependence of the (100) reflection, characteristic of a type-I AF structure, and of a Bragg peak at (0 2/3 2/3). The results suggest the presence of high-field (100) phases at 0.12 ≤ B ≤ Bc ≈ 0.26 mT, for B … compared with results of earlier susceptibility measurements in order to identify the translational periods of the three previously found antiferromagnetic phases for B ∥ [100]. Recent theoretical work has yielded results in agreement with our experimental data. 6.
Thermochemical and phase diagram studies of the Sn–Zn–Ni system Czech Academy of Sciences Publication Activity Database Gandova, V.D.; Brož, P.; Buršík, Jiří; Vassilev, G.P. 2011-01-01 Roč. 524, 1-2 (2011), s. 47-55 ISSN 0040-6031 Institutional research plan: CEZ:AV0Z20410507 Keywords : DSC * solders * Sn-Zn-Ni phase diagram Subject RIV: BJ - Thermodynamics Impact factor: 1.805, year: 2011 7. Metastable equilibrium for the quaternary system containing lithium+potassium+magnesium+chloride in aqueous solution at 323 K Energy Technology Data Exchange (ETDEWEB) Yu, Xudong; Yin, Qinghong; Jiang, Dongbo; Zeng, Ying [Chengdu University of Technology, Chengdu (China)] 2014-06-15 The metastable equilibrium of the system containing lithium, potassium, magnesium, and chloride in aqueous solution was investigated at 323 K using an isothermal evaporation method. The isothermal experimental data and physicochemical properties, such as density and refractive index of the equilibrated solution, were determined. With the experimental results, the stereo phase diagram, the projected phase diagram, the water content diagram and the physicochemical properties versus composition diagrams were constructed. The projected phase diagram consists of three invariant points, seven univariant curves and five crystallization fields corresponding to single salts potassium chloride (KCl), lithium chloride monohydrate (LiCl·H2O), bischofite (MgCl2·6H2O) and two double salts lithium carnallite (LiCl·MgCl2·7H2O) and potassium carnallite (KCl·MgCl2·6H2O). Salt KCl has the largest crystallization region; it contains almost 95% of the general crystallization field. 8.
Dynamic phase transitions and dynamic phase diagrams of the spin-2 Blume-Capel model under an oscillating magnetic field within the effective-field theory Energy Technology Data Exchange (ETDEWEB) Ertas, Mehmet [Department of Physics, Erciyes University, 38039 Kayseri (Turkey); Institute of Science, Erciyes University, 38039 Kayseri (Turkey); Deviren, Bayram [Department of Physics, Nevsehir University, 50300 Nevsehir (Turkey); Keskin, Mustafa, E-mail: keskin@erciyes.edu.tr [Department of Physics, Erciyes University, 38039 Kayseri (Turkey) 2012-03-15 The dynamic phase transitions are studied in the kinetic spin-2 Blume-Capel model under a time-dependent oscillating magnetic field using the effective-field theory with correlations. The effective-field dynamic equation for the average magnetization is derived by employing the Glauber transition rates and the phases in the system are obtained by solving this dynamic equation. The nature (first- or second-order) of the dynamic phase transition is characterized by investigating the thermal behavior of the dynamic magnetization and the dynamic phase transition temperatures are obtained. The dynamic phase diagrams are constructed in the reduced temperature and magnetic field amplitude plane and are of seven fundamental types. Phase diagrams contain the paramagnetic (P), ferromagnetic-2 (F2) and three coexistence or mixed phase regions, namely the F2+P, F1+P and F2+F1+P, which strongly depend on the crystal-field interaction (D) parameter. The system also exhibits the dynamic tricritical behavior. - Highlights: • Dynamic phase transitions are studied in spin-2 BC model using EFT. • Dynamic phase diagrams are constructed in (T/zJ, h/zJ) plane. • Seven fundamental types of dynamic phase diagrams are found in the system. • System exhibits dynamic tricritical behavior. 9.
Stability conditions and phase diagrams for two-component Fermi gases with population imbalance International Nuclear Information System (INIS) Chen Qijin; He Yan; Chien, C.-C.; Levin, K. 2006-01-01 Superfluidity in atomic Fermi gases with population imbalance has recently become an exciting research focus. There is considerable disagreement in the literature about the appropriate stability conditions for states in the phase diagram throughout the BCS to Bose-Einstein condensation crossover. Here we discuss these stability conditions for homogeneous polarized superfluid phases, and compare with recent alternative proposals. The requirement of a positive second-order partial derivative of the thermodynamic potential with respect to the fermionic excitation gap Δ (at fixed chemical potentials) is demonstrated to be equivalent to the positive definiteness of the particle number susceptibility matrix. In addition, we show the positivity of the effective pair mass constitutes another nontrivial stability condition. These conditions determine the (local) stability of the system towards phase separation (or other ordered phases). We also study systematically the effects of finite temperature and the related pseudogap on the phase diagrams defined by our stability conditions 10. A chaotic jerk system with non-hyperbolic equilibrium: Dynamics, effect of time delay and circuit realisation Science.gov (United States) 2018-04-01 The literature on chaos has highlighted several chaotic systems with special features. In this work, a novel chaotic jerk system with non-hyperbolic equilibrium is proposed. The dynamics of this new system is revealed through equilibrium analysis, phase portrait, bifurcation diagram and Lyapunov exponents. In addition, we investigate the time-delay effects on the proposed system. Realisation of such a system is presented to verify its feasibility. 11. 
Temperature gradient method for lipid phase diagram construction using time-resolved x-ray diffraction International Nuclear Information System (INIS) Caffrey, M.; Hing, F.S. 1987-01-01 A method that enables temperature-composition phase diagram construction at unprecedented rates is described and evaluated. The method involves establishing a known temperature gradient along the length of a metal rod. Samples of different compositions contained in long, thin-walled capillaries are positioned lengthwise on the rod and equilibrated such that the temperature gradient is communicated into the sample. The sample is then moved through a focused, monochromatic synchrotron-derived x-ray beam and the image-intensified diffraction pattern from the sample is recorded on videotape continuously in live-time as a function of position and, thus, temperature. The temperature at which the diffraction pattern changes corresponds to a phase boundary, and the phase(s) existing (coexisting) on either side of the boundary can be identified on the basis of the diffraction pattern. Repeating the measurement on samples covering the entire composition range completes the phase diagram. These additional samples can be conveniently placed at different locations around the perimeter of the cylindrical rod and rotated into position for diffraction measurement. Temperature-composition phase diagrams for the fully hydrated binary mixtures, dimyristoylphosphatidylcholine (DMPC)/dipalmitoylphosphatidylcholine (DPPC) and dipalmitoylphosphatidylethanolamine (DPPE)/DPPC, have been constructed using the new temperature gradient method. They agree well with and extend the results obtained by other techniques. In the DPPE/DPPC system structural parameters as a function of temperature in the various phases including the subgel phase are reported. The potential limitations of this steady-state method are discussed 12. 
Revision of the Ge–Ti phase diagram and structural stability of the new phase Ge4Ti5 International Nuclear Information System (INIS) Bittner, Roland W.; Colinet, Catherine; Tedenac, Jean-Claude; Richter, Klaus W. 2013-01-01 Highlights: •New compound Ge4Ti5 found by experiments and by DFT ground state calculations. •Enthalpies of formation calculated for different Ge–Ti compounds. •Modifications of the Ge–Ti phase diagram suggested. -- Abstract: The binary phase diagram Ge–Ti was investigated experimentally by powder X-ray diffraction, scanning electron microscopy including EDX analysis, and differential thermal analysis. Total energies of the compounds GeTi3, GeTi2, Ge3Ti5, Ge4Ti5, Ge5Ti6, GeTi and Ge2Ti were calculated for various structure types employing electronic density-functional theory (DFT). Experimental studies as well as electronic calculations show the existence of a new phase Ge4Ti5 (Ge4Sm5-type, oP36, Pnma) which is formed in a solid state reaction Ge3Ti5 + Ge5Ti6 = Ge4Ti5. In addition, a significant homogeneity range was observed for the compound Ge3Ti5 and the composition of the liquid phase in the eutectic reaction L = Ge + Ge2Ti was found to be at a significantly higher Ge content (97.5 at.% Ge) than reported in previous studies. Based on these new results, a modified phase diagram Ge–Ti is suggested. The zero-temperature lattice parameters and the formation enthalpies determined by DFT calculations were found to be in good agreement with experimental data. 13. Dynamic vortex-phase diagram of MgB2 single crystals near the peak-effect region International Nuclear Information System (INIS) Kim, Heon-Jung; Lee, Hyun-Sook; Kang, Byeongwon; Chowdhury, P.; Kim, Kyung-Hee; Park, Min-Seok; Lee, Sung-Ik 2006-01-01 The dynamic vortex-phase diagram of MgB2 single crystals has been constructed by using voltage noise characteristics.
Between the onset (Hon) and the peak (Hp) magnetic fields, crossovers from a state with large noises to a noise-free state were observed with increasing current while above Hp, a reverse behavior was found. We will discuss the dynamic vortex phase diagram and the possible origins of the crossovers. 14. A New Chaotic Flow with Hidden Attractor: The First Hyperjerk System with No Equilibrium Science.gov (United States) Ren, Shuili; Panahi, Shirin; Rajagopal, Karthikeyan; Akgul, Akif; Pham, Viet-Thanh; Jafari, Sajad 2018-02-01 Discovering unknown aspects of non-equilibrium systems with hidden strange attractors is an attractive research topic. A novel quadratic hyperjerk system is introduced in this paper. It is noteworthy that this non-equilibrium system can generate hidden chaotic attractors. The essential properties of such systems are investigated by means of equilibrium points, phase portrait, bifurcation diagram, and Lyapunov exponents. In addition, a fractional-order differential equation of this new system is presented. Moreover, an electronic circuit is also designed and implemented to verify the feasibility of the theoretical model. 15. High-pressure vapor-liquid equilibrium data for CO2-orange peel oil Directory of Open Access Journals (Sweden) G.R. Stuart 2000-06-01 Full Text Available Recently, there has been a growing interest in fractionating orange peel oil by the use of supercritical carbon dioxide (SCCO2). However, progress in this area has been hindered by the lack of more comprehensive work concerning the phase equilibrium behavior of the SCCO2-orange peel oil system. In this context, the aim of this work is to provide new phase equilibrium data for this system over a wide range of temperatures and pressures, permitting the construction of coexistence PT-xy curves as well as the P-T diagram.
The experiments were performed in a high-pressure variable-volume view cell in the temperature range of 50-70 °C, at pressures from 70 to 135 atm, and in the CO2 mass fraction composition range of 0.35-0.98. Based on the experimental phase equilibrium results, appropriate operating conditions can be set for high-pressure fractionation purposes. 16. Misfit strain phase diagrams of epitaxial PMN–PT films Energy Technology Data Exchange (ETDEWEB) Khakpash, N.; Khassaf, H.; Rossetti, G. A. [Department of Materials Science and Engineering and Institute of Materials Science, University of Connecticut, Storrs, Connecticut 06269 (United States); Alpay, S. P., E-mail: p.alpay@ims.uconn.edu [Department of Materials Science and Engineering and Institute of Materials Science, University of Connecticut, Storrs, Connecticut 06269 (United States); Department of Physics, University of Connecticut, Storrs, Connecticut 06269 (United States) 2015-02-23 Misfit strain–temperature phase diagrams of three compositions of (001) pseudocubic (1 − x)·Pb(Mg1/3Nb2/3)O3 − x·PbTiO3 (PMN–PT) thin films are computed using a phenomenological model. Two (x = 0.30, 0.42) are located near the morphotropic phase boundary (MPB) of bulk PMN–PT at room temperature (RT) and one (x = 0.70) is located far from the MPB. The results show that it is possible to stabilize an adaptive monoclinic phase over a wide range of misfit strains. At RT, the stability region of this phase is much larger for PMN–PT compared to barium strontium titanate and lead zirconate titanate films. 17.
Phase diagram of the Kondo-Heisenberg model on honeycomb lattice with geometrical frustration Science.gov (United States) Li, Huan; Song, Hai-Feng; Liu, Yu 2016-11-01 We calculated the phase diagram of the Kondo-Heisenberg model on a two-dimensional honeycomb lattice with both nearest-neighbor and next-nearest-neighbor antiferromagnetic spin exchanges, to investigate the interplay between RKKY and Kondo interactions in the presence of magnetic frustration. Within a mean-field decoupling technique in the slave-fermion representation, we derived the zero-temperature phase diagram as a function of Kondo coupling Jk and frustration strength Q. The geometrical frustration can destroy the magnetic order, driving the original antiferromagnetic (AF) phase to non-magnetic valence bond solids (VBS). In addition, we found two distinct VBS phases. As Jk is increased, a phase transition from AF to Kondo paramagnetic (KP) phase occurs, without the intermediate phase in which AF order coexists with Kondo screening found in square lattice systems. In the KP phase, the enhancement of frustration weakens the Kondo screening effect, resulting in a phase transition from KP to VBS. We also found a process to recover the AF order from VBS by increasing Jk in a wide range of frustration strength. Our work may provide predictions for future experimental observation of new processes of quantum phase transitions in frustrated heavy-fermion compounds. 18. Groundstate fidelity phase diagram of the fully anisotropic two-leg spin-½ XXZ ladder Science.gov (United States) Li, Sheng-Hao; Shi, Qian-Qian; Batchelor, Murray T.; Zhou, Huan-Qiang 2017-11-01 The fully anisotropic two-leg spin-1/2 XXZ ladder model is studied in terms of an algorithm based on the tensor network (TN) representation of quantum many-body states as an adaptation of projected entangled pair states to the geometry of translationally invariant infinite-size quantum spin ladders.
The TN algorithm provides an effective method to generate the groundstate wave function, which allows computation of the groundstate fidelity per lattice site, a universal marker to detect phase transitions in quantum many-body systems. The groundstate fidelity is used in conjunction with local order and string order parameters to systematically map out the groundstate phase diagram of the ladder model. The phase diagram exhibits a rich diversity of quantum phases. These are the ferromagnetic, stripe ferromagnetic, rung singlet, rung triplet, Néel, stripe Néel and Haldane phases, along with the two XY phases XY1 and XY2. 19. Phase diagram of Fe1−xCox ultrathin film Energy Technology Data Exchange (ETDEWEB) Fridman, Yu.A. [V.I. Vernadskiy Taurida National University, Vernadskiy Avenue 4, Simferopol, Crimea 95007 (Ukraine)], E-mail: frid@tnu.crimea.ua; Klevets, Ph.N.; Voytenko, A.P. [V.I. Vernadskiy Taurida National University, Vernadskiy Avenue 4, Simferopol, Crimea 95007 (Ukraine) 2008-12-15 Concentration-driven reorientation phase transitions in ultrathin magnetic films of FeCo alloy have been studied. It is established that, in addition to the easy-axis and easy-plane phases, a spatially inhomogeneous phase (domain structure), a canted phase, and also an 'in-plane easy-axis' phase can exist in the system. The realization of the last phase is associated with the competition between the single-ion anisotropy and the magnetoelastic interaction. The critical values of Co concentration corresponding to the phase transitions are evaluated, the types of phase transitions are determined, and the phase diagrams are constructed. 20. Phase-Field simulation of phase decomposition in Fe-Cr-Co alloy under an external magnetic field Science.gov (United States) Koyama, Toshiyuki; Onodera, Hidehiro 2004-07-01 Phase decomposition during isothermal aging of an Fe-Cr-Co ternary alloy under an external magnetic field is simulated based on the phase-field method.
In this simulation, since the Gibbs energy available from the thermodynamic CALPHAD database of the equilibrium phase diagram is employed as a chemical free energy, the present calculation provides the quantitative microstructure changes directly linked to the phase diagram. The simulated microstructure evolution demonstrates that a lamella-like microstructure elongated along the external magnetic field evolves with the progress of aging. The morphological and temporal developments of the simulated microstructures are in good agreement with experimental results that have been obtained for this alloy system. 1. Investigation of binary solid phases by calorimetry and kinetic modelling OpenAIRE Matovic, M. 2007-01-01 The traditional methods for the determination of liquid-solid phase diagrams are based on the assumption that the overall equilibrium is established between the phases. However, the result of the crystallization of a liquid mixture will typically be a non-equilibrium or metastable state of the solid. For a proper description of the crystallization process the equilibrium approach is insufficient and a kinetic approach is actually required. In this work, we show that during slow crystallizatio... 2. Phase diagram of the Dirac spectrum at nonzero chemical potential International Nuclear Information System (INIS) Osborn, J. C.; Splittorff, K.; Verbaarschot, J. J. M. 2008-01-01 The Dirac spectrum of QCD with dynamical fermions at nonzero chemical potential is characterized by three regions: a region with a constant eigenvalue density, a region where the eigenvalue density shows oscillations that grow exponentially with the volume and the remainder of the complex plane where the eigenvalue density is zero. In this paper we derive the phase diagram of the Dirac spectrum from a chiral Lagrangian. We show that the constant eigenvalue density corresponds to a pion condensed phase while the strongly oscillating region is given by a kaon condensed phase.
The normal phase with nonzero chiral condensate but vanishing Bose condensates coincides with the region of the complex plane where there are no eigenvalues. 3. Essential Magnesium Alloys Binary Phase Diagrams and Their Thermochemical Data Directory of Open Access Journals (Sweden) 2014-01-01 Full Text Available Magnesium-based alloys are becoming a major industrial material for structural applications because of their potential weight-saving characteristics. All the commercial Mg alloy series, such as AZ, AM, AE, EZ, and ZK, are multicomponent, and hence it is important to understand the phase relations of the alloying elements with Mg. In this work, eleven essential Mg-based binary systems including Mg-Al/Zn/Mn/Ca/Sr/Y/Ni/Ce/Nd/Cu/Sn have been reviewed. Each of these systems has been discussed critically on the aspects of phase diagram and thermodynamic properties. All the available experimental data have been summarized and critically assessed to provide a detailed understanding of the systems. The phase diagrams are calculated based on the most up-to-date optimized parameters. The thermodynamic model parameters for all the systems except Mg-Nd have been summarized in tables. The crystallographic information of the intermetallic compounds of different binary systems is provided. Also, the heats of formation of the intermetallic compounds obtained from experiments, first-principles calculations, and CALPHAD optimizations are provided. In addition, a reoptimization of the Mg-Y system has been done in this work, since new experimental data showed wider solubility ranges of the intermetallic compounds. 4. A model for non-equilibrium, non-homogeneous two-phase critical flow International Nuclear Information System (INIS) Bassel, Wageeh Sidrak; Ting, Daniel Kao Sun 1999-01-01 Critical two-phase flow is a very important phenomenon in nuclear reactor technology for the analysis of loss-of-coolant accidents.
Several recent papers, Lee and Shrock (1990), Dagan (1993) and Downar (1996), among others, treat this phenomenon using complex models which require heuristic parameters such as relaxation constants or interfacial transfer models. In this paper, a mathematical model for one-dimensional, non-equilibrium, non-homogeneous two-phase flow in a constant-area duct is developed. The model consists of three conservation equations: mass, momentum, and energy. Two important variables are defined in the model: the equilibrium constant in the energy equation and the impulse function in the momentum equation. In the energy equation, the enthalpy of the liquid phase is determined by a linear interpolation function between the liquid phase enthalpy at the inlet condition and the saturated liquid enthalpy at the local pressure. The interpolation coefficient is the equilibrium constant. The momentum equation is expressed in terms of the impulse function. It is considered that there is slip between the liquid and vapor phases, the liquid phase is in a metastable state, and the vapor phase is in a saturated stable state. The model is not heuristic in nature and does not require complex interface transfer models. It is proved numerically that, at the critical condition, the partial derivative of the two-phase pressure drop with respect to the local pressure or to the phase velocity must be zero. This criterion is demonstrated by numerical examples. The experimental works of Fauske (1962) and Jeandey (1982) were analyzed, resulting in estimated numerical values for important parameters such as the slip ratio, equilibrium constant, and two-phase frictional pressure drop. (author) 5. What is the real role of the equilibrium phase in abdominal computed tomography? Energy Technology Data Exchange (ETDEWEB) Salvadori, Priscila Silveira [Universidade Federal de Sao Paulo (EPM-Unifesp), Sao Paulo, SP (Brazil).
Escola Paulista de Medicina; Costa, Danilo Manuel Cerqueira; Romano, Ricardo Francisco Tavares; Galvao, Breno Vitor Tomaz; Monjardim, Rodrigo da Fonseca; Bretas, Elisa Almeida Sathler; Rios, Lucas Torres; Shigueoka, David Carlos; Caldana, Rogerio Pedreschi; D' Ippolito, Giuseppe, E-mail: giuseppe_dr@uol.com.br [Universidade Federal de Sao Paulo (EPM-Unifesp), Sao Paulo, SP (Brazil). Escola Paulista de Medicina. Department of Diagnostic Imaging 2013-03-15 Objective: To evaluate the role of the equilibrium phase in abdominal computed tomography. Materials and Methods: A retrospective, cross-sectional, observational study reviewed 219 consecutive contrast-enhanced abdominal computed tomography images acquired in a three-month period, for different clinical indications. For each study, two reports were issued - one based on the initial analysis of non-contrast-enhanced, arterial and portal phases only (first analysis), and a second reading of these phases added to the equilibrium phase (second analysis). At the end of both readings, differences between primary and secondary diagnoses were pointed out and recorded, in order to measure the impact of suppressing the equilibrium phase on the clinical outcome for each of the patients. The extension of Fisher's exact test was utilized to evaluate the changes in the primary diagnosis (p < 0.05 as significant). Results: Among the 219 cases reviewed, the absence of the equilibrium phase resulted in a change in the primary diagnosis in only one case (0.46%; p > 0.999). As regards secondary diagnoses, changes after the second analysis were observed in five cases (2.3%). Conclusion: For clinical scenarios such as cancer staging, acute abdomen and investigation for abdominal collections, the equilibrium phase is dispensable and does not offer any significant diagnostic contribution. (author) 6.
Determination of the equilibrium miscibility gap in the Pd-Rh alloy system using metal nanopowders obtained by decomposition of coordination compounds Energy Technology Data Exchange (ETDEWEB) Shubin, Yu.V., E-mail: shubin@niic.nsc.ru; Plyusnin, P.E.; Korenev, S.V. 2015-02-15 Highlights: • The Pd-Rh phase diagram has been experimentally reinvestigated. • The true equilibrium was achieved with the two-way approach. • The critical point of the miscibility gap lies at 58 at.% Rh and 820 °C. - Abstract: The Pd-Rh phase diagram has been reinvestigated in the subsolidus region using X-ray diffraction, scanning and transmission electron microscopy. The true equilibrium at the miscibility boundary was achieved with the two-way approach. Nanosized powders of metastable solid solutions and two-phase palladium-rhodium mixtures were used to shorten the time required to equilibrate the system. The initial samples were prepared by decomposition of the coordination compounds [Pd(NH{sub 3}){sub 2}Cl{sub 2}], [Rh(NH{sub 3}){sub 5}Cl]Cl{sub 2}, [Pd(NH{sub 3}){sub 4}]{sub 3}[Rh(NO{sub 2}){sub 6}]{sub 2} and [Pd(NH{sub 3}){sub 4}][Rh(NH{sub 3})(NO{sub 2}){sub 5}]. The obtained phase diagram exhibits a miscibility gap wider than generally accepted, with the critical point of solubility at 58 at.% Rh and 820 °C. 7. Phase transformations and systems driven far from equilibrium International Nuclear Information System (INIS) Ma, E.; Atzmon, M.; Bellon, P.; Trivedi, R. 1998-01-01 This volume compiles invited and contributed papers that were presented at Symposium B of the 1997 Materials Research Society Fall Meeting, Phase Transformations and Systems Driven Far From Equilibrium, which was held December 1-5, in Boston, Massachusetts. While this symposium followed the tradition of previous MRS symposia on the fundamental topic of phase transformations, this year the emphasis was on materials systems driven far from equilibrium.
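The Pd-Rh miscibility gap above is determined experimentally and, in practice, modeled with CALPHAD-optimized Gibbs energies. A far simpler illustration of how a miscibility gap follows from a free-energy curve is the symmetric regular solution g(x) = Ωx(1-x) + RT[x ln x + (1-x) ln(1-x)] (a toy assumption for illustration; the real Pd-Rh gap is asymmetric, with its critical point near 58 at.% Rh). By symmetry the common tangent is horizontal, so the binodal can be found by bisection on g'(x):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def binodal_x(omega, T, tol=1e-12):
    """Left binodal composition of a symmetric regular solution with
    interaction parameter omega (J/mol); the right branch is 1 - x.
    The binodal solves g'(x) = omega*(1 - 2x) + R*T*ln(x/(1-x)) = 0
    for x < 0.5."""
    if T >= omega / (2.0 * R):        # critical temperature: T_c = omega / (2R)
        return None                   # single-phase region, no miscibility gap
    g1 = lambda x: omega * (1.0 - 2.0 * x) + R * T * math.log(x / (1.0 - x))
    lo, hi = 1e-9, 0.5 - 1e-9         # g1(lo) < 0 and g1(hi) > 0 below T_c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g1(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For Ω = 20 kJ/mol this toy model gives T_c = Ω/(2R) ≈ 1203 K, and the gap widens as the temperature decreases, qualitatively like the Pd-Rh subsolidus gap.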
The central theme of the majority of the work presented is the understanding of the thermodynamics and kinetics of phase transformations, with significant coverage of metastable materials and externally forced transformations driven, for example, by energy beams or mechanical deformation. The papers are arranged in seven sections: solidification theory and experiments; nucleation; solid state transformations and microstructural evolution; beam-induced transformations; amorphous solids; interfacial and thin film transformations; and nanophases and mechanical alloying. One hundred three papers have been processed separately for inclusion in the database. 8. High-pressure phase transition and phase diagram of gallium arsenide Science.gov (United States) Besson, J. M.; Itié, J. P.; Polian, A.; Weill, G.; Mansot, J. L.; Gonzalez, J. 1991-09-01 Under hydrostatic pressure, cubic GaAs-I undergoes phase transitions to at least two orthorhombic structures. The initial phase transition to GaAs-II has been investigated by optical-transmittance measurements, Raman scattering, and x-ray absorption. The structure of pressurized samples, which are retrieved at ambient pressure, has been studied by x-ray diffraction and high-resolution diffraction microscopy. Various criteria that define the domain of stability of GaAs-I are examined, such as the occurrence of crystalline defects, the local variation in atomic coordination number, or the actual change in crystal structure. These are shown not to occur at the same pressure at 300 K, the latter being observable only several GPa above the actual thermodynamic instability pressure of GaAs-I. Comparison of the evolution of these parameters on increasing and decreasing pressure locates the thermodynamic transition region GaAs-I → GaAs-II at 12 ± 1.5 GPa and 300 K, which is lower than generally reported.
The use of thermodynamic relations around the triple point, and of regularities in the properties of isoelectronic and isostructural III-V compounds, yields a phase diagram for GaAs which is consistent with this value. 9. Phase equilibrium study on system uranium-plutonium-tungsten-carbon International Nuclear Information System (INIS) Ugajin, Mitsuhiro 1976-11-01 Metallurgical properties of the U-Pu-W-C system have been studied with emphasis on phases and reactions. Free energy of compound formation, carbon activity and U/Pu segregation in the W-doped carbide fuel are estimated using phase diagram data. The results indicate that tungsten metal is useful as a thermochemical stabilizer of the carbide fuel. Tungsten has high temperature stability in contact with uranium carbide and mixed uranium-plutonium carbide. (auth.) 10. Concurrence of dynamical phase transitions at finite temperature in the fully connected transverse-field Ising model Science.gov (United States) Lang, Johannes; Frank, Bernhard; Halimeh, Jad C. 2018-05-01 We construct the finite-temperature dynamical phase diagram of the fully connected transverse-field Ising model from the vantage point of two disparate concepts of dynamical criticality. An analytical derivation of the classical dynamics and exact diagonalization simulations are used to study the dynamics after a quantum quench in the system prepared in a thermal equilibrium state. The different dynamical phases characterized by the type of nonanalyticities that emerge in an appropriately defined Loschmidt-echo return rate directly correspond to the dynamical phases determined by the spontaneous breaking of Z2 symmetry in the long-time steady state. The dynamical phase diagram is qualitatively different depending on whether the initial thermal state is ferromagnetic or paramagnetic. 
Whereas the former leads to a dynamical phase diagram that can be directly related to its equilibrium counterpart, the latter gives rise to a divergent dynamical critical temperature at vanishing final transverse-field strength. 11. Phase Diagram of a Simple Model for Fractional Topological Insulator Science.gov (United States) Chen, Hua; Yang, Kun 2012-02-01 We study a simple model of two species of (or spin-1/2) fermions with short-range intra-species repulsion in the presence of opposite (effective) magnetic fields, each at filling factor 1/3. In the absence of inter-species interaction, the ground state is simply two copies of the 1/3 Laughlin state, with opposite chirality. Due to the overall time-reversal symmetry, this is a fractional topological insulator. We show that this phase is stable against moderate inter-species interactions. However, strong enough inter-species repulsion leads to phase separation, while strong enough inter-species attraction drives the system into a superfluid phase. We obtain the phase diagram through exact diagonalization calculations. The nature of the fractional topological insulator-superfluid phase transition is discussed using an appropriate Chern-Simons-Ginzburg-Landau effective field theory. 12. Dynamic phase transition in the kinetic spin-1 Blume-Capel model: Phase diagrams in the temperature and crystal-field interaction plane International Nuclear Information System (INIS) Keskin, M.; Canko, O.; Temizer, U. 2007-01-01 Within a mean-field approach, the stationary states of the kinetic spin-1 Blume-Capel model in the presence of a time-dependent oscillating external magnetic field are studied. The Glauber-type stochastic dynamics is used to describe the time evolution of the system and obtain the mean-field dynamic equation of motion. The dynamic phase-transition points are calculated and phase diagrams are presented in the temperature and crystal-field interaction plane.
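The mean-field dynamic equation of motion invoked here can be made concrete in a stripped-down setting. For the kinetic spin-1/2 Ising model in a mean field (a simplified analogue of the spin-1 Blume-Capel dynamics, with zJ = 1 and no crystal field; this simplification is mine, not the record's model), Glauber dynamics gives dm/dt = -m + tanh[(m + h0 cos ωt)/T], and the dynamic order parameter Q, the period average of m(t), distinguishes the dynamically ordered phase (Q ≠ 0) from the disordered one (Q ≈ 0):

```python
import math

def dynamic_order_parameter(T, h0, omega=2 * math.pi, periods=100, steps_per_period=2000, m0=0.5):
    """Integrate the mean-field Glauber equation of motion
        dm/dt = -m + tanh((m + h0*cos(omega*t)) / T)
    by forward Euler and return Q, the average of m over the last 10 drive periods."""
    dt = (2 * math.pi / omega) / steps_per_period
    m, t = m0, 0.0
    for _ in range((periods - 10) * steps_per_period):   # discard the transient
        m += dt * (-m + math.tanh((m + h0 * math.cos(omega * t)) / T))
        t += dt
    acc = 0.0
    for _ in range(10 * steps_per_period):               # average over 10 periods
        m += dt * (-m + math.tanh((m + h0 * math.cos(omega * t)) / T))
        t += dt
        acc += m * dt
    return acc / (10 * 2 * math.pi / omega)
```

At low temperature and weak field the magnetization cannot follow the drive and oscillates around a nonzero value (Q ≠ 0); at high temperature it oscillates symmetrically around zero (Q ≈ 0), which is the dynamic phase transition mapped out in the record's phase diagrams.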
According to the values of the magnetic field amplitude, three fundamental types of phase diagrams are found: one exhibits a dynamic tricritical point, while the other two exhibit a dynamic zero-temperature critical point. 13. Dynamic phase diagrams of a cylindrical Ising nanowire in the presence of a time dependent magnetic field International Nuclear Information System (INIS) Kantar, Ersin; Ertaş, Mehmet; Keskin, Mustafa 2014-01-01 The dynamic phase diagrams of a cylindrical Ising nanowire in the presence of a time-dependent magnetic field are obtained by using the effective-field theory with correlations based on the Glauber-type stochastic dynamics. According to the values of the interaction parameters, a number of interesting properties have been found in the dynamic phase diagrams, such as many dynamic critical points (tricritical point, double critical end point, critical end point, zero-temperature critical point, multicritical point, tetracritical point, and triple point) as well as reentrant phenomena. - Highlights: • The cylindrical Ising nanowire is investigated within the Glauber dynamics based on EFT. • The time variations of average order parameters to find phases are studied. • The dynamic phase diagrams are found for the different interaction parameters. • The system displays the critical points as well as a reentrant behavior. 15. Magnetic structures, phase diagram and spin waves of magneto-electric LiNiPO4 DEFF Research Database (Denmark) Jensen, Thomas Bagger Stibius 2007-01-01 LiNiPO4 is a magneto-electric material, having co-existing antiferromagnetic and ferroelectric phases when suitable magnetic fields are applied at low temperatures. Such systems have received growing interest in recent years, but the nature of the magneto-electric couplings is yet to be fully...... through the last three years, it is not the primary subject of this thesis. The objective of the PhD project has been to provide groundwork that may be beneficial to future studies of LiNiPO4. More specifically, we have mapped out the magnetic H-T phase diagram with magnetic fields below 14.7 T applied...... along the crystallographic c-axis, determined the magnetic structures for the phases in the phase diagram, and have set up a spin model Hamiltonian describing the spin wave dynamics and estimating the relevant magnetic interactions.... 16. Ammonia-water phase diagram and its implications for icy satellites International Nuclear Information System (INIS) Johnson, M.L.; Nicol, M.
1986-01-01 A Holzapfel-type diamond anvil cell is used to determine the NH3-H2O phase diagram in the region from 0 to 33 mole percent NH3, 240 to 370 K, and 0 to 5 GPa. The following phases were identified: liquid; water ices Ih, III, V, VI, VII, and VIII; ammonia monohydrate, NH3·H2O; and ammonia dihydrate, NH3·2H2O. Ammonia dihydrate becomes prominent at moderate pressures (less than 1 GPa), with planetologically significant implications, including the possibility of layering in Titan's magma ocean. 17. New Wang-Landau approach to obtain phase diagrams for multicomponent alloys Science.gov (United States) Takeuchi, Kazuhito; Tanaka, Ryohei; Yuge, Koretaka 2017-10-01 We develop an approach to apply the Wang-Landau algorithm to multicomponent alloys in a semi-grand-canonical ensemble. Although the Wang-Landau algorithm has great advantages over conventional sampling methods, there are few applications to alloys. This is because calculating compositions in a semi-grand-canonical ensemble via the Wang-Landau algorithm requires a multidimensional density of states in terms of total energy and compositions, and constructing it is difficult from the viewpoints of both implementation and computational cost. In this study, we develop a simple approach to calculate the alloy phase diagram based on the Wang-Landau algorithm, and show that a number of one-dimensional densities of states could lead to compositions in a semi-grand-canonical ensemble as a multidimensional density of states could. Finally, we apply the present method to Cu-Au and Pd-Rh alloys and confirm that the present method successfully describes the phase diagram with high efficiency, validity, and accuracy. 18.
Analysis of three-phase equilibrium conditions for methane hydrate by isometric-isothermal molecular dynamics simulations Science.gov (United States) Yuhara, Daisuke; Brumby, Paul E.; Wu, David T.; Sum, Amadeu K.; Yasuoka, Kenji 2018-05-01 To develop prediction methods of three-phase equilibrium (coexistence) conditions of methane hydrate by molecular simulations, we examined the use of NVT (isometric-isothermal) molecular dynamics (MD) simulations. NVT MD simulations of coexisting solid hydrate, liquid water, and vapor methane phases were performed at four different temperatures, namely, 285, 290, 295, and 300 K. NVT simulations do not require complex pressure control schemes in multi-phase systems, and the growth or dissociation of the hydrate phase can lead to significant pressure changes in the approach toward equilibrium conditions. We found that the calculated equilibrium pressures tended to be higher than those reported by previous NPT (isobaric-isothermal) simulation studies using the same water model. The deviations of equilibrium conditions from previous simulation studies are mainly attributable to the employed calculation methods of pressure and Lennard-Jones interactions. We monitored the pressure in the methane phase, far from the interfaces with other phases, and confirmed that it was higher than the total pressure of the system calculated by previous studies. This fact clearly highlights the difficulties associated with the pressure calculation and control for multi-phase systems. The treatment of Lennard-Jones interactions without tail corrections in MD simulations also contributes to the overestimation of equilibrium pressure. Although improvements are still required to obtain accurate equilibrium conditions, NVT MD simulations exhibit potential for the prediction of equilibrium conditions of multi-phase systems. 20. Phase diagram of the Hubbard model with arbitrary band filling: renormalization group approach International Nuclear Information System (INIS) Cannas, Sergio A.; Cordoba Univ. Nacional; Tsallis, Constantino.
1991-01-01 The finite temperature phase diagram of the Hubbard model in d = 2 and d = 3 is calculated for arbitrary values of the parameter U/t and chemical potential μ using a quantum real space renormalization group. Evidence for a ferromagnetic phase at low temperatures is presented. (author). 15 refs., 5 figs. 1. Phase diagram of incoherently driven strongly correlated photonic lattices Science.gov (United States) Biella, Alberto; Storme, Florent; Lebreuilly, José; Rossini, Davide; Fazio, Rosario; Carusotto, Iacopo; Ciuti, Cristiano 2017-08-01 We explore theoretically the nonequilibrium photonic phases of an array of coupled cavities in the presence of incoherent driving and dissipation. In particular, we consider a Hubbard model system where each site is a Kerr nonlinear resonator coupled to a two-level emitter, which is pumped incoherently. Within a Gutzwiller mean-field approach, we determine the steady-state phase diagram of such a system. We find that, at a critical value of the intercavity photon hopping rate, a second-order nonequilibrium phase transition associated with the spontaneous breaking of the U(1) symmetry occurs. The transition from an incompressible Mott-like photon fluid to a coherent delocalized phase is driven by commensurability effects and not by the competition between photon hopping and optical nonlinearity. The essence of the mean-field predictions is corroborated by finite-size simulations obtained with matrix product operators and corner-space renormalization methods. 2. Thermodynamic properties of fluid mixtures at high pressures and high temperatures. Application to high explosives and to phase diagrams of binary mixtures International Nuclear Information System (INIS) Pittion-Rossillon, Gerard 1982-01-01 The free energy of mixtures of about ten chemically reacting species is calculated. In order to have accurate results near the freezing line, excess properties are deduced from a modern statistical mechanics theory.
Intermolecular potentials for like molecules are fitted to give good agreement with shock experiments in pure liquid samples, and mixture properties come naturally from the theory. The stationary Chapman-Jouguet detonation wave is calculated with a chemical equilibrium computer code, and results are in good agreement with experiment for a wide variety of explosives. One then studies gas-gas equilibria in a binary mixture and shows the extreme sensitivity of theoretical phase diagrams to the hypotheses of the model. (author) [fr] 3. Phase Diagram in a Random Mixture of Two Antiferromagnets with Competing Spin Anisotropies. I Science.gov (United States) Someya, Yoshiko 1981-12-01 The phase diagram of a random mixture of two antiferromagnets with competing spin anisotropies (A1-xBx) has been analyzed by extending the theory of Matsubara and Inawashiro, and Oguchi and Ishikawa. In the model assumed, the anisotropy energies are expressed by the anisotropic exchange interactions. According to this formulation, it has been shown that the concentration dependence of TN becomes a function of [equation], where P, Q = A, B; S_P is the magnitude of the P-spin, and J_PQ^η is the η component of the exchange integral between P- and Q-spins. Further, the phase boundary between an AF phase and an OAF (oblique antiferromagnetic) phase at T = 0 K has been shown to be determined by α (≡ S_B/S_A), if [equation] are given. The obtained phase diagrams for Fe1-xCoxCl2, K2Mn1-xFexF4 and Fe1-xCoxCl2·2H2O are compared with the experimental ones. 4. Lattice parameter values and phase diagram for the Cu{sub 2}Zn{sub 1-z}Fe{sub z}GeSe{sub 4} alloy system Energy Technology Data Exchange (ETDEWEB) Caldera, D. [Centro de Estudios de Semiconductores, Departamento de Fisica, Facultad de Ciencias, Universidad de Los Andes, Merida 5101 (Venezuela); Quintero, M.
[Centro de Estudios de Semiconductores, Departamento de Fisica, Facultad de Ciencias, Universidad de Los Andes, Merida 5101 (Venezuela)], E-mail: mquinter@ula.ve; Morocoima, M.; Quintero, E.; Grima, P.; Marchan, N.; Moreno, E.; Bocaranda, P. [Centro de Estudios de Semiconductores, Departamento de Fisica, Facultad de Ciencias, Universidad de Los Andes, Merida 5101 (Venezuela); Delgado, G.E. [Laboratorio de Cristalografia, Departamento de Quimica, Facultad de Ciencias, Universidad de Los Andes, Merida 5101 (Venezuela); Mora, A.E.; Briceno, J.M.; Fernandez, J.L. [Laboratorio de Analisis Quimico y Estructura de Materiales, Departamento de Fisica, Universidad de Los Andes, Merida 5101 (Venezuela) 2008-06-12 X-ray powder diffraction and differential thermal analysis (DTA) measurements were made on polycrystalline samples of the Cu{sub 2}Zn{sub 1-z}Fe{sub z}GeSe{sub 4} alloy system. The diffraction patterns were used to show the equilibrium conditions and to estimate lattice parameter values. It was found that, at room temperature, a single-phase solid solution with the tetragonal stannite α structure (I-42m) occurs across the whole composition range. The DTA thermograms were used to construct the phase diagram of the Cu{sub 2}Zn{sub 1-z}Fe{sub z}GeSe{sub 4} alloy system. It was confirmed that the Cu{sub 2}ZnGeSe{sub 4} compound melts incongruently. It was observed that undercooling effects occur for samples with z > 0.9.
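Lattice-parameter studies of pseudo-binary alloys such as this one are commonly compared against Vegard's law, a linear interpolation between the z = 0 and z = 1 end members. A minimal sketch (the end-member values below are illustrative placeholders, not the measured Cu2Zn(1-z)Fe(z)GeSe4 data from the record):

```python
def vegard(a0, a1, z):
    """Vegard's-law estimate of a lattice parameter at composition z,
    interpolating linearly between the z = 0 (a0) and z = 1 (a1) end members."""
    if not 0.0 <= z <= 1.0:
        raise ValueError("composition z must lie in [0, 1]")
    return (1.0 - z) * a0 + z * a1

# Illustrative placeholder values for the tetragonal a parameter (angstroms):
a_estimate = vegard(5.61, 5.63, 0.25)
```

Deviations of measured lattice parameters from this straight line (bowing) are themselves diagnostic of non-ideal mixing in the solid solution.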
The variation of the phase diagrams with the distribution parameters has been obtained, and some interesting results have been found, such as the disappearance of the reentrant behavior and of the first-order transitions which appear in the case of discrete distributions. Also, for single and double Gaussian distributions, ground state magnetizations for different distribution parameters have been determined, which can be regarded as separate partially ordered phases of the system. - Highlights: ► We give the phase diagrams of the Ising nanowire under the continuous randomly distributed magnetic field. ► Ground state magnetization values obtained. ► Different partially ordered phases observed. 6. Investigating the QCD phase diagram with hadron multiplicities at NICA Energy Technology Data Exchange (ETDEWEB) Becattini, F. [Universita di Firenze (Italy); INFN, Firenze (Italy); Stock, R. [Goethe University, Frankfurt am Main (Germany) 2016-08-15 We discuss the potential of the experimental programme at NICA to investigate the QCD phase diagram and particularly the position of the critical line at large baryon-chemical potential with accurate measurements of particle multiplicities. We briefly review the present status and we outline the tasks to be accomplished both theoretically and experimentally to make hadronic abundances a sensitive probe. (orig.) 7. Application Of Empirical Phase Diagrams For Multidimensional Data Visualization Of High Throughput Microbatch Crystallization Experiments. Science.gov (United States) Klijn, Marieke E; Hubbuch, Jürgen 2018-04-27 Protein phase diagrams are a tool to investigate cause and consequence of solution conditions on protein phase behavior. The effects are scored according to aggregation morphologies such as crystals or amorphous precipitates. Solution conditions affect morphological features, such as crystal size, as well as kinetic features, such as crystal growth time.
Common used data visualization techniques include individual line graphs or symbols-based phase diagrams. These techniques have limitations in terms of handling large datasets, comprehensiveness or completeness. To eliminate these limitations, morphological and kinetic features obtained from crystallization images generated with high throughput microbatch experiments have been visualized with radar charts in combination with the empirical phase diagram (EPD) method. Morphological features (crystal size, shape, and number, as well as precipitate size) and kinetic features (crystal and precipitate onset and growth time) are extracted for 768 solutions with varying chicken egg white lysozyme concentration, salt type, ionic strength and pH. Image-based aggregation morphology and kinetic features were compiled into a single and easily interpretable figure, thereby showing that the EPD method can support high throughput crystallization experiments in its data amount as well as its data complexity. Copyright © 2018. Published by Elsevier Inc. 8. Phase diagram as a function of temperature and magnetic field for magnetic semiconductors OpenAIRE Gonzalez, I.; Castro, J.; Baldomir, D. 2002-01-01 Using an extension of the Nagaev model of phase separation (E.L. Nagaev, and A.I. Podel'shchikov, Sov. Phys. JETP, 71 (1990) 1108), we calculate the phase diagram for degenerate antiferromagnetic semiconductors in the T-H plane for different current carrier densities. Both, wide-band semiconductors and 'double-exchange' materials, are investigated. 9. Phase diagram as a function of temperature and magnetic field for magnetic semiconductors Science.gov (United States) González, I.; Castro, J.; Baldomir, D. 2002-10-01 Using an extension of the Nagaev model of phase separation [E. L. Nagaev and A. I. Podel'shchikov, Sov. Phys. JETP, 71, 1108 (1990)] we calculate the phase diagram for degenerate antiferromagnetic semiconductors in the T-H plane for different current carrier densities. 
Both wide-band semiconductors and double-exchange materials are investigated. 10. Experimental investigation and thermodynamic calculations of the Bi–In–Ni phase diagram International Nuclear Information System (INIS) Premović, Milena; Minić, Duško; Manasijević, Dragan; Ćosović, Vladan; Živković, Dragana; Dervišević, Irma 2015-01-01 Highlights: • Calculated constitutive binary systems based on literature data. • Experimentally determined (DTA) temperatures of phase transformations compared with analytical calculations. • Definition of several vertical sections. • Calculated horizontal sections, confirmed by experimental SEM–EDS and XRD methods. • Calculated liquidus surface projection and determined invariant reactions occurring in the ternary Bi–In–Ni system. - Abstract: The phase diagram of the Bi–In–Ni ternary system was investigated using differential thermal analysis (DTA), scanning electron microscopy (SEM) with energy dispersive spectrometry (EDS), and X-ray powder diffraction (XRD) analysis. Experimentally obtained results were compared with the results of thermodynamic calculation of phase equilibria based on the calculation of phase diagrams (CALPHAD) method and literature data. Phase transition temperatures of alloys with overall compositions along three selected vertical sections, In–Bi0.8Ni0.2, x(Bi) = 0.6 and Bi–In0.5Ni0.5, were measured by DTA. Liquidus temperatures were experimentally determined and compared with the results of thermodynamic calculation. Identification of coexisting phases in samples equilibrated at 100 °C, 300 °C and 350 °C was carried out using the SEM–EDS and XRD methods. The obtained results were compared with the calculated isothermal sections of the Bi–In–Ni ternary system at the corresponding temperatures. The calculated liquidus projection and invariant equilibria of the Bi–In–Ni ternary system are presented. 11.
Prediction of phase equilibria in the In–Sb–Pb system Directory of Open Access Journals (Sweden) DUSKO MINIC 2008-03-01 Full Text Available Binary thermodynamic data, successfully used for phase diagram calculations of the binary systems In–Sb, Pb–Sb and In–Pb, were used for the prediction of the phase equilibria in the ternary In–Sb–Pb system. The predicted equilibrium phase diagram of the vertical Pb–InSb section was compared with the results of differential thermal analysis (DTA) and optical microscopy. The calculated phase diagram of the isothermal section at 300 °C was compared with the experimentally (SEM, EDX) determined compositions of phases in chosen alloys after annealing. Very good agreement between the binary-based thermodynamic prediction and the experimental data was found in all cases. The calculated liquidus projection of the ternary In–Sb–Pb system is also presented. 12. High pressure cosmochemistry applied to major planetary interiors: Experimental studies. [phase diagram for the ammonia water system] Science.gov (United States) Nicol, M. F.; Johnson, M.; Schwake, A. 1983-01-01 Progress is reported in the development of the P-T-X diagram for 0 ≤ X ≤ 0.50 and in the development of techniques for measuring adiabats of phases of NH3-H2O. The partial phase diagram is presented, investigations of the compositions of ammonia ices are described, and methods for obtaining the infrared spectra of ices are discussed. 13. Phase diagram of a bosonic ladder with two coupled chains International Nuclear Information System (INIS) Luthra, Meetu Sethi; Mishra, Tapan; Pai, Ramesh V.; Das, B. P. 2008-01-01 We study a bosonic ladder with two coupled chains using the finite-size density-matrix renormalization group method. We show that in a commensurate bosonic ladder the critical on-site interaction (U_C) for the superfluid to Mott insulator transition gets larger as the interchain hopping (t⊥) increases.
We analyze this quantum phase transition and obtain the phase diagram in the t⊥-U plane. We also consider the asymmetric case, where the on-site interactions are different in the two chains, and show that the system as a whole will not be in the Mott insulator phase unless both chains have on-site interactions greater than the critical value. 14. Quantum phase diagram of the integrable px+ipy fermionic superfluid DEFF Research Database (Denmark) Rombouts, Stefan; Dukelsky, Jorge; Ortiz, Gerardo 2010-01-01 We determine the zero-temperature quantum phase diagram of a px+ipy pairing model based on the exactly solvable hyperbolic Richardson-Gaudin model. We present analytical and large-scale numerical results for this model. In the continuum limit, the exact solution exhibits a third-order quantum phase transition, separating a strong-pairing from a weak-pairing phase. The mean-field solution allows us to connect these results to other models with px+ipy pairing order. We define an experimentally accessible characteristic length scale, associated with the size of the Cooper pairs, that diverges at the transition point, indicating that the phase transition is of a confinement-deconfinement type without a local order parameter. We propose an experimental measurement to detect the transition. We show that this phase transition is not limited to the px+ipy pairing model but can be found in any representation... 15. Modeling of metastable phase formation diagrams for sputtered thin films. Science.gov (United States) Chang, Keke; Music, Denis; To Baben, Moritz; Lange, Dennis; Bolvardi, Hamid; Schneider, Jochen M 2016-01-01 A method to model the metastable phase formation in the Cu-W system, based on the critical surface diffusion distance, has been developed. The driver for the formation of a second phase is the critical diffusion distance, which depends on the solubility of W in Cu and on the solubility of Cu in W.
Based on comparative theoretical and experimental data, we can describe the relationship between the solubilities and the critical diffusion distances in order to model the metastable phase formation. Metastable phase formation diagrams for Cu-W and Cu-V thin films are predicted and validated by combinatorial magnetron sputtering experiments. The correlative experimental and theoretical research strategy adopted here enables us to efficiently describe this relationship and thereby model the metastable phase formation during magnetron sputtering. 16. Magnetic phase diagram of ErGe 1-xSi x (0 Science.gov (United States) Thuéry, P.; El Maziani, F.; Clin, M.; Schobinger-Papamantellos, P.; Buschow, K. H. J. 1993-10-01 The composition-temperature magnetic phase diagram of ErGe 1-x Si x (0 0.40. For 0.17 ≤ x ≤ 0.55, a first-order transition occurs as a function of temperature between these two phases. For x ≥ 0.65, a lock-in transition takes place at T_IC, leading from the wavevector (k'_x, 0, k'_z) to (1/2, 0, 1/2), as was already observed in ErSi. Finally, for x < 0.17 or 0.55 < x < 0.65, the wavevectors of the incommensurate phases, characterized by (0, 0, k_z) or (k'_x, 0, k'_z) respectively, remain unchanged in the whole temperature range below T_N. For x ≥ 0.65, a small amount of a magnetic phase characterized by the wavevector (0, 0, 1/2) coexists with the main phases below a Néel temperature T'_N slightly lower than T_N. In all cases, the erbium magnetic moments are collinear along the orthorhombic a-axis; the arrangement of the moments in the commensurate phases is the same as in ErSi, and the incommensurate orderings correspond to sine-wave amplitude modulations. A brief account of the theoretical interpretation of this phase diagram is finally given. 17.
Correlation between viscous-flow activation energy and phase diagram in four systems of Cu-based alloys Energy Technology Data Exchange (ETDEWEB) Ning Shuang [Key Laboratory of Liquid Structure and Heredity of Materials, Ministry of Education, Shandong University, Jinan 250061 (China)]; Bian Xiufang, E-mail: xfbian@sdu.edu.c [Key Laboratory of Liquid Structure and Heredity of Materials, Ministry of Education, Shandong University, Jinan 250061 (China)]; Ren Zhenfeng [Key Laboratory of Liquid Structure and Heredity of Materials, Ministry of Education, Shandong University, Jinan 250061 (China)] 2010-09-01 Activation energies are obtained from the temperature dependence of the viscosity by fitting to the Arrhenius equation for liquid alloys of the Cu-Sb, Cu-Te, Cu-Sn and Cu-Ag systems. We find that the activation energy varies with concentration in a manner similar to the liquidus curves in the phase diagrams. Moreover, the activation energy has a maximum in the composition range of the intermetallic phases and a minimum at the eutectic point. The correlation between the activation energy and the phase diagrams is further discussed. 18. Molecular Simulation of the Phase Diagram of Methane Hydrate: Free Energy Calculations, Direct Coexistence Method, and Hyperparallel Tempering. Science.gov (United States) Jin, Dongliang; Coasne, Benoit 2017-10-24 Different molecular simulation strategies are used to assess the stability of methane hydrate under various temperature and pressure conditions. First, using two water molecular models, free energy calculations consisting of the Einstein molecule approach in combination with semigrand Monte Carlo simulations are used to determine the pressure-temperature phase diagram of methane hydrate. With these calculations, we also estimate the chemical potentials of water and methane and the methane occupancy at coexistence.
Second, we consider two other advanced molecular simulation techniques that allow probing the phase diagram of methane hydrate: the direct coexistence method in the grand canonical ensemble and the hyperparallel tempering Monte Carlo method. These two direct techniques are found to provide stability conditions that are consistent with the pressure-temperature phase diagram obtained using rigorous free energy calculations. The phase diagram obtained in this work, which is consistent with previous simulation studies, is close to its experimental counterpart provided the TIP4P/Ice model is used to describe the water molecule. 19. Ground-state phase diagram of an (S, S') = (1, 2) spin-alternating chain with competing single-ion anisotropies International Nuclear Information System (INIS) Tonegawa, T; Okamoto, K; Sakai, T; Kaburagi, M 2009-01-01 Employing various numerical methods, we determine the ground-state phase diagram of an (S, S') = (1, 2) spin-alternating chain with antiferromagnetic nearest-neighbor exchange interactions and uniaxial single-ion anisotropies. The resulting phase diagram consists of eight kinds of phases, including two which accompany the spontaneous breaking of the translational symmetry and a ferrimagnetic phase in which the ground-state magnetization varies continuously with the uniaxial single-ion anisotropy constants for the S = 1 and S' = 2 spins. The appearance of these three phases is attributed to the competition between the uniaxial single-ion anisotropies of both spins. 20. Optimization and calculation of the MCl-ZnCl2 (M = Li, Na, K) phase diagrams International Nuclear Information System (INIS) Romero-Serrano, Antonio; Hernandez-Ramirez, Aurelio; Cruz-Ramirez, Alejandro; Hallen-Lopez, Manuel; Zeifert, Beatriz 2010-01-01 An earlier structural model for binary silicate melts and glasses is extended to zinc chloride-alkali metal chloride systems.
The evaluation of the available thermodynamic and phase diagram data for the MCl-ZnCl2 (M = Li, Na, K) binary systems has been carried out using the structural model for the liquid phase. This thermodynamic model is based on the assumption that each alkali chloride produces the depolymerization of the ZnCl2 network with a characteristic free-energy change. A least-squares optimization program permits all available thermodynamic and phase diagram data to be optimized simultaneously. In this manner, data for these binary systems have been analysed and represented with a small number of parameters. 1. Two-phase regime in the magnetic field-temperature phase diagram of a type-II superconductor International Nuclear Information System (INIS) Adams, L.L.A.; Halterman, Klaus; Valls, Oriol T.; Goldman, A.M. 2004-01-01 The magnetic field and temperature dependencies of the magnetic moments of superconducting crystals of V3Si have been studied. In a constant magnetic field and at temperatures somewhat below the superconducting transition temperature, the moments are hysteretic in temperature. However, the magnetic moment-magnetic field isotherms are reversible and exhibit features that formally resemble the pressure-volume isotherms of the liquid-gas transition. This suggests the existence of a first-order phase transition, a two-phase regime, and a critical point in the superconducting phase diagram. The two phases are disordered vortex configurations with the same magnetization, but with different vortex densities. The entropy change, determined from the data using the Clausius-Clapeyron equation, is consistent with estimates based on the difference in the vortex densities of the two phases. 2.
Two-phase regime in the magnetic field-temperature phase diagram of a type-II superconductor Energy Technology Data Exchange (ETDEWEB) Adams, L.L.A.; Halterman, Klaus; Valls, Oriol T.; Goldman, A.M. 2004-01-01 The magnetic field and temperature dependencies of the magnetic moments of superconducting crystals of V{sub 3}Si have been studied. In a constant magnetic field and at temperatures somewhat below the superconducting transition temperature, the moments are hysteretic in temperature. However, the magnetic moment-magnetic field isotherms are reversible and exhibit features that formally resemble the pressure-volume isotherms of the liquid-gas transition. This suggests the existence of a first-order phase transition, a two-phase regime, and a critical point in the superconducting phase diagram. The two phases are disordered vortex configurations with the same magnetization, but with different vortex densities. The entropy change, determined from the data using the Clausius-Clapeyron equation, is consistent with estimates based on the difference in the vortex densities of the two phases. 3. Oxidation phase growth diagram of vanadium oxides film fabricated by rapid thermal annealing Institute of Scientific and Technical Information of China (English) Tamura KOZO; Zheng-cao LI; Yu-quan WANG; Jie NI; Yin HU; Zheng-jun ZHANG 2009-01-01 Thermally evaporated vanadium oxide films were annealed in air by rapid thermal processing (RTP). By adjusting the annealing temperature and time, a series of vanadium oxide films with various oxidation phases and surface morphologies was fabricated, and an oxidation phase growth diagram was established. It was observed that different oxidation phases appear within a limited and continuous range of annealing conditions, and that the morphological changes are related to the oxidation process. 4.
Studies on the QCD Phase Diagram at SPS and FAIR International Nuclear Information System (INIS) Blume, Christoph 2013-01-01 A review of results of the energy scan program at the CERN-SPS by the NA49 experiment is given. Presented are observables related to the search for a critical point in the QCD phase diagram and for the onset of deconfinement. Furthermore, the ongoing experimental program of NA61 at the CERN-SPS and the plans of the CBM experiment at FAIR are discussed. 5. Au-Ni nanoparticles: Phase diagram prediction, synthesis, characterization, and thermal stability Czech Academy of Sciences Publication Activity Database Sopoušek, J.; Kryštofová, A.; Premovic, M.; Zobač, O.; Postlerová, S.; Brož, P.; Buršík, Jiří 2017-01-01 Roč. 58, SEP (2017), s. 25-33 ISSN 0364-5916 R&D Projects: GA ČR(CZ) GA17-12844S; GA ČR(CZ) GA17-15405S Institutional support: RVO:68081723 Keywords: nanoalloy * CALPHAD * phase diagram Subject RIV: BJ - Thermodynamics OBOR OECD: Thermodynamics Impact factor: 1.600, year: 2016 6. Modelling the continuous cooling transformation diagram of engineering steels using neural networks. Part I. Phase regions Energy Technology Data Exchange (ETDEWEB) Wolk, P.J. van der; Wang, J. [Delft Univ. of Technology (Netherlands); Sietsma, J.; Zwaag, S. van der [Delft Univ. of Technology, Lab. for Materials Science (Netherlands)] 2002-12-01 A neural network model for the calculation of the phase regions of the continuous cooling transformation (CCT) diagram of engineering steels has been developed. The model is based on experimental CCT diagrams of 459 low-alloy steels, and calculates the CCT diagram as a function of composition and austenitisation temperature. Nine alloying elements are taken into account in describing the composition. The model reproduces the original diagrams rather accurately, with deviations no larger than the average inaccuracy of the experimental diagrams.
Therefore, it can be considered an adequate alternative to the experimental determination of the CCT diagram of a given steel within the composition range used. The effects of alloying elements can be quantified, either individually or in combination, with the model. Nonlinear composition dependencies are observed. (orig.) 7. Controlling competing electronic orders via non-equilibrium acoustic phonons Science.gov (United States) Schuett, Michael; Orth, Peter; Levchenko, Alex; Fernandes, Rafael The interplay between multiple electronic orders is a hallmark of strongly correlated systems displaying unconventional superconductivity. While doping, pressure, and magnetic field are the standard knobs employed to assess these different phases, ultrafast pump-and-probe techniques have opened a new window to probe these systems. Recent examples include the ultrafast excitation of coherent optical phonons coupling to electronic states in cuprates and iron pnictides. In this work, we demonstrate theoretically that non-equilibrium acoustic phonons provide a promising framework to manipulate competing electronic phases and favor unconventional superconductivity over other states. In particular, we show that electrons coupled to out-of-equilibrium anisotropic acoustic phonons enter a steady state in which the effective electronic temperature varies around the Fermi surface. Such a momentum-dependent temperature can then be used to selectively heat electronic states that contribute primarily to density-wave instabilities, reducing their competition with superconductivity. We illustrate this phenomenon by computing the microscopic steady-state phase diagram of the iron pnictides, showing that superconductivity is enhanced with respect to the competing antiferromagnetic phase. 8. Global mean-field phase diagram of the spin-1 Ising ferromagnet in a random crystal field Science.gov (United States) Borelli, M. E. S.; Carneiro, C. E. I.
1996-02-01 We study the phase diagram of the mean-field spin-1 Ising ferromagnet in a uniform magnetic field H and a random crystal field Δ_i, with probability distribution P(Δ_i) = pδ(Δ_i − Δ) + (1 − p)δ(Δ_i). We analyse the effects of randomness on the first-order surfaces of the Δ-T-H phase diagram for different values of the concentration p and show how these surfaces are affected by the dilution of the crystal field. 9. Phase diagram, thermodynamic investigations, and modelling of systems relevant to lithium-ion batteries International Nuclear Information System (INIS) Fuertauer, Siegfried; Beutl, Alexander; Flandorfer, Hans; Henriques, David; Giel, Hans; Markus, Thorsten 2017-01-01 This article reports on two consecutive joint projects titled "Experimental Thermodynamics and Phase Relations of New Electrode Materials for Lithium-Ion Batteries", which were performed in the framework of the WenDeLIB 1473 priority program "Materials with new Design for Lithium Ion Batteries". Hundreds of samples were synthesized using experimental techniques specifically developed to deal with highly reactive lithium and lithium-containing compounds, to generate electrochemical, phase diagram and crystal structure data in the Cu-Li, Li-Sn, Li-Sb, Cu-Li-Sn, Cu-Li-Sb and selected oxide systems. The thermochemical and phase diagram data were subsequently used to develop self-consistent thermodynamic descriptions of several binary systems. In the present contribution, the experimental techniques, working procedures, results and their relevance to the development of new electrode materials for lithium ion batteries are discussed and summarized. The collaboration between the three groups has resulted in more than fifteen published articles during the six-year funding period. 10.
Phase diagram, thermodynamic investigations, and modelling of systems relevant to lithium-ion batteries Energy Technology Data Exchange (ETDEWEB) Fuertauer, Siegfried; Beutl, Alexander; Flandorfer, Hans [Vienna Univ. (Austria). Dept. of Inorganic Chemistry - Functional Materials]; Li, Dajian; Cupid, Damian [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany). Inst. for Applied Materials - Applied Materials Physics (IAM-AWP)]; Henriques, David; Giel, Hans; Markus, Thorsten [Mannheim Univ. of Applied Sciences (Germany). Inst. for Thermo- and Fluiddynamics] 2017-11-15 This article reports on two consecutive joint projects titled "Experimental Thermodynamics and Phase Relations of New Electrode Materials for Lithium-Ion Batteries", which were performed in the framework of the WenDeLIB 1473 priority program "Materials with new Design for Lithium Ion Batteries". Hundreds of samples were synthesized using experimental techniques specifically developed to deal with highly reactive lithium and lithium-containing compounds, to generate electrochemical, phase diagram and crystal structure data in the Cu-Li, Li-Sn, Li-Sb, Cu-Li-Sn, Cu-Li-Sb and selected oxide systems. The thermochemical and phase diagram data were subsequently used to develop self-consistent thermodynamic descriptions of several binary systems. In the present contribution, the experimental techniques, working procedures, results and their relevance to the development of new electrode materials for lithium ion batteries are discussed and summarized. The collaboration between the three groups has resulted in more than fifteen published articles during the six-year funding period. 11. Ordered phase and non-equilibrium fluctuation in stock market Science.gov (United States) 2002-08-01 We analyze the statistics of daily price changes of the stock market in the framework of a statistical physics model for the collective fluctuations of a stock portfolio.
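A minimal sketch of the spin coding used by this model: daily price changes are mapped to up/down spins s_i(t) = ±1, and the equal-time spin-spin correlations are the raw ingredients from which spin-glass-style couplings would be built. The price series below are synthetic illustrations, not real market data:

```python
# Code daily closing prices into up/down spins and compute pairwise
# equal-time correlations C_ij = <s_i s_j> (synthetic data; not the
# authors' code).

def to_spins(prices):
    """+1 for an up day, -1 for a down (or flat) day."""
    return [1 if b > a else -1 for a, b in zip(prices, prices[1:])]

def correlation(s1, s2):
    return sum(a * b for a, b in zip(s1, s2)) / len(s1)

stock_a = [100, 101, 103, 102, 105, 104, 106]
stock_b = [50, 51, 52, 51, 53, 52, 54]     # moves with stock_a
stock_c = [80, 79, 78, 79, 77, 78, 76]     # moves against stock_a

sa, sb, sc = map(to_spins, (stock_a, stock_b, stock_c))
c_ab = correlation(sa, sb)   # +1.0 for these perfectly aligned series
c_ac = correlation(sa, sc)   # -1.0 for these anti-aligned series
```

A correlation matrix over all 30 issues of the portfolio, built this way, is what a spin-glass-style Hamiltonian analysis would start from.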
In this model the time series of price changes are coded into sequences of up and down spins, and the Hamiltonian of the system is expressed by spin-spin interactions, as in spin-glass models of disordered magnetic systems. Through an analysis of the Dow-Jones industrial portfolio of 30 stock issues with this model, we find a non-equilibrium fluctuation mode at a point slightly below the boundary between the ordered and disordered phases. The remaining 29 modes are still in the disordered phase and are well described by the Gibbs distribution. The variance of the fluctuations follows the theoretical curve and is peculiarly large in the non-equilibrium mode compared with the other modes, which remain in the ordinary phase. 12. Towards the QCD phase diagram CERN Document Server Forcrand, Philippe de; Philipsen, Owe 2006-01-01 We summarize our recent results on the phase diagram of QCD with N_f=2+1 quark flavors, as a function of temperature T and quark chemical potential \mu. Using staggered fermions, lattices with temporal extent N_t=4, and the exact RHMC algorithm, we first determine the critical line in the quark mass plane (m_{u,d},m_s) where the finite temperature transition at \mu=0 is second order. We confirm that the physical point lies on the crossover side of this line. Our data are consistent with a tricritical point at (m_{u,d},m_s) = (0,\sim 500) MeV. Then, using an imaginary chemical potential, we determine in which direction this second-order line moves as the chemical potential is turned on. Contrary to standard expectations, we find that the region of first-order transitions shrinks in the presence of a chemical potential, which is inconsistent with the presence of a QCD critical point at small chemical potential. The emphasis is put on clarifying the translation of our results from lattice to physical units, and ... 13.
Phase diagrams of the Pr-C system International Nuclear Information System (INIS) Eremenko, V.N.; Velikanova, T.Ya.; Gordijchuk, O.V. 1988-01-01 Results of X-ray phase, metallographic and high-temperature differential thermal analyses are used for the first time to construct the phase diagram of the Pr-C system. Carbides are formed in the system: Pr2C3 with the bcc structure of the Pu2C3 type, with period a_0 = 0.85722 ± 0.00026 nm within the phase region Pr + Pr2C3 and a_0 = 0.86078 ± 0.00016 nm within the region Pr2C3 + α-PrC2; and dimorphous PrC2: α-PrC2 with the bct structure of the CaC2 type and periods a_0 = 0.38517 ± 0.00011, c_0 = 0.64337 ± 0.00019 nm; β-PrC2 with the fcc structure, probably of the KCN type. The dicarbide melts congruently at 2320 °C, forming a eutectic with graphite at 2254 ± 6 °C and a composition of 71.5% (at.) C. It transforms polymorphously in the phase region Pr2C3 + PrC2 at 1145 ± 4 °C, and in the region PrC2 + C at 1134 ± 4 °C. The sesquicarbide melts incongruently at 1545 ± 4 °C. The eutectic reaction L ↔ β-Pr + Pr2C3 occurs at 800 ± 4 °C, with a eutectic composition of ∼15% (at.) C. The temperature of the eutectoid reaction β-Pr ↔ α-Pr + Pr2C3 is 675 ± 6 °C. The limiting carbon solubility is about 8% (at.) in β-Pr and about 5% (at.) in α-Pr. 14. An investigation of the Pd-Ag-Ru-Gd quaternary system phase diagram International Nuclear Information System (INIS) Zhang Kanghou; Xu Yun 2005-01-01 On the basis of the Ag-Pd-Gd, Ag-Ru-Gd and Pd-Ru-Gd ternary systems, the partial phase diagram of Pd-Ag-Ru-Gd was determined, comprising the intermetallic phases Pd3Gd and Ag51Gd14; five two-phase regions: Pd(Ag) + (Ru), Pd(Ag) + Ag51Gd14, (Ru) + Ag51Gd14, Pd(Ag) + Pd3Gd and (Ru) + Pd3Gd; three three-phase regions: Pd(Ag) + Pd3Gd + (Ru), Pd(Ag) + Ag51Gd14 + (Ru) and (Ru) + Ag51Gd14 + Pd3Gd; and one four-phase region: Pd(Ag) + (Ru) + Ag51Gd14 + Pd3Gd. No new quaternary intermetallic phase has been found. 15.
Determination of phase diagrams via computer simulation: methodology and applications to water, electrolytes and proteins International Nuclear Information System (INIS) Vega, C; Sanz, E; Abascal, J L F; Noya, E G 2008-01-01 In this review we focus on the determination of phase diagrams by computer simulation, with particular attention to the fluid-solid and solid-solid equilibria. The methodology to compute the free energy of solid phases will be discussed. In particular, the Einstein crystal and Einstein molecule methodologies are described in a comprehensive way. It is shown that both methodologies yield the same free energies and that free energies of solid phases present noticeable finite-size effects. In fact, this is the case for hard spheres in the solid phase. Finite-size corrections can be introduced, although in an approximate way, to correct for the dependence of the free energy on the size of the system. The computation of free energies of solid phases can be extended to molecular fluids. The procedure to compute free energies of solid phases of water (ices) will be described in detail. The free energies of ices Ih, II, III, IV, V, VI, VII, VIII, IX, XI and XII will be presented for the SPC/E and TIP4P models of water. Initial coexistence points leading to the determination of the phase diagram of water for these two models will be provided. Other methods to estimate the melting point of a solid, such as direct fluid-solid coexistence or simulations of the free surface of the solid, will be discussed. It will be shown that the melting points of ice Ih for several water models, obtained from free energy calculations, direct coexistence simulations and free surface simulations, agree within their statistical uncertainty. Phase diagram calculations can indeed help to improve potential models of molecular fluids. For instance, for water, the potential model TIP4P/2005 can be regarded as an improved version of TIP4P.
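An initial coexistence point of the kind mentioned above is simply the temperature at which the free-energy curves of two phases cross. A minimal sketch with purely illustrative linear free energies G(T) = H − T·S (the numbers are not from any real water model):

```python
# Locate the coexistence (melting) temperature by bisection on the
# free-energy difference between two phases. All data are illustrative.

def melting_point(G_solid, G_liquid, T_lo, T_hi, tol=1e-8):
    """Bisection for the temperature where G_solid(T) = G_liquid(T).
    Assumes a single crossing inside [T_lo, T_hi]."""
    f = lambda T: G_solid(T) - G_liquid(T)
    assert f(T_lo) * f(T_hi) < 0, "bracket must contain a sign change"
    while T_hi - T_lo > tol:
        T_mid = 0.5 * (T_lo + T_hi)
        if f(T_lo) * f(T_mid) <= 0:
            T_hi = T_mid
        else:
            T_lo = T_mid
    return 0.5 * (T_lo + T_hi)

# Toy linear free energies G = H - T*S (J/mol and J/(mol K), made up):
H_s, S_s = -60_000.0, 40.0    # solid
H_l, S_l = -54_000.0, 62.0    # liquid: higher enthalpy, higher entropy
Tm = melting_point(lambda T: H_s - T * S_s,
                   lambda T: H_l - T * S_l,
                   100.0, 1000.0)
# Analytic crossing for comparison: (H_l - H_s)/(S_l - S_s) = 6000/22 K
```

In practice G(T) for each phase comes from the free-energy calculations described in the review; the crossing-finding step itself is this simple.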
Here we will review some recent work on the phase diagram of the simplest ionic model, the restricted primitive model. Although originally devised to describe ionic liquids, the model is becoming quite popular for describing the behavior of charged colloids. 16. A non-equilibrium phase transition in a dissipative forest model International Nuclear Information System (INIS) Messer, Joachim A. 2009-01-01 The shape of the biostress force for a stressed Lotka-Volterra network is for the first time derived from Lindblad's dissipative dynamics. Numerical solutions for stressed prey-predator systems with limited resources show a threshold. A non-equilibrium phase transition occurs to a phase in which the ecosystem dies after a few forced oscillations (Waldsterben phase). 17. Experimental investigation and thermodynamic calculations of the Ag–Bi–Ga phase diagram International Nuclear Information System (INIS) Minić, Duško; Premović, Milena; Manasijević, Dragan; Ćosović, Vladan; Živković, Dragana; Marković, Aleksandar 2015-01-01 The phase diagram of the Ag–Bi–Ga ternary system was investigated using differential thermal analysis (DTA), scanning electron microscopy (SEM) with energy dispersive spectrometry (EDS), and X-ray powder diffraction (XRD) methods. Experimentally obtained results were compared with the results of thermodynamic prediction of phase equilibria based on the calculation of phase diagrams (CALPHAD) method. Phase transition temperatures of alloys with overall compositions along three selected vertical sections, Ag–Bi50Ga50, Bi–Ag50Ga50 and Ga–Ag50Bi50, were measured by DTA. Liquidus temperatures were experimentally determined and compared with the results of thermodynamic calculation. Identification of coexisting phases in samples equilibrated at 200 °C was carried out using the SEM-EDS and XRD methods. The obtained results were compared with the calculated isothermal section of the Ag–Bi–Ga ternary system at the corresponding temperature.
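As a minimal illustration of how liquidus temperatures follow from thermodynamic data (far simpler than the CALPHAD assessment described above), the ideal-solution Schröder-van Laar equation relates the liquidus temperature to the liquid composition; the melting data below are approximate values for silver, used purely for illustration:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def ideal_liquidus_T(x, T_m, dH_m):
    """Schroeder-van Laar ideal liquidus temperature for mole fraction x
    of the melting component: ln x = -(dH_m/R) * (1/T - 1/T_m)."""
    return 1.0 / (1.0 / T_m - R * math.log(x) / dH_m)

# Approximate melting point (K) and enthalpy of fusion (J/mol) of silver:
T_m_Ag, dH_m_Ag = 1234.9, 11_300.0
T_liq = ideal_liquidus_T(0.9, T_m_Ag, dH_m_Ag)   # below T_m_Ag, as expected
```

Real assessments replace the ideal-solution assumption with optimized excess Gibbs-energy terms, but the liquidus still emerges from the same equality of chemical potentials.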
The calculated liquidus projection and invariant equilibria of the Ag–Bi–Ga ternary system are presented. The obtained values were found to be in close agreement. - Highlights: • Calculated constitutive binary systems based on literature data. • Experimentally determined (DTA) temperatures of phase transformations compared with analytical calculations. • Definition of three vertical sections: Ag–Bi50Ga50, Bi–Ag50Ga50 and Ga–Ag50Bi50. • Calculated horizontal section at 200 °C, confirmed by experimental SEM-EDS and XRD methods. • Calculated liquidus surface projection and determined invariant reactions occurring in the ternary Ag–Bi–Ga system. 18. Evaluating the phase diagram of superconductors with asymmetric spin populations International Nuclear Information System (INIS) Mannarelli, Massimo; Nardulli, Giuseppe; Ruggieri, Marco 2006-01-01 The phase diagram of a nonrelativistic fermionic system with imbalanced state populations interacting via a short-range S-wave attractive interaction is analyzed in the mean-field approximation. We determine the energetically favored state for different values of the mismatch between the two Fermi spheres in the weak- and strong-coupling regimes, considering both homogeneous and nonhomogeneous superconductive states. We find that the homogeneous superconductive phase persists for values of the population imbalance that increase with increasing coupling strength. In the strong-coupling regime and for large population differences, the energetically stable homogeneous phase is characterized by one gapless mode. We also find that the inhomogeneous superconductive phase, characterized by the condensate Δ(x)∼Δ exp(iq·x), is energetically favored in a range of values of the chemical-potential mismatch that shrinks to zero in the strong-coupling regime. 19. Phase diagram of a symmetric electron–hole bilayer system: a variational Monte Carlo study Science.gov (United States) Sharma, Rajesh O.; Saini, L.
K.; Prasad Bahuguna, Bhagwati 2018-05-01 We study the phase diagram of a symmetric electron–hole bilayer system at absolute zero temperature and in zero magnetic field within the quantum Monte Carlo approach. In particular, we conduct variational Monte Carlo simulations for various phases, i.e. the paramagnetic fluid phase, the ferromagnetic fluid phase, the anti-ferromagnetic Wigner crystal phase, the ferromagnetic Wigner crystal phase and the excitonic phase, to estimate the ground-state energy at different values of in-layer density and inter-layer spacing. Slater–Jastrow style trial wave functions, with single-particle orbitals appropriate for different phases, are used to construct the phase diagram in the (r s , d) plane by finding the relative stability of trial wave functions. At very small layer separations, we find that the fluid phases are stable, with the paramagnetic fluid phase being particularly stable at [Formula: see text] and the ferromagnetic fluid phase being particularly stable at [Formula: see text]. As the layer spacing increases, we first find that there is a phase transition from the ferromagnetic fluid phase to the ferromagnetic Wigner crystal phase when d reaches 0.4 a.u. at r s   =  20, and then a return to the ferromagnetic fluid phase when d approaches 1 a.u. However, for r s   Wigner crystal is stable over the considered range of r s and d. We also find that as r s increases, the critical layer separations for Wigner crystallization increase. 1. Phase and vacancy behaviour of hard "slanted" cubes Science.gov (United States) van Damme, R.; van der Meer, B.; van den Broeke, J. J.; Smallenburg, F.; Filion, L. 2017-09-01 We use computer simulations to study the phase behaviour of hard, right rhombic prisms as a function of the angle of their rhombic face (the "slant" angle). More specifically, using a combination of event-driven molecular dynamics simulations, Monte Carlo simulations, and free-energy calculations, we determine and characterize the equilibrium phases formed by these particles for various slant angles and densities.
Surprisingly, we find that the equilibrium crystal structure for a large range of slant angles and densities is the simple cubic crystal—despite the fact that the particles do not have cubic symmetry. Moreover, we find that the equilibrium vacancy concentration in this simple cubic phase is extremely high and depends only on the packing fraction and not the particle shape. At higher densities, a rhombic crystal appears as the equilibrium phase. We summarize the phase behaviour of this system by drawing a phase diagram in the slant angle-packing fraction plane. 2. Magnetization plateaus and phase diagrams of the Ising model on the Shastry–Sutherland lattice Energy Technology Data Exchange (ETDEWEB) 2015-11-01 The magnetization properties of a two-dimensional spin-1/2 Ising model on the Shastry–Sutherland lattice are studied within the effective-field theory (EFT) with correlations. The thermal behavior of the magnetizations is investigated in order to characterize the nature (first- or second-order) of the phase transitions as well as to obtain the phase diagrams of the model. The internal energy, specific heat, entropy and free energy of the system are also examined numerically as a function of the temperature in order to confirm the stability of the phase transitions. The applied-field dependence of the magnetizations is also examined to find the existence of the magnetization plateaus. For strong enough magnetic fields, several magnetization plateaus are observed, e.g., at 1/9, 1/8, 1/3 and 1/2 of the saturation. The phase diagrams of the model are constructed in two different planes, namely the (h/|J|, |J′|/|J|) and (h/|J|, T/|J|) planes. It was found that the model exhibits first- and second-order phase transitions; hence a tricritical point is also observed in addition to the zero-temperature critical point. Moreover, the Néel order (N), collinear order (C) and ferromagnetic (F) phases are also found with appropriate values of the system parameters.
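The plateau fractions quoted above (e.g. 1/3 and 1/2 of saturation) are the kind of feature that can be read off a magnetization curve numerically. The sketch below is illustrative only: the staircase m(h) is synthetic, built to contain plateaus at 1/3, 1/2 and saturation, and the detection routine is a generic finite-difference test, not the authors' effective-field method.

```python
import numpy as np

def plateau_fractions(h, m, slope_tol=1e-3, min_width=5):
    """Return the plateau heights m/m_sat found in a sampled curve m(h)."""
    dm = np.gradient(m, h)
    flat = np.abs(dm) < slope_tol          # points where the curve is flat
    fractions = []
    i = 0
    while i < len(h):
        if flat[i]:
            j = i
            while j < len(h) and flat[j]:
                j += 1
            if j - i >= min_width:         # ignore spuriously short runs
                fractions.append(np.mean(m[i:j]) / m.max())
            i = j
        else:
            i += 1
    return fractions

# Synthetic staircase with plateaus at m/m_sat = 1/3, 1/2 and saturation 1.
h = np.linspace(0.0, 3.0, 601)
m = np.piecewise(
    h,
    [h < 0.5, (h >= 0.5) & (h < 1.2), (h >= 1.2) & (h < 1.5),
     (h >= 1.5) & (h < 2.2), h >= 2.2],
    [lambda x: (1 / 3) * x / 0.5, 1 / 3,
     lambda x: 1 / 3 + (1 / 6) * (x - 1.2) / 0.3, 1 / 2,
     lambda x: np.minimum(1.0, 1 / 2 + (x - 2.2) / 0.4)])

print(plateau_fractions(h, m))
```

Running this on the synthetic curve recovers the three plateau heights; on real M(h) data the slope tolerance and minimum width would need tuning to the noise level.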
The reentrant behavior is also obtained whenever the model displays two Néel temperatures. These results are compared with some theoretical and experimental works and a good overall agreement has been obtained. - Highlights: • Magnetization properties of the spin-1/2 Ising model on the SS lattice are investigated. • Magnetization plateaus at 1/9, 1/8, 1/3 and 1/2 are observed. • The phase diagrams of the model are constructed in two different planes. • The model exhibits tricritical and zero-temperature critical points. • Reentrant behavior is obtained whenever the model displays two Néel temperatures. 3. LEU fuel development at CERCA. Status as of October 1996: U5Si4: A new phase in the U/Si diagram International Nuclear Information System (INIS) Durand, J.P.; Olagnon, G.; Colomb, P.; Lavastre, Y.; Grasse, M.; Noel, H.; Queneau, V. 1996-01-01 A fundamental study has been carried out by CERCA and the French CNRS (National Scientific Research Center) as a partner in order to get a better understanding of the U 3 Si 2 casting process. On the occasion of this study, a new binary phase, U 5 Si 4 , has been discovered in the U–Si phase equilibrium diagram. Synthesis conditions of U 5 Si 4 have been determined and the impact of such a discovery is evaluated regarding the production process of U 3 Si 2 ingots. It can be concluded that, keeping CERCA's casting tools and staying within the allowed limits of the production parameters, the U 3 Si 2 ingots are homogeneous without any detectable trace of U 5 Si 4 even if a long-term heat treatment at the hot rolling temperature is carried out. From the production point of view, perfect knowledge of both the metallurgical phase synthesis and the casting process guarantees the quality of U 3 Si 2 ingots and powder produced by CERCA. (author) 4. Asymmetric simple exclusion process with position-dependent hopping rates: Phase diagram from boundary-layer analysis.
Science.gov (United States) Mukherji, Sutapa 2018-03-01 In this paper, we study a one-dimensional totally asymmetric simple exclusion process with position-dependent hopping rates. Under open boundary conditions, this system exhibits boundary-induced phase transitions in the steady state. Similarly to totally asymmetric simple exclusion processes with uniform hopping, the phase diagram consists of low-density, high-density, and maximal-current phases. In various phases, the shape of the average particle density profile across the lattice including its boundary-layer parts changes significantly. Using the tools of boundary-layer analysis, we obtain explicit solutions for the density profile in different phases. A detailed analysis of these solutions under different boundary conditions helps us obtain the equations for various phase boundaries. Next, we show how the shape of the entire density profile including the location of the boundary layers can be predicted from the fixed points of the differential equation describing the boundary layers. We discuss this in detail through several examples of density profiles in various phases. The maximal-current phase appears to be an especially interesting phase where the boundary layer flows to a bifurcation point on the fixed-point diagram. 5. c-T phase diagram and Landau free energies of (AgAu)55 nanoalloy via neural-network molecular dynamic simulations. Science.gov (United States) Chiriki, Siva; Jindal, Shweta; Bulusu, Satya S 2017-10-21 For understanding the structure, dynamics, and thermal stability of (AgAu) 55 nanoalloys, knowledge of the composition-temperature (c-T) phase diagram is essential due to the explicit dependence of properties on composition and temperature. Experimentally, generating the phase diagrams is very challenging, and therefore theoretical insight is necessary. We use an artificial neural network potential for (AgAu) 55 nanoalloys. 
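The boundary-induced phases of the asymmetric exclusion process described in entry 4 above can be illustrated with a minimal stochastic simulation. This is a hedged sketch, not the paper's boundary-layer analysis: lattice size, rates and sweep counts are illustrative choices, and the bond-rate array p is left uniform here (the paper's setting corresponds to making it position dependent). With uniform hopping, alpha < 1/2 and alpha < beta, the steady state is the low-density phase with bulk density close to alpha.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 60                          # number of sites
alpha, beta = 0.2, 0.8          # entry and exit rates
p = np.ones(L - 1)              # hopping rate on each bond (uniform here)

tau = np.zeros(L, dtype=int)    # occupation numbers
profile = np.zeros(L)
warmup, measure = 2000, 4000

for sweep in range(warmup + measure):
    for _ in range(L + 1):      # random-sequential update: L+1 moves/sweep
        k = rng.integers(L + 1)
        if k == 0:              # injection at the left boundary
            if tau[0] == 0 and rng.random() < alpha:
                tau[0] = 1
        elif k == L:            # ejection at the right boundary
            if tau[-1] == 1 and rng.random() < beta:
                tau[-1] = 0
        else:                   # bulk hop k-1 -> k with rate p[k-1]
            if tau[k - 1] == 1 and tau[k] == 0 and rng.random() < p[k - 1]:
                tau[k - 1], tau[k] = 0, 1
    if sweep >= warmup:
        profile += tau

profile /= measure
bulk = profile[L // 3: 2 * L // 3].mean()
print(f"bulk density ~ {bulk:.3f} (low-density phase predicts {alpha})")
```

Swapping in a non-uniform p (for instance a smooth ramp) and re-measuring the profile is the numerical counterpart of the position-dependent case analyzed in the paper.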
Predicted global minimum structures for pure gold and gold-rich compositions are lower in energy compared to previous reports by density functional theory. The present work, based on the c-T phase diagram, surface area, surface charge, probability of isomers, and Landau free energies, supports the enhancement of the catalytic property of Ag-Au nanoalloys by incorporation of Ag up to 24% by composition in Au nanoparticles, as found experimentally. The phase diagram shows that there is a coexistence temperature range of 70 K for Ag 28 Au 27 compared to all other compositions. We propose the power spectrum coefficients derived from spherical harmonics as an order parameter to calculate Landau free energies. 6. Sintering of YBaCuO: implications of the phase diagram International Nuclear Information System (INIS) Gervais, M.; Douy, A.; Dubois, B.; Coutures, J.P.; Odier, P. 1989-01-01 The motivation of this experimental work is to underline the connections between the constitution of the phase diagram and the sintering of YBaCuO superconductors. This preliminary work is focused on the solid → liquid transformations of this system, in the vicinity of the (123) phase. Two transformations are observed, at 915 and 935 °C, depending on the composition of the compound. They both have an important role in the sintering process and the chemical homogeneity of the ceramic. No such transformation seems to occur in the domain (123)-(211)-BaCuO 2 ; a sample sintered there therefore has a better chemical homogeneity. 7. Revision of the Ge–Ti phase diagram and structural stability of the new phase Ge{sub 4}Ti{sub 5} Energy Technology Data Exchange (ETDEWEB) Bittner, Roland W.
[University of Vienna, Department of Inorganic Chemistry/Materials Chemistry, Währingerstraße 42, 1090 Wien (Austria); Colinet, Catherine [Science et Ingénierie des Matériaux et Procédés, Grenoble INP, UJF, CNRS, 38402 Saint Martin d’Hères Cedex (France); Tedenac, Jean-Claude [Institut de Chimie Moléculaire et des Matériaux I.C.G., UMR-CNRS 5253, Université Montpellier II, Place E. Bataillon, 34095 Montpellier Cedex 5 (France); Richter, Klaus W., E-mail: klaus.richter@univie.ac.at [University of Vienna, Department of Inorganic Chemistry/Materials Chemistry, Währingerstraße 42, 1090 Wien (Austria) 2013-11-15 Highlights: •New compound Ge{sub 4}Ti{sub 5} found by experiments and by DFT ground state calculations. •Enthalpies of formation calculated for different Ge–Ti compounds. •Modifications of the Ge–Ti phase diagram suggested. -- Abstract: The binary phase diagram Ge–Ti was investigated experimentally by powder X-ray diffraction, scanning electron microscopy including EDX analysis, and differential thermal analysis. Total energies of the compounds GeTi{sub 3}, GeTi{sub 2}, Ge{sub 3}Ti{sub 5}, Ge{sub 4}Ti{sub 5}, Ge{sub 5}Ti{sub 6}, GeTi and Ge{sub 2}Ti were calculated for various structure types employing electronic density-functional theory (DFT). Experimental studies as well as electronic calculations show the existence of a new phase Ge{sub 4}Ti{sub 5} (Ge{sub 4}Sm{sub 5}-type, oP36, Pnma) which is formed in a solid state reaction Ge{sub 3}Ti{sub 5} + Ge{sub 5}Ti{sub 6} = Ge{sub 4}Ti{sub 5}. In addition, a significant homogeneity range was observed for the compound Ge{sub 3}Ti{sub 5} and the composition of the liquid phase in the eutectic reaction L = Ge + Ge{sub 2}Ti was found to be at significantly higher Ge-content (97.5 at.% Ge) than reported in previous studies. Based on these new results, a modified phase diagram Ge–Ti is suggested.
The zero-temperature lattice parameters and the formation enthalpies determined by DFT calculations were found to be in good agreement with experimental data. 8. Study of phase equilibrium of the Pu2O3-PuO2 system by first-principles calculation and the CALPHAD approach International Nuclear Information System (INIS) Minamoto, Satoshi; Kato, Masato; Konashi, Kenji 2010-01-01 A combination of first-principles calculations, lattice dynamics and CALPHAD (CALculation of PHAse Diagrams) modeling is proven to be a powerful tool for evaluating the Gibbs free energy and the phase equilibrium between compounds containing large amounts of vacancies. In this work, non-stoichiometric PuO 2-x (dioxide) and Pu 2 O 3 (sesquioxide) have been studied. The electronic cohesive energy was evaluated from first-principles calculations to estimate the total energy of the compounds and the vacancy formation energy, and statistical mechanics was applied to evaluate the enthalpy/entropy changes due to oxygen vacancies responsible for the non-stoichiometry of PuO 2 (i.e. PuO 2-x ). A vacancy-vacancy interaction energy was then determined by fitting to experimental data on the degree of non-stoichiometry of the PuO 2 compounds as a function of oxygen potential at large deviations from stoichiometry. The resulting Gibbs free energy yields the phase boundary between the phases in good agreement with the experimental data. 9. Orientation dependence of phase diagrams and physical properties in epitaxial Ba0.6Sr0.4TiO3 films Science.gov (United States) Qiu, J. H.; Zhao, T. X.; Chen, Z. H.; Yuan, N. Y.; Ding, J. N. 2018-04-01 The orientation dependence of the phase diagrams and physical properties of Ba0.6Sr0.4TiO3 films is investigated by using a phenomenological Landau-Devonshire theory.
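The statistical-mechanics step in entry 8 above, where an ideal configurational entropy of oxygen vacancies yields the non-stoichiometry x as a function of oxygen potential, can be sketched in a few lines. All numbers below (vacancy formation energy, oxygen potentials, temperature) are illustrative placeholders, not the paper's assessed values, and the vacancy-vacancy interaction term that the paper fits is omitted.

```python
import math

# Vacancies mixing ideally on the anion sublattice of an MO2 dioxide give
#   S = -k_B * N * [x ln x + (1 - x) ln(1 - x)],
# and minimising G = N * x * (E_f + mu_O) - T * S over x gives a
# Fermi-like occupation for the vacancy fraction x.

K_B = 8.617e-5  # Boltzmann constant in eV/K

def vacancy_fraction(e_f, mu_o, temperature):
    """Equilibrium anion-site vacancy fraction x for ideal mixing."""
    # dG/dx = 0  =>  x / (1 - x) = exp(-(e_f + mu_o) / (k_B T))
    return 1.0 / (1.0 + math.exp((e_f + mu_o) / (K_B * temperature)))

# Non-stoichiometry x in MO2-x grows as the oxygen potential drops
# (two anion sites per formula unit); e_f = 2.0 eV is a made-up value.
for mu_o in (-0.5, -1.0, -1.5):                    # eV, illustrative
    x = 2 * vacancy_fraction(2.0, mu_o, 1500.0)
    print(f"mu_O = {mu_o:5.2f} eV  ->  x = {x:.5f}")
```

Adding an x-dependent interaction energy to dG/dx and solving the resulting implicit equation numerically would mimic the fitting step described in the abstract.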
New ferroelectric phases, such as the tetragonal a1 phase and the orthorhombic a2 c phase in (110) oriented film and the monoclinic MA phase in (111) oriented film, appear in the "misfit strain-temperature" phase diagrams as compared with (001) oriented film. Moreover, the phase diagrams of (110) and (111) oriented films are more complex than that of (001) oriented film due to the nonlinear coupling terms appearing in the thermodynamic potential. The dielectric and piezoelectric properties largely depend on the misfit strain and orientation. (111) oriented film has a better piezoelectric property than (110) oriented film. Furthermore, compressive misfit strain tends to induce a larger piezoelectric response than tensile misfit strain. 10. Phase diagram and equation of state of TiH2 at high pressures and high temperatures International Nuclear Information System (INIS) Endo, Naruki; Saitoh, Hiroyuki; Machida, Akihiko; Katayama, Yoshinori; Aoki, Katsutoshi 2013-01-01 Highlights: ► We determined the phase diagram of TiH 2 at high pressures and high temperatures. ► Compression-induced strain inhibited the phase transition from the bct to the fcc phase. ► The phase boundary was appropriately determined using a sample with heat treatment. ► The high-temperature Birch–Murnaghan equation of state of fcc TiH 2 was determined for the first time. - Abstract: We determined the phase diagram and the equation of state (EoS) of TiH 2 at high pressures up to 8.7 GPa and high temperatures up to 600 °C by in situ synchrotron radiation X-ray diffraction measurements. Compression-induced strain inhibited the phase transition from the low-temperature bct phase to the high-temperature fcc phase, making the phase diagram difficult to determine. However, heating around 600 °C relieved the strain, and the phase boundary between the bct and fcc phases was elucidated. The phase transition temperature increased from around room temperature at ambient pressure to 200 °C at 8.7 GPa.
The high-temperature Birch–Murnaghan EoS was determined for the fcc phase. With the pressure derivative of the bulk modulus K′ 0 = 4.0, the following parameters were obtained: ambient bulk modulus K 0 = 97.7 ± 0.2 GPa, ambient unit-cell volume of the fcc phase V 0 = 88.57 ± 0.02 Å 3 , temperature derivative of the bulk modulus at constant pressure (∂K/∂T) P = −0.01 ± 0.02, and volumetric thermal expansivity α = a + bT with a = 2.62 ± 1.4 × 10 −5 and b = 5.5 ± 4.5 × 10 −8 . K 0 of fcc TiH 2 was close to those for pure Ti and bct TiH 2 reported in previous studies. 11. Optimization and calculation of the MCl-ZnCl{sub 2} (M = Li, Na, K) phase diagrams Energy Technology Data Exchange (ETDEWEB) Romero-Serrano, Antonio, E-mail: romeroipn@hotmail.com [Metallurgy and Materials Department, Instituto Politecnico Nacional-ESIQIE, Apdo. P. 118-431, 07051 Mexico, D.F. (Mexico); Hernandez-Ramirez, Aurelio, E-mail: aurelioh@hotmail.com [Metallurgy and Materials Department, Instituto Politecnico Nacional-ESIQIE, Apdo. P. 118-431, 07051 Mexico, D.F. (Mexico); Cruz-Ramirez, Alejandro, E-mail: alcruzr@ipn.mx [Metallurgy and Materials Department, Instituto Politecnico Nacional-ESIQIE, Apdo. P. 118-431, 07051 Mexico, D.F. (Mexico); Hallen-Lopez, Manuel, E-mail: j_hallen@yahoo.com [Metallurgy and Materials Department, Instituto Politecnico Nacional-ESIQIE, Apdo. P. 118-431, 07051 Mexico, D.F. (Mexico); Zeifert, Beatriz, E-mail: bzeifert@yahoo.com [Metallurgy and Materials Department, Instituto Politecnico Nacional-ESIQIE, Apdo. P. 118-431, 07051 Mexico, D.F. (Mexico) 2010-10-20 An earlier structural model for binary silicate melts and glasses is extended to zinc chloride-alkali metal chloride systems. The evaluation of the available thermodynamic and phase diagram data for the MCl-ZnCl{sub 2} (M = Li, Na, K) binary systems has been carried out using the structural model for the liquid phase.
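The third-order Birch–Murnaghan EoS quoted for fcc TiH2 in entry 10 above can be evaluated directly from the published parameters (the ± uncertainties are dropped here). Note that with K′0 = 4.0 the third-order correction term vanishes identically, so the form reduces to second order.

```python
# 3rd-order Birch-Murnaghan equation of state:
#   P(V) = (3/2) K0 [eta^7 - eta^5] {1 + (3/4)(K0' - 4)(eta^2 - 1)},
# with eta = (V0/V)^(1/3).  Parameters from the abstract above:
# K0 = 97.7 GPa, V0 = 88.57 A^3, K0' = 4.0.

def birch_murnaghan_pressure(v, v0=88.57, k0=97.7, k0p=4.0):
    """P(V) in GPa; v and v0 in cubic angstroms."""
    eta = (v0 / v) ** (1.0 / 3.0)
    return (1.5 * k0 * (eta ** 7 - eta ** 5)
            * (1.0 + 0.75 * (k0p - 4.0) * (eta ** 2 - 1.0)))

print(birch_murnaghan_pressure(88.57))   # zero pressure at the ambient volume
print(birch_murnaghan_pressure(82.0))    # compression gives positive pressure
```

Inverting P(V) numerically (e.g. by bisection) recovers V(P) along the isotherm; the temperature dependence enters through V0(T) via the quoted thermal expansivity.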
This thermodynamic model is based on the assumption that each alkali chloride produces the depolymerization of the ZnCl{sub 2} network with a characteristic free-energy change. A least-squares optimization program permits all available thermodynamic and phase diagram data to be optimized simultaneously. In this manner, data for these binary systems have been analysed and represented with a small number of parameters. 12. Tuning the phase diagrams: the miscibility studies of multilactate liquid crystalline compounds Czech Academy of Sciences Publication Activity Database Bubnov, Alexej; Tykarska, M.; Hamplová, Věra; Kurp, K. 2016-01-01 Roč. 89, č. 9 (2016), s. 885-893 ISSN 0141-1594 R&D Projects: GA ČR GA13-14133S; GA MŠk(CZ) LD14007; GA ČR GA15-02843S Grant - others:EU - ICT(XE) COST Action IC1208 Institutional support: RVO:68378271 Keywords : miscibility study * binary mixture * polar smectic phase * lactic acid derivative * phase diagram * self-assembling behaviour Subject RIV: JJ - Other Materials Impact factor: 1.060, year: 2016 13. Electron band theory predictions and the construction of phase diagrams International Nuclear Information System (INIS) Watson, R.E.; Bennett, L.H.; Davenport, J.W.; Weinert, M. 1985-01-01 The a priori theory of metals is yielding energy results which are relevant to the construction of phase diagrams - to the solution phases as well as to line compounds. There is a wide range in the rigor of the calculations currently being done, and this is discussed. Calculations for the structural stabilities (fcc vs bcc vs hcp) of the elemental metals, quantities which are employed in the constructs of the terminal phases, are reviewed and shown to be inconsistent with the values currently employed in such constructs (also see Miodownik elsewhere in this volume).
Finally, as an example, the calculated heats of formation are compared with experiment for PtHf, IrTa and OsW, three compounds with the same electron to atom ratio but different bonding properties 14. Phase diagram and magnetic relaxation phenomena in Cu2OSeO3 Science.gov (United States) Qian, F.; Wilhelm, H.; Aqeel, A.; Palstra, T. T. M.; Lefering, A. J. E.; Brück, E. H.; Pappas, C. 2016-08-01 We present an investigation of the magnetic-field-temperature phase diagram of Cu2OSeO3 based on dc magnetization and ac susceptibility measurements covering a broad frequency range of four orders of magnitude, from very low frequencies reaching 0.1 Hz up to 1 kHz. The experiments were performed in the vicinity of Tc=58.2 K and around the skyrmion lattice A phase. At the borders between the different phases the characteristic relaxation times reach several milliseconds and the relaxation is nonexponential. Consequently the borders between the different phases depend on the specific criteria and frequency used and an unambiguous determination is not possible. 15. Phase equilibrium in a polarized saturated 3He-4He mixture International Nuclear Information System (INIS) Rodrigues, A.; Vermeulen, G. 1997-01-01 We present experimental results on the phase equilibrium of a saturated 3 He- 4 He mixture, which has been cooled to a temperature of 10-15 mK and polarized in a 4 He circulating dilution refrigerator to a stationary polarization of 15 %, 7 times higher than the equilibrium polarization in the external field of 7 T. The pressure dependence of the polarization enhancement in the refrigerator shows that the molar susceptibilities of the concentrated and dilute phase of a saturated 3 He- 4 He mixture are equal at p = 2.60 ± 0.04 bar. This result affects the Fermi liquid parameters of the dilute phase. The osmotic pressure in the dilute phase has been measured as a function of the polarization of the coexisting concentrated phase up to 15 %. 
We find that the osmotic pressure at low polarization ( < 7 % ) agrees well with thermodynamics using the new Fermi liquid parameters of the dilute phase 16. Phase diagram of a lattice of pancake vortex molecules International Nuclear Information System (INIS) Tanaka, Y.; Crisan, A.; Shivagan, D.D.; Iyo, A.; Shirage, P.M.; Tokiwa, K.; Watanabe, T.; Terada, N. 2009-01-01 On a superconducting bi-layer with thickness much smaller than the penetration depth, λ, a vortex molecule might form. A vortex molecule is composed of two fractional vortices and a soliton wall. The soliton wall can be regarded as a Josephson vortex missing magnetic flux (a degenerate Josephson vortex) due to incomplete shielding. The magnetic energy carried by fractional vortices is less than in the conventional vortex. This energy gain can pay the cost of forming a degenerate Josephson vortex. The phase diagram of the vortex molecule is rich because of its rotational freedom. 17. Hydration Phase Diagram of Clay Particles from Molecular Simulations. Science.gov (United States) Honorio, Tulio; Brochard, Laurent; Vandamme, Matthieu 2017-11-07 Adsorption plays a fundamental role in the behavior of clays. Because of the confinement between solid clay layers on the nanoscale, adsorbed water is structured in layers, which can occupy a specific volume. The transition between these states is intimately related to key features of clay thermo-hydro-mechanical behavior. In this article, we consider the hydration states of clays as phases and the transitions between these states as phase changes. The thermodynamic formulation supporting this idea is presented. Then, the results from grand canonical Monte Carlo simulations of sodium montmorillonite are used to derive hydration phase diagrams. The stability analysis presented here explains the coexistence of different hydration states at the clay particle scale and improves our understanding of the irreversibilities of clay thermo-hydro-mechanical behavior.
Our results provide insights into the mechanics of the elementary constituents of clays, which is crucial for a better understanding of the macroscopic behavior of clay-rich rocks and soils. 18. Thermodynamic study of sodium-iron oxides. Part 2. Ternary phase diagram of the Na-Fe-O system International Nuclear Information System (INIS) Huang, Jintao; Furukawa, Tomohiro; Aoto, Kazumi 2003-01-01 Studies on ternary phase diagrams of the Na-Fe-O system have been carried out from the thermodynamic point of view. Thermodynamic data of the main ternary Na-Fe oxides Na 4 FeO 3 (s), Na 3 FeO 3 (s), Na 5 FeO 4 (s) and Na 8 Fe 2 O 7 (s) have been assessed. A user database has been created by reviewing literature data together with recent DSC and vapor pressure measurements by the present authors. New ternary phase diagrams of the Na-Fe-O system have been constructed from room temperature to 1000 K. Stable conditions of the ternary oxides at 800 K were presented in a predominance diagram as functions of oxygen pressure and sodium pressure 19. The NaNO2-NaNO3 system – a revised phase diagram DEFF Research Database (Denmark) Berg, Rolf W.; Kerridge, D.H.; Larsen, Peter Halvor 2004-01-01 Three earlier determinations of the phase diagram of the sodium nitrite/sodium nitrate binary system resulted in considerably different conclusions, ranging from simple eutectic to continuous solid solution types, together with different sub-solidus lines. Recent melting enthalpy measurements hav... 1. Phase diagrams of a nonequilibrium mixed spin-3/2 and spin-2 Ising system in an oscillating magnetic field International Nuclear Information System (INIS) Keskin, Mustafa; Polat, Yasin 2009-01-01 The phase diagrams of the nonequilibrium mixed spin-3/2 and spin-2 Ising ferrimagnetic system on a square lattice under a time-dependent external magnetic field are presented by using the Glauber-type stochastic dynamics. The model system consists of two interpenetrating sublattices of spins σ=3/2 and S=2, and we take only nearest-neighbor interactions between pairs of spins. The system is in contact with a heat bath at absolute temperature T abs and the exchange of energy with the heat bath occurs via one-spin flips of the Glauber dynamics. First, we investigate the time variations of the average order parameters to find the phases in the system, and then the thermal behavior of the dynamic order parameters to obtain the dynamic phase transition (DPT) points as well as to characterize the nature (first- or second-order) of the phase transitions. The dynamic phase diagrams are presented in two different planes. The phase diagrams contain paramagnetic (p) and ferrimagnetic (i 1 , i 2 , i 3 ) phases, and three coexistence or mixed phase regions, namely the i 1 +p, i 2 +p and i 3 +p mixed phases, that strongly depend on the interaction parameters. 3. Magnetic phase diagram of UNi2Si2 under magnetic field and high pressure International Nuclear Information System (INIS) Honda, F.; Oomi, G.; Svoboda, P.; Syshchenko, A.; Sechovsky, V.; Khmelevski, S.; Divis, M.; Andreev, A.V.; Takeshita, N.; Mori, N.; Menovsky, A.A. 2001-01-01 Measurements of electrical resistance under high pressure and neutron diffraction in high magnetic field of single-crystalline UNi 2 Si 2 have been performed.
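The Glauber-type dynamics underlying the dynamic phase transition in entry 1 above can be sketched for the simplest case. The code below simulates a plain spin-1/2 Ising square lattice rather than the paper's mixed spin-3/2 and spin-2 ferrimagnet, and all parameters (lattice size, temperature, field amplitude and period) are illustrative. In the dynamically ordered phase the field is too weak and fast for the magnetization to follow, so the period-averaged magnetization Q stays close to its zero-field value; in the dynamically disordered phase Q would average to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                         # lattice is N x N with periodic boundaries
T, h0, period = 1.2, 0.1, 50   # temperature, field amplitude, sweeps/period

spins = np.ones((N, N))        # start fully magnetized

def glauber_sweep(spins, h, T):
    """One Monte Carlo sweep of single-spin-flip Glauber dynamics."""
    for _ in range(N * N):
        i, j = rng.integers(N), rng.integers(N)
        nn = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j]
              + spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
        d_e = 2.0 * spins[i, j] * (nn + h)   # energy cost of flipping (J = 1)
        # Glauber acceptance probability 1 / (1 + exp(dE / T))
        if rng.random() < 1.0 / (1.0 + np.exp(d_e / T)):
            spins[i, j] *= -1

m_sum, steps = 0.0, 0
for sweep in range(6 * period):              # a few field periods
    h = h0 * np.sin(2.0 * np.pi * sweep / period)
    glauber_sweep(spins, h, T)
    if sweep >= 2 * period:                  # skip the initial transient
        m_sum += spins.mean()
        steps += 1

Q = m_sum / steps                            # dynamic order parameter
print(f"dynamic order parameter Q ~ {Q:.3f}")
```

Sweeping the temperature or field amplitude and recording where |Q| drops to zero is the bare-bones version of mapping out the dynamic phase diagram described in the abstract.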
We have found an analogy between the p-T and B-T magnetic phase diagrams. It is also found that the propagation vector q Z of the incommensurate antiferromagnetic phase decreases with increasing magnetic field. A new, pronounced pressure-induced incommensurate-commensurate magnetic phase transition has been detected 4. A simple non-equilibrium, statistical-physics toy model of thin-film growth International Nuclear Information System (INIS) Ochab, Jeremi K; Nagel, Hannes; Janke, Wolfhard; Waclaw, Bartlomiej 2015-01-01 We present a simple non-equilibrium model of mass condensation with Lennard–Jones interactions between particles and the substrate. We show that when a number of particles is deposited onto the surface and the system is left to equilibrate, the particles condense into an island if their density exceeds a critical density. We illustrate this with numerically obtained phase diagrams for three-dimensional systems. We also solve a two-dimensional counterpart of this model analytically and show that not only the phase diagram but also the shape of the cross-sections of three-dimensional condensates qualitatively matches the two-dimensional predictions. Lastly, we show that when particles are being deposited at a constant rate, the system has two phases: a single condensate for low deposition rates, and multiple condensates for fast deposition. The behaviour of our model is thus similar to that of thin-film growth processes, and in particular to Stranski–Krastanov growth. (paper) 5. The nuclear liquid gas phase transition and phase coexistence International Nuclear Information System (INIS) Chomaz, Ph. 2001-01-01 In this talk we will review the different signals of the liquid-gas phase transition in nuclei. From the theoretical side we will first discuss the foundations of the concepts of equilibrium, phase transition and critical behavior in infinite and finite systems.
From the experimental point of view we will first recall the evidence for strong modifications of the behavior of hot nuclei. Then we will review detailed quantitative analyses aiming to evidence the phase transition, to define its order and to map its phase diagram. Finally, we will present a critical discussion of the present status of phase transitions in nuclei and we will draw some lines for future development of this field. (author) 6. The nuclear liquid gas phase transition and phase coexistence Energy Technology Data Exchange (ETDEWEB) Chomaz, Ph 2001-07-01 In this talk we will review the different signals of the liquid-gas phase transition in nuclei. From the theoretical side we will first discuss the foundations of the concepts of equilibrium, phase transition and critical behavior in infinite and finite systems. From the experimental point of view we will first recall the evidence for strong modifications of the behavior of hot nuclei. Then we will review detailed quantitative analyses aiming to evidence the phase transition, to define its order and to map its phase diagram. Finally, we will present a critical discussion of the present status of phase transitions in nuclei and we will draw some lines for future development of this field. (author) 7. Magnetic phase diagram of UNi2Si2 under pressure Czech Academy of Sciences Publication Activity Database Syshchenko, O.; Khmelevski, S.; Diviš, M.; Sechovský, V.; Honda, F.; Oomi, G.; Andreev, Alexander V.; Kamarád, Jiří; Šebek, Josef; Menovsky, A. A. 2001-01-01 Roč. 304, - (2001), s. 477-482 ISSN 0921-4526 R&D Projects: GA ČR GA106/99/0183 Institutional research plan: CEZ:AV0Z1010914 Keywords: U intermetallics * antiferromagnetism * magnetic phase diagram * electrical resistivity * pressure effects on magnetic phases * axial Ising model Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 0.663, year: 2001 8.
Contribution to the study of the thermodynamic properties of solutions using their phase diagrams (1961); Contribution au calcul des proprietes thermodynamiques des solutions a partir des diagrammes de phases (1961) Energy Technology Data Exchange (ETDEWEB) Hagege, R [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires] 1961-10-15 The thermodynamic study of the behaviour of solutions is of great interest in applied metallurgical problems, since the use of physical phenomena (solubility or volatility, for example) makes it possible to effect chemical reactions which would not take place if the products formed did not mix. It is interesting to be able to predict this behaviour, at least for binary systems, from a knowledge of the phase diagrams. After showing the theoretical impossibility of resolving this problem without further data, an attempt is made to show what can be calculated from a knowledge of the phase diagrams: on the one hand, the coherence between different types of data (calorimetric or equilibrium) can be studied; on the other hand, 'model' parameters, whether empirical or statistically derived, can be calculated and their validity checked. (author) 9. Mechanism of ion exchange in zirconium phosphates. 17. Dehydration behavior of lithium ion exchanged phases Energy Technology Data Exchange (ETDEWEB) Clearfield, A; Pack, S P; Troup, J M [Ohio Univ., Athens (USA). Dept. of Chemistry] 1977-01-01 The phases formed by the dehydration of lithium-exchanged α-zirconium phosphate, Zr(HPO4)2·H2O, were determined by a combination of X-ray, TGA and DTA studies. Samples containing 10, 20, 30, ..., 100% of the theoretical lithium ion capacity were examined. The data are summarized in a phase diagram, which however is not an equilibrium diagram because of the slowness of the approach to equilibrium. The numerous phases obtained and the ease with which they rearrange indicate a high mobility for the incorporated cations. This suggested that α-zirconium phosphate may behave as a solid electrolyte, and indeed this was demonstrated by having it serve in that capacity in a small sodium-sulfur battery. 10. Determination of the Fe-Cr-Ni and Fe-Cr-Mo Phase Diagrams at Intermediate Temperatures using a Novel Dual-Anneal Diffusion-Multiple Approach Science.gov (United States) Cao, Siwei Phase diagrams at intermediate temperatures are critical both for alloy design and for improving the reliability of thermodynamic databases. There is a significant shortage of experimental phase diagram data at intermediate temperatures, defined as around half of the homologous melting point (in kelvin).
The goal of this study is to test a novel dual-anneal diffusion multiple (DADM) methodology for the efficient determination of intermediate-temperature phase diagrams, using both the Fe-Cr-Ni and Fe-Cr-Mo systems as test beds since both are very useful for steel development. Four Fe-Cr-Ni-Mo-Co diffusion multiples were made and annealed at 1200 °C for 500 hrs. One sample was used directly for evaluating the isothermal sections at 1200 °C. The other samples (and cut slices) were used to perform a subsequent dual annealing at 900 °C (500 hrs), 800 °C (1000 hrs), 700 °C (1000 hrs), and 600 °C (4500 hrs), respectively. The second annealing induced phase precipitation from the supersaturated solid solutions that were created during the first 1200 °C annealing. Scanning electron microscopy (SEM), electron probe microanalysis (EPMA), electron backscatter diffraction (EBSD), and transmission electron microscopy (TEM) were used to identify the phases and precipitation locations in order to obtain the compositions needed to construct the isothermal sections of both ternary systems at four different temperatures. The major results obtained from this study are isothermal sections of the Fe-Cr-Ni and Fe-Cr-Mo systems at 1200 °C, 900 °C, 800 °C, and 700 °C. For the Fe-Cr-Ni system, the results from DADMs agree with the majority of the literature results except at 800 °C and 700 °C, where the solubility of Cr in the fcc phase was found to be significantly higher than what was computed from thermodynamic calculations using the TCFE5 database. Overall, it seems that the Fe-Cr-Ni thermodynamic assessment only needs slight improvement to 11.
Experimental study of the ternary Ag-Cu-In phase diagram International Nuclear Information System (INIS) Bahari, Zahra; Elgadi, Mohamed; Rivet, Jacques; Dugue, Jerome 2009-01-01 The phase diagram of the Ag-Cu-In system was investigated using powder X-ray diffraction (XRD), differential scanning calorimetry (DSC) and electron probe microanalysis (EPMA). Two isothermal sections (at 510 and 607 °C) and 15 isoplethic sections were studied. The results showed seven ternary peritectics, one ternary eutectic and one ternary metatectic. A complete reaction scheme was constructed, the valleys were drawn and the liquidus surfaces were derived from DSC data over the entire composition range. 12. Experimental study of the ternary Ag-Cu-In phase diagram Energy Technology Data Exchange (ETDEWEB) Bahari, Zahra [Laboratoire de chimie physique et minerale, Faculte des sciences pharmaceutiques et biologiques, Universite Paris Descartes, avenue de l'Observatoire, 75006 Paris (France); Laboratoire de chimie du solide mineral (LCSM), Faculte des sciences, Universite Mohamed 1er, Route Sidi Maafa, B.P. 524, Oujda (Morocco)]; Elgadi, Mohamed [Laboratoire de chimie du solide mineral (LCSM), Faculte des sciences, Universite Mohamed 1er, Route Sidi Maafa, B.P. 524, Oujda (Morocco)]; Rivet, Jacques [Laboratoire de chimie physique et minerale, Faculte des sciences pharmaceutiques et biologiques, Universite Paris Descartes, avenue de l'Observatoire, 75006 Paris (France)]; Dugue, Jerome [Laboratoire de chimie physique et minerale, Faculte des sciences pharmaceutiques et biologiques, Universite Paris Descartes, avenue de l'Observatoire, 75006 Paris (France)], E-mail: jerome.dugue@univ-paris5.fr 2009-05-27 The phase diagram of the Ag-Cu-In system was investigated using powder X-ray diffraction (XRD), differential scanning calorimetry (DSC) and electron probe microanalysis (EPMA). Two isothermal sections (at 510 and 607 °C) and 15 isoplethic sections were studied.
The results showed seven ternary peritectics, one ternary eutectic and one ternary metatectic. A complete reaction scheme was constructed, the valleys were drawn and the liquidus surfaces were derived from DSC data over the entire composition range. 13. Phase diagram of the Se-CaIn4Se7 system International Nuclear Information System (INIS) Musaeva, R.I.; Aliev, I.I.; Ismailova, F.I.; Aliev, A.A. 2011-01-01 Full text: The Se-CaIn4Se7 system has been studied using methods of differential thermal analysis, X-ray diffraction, microstructural analysis and density measurements, and its phase diagram has been constructed. It has been established that the section Se-CaIn4Se7 is a quasibinary section of the ternary system Ca-In-Se. At room temperature, no solid solution has been found on the basis of CaIn2Se4 and Se. 14. Phase diagram for the Kuramoto model with van Hemmen interactions. Science.gov (United States) Kloumann, Isabel M; Lizarraga, Ian M; Strogatz, Steven H 2014-01-01 We consider a Kuramoto model of coupled oscillators that includes quenched random interactions of the type used by van Hemmen in his model of spin glasses. The phase diagram is obtained analytically for the case of zero noise and a Lorentzian distribution of the oscillators' natural frequencies. Depending on the size of the attractive and random coupling terms, the system displays four states: complete incoherence, partial synchronization, partial antiphase synchronization, and a mix of antiphase and ordinary synchronization. 15. SOLGASMIX-PV, Chemical System Equilibrium of Gaseous and Condensed Phase Mixtures International Nuclear Information System (INIS) Besmann, T.M. 1986-01-01 1 - Description of program or function: SOLGASMIX-PV, which is based on the earlier SOLGAS and SOLGASMIX codes, calculates equilibrium relationships in complex chemical systems. Chemical equilibrium calculations involve finding the system composition, within certain constraints, which contains the minimum free energy.
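The free-energy minimization that SOLGASMIX-PV performs can be illustrated on a toy problem. The sketch below is a deliberately simplified assumption, not the SOLGASMIX algorithm: it finds the equilibrium composition of an ideal A/B isomerization by minimizing the molar Gibbs energy with a golden-section search, where for ideal mixing the analytic answer is x = 1/(1 + exp((μB − μA)/RT)).

```python
import math

def gibbs(x, mu_a, mu_b, rt):
    """Molar Gibbs energy of an ideal A/B mixture; x = mole fraction of B."""
    mix = x * math.log(x) + (1.0 - x) * math.log(1.0 - x)
    return (1.0 - x) * mu_a + x * mu_b + rt * mix

def minimize_scalar(f, lo, hi, tol=1e-10):
    """Golden-section search for the minimum of a unimodal function."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)

# illustrative chemical potentials (dimensionless, RT = 1)
mu_a, mu_b, rt = 0.0, 2.0, 1.0
x_eq = minimize_scalar(lambda x: gibbs(x, mu_a, mu_b, rt), 1e-9, 1.0 - 1e-9)
print(round(x_eq, 4))  # → 0.1192, matching 1 / (1 + e^2)
```

Real codes such as SOLGASMIX handle many species and element-balance constraints simultaneously (typically via Lagrange multipliers), but the principle is the same: the equilibrium composition is the one that minimizes the total Gibbs energy.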
The constraints are the preservation of the masses of each element present and either constant pressure or constant volume. SOLGASMIX-PV can calculate equilibria in systems containing a gaseous phase, condensed phase solutions, and condensed phases of invariant and variable stoichiometry. Either a constant total gas volume or a constant total pressure can be assumed. Unit activities for condensed phases and ideality for solutions are assumed, although nonideal systems can be handled provided activity coefficient relationships are available. 2 - Restrictions on the complexity of the problem: The program is designed to handle a maximum of 20 elements, 99 substances, and 10 mixtures, where the gas phase is considered a mixture. Each substance is either a gas or condensed phase species, or a member of a condensed phase mixture. 16. Phase diagram of carbon and the factors limiting the quantity and size of natural diamonds Science.gov (United States) Blank, Vladimir D.; Churkin, Valentin D.; Kulnitskiy, Boris A.; Perezhogin, Igor A.; Kirichenko, Alexey N.; Denisov, Viktor N.; Erohin, Sergey V.; Sorokin, Pavel B.; Popov, Mikhail Yu 2018-03-01 Phase diagrams of carbon, and those focusing on the graphite-to-diamond transition conditions in particular, are of great interest for fundamental and applied research. The present study introduces a number of experiments carried out to convert graphite under high-pressure conditions, showing the formation of a stable phase of fullerene-type onions cross-linked by sp3 bonds in the 55-115 GPa pressure range instead of diamond formation (even at temperatures of 2000-3000 K); already formed diamonds also turn into carbon onions. Our results refute the widespread idea that diamonds can form at any pressure from 2.2 to 1000 GPa.
The phase diagram built within this study allows us not only to explain the numerous existing experimental data on the formation of diamond from graphite, but also to make assumptions about the conditions of its growth in Earth's crust. 17. ±J D-vector spin glass phase diagram and critical behaviour International Nuclear Information System (INIS) Coutinho, S.; Lyra, M.L. 1988-01-01 The phase diagram and the correlation length exponents of the ±J D-vector spin-glass model are studied in the framework of the real-space mean-field renormalization group method. The boundary between the spin-glass (SG) and the ferromagnetic (F) phases is obtained from the renormalization flow equations and shows a reentrant behaviour over the SG region. This reentrance increases smoothly with the coordination number. Analytical expressions for the thermal and the correlation length exponents are calculated straightforwardly for all fixed points, and figures are presented and compared with available results from other methods and data. (author) [pt 18. Discussions on the non-equilibrium effects in the quantitative phase field model of binary alloys International Nuclear Information System (INIS) Zhi-Jun, Wang; Jin-Cheng, Wang; Gen-Cang, Yang 2010-01-01 All quantitative phase field models try to get rid of the artificial factors of solutal drag, interface diffusion and interface stretch in the diffuse interface. These artificial non-equilibrium effects, due to the introduction of a diffuse interface, are analysed based on the thermodynamic status across the diffuse interface in the quantitative phase field model of binary alloys. Results indicate that the non-equilibrium effects are related to the negative driving force in the local region on the solid side of the diffuse interface. The negative driving force results from the fact that the phase field model is derived from an equilibrium condition but is used to simulate the non-equilibrium solidification process.
The interface thickness dependence of the non-equilibrium effects and its restriction on large-scale simulation are also discussed. (cross-disciplinary physics and related areas of science and technology) 19. Condition of Mechanical Equilibrium at the Phase Interface with Arbitrary Geometry Science.gov (United States) Zubkov, V. V.; Zubkova, A. V. 2017-09-01 The authors produced an expression for the mechanical equilibrium condition at the phase interface within the force definition of surface tension. This equilibrium condition is the most general one from the mathematical standpoint and takes into account the three-dimensional aspect of surface tension. Furthermore, the formula produced allows describing equilibrium on a fractal surface of the interface. The authors used the fractional integral model of fractal distribution and took fractional-order integrals over Euclidean space instead of integrating over the fractal set. 20. Aqueous two-phase (polyethylene glycol + sodium sulfate) system for caffeine extraction: Equilibrium diagrams and partitioning study International Nuclear Information System (INIS) Araujo Sampaio, Daniela de; Mafra, Luciana Igarashi; Yamamoto, Carlos Itsuo; Forville de Andrade, Eriel; Oberson de Souza, Michèle; Mafra, Marcos Rogério; Castilhos, Fernanda de 2016-01-01 Highlights: • Binodal curves of PEG (400, 4000 and 6000) + Na2SO4 ATPSs were determined. • Tie-lines were experimentally determined for the aqueous (PEG 400 + Na2SO4) system. • The influence of caffeine on the LLE of the aqueous (PEG 400 + Na2SO4) system was investigated. • The partitioning of caffeine in the aqueous (PEG 400 + Na2SO4) system was investigated. • Caffeine partitioning was shown to depend on temperature and TLL. - Abstract: Environmentally friendly methods for liquid-liquid extraction have been taken into account due to the critical conditions and ecotoxicological effects potentially produced by the organic solvents applied in traditional methods.
Liquid–liquid extraction using aqueous two-phase systems (ATPSs) presents advantages when compared to traditional liquid–liquid extraction. A (polyethylene glycol (PEG) + sodium sulfate + water) ATPS was applied to study the partitioning of caffeine. Binodal curves for ATPSs composed of PEG of different molecular weights (400, 4000 and 6000 g/mol) + sodium sulfate + water were determined by the cloud point method at three temperatures (293.15, 313.15 and 333.15) K. Liquid–liquid equilibrium (LLE) data (tie-lines, slopes of the tie-lines and tie-line lengths) were obtained by applying the gravimetric method proposed by Merchuk and co-workers at the same temperatures for the aqueous (PEG 400 + sodium sulfate) and aqueous (PEG 400 + sodium sulfate + caffeine) systems. The reliability of the experimental tie-line (TL) data was evaluated using the equations reported by Othmer and Tobias, and satisfactory linearity was obtained. Concerning the aqueous (PEG + sodium sulfate) system, the results pointed out that the higher the PEG molecular weight, the larger the heterogeneous region. Moreover, temperature showed little effect on the binodal curve behavior but influenced the tie-line slopes. The partitioning of caffeine in the aqueous (PEG 400 + sodium sulfate) system was investigated at different temperatures 1. Edge states and phase diagram for graphene under polarized light Energy Technology Data Exchange (ETDEWEB) Wang, Yi-Xiang, E-mail: wangyixiang@jiangnan.edu.cn [School of Science, Jiangnan University, Wuxi 214122 (China); Li, Fuxiang [Center for Nonlinear Studies and Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)] 2016-07-01 In this work, we investigate the topological phase transitions in graphene under the modulation of circularly polarized light by analyzing the changes of the edge states and their topological structure.
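The Othmer–Tobias consistency check used in the aqueous two-phase system entry above fits ln((1 − w1)/w1) against ln((1 − w2)/w2), where w1 is the PEG mass fraction in the top phase and w2 the salt mass fraction in the bottom phase; near-perfect linearity (r² close to 1) indicates consistent tie-line data. A minimal least-squares sketch follows; the compositions are hypothetical illustrative numbers, not the paper's data.

```python
import math

def othmer_tobias_fit(w_peg_top, w_salt_bot):
    """Fit ln((1-w1)/w1) = a + b*ln((1-w2)/w2) by ordinary least squares;
    returns (a, b, r_squared). High r_squared indicates consistent tie-lines."""
    xs = [math.log((1.0 - w2) / w2) for w2 in w_salt_bot]
    ys = [math.log((1.0 - w1) / w1) for w1 in w_peg_top]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# hypothetical mass fractions: PEG in the top phase, salt in the bottom phase
w_peg_top = [0.30, 0.34, 0.38, 0.42]
w_salt_bot = [0.14, 0.16, 0.18, 0.20]
a, b, r2 = othmer_tobias_fit(w_peg_top, w_salt_bot)
print(r2 > 0.95)
```

In practice each tie-line contributes one (w1, w2) pair, and the fitted slope and intercept are reported alongside r² as the consistency criterion.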
A full phase diagram, with several different topological phases, is presented in the parameter space spanned by the driving frequency and the light strength. We find that high-Chern-number behavior is very common in the driven system. While the one-photon resonance can create chiral edge states in the π-gap, the two-photon resonance will induce counter-propagating edge modes in the zero-energy gap. When the driving light strength is strong, the number and even the chirality of the edge states may change in the π-gap. The robustness of the edge states to disorder potential is also examined. We close by discussing the feasibility of experimental proposals. 2. Proton dynamics and the phase diagram of dense water ice. Science.gov (United States) Hernandez, J-A; Caracas, R 2018-06-07 All the different phases of water ice between 2 GPa and several megabars are based on a single body-centered cubic sub-lattice of oxygen atoms. They differ only by the behavior of the hydrogen atoms. In this study, we investigate the dynamics of the H atoms at high pressures and temperatures in water ice from first-principles molecular dynamics simulations. We provide a detailed analysis of the O-H⋯O bonding dynamics over the entire stability domain of the body-centered cubic (bcc) water ices and compute transport properties and vibrational densities of states. We report the first ab initio evidence for a plastic phase of water, and we propose a coherent phase diagram for bcc water ices compatible with the two groups of melting curves and with the multiple anomalies reported in ice VII around 15 GPa. 3. Phase equilibria in the Cs-U-O system in the temperature range from 873 to 1273 K International Nuclear Information System (INIS) Fee, D.C.; Johnson, C.E. 1978-01-01 Portions of the cesium-uranium-oxygen system have been investigated between 873 and 1273 K, and a phase diagram has been constructed using these data and the data of other workers in the field.
A consistent set of measured and estimated thermodynamic data for cesium uranates has been used to calculate the equilibrium cesium partial pressure and the equilibrium oxygen partial pressure over two- and three-phase regions in the Cs-U-O system. For a given temperature, the equilibrium cesium partial pressure in a two-phase region decreases as the equilibrium oxygen partial pressure increases. (author) 4. Thin film phase diagram of iron nitrides grown by molecular beam epitaxy Science.gov (United States) Gölden, D.; Hildebrandt, E.; Alff, L. 2017-01-01 A low-temperature thin film phase diagram of the iron nitride system is established for the case of thin films grown by molecular beam epitaxy and nitrided by a nitrogen radical source. Fine-tuning of the nitridation conditions allows for growth of α′-Fe8Nx with increasing c/a ratio and magnetic anisotropy with increasing x, until almost phase-pure α′-Fe8N1 thin films are obtained. A further increase of the nitrogen content below the phase decomposition temperature of α′-Fe8N (180 °C) leads to a mixture of several phases that is also affected by the choice of substrate material and symmetry. At higher temperatures (350 °C), phase-pure γ′-Fe4N is the most stable phase. 5. Effect of elastic compliances and higher order Landau coefficients on the phase diagram of single domain epitaxial Pb(Zr,Ti)O3 (PZT) thin films Directory of Open Access Journals (Sweden) M. Mtebwa 2014-12-01 Full Text Available We report a qualitative study of the influence of both the elastic compliances and the higher order terms of the Landau free energy potential on the phase diagram of Pb(Zr0.5Ti0.5)O3 thin films, using a single domain Landau theory. Although the impact of elastic compliances and higher order terms of the Landau free energy potential on the phase diagrams of ferroelectric thin films is known, the sensitivity of the phase diagram of PZT thin films to these parameters has not been reported.
It is demonstrated that, while the values of the elastic compliances affect the positions of the phase boundaries, including the phase transition temperature of the cubic phase, the higher order terms can potentially introduce an a1a2 phase previously predicted in the PbTiO3 phase diagram. 6. Assessment of the thermodynamic properties and phase diagram of the Bi–Pd system Czech Academy of Sciences Publication Activity Database Vřešťál, J.; Pinkas, J.; Watson, A.; Scott, A.; Houserová, Jana; Kroupa, Aleš 2006-01-01 Roč. 30, č. 1 (2006), s. 14-17 ISSN 0364-5916 R&D Projects: GA MŠk(CZ) OC 531.002 Institutional research plan: CEZ:AV0Z2041904 Keywords: phase diagram * thermodynamic modelling Subject RIV: BJ - Thermodynamics Impact factor: 1.432, year: 2006 7. Absolute determination of the gelling point of gelatin under quasi-thermodynamic equilibrium. Science.gov (United States) Bellini, Franco; Alberini, Ivana; Ferreyra, María G; Rintoul, Ignacio 2015-05-01 Thermodynamic studies of phase transformations of biopolymers in solution are useful to understand their nature and to evaluate their technological potential. Thermodynamic studies should be conducted avoiding time-related phenomena; this condition is not easily achieved in hydrophilic biopolymers. In this contribution, the simultaneous effects of pH, salt concentration, and cooling rate (Cr) on the folding from random coils to triple helical collagen-like structures of gelatin were systematically studied. The phase transformation temperature at the absolutely invariant condition of Cr = 0 °C/min, T_T(Cr=0), is introduced as a conceptual parameter to study phase transformations in biopolymers under quasi-thermodynamic equilibrium while avoiding interferences coming from time-related phenomena. Experimental phase diagrams obtained at different Cr are presented. The T_T(Cr=0) vs. pH and T_T(Cr=0) vs. [NaCl] diagrams allowed exploring the transformation process at Cr = 0 °C/min.
The results were explained by electrostatic interactions between the biopolymers and their solvation milieu. © 2015 Institute of Food Technologists® 8. Comprehensive phase diagram of two-dimensional space charge doped Bi2Sr2CaCu2O8+x. Science.gov (United States) Sterpetti, Edoardo; Biscaras, Johan; Erb, Andreas; Shukla, Abhay 2017-12-12 The phase diagram of hole-doped high critical temperature superconductors as a function of doping and temperature has been intensively studied with chemical variation of doping. Chemical doping can provoke structural changes and disorder, masking intrinsic effects. Alternatively, a field-effect transistor geometry with an electrostatically doped, ultra-thin sample can be used. However, to probe the phase diagram, carrier density modulation beyond 10^14 cm^-2 and transport measurements performed over a large temperature range are needed. Here we use the space charge doping method to measure transport characteristics from 330 K to low temperature. We extract parameters and characteristic temperatures over a large doping range and establish a comprehensive phase diagram for one-unit-cell-thick BSCCO-2212 as a function of doping, temperature and disorder. 9. Thermal equilibrium during the electroweak phase transition International Nuclear Information System (INIS) 1991-12-01 The effective potential for the standard model develops a barrier, at temperatures around the electroweak scale, which separates the minimum at zero field from a deeper non-zero minimum. This could create out-of-equilibrium conditions by inducing the localization of the Higgs field in a metastable state around zero. In this picture vacuum decay would occur through bubble nucleation. I show that there is an upper bound on the Higgs mass for the above scenario to be realized. The barrier must be high enough to prevent thermal fluctuations of the Higgs expectation value from establishing thermal equilibrium between the two minima.
The upper bound is estimated to be lower than the experimental lower limit. This also imposes constraints on extensions of the standard model constructed in order to generate a strongly first order phase transition. (orig.) 10. APPLICATION OF VORONOI DIAGRAM TO MASK-BASED INTERCEPTING PHASE-SPACE MEASUREMENTS Energy Technology Data Exchange (ETDEWEB) Halavanau, A. [Fermilab]; Ha, G. [POSTECH] 2017-05-19 Intercepting multi-aperture masks (e.g. a pepper pot or a multislit mask) combined with downstream transverse-density diagnostics (e.g. based on optical transition radiation or employing scintillating media) are commonly used for characterizing the phase space of charged particle beams and the associated emittances. The required data analysis relies on precise calculation of the RMS sizes and positions of the beamlets originating from the mask, which drift up to the analyzing diagnostics. The Voronoi diagram is an efficient method for splitting a plane into subsets according to the distances to given seed points. The application of the method to analyzing data from pepper pot and multislit mask based measurements is validated via numerical simulation and applied to experimental data acquired at the Argonne Wakefield Accelerator (AWA) facility. We also discuss the application of Voronoi diagrams to quantify the distortion of transversely modulated beams. 11. Assessment of thermodynamic properties and phase diagram in the Ag–In–Pd system Czech Academy of Sciences Publication Activity Database Zemanová, A.; Semenova, O.; Kroupa, Aleš; Vřešťál, J.; Chandrasekaran, K.; Richter, K. W.; Ipser, H. 2007-01-01 Roč. 15, č. 1 (2007), s. 77-84 ISSN 0966-9795 R&D Projects: GA MŠk(CZ) OC 532.001 Institutional research plan: CEZ:AV0Z20410507 Keywords: phase diagrams * ternary alloy systems * prediction Subject RIV: BJ - Thermodynamics Impact factor: 2.219, year: 2007 12.
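The Voronoi-based image segmentation described in the beamlet-analysis entry above can be sketched with a discrete nearest-seed assignment: every pixel is labeled with the index of the closest beamlet centroid, which is exactly the pixel-level Voronoi partition. The centroid positions and image size below are hypothetical; a production analysis would use measured beamlet centroids (e.g. via scipy.spatial.Voronoi) rather than this brute-force loop.

```python
def voronoi_labels(width, height, seeds):
    """Assign each pixel of a width x height grid the index of the nearest
    seed point -- the discrete Voronoi partition used to split a beamlet
    image into per-aperture regions (ties go to the lowest index)."""
    labels = []
    for y in range(height):
        row = []
        for x in range(width):
            d2 = [(x - sx) ** 2 + (y - sy) ** 2 for sx, sy in seeds]
            row.append(d2.index(min(d2)))
        labels.append(row)
    return labels

# hypothetical beamlet centroids on a 12 x 6 image
seeds = [(2, 3), (6, 3), (10, 3)]
labels = voronoi_labels(12, 6, seeds)
# pixels in each region can then be summed separately to obtain
# per-beamlet intensities, RMS sizes and centroid positions
print([row[0] for row in labels], labels[3][6])  # → [0, 0, 0, 0, 0, 0] 1
```

Once each pixel carries a region label, the per-beamlet first and second intensity moments follow from ordinary sums restricted to that label, which is the quantity the emittance reconstruction needs.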
Phase diagram of the structure of the radial electric field in helical plasmas International Nuclear Information System (INIS) Toda, S.; Itoh, K. 2002-01-01 A set of transport equations in toroidal helical plasmas is analyzed, including the bifurcation of the radial electric field. Multiple solutions of E_r satisfying the ambipolar condition induce domains of different electric polarities. The structure of the domain interface is analyzed, and a phase diagram is obtained in the space of the external control parameters. The region of reduced anomalous transport is identified. (author) 13. Bifurcation and phase diagram of turbulence constituted from three different scale-length modes Energy Technology Data Exchange (ETDEWEB) Itoh, S.-I.; Kitazawa, A.; Yagi, M. [Kyushu Univ., Research Inst. for Applied Mechanics, Kasuga, Fukuoka (Japan)]; Itoh, K. [National Inst. for Fusion Science, Toki, Gifu (Japan)] 2002-04-01 Cases where three kinds of fluctuations having different typical scale-lengths coexist are analyzed, and the statistical theory of strong turbulence in inhomogeneous plasmas is developed. Statistical nonlinear interactions between the fluctuations are kept in the analysis as the renormalized drag, statistical noise and averaged drive. The nonlinear interplay through them induces a quenching or suppressing effect, even if all the modes are unstable when analyzed independently. A variety of mode appearances takes place: one mode quenches the other two modes, or one mode is quenched by the other two modes, etc. The bifurcation of turbulence is analyzed and a phase diagram is drawn. Phase diagrams with cusp-type catastrophe and butterfly-type catastrophe are obtained. Subcritical bifurcation can occur through the nonlinear interplay, even though each mode is supercritical when analyzed independently.
Analysis reveals that the nonlinear stability boundary (marginal point) and the amplitude of each mode may shift substantially from the conventional results of independent analyses. (author) 14. Considerations Concerning Matrix Diagram Transformations Associated with the Mathematical Model Study of a Three-phase Transformer Directory of Open Access Journals (Sweden) Mihaela Poienar 2014-09-01 Full Text Available The clock hour figure mathematical model of a three-phase transformer can be expressed, in its plainest form, through a 3x3 square matrix, called the code matrix. The line positions reflect modifications at the high-voltage winding terminals and the column positions reflect modifications at the low-voltage winding terminals. The main changes to the transformer winding terminals are: circular permutation of the connections between windings; terminal supply reversal; reversal of the winding direction of a phase winding; interchange of the beginning and the end of a phase winding; and conversion of the connection between phase windings from N to Z, or the inverse. The analytical form of these changes affects the configuration of the mathematical model, expressed through a transformation diagram that is proposed and analyzed in two versions: a bipolar version and a unipolar (fanwise) version. The end of the paper presents the practical exploitation of the transformation diagram. 15. Determination and modeling of binary and ternary solid-liquid phase equilibrium for the systems formed by 1,8-dinitronaphthalene and 1,5-dinitronaphthalene and N-methyl-2-pyrrolidone International Nuclear Information System (INIS) Xie, Yong; Du, Cunbin; Cong, Yang; Wang, Jian; Han, Shuo; Zhao, Hongkun 2016-01-01 Highlights: • The SLE of systems formed by 1,5- and/or 1,8-dinitronaphthalene and NMP was determined. • The binary and ternary phase diagrams were constructed. • The phase diagrams were correlated and calculated using thermodynamic models.
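The modified Apelblat equation used in the solid-liquid equilibrium entry above correlates mole-fraction solubility with temperature as ln x = A + B/T + C ln T. A minimal sketch follows; the coefficients A, B and C here are illustrative placeholders, not the paper's fitted values.

```python
import math

def apelblat_solubility(T, A, B, C):
    """Mole-fraction solubility from the modified Apelblat equation:
    ln x = A + B / T + C * ln T, with T in kelvin."""
    return math.exp(A + B / T + C * math.log(T))

# illustrative coefficients only -- the paper's fitted A, B, C are not reproduced here
A, B, C = -35.0, 500.0, 5.0
for T in (293.15, 313.15, 343.15):
    print(T, apelblat_solubility(T, A, B, C))
```

In practice A, B and C are regressed from measured (T, x) pairs, and the quality of the correlation is reported through a root-mean-square deviation between calculated and experimental solubilities, as in the abstract above.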
Abstract: The solubility of 1,8-dinitronaphthalene and 1,5-dinitronaphthalene in N-methyl-2-pyrrolidone at (293.15–343.15) K and the mutual solubility of the ternary 1,5-dinitronaphthalene + 1,8-dinitronaphthalene + N-methyl-2-pyrrolidone mixture at (313.15, 328.15 and 343.15) K were determined experimentally using the isothermal saturation method under atmospheric pressure (101.2 kPa). The solubility of 1,8-dinitronaphthalene in N-methyl-2-pyrrolidone is larger than that of 1,5-dinitronaphthalene. Three isothermal ternary phase diagrams were built from the measured mutual solubility data. In each ternary phase diagram there were one co-saturated point, two boundary curves, and three crystalline regions. Two pure solids (pure 1,8-dinitronaphthalene and pure 1,5-dinitronaphthalene) were formed in the ternary system at a given temperature, identified by Schreinemaker's method of wet residue and powder X-ray diffraction (PXRD) patterns. The crystallization region of 1,8-dinitronaphthalene was smaller than that of 1,5-dinitronaphthalene at each temperature. The modified Apelblat equation, λh equation, NRTL model and Wilson model were used to correlate the solubility of 1,8-dinitronaphthalene and 1,5-dinitronaphthalene in N-methyl-2-pyrrolidone; the NRTL and Wilson models were employed to correlate and calculate the mutual solubility for the ternary 1,5-dinitronaphthalene + 1,8-dinitronaphthalene + N-methyl-2-pyrrolidone system. The largest root-mean-square deviation (RMSD) was 20.34 × 10⁻⁴ for the binary systems and 7.38 × 10⁻³ for the ternary system. The calculated results from these models are all acceptable for the binary and ternary solid-liquid phase equilibria.

16. Study on mutual diffusion and phase diagram in the Ni-Ta system
International Nuclear Information System (INIS)
Pimenov, V.N.; Ugaste, Yu.Eh.; Akkushkarova, K.A.
1977-01-01
The mutual diffusion in the Ni-Ta system has been investigated with a view to refining the constitutional diagram. The mutual diffusion coefficients, their effective values in the various phases, and the diffusion activation energies are calculated. The dependences of the phase growth constants and the mutual diffusion coefficients on temperature are given. The existence of five new phases Ta2Ni, TaNi, TaNi2, TaNi3, TaNi8 has been discovered in the range of temperatures between 1150 and 1300 deg C. It is established that all the phases have a small concentration range of existence. It is noted that the diffusion characteristics of the phases (mutual diffusion coefficient and activation energy) differ widely but fail to correlate with their melting points.

17. A phase-field model for non-equilibrium solidification of intermetallics
International Nuclear Information System (INIS)
2007-01-01
Intermetallics may exhibit unique solidification behaviour (including slow growth kinetics, anomalous partitioning and the formation of unusual growth morphologies) because of departure from local equilibrium. A phase-field model is developed and used to illustrate these non-equilibrium effects in the solidification of a prototype B2 intermetallic phase. The model takes sublattice compositions as primary field variables, from which chemical long-range order is derived. The diffusive reactions between the two sublattices, and those between each sublattice and the liquid phase, are taken as 'internal' kinetic processes, which take place within control volumes of the system. The model can thus capture solute and disorder trapping effects, which are consistent, over a wide range of the solid/liquid interface thickness, with the predictions of the sharp-interface theory of solute and disorder trapping. The present model can also take account of solid-state ordering and thus illustrate the effects of chemical ordering on microstructure formation and crystal growth kinetics.

18.
New mean-field calculations for the phase diagram of the ANNNI model
International Nuclear Information System (INIS)
Tome, T.; Salinas, S.R.A. 1987-01-01
A variational procedure, including some spin fluctuations, is used to go beyond the standard layer-by-layer mean-field calculations for the T-p phase diagram of the ANNNI model. The high-temperature region is studied analytically. The transition lines meet smoothly at the Lifshitz point, which is an inflection point of the second-order paramagnetic border. At low temperature, the numerical results confirm the stability of the main commensurate phases and show a quantitative trend towards the predictions of the Monte Carlo analyses. (author)

19. Phase Equilibrium in the System Ln-Mn-O II. Ln=Nd at 1100 C
International Nuclear Information System (INIS)
2001-01-01
Phase equilibrium is established in the Nd-Mn-O system at 1100 C by changing the oxygen partial pressure from 0 to 12.00 in -log(pO2/atm); a phase diagram at 1100 C is presented for the Nd2O3-MnO-MnO2 system. Under the experimental conditions, the Nd2O3, MnO, Mn3O4, and NdMnO3 phases are present at 1100 C, but Nd2MnO4, Mn2O3, and MnO2 are not stable in the system. Wide ranges of nonstoichiometry were found in the NdMnO3 phase, which coexisted with Nd2O3. In the form NdMnO3+x, x ranges from -0.006 at log pO2 = -10.85 to 0.104 at log pO2 = 0. The nonstoichiometry is represented by the equation NO/N(NdMnO3) = 4.34×10⁻⁵(log pO2)³ + 1.99×10⁻³(log pO2)² + 2.65×10⁻²(log pO2) + 0.104; the activities of the components in the solid solution are also calculated with this equation. NdMnO3 has a composition range to the Nd2O3-rich or Nd2O3-poor side of LaMnO3. Lattice constants of NdMnO3 made at different oxygen partial pressures were determined.

20.
Novel phase diagram behavior and materials design in heterostructural semiconductor alloys
Science.gov (United States)
Holder, Aaron M; Siol, Sebastian; Ndione, Paul F; Peng, Haowei; Deml, Ann M; Matthews, Bethany E; Schelhas, Laura T; Toney, Michael F; Gordon, Roy G; Tumas, William; Perkins, John D; Ginley, David S; Gorman, Brian P; Tate, Janet; Zakutayev, Andriy; Lany, Stephan 2017-06-01
Structure and composition control the behavior of materials. Isostructural alloying is historically an extremely successful approach for tuning materials properties, but it is often limited by binodal and spinodal decomposition, which correspond to the thermodynamic solubility limit and the stability against composition fluctuations, respectively. We show that heterostructural alloys can exhibit a markedly increased range of metastable alloy compositions between the binodal and spinodal lines, thereby opening up a vast phase space for novel homogeneous single-phase alloys. We distinguish two types of heterostructural alloys, that is, those between commensurate and incommensurate phases. Because of the structural transition around the critical composition, the properties change in a highly nonlinear or even discontinuous fashion, providing a mechanism for materials design that does not exist in conventional isostructural alloys. The novel phase diagram behavior follows from standard alloy models using mixing enthalpies from first-principles calculations. Thin-film deposition demonstrates the viability of the synthesis of these metastable single-phase domains and validates the computationally predicted phase separation mechanism above the upper temperature bound of the nonequilibrium single-phase region.

1. The phase behavior of a hard sphere chain model of a binary n-alkane mixture
International Nuclear Information System (INIS)
Malanoski, A. P.; Monson, P. A.
2000-01-01
Monte Carlo computer simulations have been used to study the solid and fluid phase properties as well as phase equilibrium in a flexible, united-atom, hard-sphere chain model of n-heptane/n-octane mixtures. We describe a methodology for calculating the chemical potentials of the components in the mixture based on a technique used previously for atomic mixtures. The mixture was found to conform accurately to ideal solution behavior in the fluid phase. However, much greater nonidealities were seen in the solid phase. Phase equilibrium calculations indicate a phase diagram with solid-fluid phase equilibrium and a eutectic point. The components are only miscible in the solid phase for dilute solutions of the shorter chains in the longer chains. (c) 2000 American Institute of Physics

2. Multicritical dynamical phase diagrams of the kinetic Blume-Emery-Griffiths model with repulsive biquadratic coupling in an oscillating field
Energy Technology Data Exchange (ETDEWEB)
Temizer, Umuet [Department of Physics, Bozok University, 66100 Yozgat (Turkey)]; Kantar, Ersin [Institute of Science, Erciyes University, 38039 Kayseri (Turkey)]; Keskin, Mustafa [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)], E-mail: keskin@erciyes.edu.tr; Canko, Osman [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)] 2008-06-15
We study, within a mean-field approach, the stationary states of the kinetic Blume-Emery-Griffiths model with repulsive biquadratic coupling in the presence of a time-varying (sinusoidal) magnetic field. We employ Glauber-type stochastic dynamics to construct the set of dynamic equations of motion. The time dependence of the order parameters and the behavior of the average order parameters over a period (the dynamic order parameters) as functions of the reduced temperature are investigated.
The dynamic phase transition points are calculated and phase diagrams are presented in the reduced magnetic field amplitude and reduced temperature plane. The dynamical transition from one regime to the other can be of first or second order depending on the region in the phase diagram. According to the values of the crystal field interaction or single-ion anisotropy constant and the biquadratic exchange constant, we find 20 fundamental types of phase diagrams which exhibit many dynamic critical points, such as tricritical points, zero-temperature critical points, double critical end points, critical end points, triple points and multicritical points. Moreover, besides the disordered and ordered phases, seven coexistence phase regions exist in the system.

3. CFD analysis of laboratory scale phase equilibrium cell operation
Science.gov (United States)
2017-10-01
For the modeling of multiphase chemical reactors or separation processes, it is essential to predict accurately chemical equilibrium data, such as vapor-liquid or liquid-liquid equilibria [M. Šoóš et al., Chem. Eng. Process.: Process Intensif. 42(4), 273-284 (2003)]. The instruments used in these experiments are typically designed based on previous experience, and their operation is verified against known equilibria of standard components. However, mass transfer limitations with different chemical systems may be very different, potentially falsifying the measured equilibrium compositions. In this work, computational fluid dynamics is utilized for the first time to design and analyze a laboratory scale experimental gas-liquid equilibrium cell, augmenting the traditional analysis based on the plug flow assumption. A two-phase dilutor cell, used for measuring limiting activity coefficients at infinite dilution, is used as a test case for the analysis. The Lagrangian discrete model is used to track each bubble and to study the residence time distribution of the carrier gas bubbles in the dilutor cell.
This analysis is necessary to assess whether the gas leaving the cell is in equilibrium with the liquid, as required in the traditional analysis of such an apparatus. Mass transfer for six different bio-oil compounds is calculated to determine the approach to equilibrium concentration. Residence times assuming plug flow and ideal mixing are also used as reference cases to evaluate the influence of mixing on the approach to equilibrium in the dilutor. Results show that the model can be used to predict the dilutor operating conditions for which each of the studied gas-liquid systems reaches equilibrium.

4. CFD analysis of laboratory scale phase equilibrium cell operation
Science.gov (United States)
2017-10-01

5. The pressure-temperature phase diagram of pressure induced organic superconductors β-(BDA-TTP)2MCl4 (M = Ga, Fe)
Science.gov (United States)
Choi, E. S.; Graf, D.; Brooks, J. S.; Yamada, J.; Tokumoto, M. 2004-04-01
We investigate the pressure-temperature phase diagram of β-(BDA-TTP)2MCl4 (M = Ga, Fe), which shows a metal-insulator (MI) transition around 120 K at ambient pressure. By applying pressure, the insulating phase is suppressed. When the pressure is higher than 5.5 kbar, the superconducting phase appears in both salts, with Tc ≈ 3 K for M = Ga and 2.2 K for M = Fe. We also observed Shubnikov-de Haas (SdH) oscillations at high magnetic field in both salts, where the SdH frequencies are found to be very similar to each other. Key words: organic superconductor, pressure, phase diagram.

6. Low-frequency phase diagram of irradiated graphene and a periodically driven spin-1/2 XY chain
Science.gov (United States)
Mukherjee, Bhaskar; Mohan, Priyanka; Sen, Diptiman; Sengupta, K. 2018-05-01
We study the Floquet phase diagram of two-dimensional Dirac materials such as graphene and of the one-dimensional (1D) spin-1/2 XY model in a transverse field in the presence of periodic time-varying terms in their Hamiltonians, in the low drive frequency (ω) regime where standard 1/ω perturbative expansions fail.
For graphene, such periodic time-dependent terms are generated via the application of external radiation of amplitude A0 and time period T = 2π/ω, while for the 1D XY model they result from a two-rate drive protocol with a time-dependent magnetic field and nearest-neighbor couplings between the spins. Using the adiabatic-impulse method, whose predictions agree almost exactly with the corresponding numerical results in the low-frequency regime, we provide several semianalytic criteria for the occurrence of changes in the topology of the phase bands (eigenstates of the evolution operator U) of such systems. For irradiated graphene, we point out the role of the symmetries of the instantaneous Hamiltonian H(t) and the evolution operator U behind such topology changes. Our analysis reveals that at low frequencies, topology changes of irradiated graphene phase bands may also happen at t = T/3 and 2T/3 (apart from t = T), showing the necessity of analyzing the phase bands of the system for obtaining its phase diagrams. We chart out the phase diagrams at t = T/3, 2T/3, and T, where such topology changes occur, as a function of A0 and T using exact numerics, and compare them with the predictions of the adiabatic-impulse method. We show that several characteristics of these phase diagrams can be analytically understood from results obtained using the adiabatic-impulse method, and point out the crucial contribution of the high-symmetry points in the graphene Brillouin zone to these diagrams. We study the modes that can appear at the edges of a finite-width strip of graphene and show that the change in the number of such modes agrees with the change in the

7.
Dynamic phase transition and multicritical dynamic phase diagrams of the kinetic spin-3/2 Blume-Emery-Griffiths model with repulsive biquadratic coupling under a time-dependent oscillating external field
Science.gov (United States)
Deviren, Bayram; Keskin, Mustafa; Canko, Osman 2008-03-01
We extend our recent paper [O. Canko, B. Deviren, M. Keskin, J. Phys.: Condens. Matter 118 (2006) 6635] to present a study, within a mean-field approach, of the stationary states of the kinetic spin-3/2 Blume-Emery-Griffiths model with repulsive biquadratic interaction in the presence of a time-varying (sinusoidal) magnetic field. We found that the dynamic phase diagrams of the present work exhibit more complex, richer and topologically different types of phase diagrams than those of our recent paper. In particular, the obtained dynamic phase diagrams show the ferrimagnetic (i) phase in addition to the ferromagnetic ±3/2 (f), ferromagnetic ±1/2 (f), antiquadrupolar or staggered (a) and disordered (d) phases, and the f+i, f+d, i+d, f+i+d, a+d and/or f+i+a coexistence regions in addition to the f+f, f+d, f+a, f+d and/or f+a+d coexistence regions, depending on the interaction parameters. Moreover, the phase diagrams exhibit dynamic zero-temperature critical, critical end, double critical end, multicritical, and/or pentacritical special points in addition to the dynamic tricritical, double critical end, triple, quadruple and/or tetracritical special points, depending on the interaction parameters.

8. Hydrogen in niobium, tantalum, and vanadium: Structures, phase diagrams, and morphologies
International Nuclear Information System (INIS)
Schober, T. 1978-07-01
The paper discusses basic aspects of the reactions between the metals niobium, tantalum, and vanadium and hydrogen or deuterium. After an introduction to problems of preparation, experimental techniques for the investigation of hydrides are presented. The possible hydride structures are discussed.
With vanadium, there are great differences between the structures of the hydrides and deuterides. Detailed mention is also made of recent measurements of the NbH, TaH, VH, and VD phase diagrams. (orig.)

9. Phase coexistence in thin liquid films stabilized by colloidal particles: equilibrium and non-equilibrium properties
International Nuclear Information System (INIS)
Blawzdziewicz, J.; Wajnryb, E. 2005-01-01
Phase equilibria between regions of different thickness in thin liquid films stabilized by colloidal particles are investigated using a quasi-two-dimensional thermodynamic formalism. Appropriate equilibrium conditions for the film tension, normal pressure, and chemical potential of the particles in the film are formulated, and it is shown that the relaxation of these parameters occurs consecutively on three distinct time scales. Film stratification is described quantitatively for a hard-sphere suspension using a Monte Carlo method to evaluate thermodynamic equations of state. Coexisting phases are determined for systems in constrained- and full-equilibrium states that correspond to different stages of film relaxation. We also evaluated the effective viscosity coefficients for two-dimensional compressional and shear flows of a film and the self and collective mobility coefficients of the stabilizing particles. The hydrodynamic calculations were performed using a multiple-reflection representation of Stokes flow between two free surfaces. In this approach, the particle-laden film is equivalent to a periodic system of spheres with a unit cell that is much smaller in the transverse direction than in the lateral direction. (author)

10. Phase diagrams of a nonequilibrium mixed spin-1/2 and spin-2 Ising ferrimagnetic system under a time-dependent oscillating magnetic field
International Nuclear Information System (INIS)
Keskin, M.; Canko, O.; Gueldal, S.
2009-01-01
We present phase diagrams for a nonequilibrium mixed spin-1/2 and spin-2 Ising ferrimagnetic system on a square lattice in the presence of a time-dependent oscillating external magnetic field. We employ the Glauber transition rates to construct the mean-field dynamical equations. The time variation of the average magnetizations and the thermal behavior of the dynamic magnetizations are investigated extensively. The nature (continuous or discontinuous) of the transitions is characterized by studying the thermal behavior of the dynamic magnetizations. The dynamic phase transition points are obtained and the phase diagrams are presented in two different planes. The phase diagrams contain paramagnetic (p) and ferrimagnetic (i) phases and one coexistence or mixed phase region, namely i+p, all of which strongly depend on the interaction parameters. The system exhibits a dynamic tricritical point and reentrant behavior.

11. Phase diagrams of a nonequilibrium mixed spin-1/2 and spin-2 Ising ferrimagnetic system under a time-dependent oscillating magnetic field
Energy Technology Data Exchange (ETDEWEB)
Keskin, M., E-mail: keskin@erciyes.edu.tr [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)]; Canko, O. [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)]; Gueldal, S. [Institute of Science, Erciyes University, 38039 Kayseri (Turkey)] 2009-12-14

12. The Kibble-Zurek mechanism in phase transitions of non-equilibrium systems
Science.gov (United States)
Cheung, Hil F. H.; Patil, Yogesh S.; Date, Aditya G.; Vengalattore, Mukund 2017-04-01
We experimentally realize a driven-dissipative phase transition using a mechanical parametric amplifier, demonstrating key signatures of a second-order phase transition, including a point where the susceptibilities and relaxation time scales diverge and where the system exhibits a spontaneous breaking of symmetry. Though reminiscent of conventional equilibrium phase transitions, it is unclear whether such driven-dissipative phase transitions are amenable to the conventional Landau-Ginzburg-Wilson paradigm, which relies on concepts of scale invariance and universality, and recent work has shown that such phase transitions can indeed lie beyond such conventional universality classes. By quenching the system past the critical point, we investigate the dynamics of the emergent ordered phase and find that our measurements are in excellent agreement with the Kibble-Zurek mechanism. In addition to verifying the Kibble-Zurek hypothesis in driven-dissipative phase transitions for the first time, we also demonstrate that the measured critical exponents accurately reflect the interplay between intrinsic coherent dynamics and environmental correlations, showing a clear departure from mean-field exponents in the case of non-Markovian system-bath interactions.
We further discuss how reservoir engineering and the imposition of artificial environmental correlations can result in the stabilization of novel many-body quantum phases and aid in the creation of exotic non-equilibrium states of matter.

13. Topological Phase Diagrams of Bulk and Monolayer TiS2−xTex
KAUST Repository
Zhu, Zhiyong 2013-02-12
With the use of ab initio calculations, the topological phase diagrams of bulk and monolayer TiS2−xTex are established. Whereas bulk TiS2−xTex shows two strong topological phases [1;(000)] and [1;(001)] for 0.44phases in three and two dimensions simultaneously.

14. Topological Phase Diagrams of Bulk and Monolayer TiS2−xTex
KAUST Repository
Zhu, Zhiyong; Cheng, Yingchun; Schwingenschlögl, Udo 2013-01-01

15. In-situ studies on phase transformations under electron irradiation in ...
M. Senthilkumar 1996-10-15
... under 1 MeV electron irradiation at 300 K has been recorded in HVEM experiments. The similarity of the diffuse intensity distribution in these two cases brings out the importance of the lattice collapse mechanism in both cases. 2. Crystallography of the ordered phases in Ni–Mo system. The equilibrium phase diagram of ...

16. A re-examination of thermodynamic modelling of U-Ru binary phase diagram
Energy Technology Data Exchange (ETDEWEB)
Wang, L.C.; Kaye, M.H., E-mail: matthew.kaye@uoit.ca [University of Ontario Institute of Technology, Oshawa, ON (Canada)] 2015-07-01
Ruthenium (Ru) is one of the more abundant fission products (FPs) in both fast breeder reactors and thermal reactors.
Post-irradiation examinations (PIE) show that both 'the white metallic phase' (Mo-Tc-Ru-Rh-Pd) and 'the other metallic phase' (U(Pd-Rh-Ru)3) are present in spent nuclear fuels. To describe this quaternary system, binary subsystems of uranium (U) with Pd, Rh, and Ru are necessary. Presently, only the U-Ru system has been thermodynamically described, but with some problems. As part of research on the U-Ru-Rh-Pd quaternary system, an improved, consistent thermodynamic model describing the U-Ru binary phase diagram has been obtained. (author)

17. A partial isothermal section at 1000 ˚C of Al-Mn-Fe phase diagram in vicinity of Taylor phase and decagonal quasicrystal
Czech Academy of Sciences Publication Activity Database
Priputen, P.; Černíčková, I.; Lejček, Pavel; Janičkovič, D.; Janovec, J. 2016-01-01
Roč. 37, č. 2 (2016), 130-134 ISSN 1547-7037. R&D Projects: GA ČR GBP108/12/G043. Institutional support: RVO:68378271. Keywords: aluminium alloys; equilibria; experimental phase; intermetallics; isothermal section; phase diagram. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 0.938, year: 2016

18. On the superconducting phase diagram of high Tc superconductors
International Nuclear Information System (INIS)
de la Cruz, F. 1990-01-01
The tendency of oxide superconductors to show granularity has been pointed out since the beginning of research on superconductivity in this type of material. Nevertheless, only very recently have the full phase diagram and the characteristics of the grains been determined. In this paper, the authors review and discuss the different critical fields and their relation to the transport of superconducting current. The superconducting response of single crystals of high-Tc superconductors is discussed. Special attention is devoted to the behavior of the vortex lattice and, in particular, to the recent discovery of the quenching of Hc1 in YBaCuO several degrees below Tc.

19.
Defect phase diagram for doping of Ga2O3
OpenAIRE
Stephan Lany 2018-01-01
For the case of n-type doping of β-Ga2O3 by group 14 dopants (C, Si, Ge, Sn), a defect phase diagram is constructed from defect equilibria calculated over a range of temperatures (T), O partial pressures (pO2), and dopant concentrations. The underlying defect levels and formation energies are determined from first-principles supercell calculations with GW bandgap corrections. Only Si is found to be a truly shallow donor, C is a deep DX-like (lattice relaxed donor) center, and Ge and Sn have d...

20. Numerical insights into the phase diagram of p-atic membranes with spherical topology
DEFF Research Database (Denmark)
Hansen, Allan Grønhøj; Ramakrishnan, N.; Sunil Kumar, P. B. 2017-01-01
Abstract: The properties of self-avoiding p-atic membranes restricted to spherical topology have been studied by Monte Carlo simulations of a triangulated random surface model. Spherically shaped p-atic membranes undergo a Kosterlitz-Thouless transition, as expected, with topology induced mutually ... of disclinations. We confirm the proposed buckling of disclinations in the p-atic ordered phase, while the expected associated disordering (crumpling) transition at low bending rigidities is absent in the phase diagram.

1. Phase Diagrams of Some Sodium and Potassium Salts in Light and Heavy Water
Energy Technology Data Exchange (ETDEWEB)
Holmberg, K E 1968-12-15
Phase diagrams for fluorides, chlorides, bromides, iodides, nitrates, sulphates and carbonates of sodium and potassium with D2O and H2O have been determined in the range from the eutectic temperature to 60 deg C. Generally the relative solubility is less in D2O, but there are some exceptions in cases where a hydrate is the solid phase. The freezing point depression for the freezing of ice is often somewhat smaller in D2O.

2.
Assessment of the thermodynamic properties and phase diagram of the Bi-Pd system Czech Academy of Sciences Publication Activity Database Vřešťál, Jan; Pinkas, J.; Watson, A.; Scott, A.; Houserová, Jana; Kroupa, Aleš 2006-01-01 Roč. 30, č. 1 (2006), s. 14-17 ISSN 0364-5916 R&D Projects: GA MŠk OC 531.001; GA MŠk OC 531.002 Institutional research plan: CEZ:AV0Z20410507 Keywords : phase diagram * ab initio calculations * calorimetry Subject RIV: BJ - Thermodynamics Impact factor: 1.432, year: 2006 3. High-pressure high-temperature phase diagram of organic crystal paracetamol Science.gov (United States) Smith, Spencer J.; Montgomery, Jeffrey M.; Vohra, Yogesh K. 2016-01-01 High-pressure high-temperature (HPHT) Raman spectroscopy studies have been performed on the organic crystal paracetamol in a diamond anvil cell utilizing boron-doped heating diamond anvil. Isobaric measurements were conducted at pressures up to 8.5 GPa and temperature up to 520 K in five different experiments. Solid state phase transitions from monoclinic Form I  →  orthorhombic Form II were observed at various pressures and temperatures as well as transitions from Form II  →  unknown Form IV. The melting temperature for paracetamol was observed to increase with increasing pressures to 8.5 GPa. This new data is combined with previous ambient temperature high-pressure Raman and x-ray diffraction data to create the first HPHT phase diagram of paracetamol. 4. High-pressure high-temperature phase diagram of organic crystal paracetamol International Nuclear Information System (INIS) Smith, Spencer J; Montgomery, Jeffrey M; Vohra, Yogesh K 2016-01-01 High-pressure high-temperature (HPHT) Raman spectroscopy studies have been performed on the organic crystal paracetamol in a diamond anvil cell utilizing boron-doped heating diamond anvil. Isobaric measurements were conducted at pressures up to 8.5 GPa and temperature up to 520 K in five different experiments. 
Solid state phase transitions from monoclinic Form I → orthorhombic Form II were observed at various pressures and temperatures, as well as transitions from Form II → unknown Form IV. The melting temperature of paracetamol was observed to increase with increasing pressure up to 8.5 GPa. These new data are combined with previous ambient-temperature high-pressure Raman and x-ray diffraction data to create the first HPHT phase diagram of paracetamol. (paper) 5. Phase diagram of dense two-color QCD within lattice simulations Directory of Open Access Journals (Sweden) Braguta V.V. 2017-01-01 Full Text Available We present the results of a low-temperature scan of the phase diagram of dense two-color QCD with Nf = 2 quarks. The study is conducted using lattice simulation with rooted staggered quarks. At small chemical potential we observe the hadronic phase, where the theory is in a confining state, chiral symmetry is broken, the baryon density is zero and there is no diquark condensate. At the critical point μ = mπ/2 we observe the expected second-order transition to Bose-Einstein condensation of scalar diquarks. In this phase the system is still confined, in conjunction with a nonzero baryon density, but chiral symmetry is restored in the chiral limit. We have also found that in the first two phases the system is well described by chiral perturbation theory. For larger values of the chemical potential the system turns into another phase, where the relevant degrees of freedom are fermions residing inside the Fermi sphere, and diquark condensation takes place on the Fermi surface. In this phase the system is still confined, chiral symmetry is restored and the system is very similar to the quarkyonic state predicted by SU(Nc) theory at large Nc. 6.
Global phase equilibrium calculations: Critical lines, critical end points and liquid-liquid-vapour equilibrium in binary mixtures DEFF Research Database (Denmark) Cismondi, Martin; Michelsen, Michael Locht 2007-01-01 A general strategy for global phase equilibrium calculations (GPEC) in binary mixtures is presented in this work along with specific methods for calculation of the different parts involved. A Newton procedure using composition, temperature and volume as independent variables is used for calculation... 7. Water adsorbate phases on ZnO and impact of vapor pressure on the equilibrium shape of nanoparticles Science.gov (United States) Kenmoe, Stephane; Biedermann, P. Ulrich 2018-02-01 ZnO nanoparticles are used as catalysts and have potential applications in gas-sensing and solar energy conversion. A fundamental understanding of the exposed crystal facets, their surface chemistry, and stability as a function of environmental conditions is essential for rational design and improvement of synthesis and properties. We study the stability of water adsorbate phases on the non-polar low-index (10-10) and (11-20) surfaces from low coverage to multilayers using ab initio thermodynamics. We show that phonon contributions and the entropies due to a 2D lattice gas at low coverage and multiple adsorbate configurations at higher coverage have an important impact on the stability range of water adsorbate phases in the (T,p) phase diagram. Based on this insight, we compute and analyze the possible growth mode of water films for pressures ranging from UHV via ambient conditions to high pressures and the impact of water adsorption on the equilibrium shape of nanoparticles in a humid environment.
A 2D variant of the Wulff construction shows that the (10-10) and (11-20) surfaces coexist on 12-faceted prismatic ZnO nanoparticles in dry conditions, while in a humid environment the (10-10) surface is selectively stabilized by water adsorption, resulting in hexagonal prisms. 8. Thermodynamic modeling of the CeO{sub 2}–CoO nano-phase diagram Energy Technology Data Exchange (ETDEWEB) Kim, Sung S., E-mail: sungkim@wow.hongik.ac.kr 2014-03-05 Highlights: • The CeO{sub 2}–CoO nano-phase diagram was modeled thermodynamically. • The surface energies of the solution phases were modeled with Butler's equation. • The present work agreed with the experimental work on the nanoparticle sintering. -- Abstract: A nano-phase diagram of the CeO{sub 2}–CoO system was modeled thermodynamically with experimental data available in the literature. The surface energies of CeO{sub 2} and CoO, unavailable in the literature, were estimated on a thermodynamic basis. Butler's model was used to describe the surface energy and the surface composition of the solution phases, and the nano interaction parameters dependent on the particle radius were then assessed through multiple linear regression. A consistent set of optimized interaction parameters for the present system was derived, describing the Gibbs energy of the liquid, fluorite, and halite solution phases as a function of particle radius. The eutectic temperatures calculated in the present work account well for the experimental data on the unusually low sintering temperature of nanoparticles with a tri-modal particle size distribution. Furthermore, with the aid of the present result, the microstructure evolution in the CGO–CoO system during nanoparticle sintering was described reasonably. It is concluded that the present modeling will be a good guide to the liquid-phase sintering conditions needed for rapid densification of the nanoparticles at lower temperatures. 9.
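Butler's model, referenced in the entry above, equates the surface-tension contribution of each component of a solution phase. The following is a minimal sketch for an ideal binary A-B solution only; the radius-dependent interaction parameters assessed in the paper are omitted, and all numerical inputs in the usage example are hypothetical illustrations, not values from the paper.

```python
from math import log

R = 8.314  # gas constant, J/(mol K)

def butler_surface(sigma_a, sigma_b, area_a, area_b, x_a_bulk, temp):
    """Solve Butler's equation for an ideal binary A-B solution.

    Both components must yield the same surface tension:
        sigma = sigma_A + (R T / A_A) * ln(x_A_surf / x_A_bulk)
              = sigma_B + (R T / A_B) * ln(x_B_surf / x_B_bulk)
    sigma_i: pure-component surface tension, A_i: molar surface area.
    Returns (x_A_surf, sigma), found by bisection on x_A_surf in (0, 1).
    """
    x_b_bulk = 1.0 - x_a_bulk

    def mismatch(x_a_s):
        lhs = sigma_a + R * temp / area_a * log(x_a_s / x_a_bulk)
        rhs = sigma_b + R * temp / area_b * log((1.0 - x_a_s) / x_b_bulk)
        return lhs - rhs

    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):  # bisection: mismatch changes sign on (0, 1)
        mid = 0.5 * (lo + hi)
        if mismatch(lo) * mismatch(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    x_a_s = 0.5 * (lo + hi)
    sigma = sigma_a + R * temp / area_a * log(x_a_s / x_a_bulk)
    return x_a_s, sigma

# Hypothetical inputs (J/m^2, m^2/mol, mole fraction, K):
x_surf, sigma = butler_surface(sigma_a=1.10, sigma_b=1.30,
                               area_a=6.0e4, area_b=6.5e4,
                               x_a_bulk=0.30, temp=1500.0)
```

As expected from the model, the component with the lower pure surface tension is enriched at the surface relative to the bulk.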
Modeling of two-phase flow with thermal and mechanical non-equilibrium International Nuclear Information System (INIS) Houdayer, G.; Pinet, B.; Le Coq, G.; Reocreux, M.; Rousseau, J.C. 1977-01-01 To improve two-phase flow modeling by taking into account thermal and mechanical non-equilibrium, a joint effort on analytical experiments and physical modeling has been undertaken. A model describing thermal non-equilibrium effects is first presented. A correlation for mass transfer has been developed using steam-water critical flow tests. This model has been used to predict blowdown tests satisfactorily. It has been incorporated in the CLYSTERE system code. To take into account mechanical non-equilibrium, a six-equation model is written. To get information on the momentum transfers, special nitrogen-water tests have been undertaken. The first results of these studies are presented 10. Study of phase equilibrium of Pu{sub 2}O{sub 3}-PuO{sub 2} system by the first-principles calculation and CALPHAD approach Energy Technology Data Exchange (ETDEWEB) Minamoto, Satoshi [ITOCHU Techno-Solutions Corporation, Kasumigaseki 3, Chiyoda-ku, Tokyo, Energy and Industrial Systems Department (Japan); Kato, Masato [Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki (Japan); Konashi, Kenji, E-mail: satoshi.minamoto@ctc-g.co.jp, E-mail: masato.kato@jaea.go.jp, E-mail: konashi@imr.tohoku-u.ac.jp [Institute for Materials Research, Tohoku University, Oarai-chou, Ibaraki (Japan)] 2010-03-15 A combination of first-principles calculations, lattice dynamics and CALPHAD (CALculation of PHAse Diagrams) modeling has proven a powerful tool for evaluating the Gibbs free energy and the phase equilibrium between compounds containing large amounts of vacancies. In this work, non-stoichiometric PuO{sub 2-x} (dioxide) and Pu{sub 2}O{sub 3} (sesquioxide) have been studied.
An electronic cohesive energy was evaluated from first-principles calculations to estimate the total energy of the compounds and the vacancy formation energy, and the theory of statistical mechanics was applied to evaluate the enthalpy/entropy change due to oxygen vacancies for the non-stoichiometry of PuO{sub 2} (i.e. PuO{sub 2-x}). Then a vacancy-vacancy interaction energy was determined by fitting to experimental data on the degree of non-stoichiometry of the PuO{sub 2} compounds as a function of oxygen potential at large deviations from stoichiometry. The resulting Gibbs free energy yields a phase boundary between the phases in good agreement with the experimental data. 11. The Eh-pH Diagram and Its Advances Directory of Open Access Journals (Sweden) Hsin-Hsiung Huang 2016-01-01 Full Text Available Since Pourbaix presented Eh versus pH diagrams in his “Atlas of Electrochemical Equilibria in Aqueous Solution”, diagrams have become extremely popular and are now used in almost every scientific area related to aqueous chemistry. Due to advances in personal computers, such diagrams can now show effects not only of Eh and pH, but also of other variables, including ligand(s), temperature and pressure. Examples from various fields are illustrated in this paper. Examples include geochemical formation, corrosion and passivation, precipitation and adsorption for water treatment, and leaching and metal recovery for hydrometallurgy. Two basic methods were developed to construct an Eh-pH diagram concerning the ligand component(s). The first method calculates and draws a line between two adjacent species based on their given activities. The second method performs equilibrium calculations over an array of points (500 × 800 or higher is preferred), each representing one Eh and one pH value for the whole system, then combines the areas of each dominant species for the diagram. These two methods may produce different diagrams.
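The first of the two Eh-pH construction methods described above (a boundary line between two adjacent species at given activities) follows directly from the Nernst equation. A minimal sketch for a generic half-reaction Ox + m H+ + n e- → Red (this is standard electrochemistry, not code from the paper):

```python
from math import log10

def boundary_line(e0, n_electrons, m_protons, ratio_ox_red=1.0, t_celsius=25.0):
    """Return Eh(pH) for the boundary between an oxidized and a reduced
    species linked by:  Ox + m H+ + n e-  ->  Red.

    From the Nernst equation the boundary is a straight line,
        Eh = E0 + (2.303 R T / n F) * log10(a_Ox / a_Red)
                - (2.303 R T / F) * (m / n) * pH,
    with slope -(2.303 R T / F)(m/n) volts per pH unit."""
    R, F = 8.314, 96485.0
    slope = 2.303 * R * (273.15 + t_celsius) / F  # ~0.0592 V at 25 °C
    intercept = e0 + (slope / n_electrons) * log10(ratio_ox_red)

    def eh(ph):
        return intercept - slope * m_protons / n_electrons * ph

    return eh

# Upper water-stability line, O2 + 4H+ + 4e- -> 2H2O (E0 = 1.229 V,
# unit activities): Eh falls by ~0.0592 V per pH unit.
o2_line = boundary_line(1.229, n_electrons=4, m_protons=4)
```

The second (grid) method would instead evaluate the full equilibrium speciation at each (Eh, pH) point and color the dominant species, which is why the two methods can disagree near closely competing species.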
The fundamental theories, illustrated results, comparison and required conditions behind these two methods are presented and discussed in this paper. The Gibbs phase rule equation for an Eh-pH diagram was derived and verified from actual plots. Besides indicating the stability area of water, an Eh-pH diagram normally shows only half of an overall reaction. However, merging two or more related diagrams together reveals more clearly the possibility of the reactions involved. For instance, leaching of Au with cyanide followed by cementing Au with Zn (Merrill-Crowe process) can be illustrated by combining the Au-CN and Zn-CN diagrams. A second example, the galvanic conversion of chalcopyrite, can be explained by merging the S, Fe–S and Cu–Fe–S diagrams. The calculation of an Eh-pH diagram can be extended easily into another dimension, such 12. Dynamic phase diagrams of the Ising metamagnet in an oscillating magnetic field within the effective-field theory Energy Technology Data Exchange (ETDEWEB) Deviren, Bayram [Department of Physics, Nevsehir University, 50300 Nevsehir (Turkey); Institute of Science, Erciyes University, 38039 Kayseri (Turkey); Keskin, Mustafa, E-mail: keskin@erciyes.edu.t [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)] 2010-07-12 Dynamic aspects of a two-sublattice Ising metamagnet on honeycomb, square and hexagonal lattices in the presence of a time-dependent oscillating external magnetic field are studied by using the effective-field theory with correlations. The set of effective-field dynamic equations is derived by employing Glauber transition rates. The phases in the system are obtained by solving these dynamic equations. The thermal behavior of the dynamic staggered magnetization, the hysteresis loop area and the correlation are investigated in order to characterize the nature of the dynamic transitions and to obtain the dynamic phase transition temperatures.
The phase diagrams are constructed in two different planes, and exhibit dynamic tricritical behavior, which strongly depends on the interaction parameters. In order to investigate the spin correlation effect on the dynamic phase diagrams of the system, the results are also given within the framework of the dynamic mean-field approximation. 13. Dynamic phase diagrams of the Ising metamagnet in an oscillating magnetic field within the effective-field theory International Nuclear Information System (INIS) Deviren, Bayram; Keskin, Mustafa 2010-01-01 Dynamic aspects of a two-sublattice Ising metamagnet on honeycomb, square and hexagonal lattices in the presence of a time-dependent oscillating external magnetic field are studied by using the effective-field theory with correlations. The set of effective-field dynamic equations is derived by employing Glauber transition rates. The phases in the system are obtained by solving these dynamic equations. The thermal behavior of the dynamic staggered magnetization, the hysteresis loop area and the correlation are investigated in order to characterize the nature of the dynamic transitions and to obtain the dynamic phase transition temperatures. The phase diagrams are constructed in two different planes, and exhibit dynamic tricritical behavior, which strongly depends on the interaction parameters. In order to investigate the spin correlation effect on the dynamic phase diagrams of the system, the results are also given within the framework of the dynamic mean-field approximation. 14. Thermochemical and phase diagram studies of the Sn-Zn-Ni system International Nuclear Information System (INIS) Gandova, V.D.; Broz, P.; Bursik, J.; Vassilev, G.P. 2011-01-01 Highlights: → Sn-Zn-Ni phase diagram in the vicinity of the Sn-Zn system. → Unidentified compositions (UX1-UX4) are repeatedly observed. → This indicates up to 6 ternary compounds in the system. → A ternary eutectic reaction at around 190 °C is found.
- Abstract: The phase diagram Sn-Zn-Ni was studied by means of DSC and electron microprobe analysis. The samples were positioned in three isopleth sections with nickel contents of 0.04 (section 1), 0.08 (section 2) and 0.12 (section 3) mole fractions. The mole fractions of Sn corresponding to the particular sections were as follows: from 0.230 to 0.768 (section 1), from 0.230 to 0.736 (section 2); from 0.220 to 0.704 (section 3). Mixtures of pure metals were sealed under vacuum in quartz ampoules and annealed at 350 °C. The solid phases identified in the samples were: γ (i.e. Ni5Zn21), (Zn) and the ternary phase T1. Unidentified compositions were observed. One of them, UX1 (X_Ni = 0.071 ± 0.005, X_Sn = 0.439 ± 0.009 and X_Zn = 0.490 ± 0.010), might indicate another (stable or metastable) ternary compound (T3) in the system Sn-Zn-Ni. Considering the data obtained by combining DSC with microstructure observations, the studied alloys could be divided into two groups (A and B). A ternary eutectic reaction at around 190 °C is common for the A-group alloys. The phases taking part in this reaction are, probably, Ni5Zn21, (Zn), (βSn) and liquid. B-group samples do not show a ternary eutectic reaction and are also characterized by the presence of the ternary compound T1 (absent in the A-group alloys). Four other groups of thermal arrests were registered (TA1-TA4). It was found that TA2 peaks were characteristic for most of the A-group samples, while TA1 peaks were registered with all B-group samples. 15. On the solid–liquid phase diagrams of binary mixtures of even saturated fatty alcohols: Systems exhibiting peritectic reaction Energy Technology Data Exchange (ETDEWEB) Carareto, Natália D.D. [EXTRAE, Department of Food Engineering, Food Engineering Faculty, University of Campinas, UNICAMP, CEP 13083-862 Campinas, SP (Brazil); Santos, Adenílson O.
dos [Social Sciences, Health and Technology Center, University of Maranhão, UFMA, CEP 65900-410 Imperatriz, MA (Brazil); Rolemberg, Marlus P. [Institute of Science and Technology, University of Alfenas, UNIFAL, Rodovia José Aurélio Vilela, CEP 37715400 Poços de Caldas, MG (Brazil); Cardoso, Lisandro P. [Institute of Physics Gleb Wataghin, University of Campinas, UNICAMP, C.P. 6165, CEP 13083-970 Campinas, SP (Brazil); Costa, Mariana C. [School of Applied Science, University of Campinas, UNICAMP, CEP 13484-350 Limeira, SP (Brazil); Meirelles, Antonio J.A., E-mail: tomze@fea.unicamp.br [EXTRAE, Department of Food Engineering, Food Engineering Faculty, University of Campinas, UNICAMP, CEP 13083-862 Campinas, SP (Brazil)] 2014-08-10 Highlights: • SLE of binary mixtures of saturated fatty alcohols was studied. • Experimental data were obtained using DSC and stepscan DSC. • Microscopy and X-ray diffraction used as complementary techniques. • Systems presented eutectic, peritectic and metatectic points. - Abstract: The solid–liquid phase diagrams of the following binary mixtures of even saturated fatty alcohols are reported in the literature for the first time: 1-octanol (C8OH) + 1-decanol (C10OH), 1-decanol + 1-dodecanol (C12OH), 1-dodecanol + 1-hexadecanol (C16OH) and 1-tetradecanol (C14OH) + 1-octadecanol (C18OH). The phase diagrams were obtained by differential scanning calorimetry (DSC) using a linear heating rate of 1 K min{sup −1} and further investigated by using a stepscan DSC method. X-ray diffraction (XRD) and polarized light microscopy were also used to complement the characterization of the phase diagrams, which have shown a complex global behavior, presenting not only peritectic and eutectic reactions, but also the metatectic reaction and partial immiscibility in the solid state. 16.
On the solid–liquid phase diagrams of binary mixtures of even saturated fatty alcohols: Systems exhibiting peritectic reaction International Nuclear Information System (INIS) Carareto, Natália D.D.; Santos, Adenílson O. dos; Rolemberg, Marlus P.; Cardoso, Lisandro P.; Costa, Mariana C.; Meirelles, Antonio J.A. 2014-01-01 Highlights: • SLE of binary mixtures of saturated fatty alcohols was studied. • Experimental data were obtained using DSC and stepscan DSC. • Microscopy and X-ray diffraction used as complementary techniques. • Systems presented eutectic, peritectic and metatectic points. - Abstract: The solid–liquid phase diagrams of the following binary mixtures of even saturated fatty alcohols are reported in the literature for the first time: 1-octanol (C8OH) + 1-decanol (C10OH), 1-decanol + 1-dodecanol (C12OH), 1-dodecanol + 1-hexadecanol (C16OH) and 1-tetradecanol (C14OH) + 1-octadecanol (C18OH). The phase diagrams were obtained by differential scanning calorimetry (DSC) using a linear heating rate of 1 K min −1 and further investigated by using a stepscan DSC method. X-ray diffraction (XRD) and polarized light microscopy were also used to complement the characterization of the phase diagrams which have shown a complex global behavior, presenting not only peritectic and eutectic reactions, but also the metatectic reaction and partial immiscibility on solid state 17. Phase diagram of a polarized Fermi gas across a Feshbach resonance in a potential trap International Nuclear Information System (INIS) Yi, W.; Duan, L.-M. 2006-01-01 We map out the detailed phase diagram of a trapped ultracold Fermi gas with population imbalance across a wide Feshbach resonance. We show that under the local density approximation, the properties of the atoms in any (anisotropic) harmonic traps are universally characterized by three dimensionless parameters: the normalized temperature, the dimensionless interaction strength, and the population imbalance. 
We then discuss the possible quantum phases in the trap, and quantitatively characterize their phase boundaries in various typical parameter regions. 18. Analytical Determining Of The Steinmetz Equivalent Diagram Elements Of Single-Phase Transformer Directory of Open Access Journals (Sweden) T. Aly Saandy 2015-08-01 Full Text Available This article presents an analytical calculation methodology for the Steinmetz equivalent diagram elements, applied to the prediction of eddy current loss in a single-phase transformer. Based on electrical circuit theory, the active and reactive powers consumed by the core are expressed analytically as functions of the electromagnetic parameters, such as resistivity and permeability, and of the geometrical dimensions of the core. The proposed modeling approach is established with the parallel-series duality. The equivalent diagram elements empirically determined by Steinmetz are analytically expressed using the expressions for the no-load transformer consumption. To verify the relevance of the model, validations both by simulations at different powers and by measurements were carried out to determine the resistance and reactance of the core. The obtained results are in good agreement with the theoretical approach and the practical results. 19. Reduced temperature phase diagrams of the silver-rare earths binary systems International Nuclear Information System (INIS) Ferro, R.; Delfino, S.; Capelli, R.; Borsese, A. 1975-01-01 Phase equilibria of the silver-rare earth binary systems have been reported in ''reduced temperature'' diagrams (the ''reduced temperature'' being defined as the ratio between a characteristic temperature of the Ag{sub x}R.E. phase and the melting temperature of the corresponding R.E. metal, both in K). The smooth trends of the various characteristic reduced temperatures, when plotted against the R.E. atomic number, have been demonstrated.
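The ''reduced temperature'' defined in the entry above is a simple ratio of absolute temperatures. A one-line sketch; the Ag-Gd numbers in the example are a hypothetical illustration, with ~1586 K taken as the approximate melting point of Gd:

```python
def reduced_temperature(t_phase_k, t_melt_re_k):
    """Reduced temperature as defined above: the ratio of a
    characteristic temperature of the Ag_x R.E. phase to the melting
    temperature of the corresponding rare-earth metal, both in kelvin."""
    return t_phase_k / t_melt_re_k

# Hypothetical example: a phase event at 1100 K in an Ag-Gd alloy,
# with Gd melting at about 1586 K:
t_red = reduced_temperature(1100.0, 1586.0)  # ~0.694
```

Because the melting points of the rare-earth metals rise smoothly with atomic number, plotting these ratios (rather than absolute temperatures) is what makes the trends across the lanthanide series comparable.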
On passing from the light to the heavy rare earths, a correlation has been found between the crossing of these curves and other phenomena, such as the change of the Ag{sub 5}R.E. phases from incongruently to congruently melting compounds. The trends of the reduced-temperature curves have been briefly discussed in terms of the treatment suggested by Gschneidner, together with the volumetric data known for the different Ag{sub x}R.E. phases. In addition, the characteristic data of the 1:1 AgR.E. compounds have been compared with those of the analogous AuR.E. phases. (Auth.) 20. Two-phase quasi-equilibrium in β-type Ti-based bulk metallic glass composites Science.gov (United States) Zhang, L.; Pauly, S.; Tang, M. Q.; Eckert, J.; Zhang, H. F. 2016-01-01 The microstructural evolution of cast Ti/Zr-based bulk metallic glass composites (BMGCs) containing β-Ti still remains ambiguous. This is why, to date, the strategies and alloys suitable for producing such BMGCs with precisely controllable volume fractions and crystallite sizes are still rather limited. In this work, a Ti-based BMGC containing β-Ti was developed in the Ti-Zr-Cu-Co-Be system. The glassy matrix of this BMGC possesses an exceptional glass-forming ability and, as a consequence, the volume fractions as well as the composition of the β-Ti dendrites remain constant over a wide range of cooling rates. This finding can be explained in terms of a two-phase quasi-equilibrium between the supercooled liquid and β-Ti, which the system attains on cooling. The two-phase quasi-equilibrium allows predicting the crystalline and glassy volume fractions by means of the lever rule, and we succeeded in reproducing these values by slight variations in the alloy composition at a fixed cooling rate. The two-phase quasi-equilibrium could be of critical importance for understanding and designing the microstructures of BMGCs containing the β-phase. Its implications for the nucleation and growth of the crystalline phase are elaborated.
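The lever-rule prediction mentioned in the entry above is the standard mass-balance construction for a two-phase field; a minimal sketch, where all compositions in the usage example are hypothetical numbers, not values from the paper:

```python
def lever_rule(c_alloy, c_glass, c_crystal):
    """Lever rule for an overall composition c_alloy in two-phase
    (quasi-)equilibrium between a glassy matrix (composition c_glass)
    and beta-Ti dendrites (composition c_crystal):
        f_crystal = (c_alloy - c_glass) / (c_crystal - c_glass)
    Returns (f_crystal, f_glass); fractions sum to 1."""
    f_crystal = (c_alloy - c_glass) / (c_crystal - c_glass)
    return f_crystal, 1.0 - f_crystal

# Hypothetical compositions (at.% of one component): an alloy at 40
# between phases at 35 (glass) and 55 (crystal) is 25% crystalline.
f_beta, f_glassy = lever_rule(c_alloy=40.0, c_glass=35.0, c_crystal=55.0)
```

This is also why a slight shift of the overall alloy composition shifts the phase fractions linearly, as exploited in the abstract above to tune the crystalline volume fraction at a fixed cooling rate.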
PMID:26754315 1. Magnetic transition phase diagram of cobalt clusters electrodeposited on HOPG: Experimental and micromagnetic modelling study Energy Technology Data Exchange (ETDEWEB) Rivera, M., E-mail: mrivera@fisica.unam.m [Imperial College London, Department of Chemistry, South Kensington Campus, London SW7 2AZ (United Kingdom); Rios-Reyes, C.H. [Universidad Autonoma Metropolitana-Azcapotzalco, Departamento de Materiales, Av. San Pablo 180, Col. Reynosa Tamaulipas, C.P. 02200, Mexico D.F. (Mexico); Universidad Autonoma del Estado de Hidalgo, Centro de Investigaciones Quimicas, Mineral de la Reforma, Hidalgo, C.P. 42181 (Mexico); Mendoza-Huizar, L.H. [Universidad Autonoma del Estado de Hidalgo, Centro de Investigaciones Quimicas, Mineral de la Reforma, Hidalgo, C.P. 42181 (Mexico) 2011-04-15 The magnetic transition from mono- to multidomain magnetic states of cobalt clusters electrodeposited on highly oriented pyrolytic graphite electrodes was studied experimentally using Magnetic Force Microscopy. From these images, it was found that the critical size of the magnetic transition is dominated by the height rather than the diameter of the aggregate. This experimental behavior was found to be consistent with a theoretical single-domain ferromagnetic model that states that a critical height limits the monodomain state. By analyzing the clusters magnetic states as a function of their dimensions, magnetic exchange constant and anisotropy value were obtained and used to calculate other magnetic properties such as the exchange length, magnetic wall thickness, etc. Finally, a micromagnetic simulation study correctly predicted the experimental magnetic transition phase diagram. - Research highlights: > Electrodeposition of cobalt clusters. > Mono to multidomain magnetic transition. > Magnetic phase diagram. 2. 
Magnetic transition phase diagram of cobalt clusters electrodeposited on HOPG: Experimental and micromagnetic modelling study International Nuclear Information System (INIS) Rivera, M.; Rios-Reyes, C.H.; Mendoza-Huizar, L.H. 2011-01-01 The magnetic transition from mono- to multidomain magnetic states of cobalt clusters electrodeposited on highly oriented pyrolytic graphite electrodes was studied experimentally using Magnetic Force Microscopy. From these images, it was found that the critical size of the magnetic transition is dominated by the height rather than the diameter of the aggregate. This experimental behavior was found to be consistent with a theoretical single-domain ferromagnetic model that states that a critical height limits the monodomain state. By analyzing the clusters magnetic states as a function of their dimensions, magnetic exchange constant and anisotropy value were obtained and used to calculate other magnetic properties such as the exchange length, magnetic wall thickness, etc. Finally, a micromagnetic simulation study correctly predicted the experimental magnetic transition phase diagram. - Research highlights: → Electrodeposition of cobalt clusters. →Mono to multidomain magnetic transition. → Magnetic phase diagram. 3. T-p phase diagrams and the barocaloric effect in materials with successive phase transitions Science.gov (United States) Gorev, M. V.; Bogdanov, E. V.; Flerov, I. N. 2017-09-01 An analysis of the extensive and intensive barocaloric effect (BCE) at successive structural phase transitions in some complex fluorides and oxyfluorides was performed. The high sensitivity of these compounds to a change in the chemical pressure allows one to vary the succession and parameters of the transformations (temperature, entropy, baric coefficient) over a wide range and obtain optimal values of the BCE. 
A comparison of different types of schematic T-p phase diagrams with the complicated T(p) dependences observed experimentally shows that in some ranges of temperature and pressure the BCE in compounds undergoing successive transformations can be increased due to a summation of the caloric effects associated with distinct phase transitions. The maximum values of the extensive and intensive BCE in complex fluorides and oxyfluorides can be realized at rather low pressure (0.1-0.3 GPa). In a narrow temperature range around the triple points, conversion from conventional to inverse BCE is observed, followed by a gigantic change of both |ΔS_BCE| and |ΔT_AD|. 4. Overview of the phase diagram of ionic magnetic colloidal dispersions International Nuclear Information System (INIS) Cousin, F.; Dubois, E.; Cabuil, V.; Boue, F.; Perzynski, R. 2001-01-01 We study ionic magnetic colloidal dispersions, which are constituted of γ-Fe{sub 2}O{sub 3} nanoparticles dispersed in water and stabilized by electrostatic interparticle repulsion. The phase diagram ΠV versus Φ (Π: osmotic pressure, V: particle volume, Φ: particle volume fraction) is explored, especially in the range of high Π and high Φ. The osmotic pressure Π of the colloidal dispersion is known either from a measurement or because it is imposed during sample preparation by osmotic compression. The structure of the colloidal dispersion is determined from Small Angle Neutron Scattering. Two regimes can be distinguished. At high pressure, fluid and solid phases can exist. Their structure is governed by strong electrostatic repulsion, the range of which is here evaluated. At low pressure, gas, liquid and glassy solids can exist. Their structure results from a sticky-hard-sphere potential. (author) 5. Phase diagram of N = 2 superconformal field theories and bifurcation sets in catastrophe theory International Nuclear Information System (INIS) Kei Ito.
1989-08-01 Phase diagrams of N=2 superconformal field theories are mapped out. It is shown that they coincide with bifurcation sets in catastrophe theory. The results are applied to the determination of renormalization group flows triggered by a combination of two or more relevant operators. (author). 13 refs, 2 figs 6. A partial phase diagram of Pt-rich Pt-Mn alloys CERN Document Server Sembiring, T; Ohshima, K I; Ota, K; Shishido, T 2002 We have performed X-ray and electron diffraction studies to reconstruct a partial phase diagram of Pt-rich Pt-Mn alloys in the composition range of 10 to 35 at.% Mn. Electrical resistivity measurement was also used for determining the order-disorder transition temperature in a Pt-14.2 at.% Mn alloy. The phase boundary between the Cu{sub 3}Au-type and ABC{sub 6}-type ordered structures is established, the latter having been found recently by the present authors [J.Phys. Soc. Jpn. 71 (2002) 681]. In the ABC{sub 6}-type ordered phase, superlattice reflections both at 1/2 1/2 1/2 and its equivalent positions (L-point) and at 100, 110 and their equivalent positions (X-point) appear in the composition range from 12.5 to 14.4 at.% Mn below 682 °C. In the Cu{sub 3}Au-type ordered phase, diffuse maxima at the L-point appear in the composition range from 15.9 to 19.7 at.% Mn in addition to the superlattice reflections at the X-point. The Cu{sub 3}Au-type ordered structure is found to be stable in the composition range from 19.7 to 3... 7. The forms of azeotropic rule for multidimensional diagrams of equilibrium distillation Science.gov (United States) Pisarenko, Yu. A.; Usol'tseva, O. O.; Cardona, C. A.; Gerard, O. T. 2013-09-01 Linear independent forms of the azeotropy rule applicable to diagrams of distillation (reaction distillation) and their fragments are established and presented as simple polyhedra of arbitrary dimensions. 8.
Solid phases in the systems glycine–ZnX2–H2O (X = Cl−, Br−, I−) at 25 °C Czech Academy of Sciences Publication Activity Database Tepavitcharova, S.; Havlíček, D.; Matulková, I.; Rabadjieva, D.; Balarew, C.; Gergulova, R.; Němec, I.; Císařová, I.; Plocek, Jiří 2018-01-01 Roč. 149, č. 2 (2018), s. 299-311 ISSN 0026-9247 Institutional support: RVO:61388980 Keywords : Equilibrium crystallization * IR spectroscopy * Non-equilibrium crystallization * Phase diagrams * Raman spectroscopy Subject RIV: CA - Inorganic Chemistry OBOR OECD: Inorganic and nuclear chemistry Impact factor: 1.282, year: 2016 9. Diagram of state of stiff amphiphilic macromolecules NARCIS (Netherlands) Markov, Vladimir A.; Vasilevskaya, Valentina V.; Khalatur, Pavel G.; ten Brinke, Gerrit; Khokhlov, Alexei R. 2007-01-01 We studied coil-globule transitions in stiff-chain amphiphilic macromolecules via computer modeling and constructed phase diagrams for such molecules in terms of solvent quality and persistence length. We showed that the shape of the phase diagram essentially depends on the macromolecule degree of 10. Magnetic phase diagram of Ba3CoSb2O9 as determined by ultrasound velocity measurements Science.gov (United States) Quirion, G.; Lapointe-Major, M.; Poirier, M.; Quilliam, J. A.; Dun, Z. L.; Zhou, H. D. 2015-07-01 Using high-resolution sound velocity measurements we have obtained a very precise magnetic phase diagram of Ba3CoSb2O9 , a material that is considered to be an archetype of the spin-1/2 triangular-lattice antiferromagnet. Results obtained for the field parallel to the basal plane (up to 18 T) show three phase transitions, consistent with predictions based on simple two-dimensional isotropic Heisenberg models and previous experimental investigations. 
The phase diagram obtained for the field perpendicular to the basal plane clearly reveals the easy-plane character of this compound and, in particular, our measurements show a single first-order phase transition at H_c1 = 12.0 T, which can be attributed to a spin flop between an umbrella-type configuration and a coplanar V-type order where spins lie in a plane perpendicular to the ab plane. At low temperatures, softening of the lattice within some of the ordered phases is also observed and may be a result of residual spin fluctuations.

11. Ab initio calculation of the bcc Fe-Al phase diagram including magnetic interactions
International Nuclear Information System (INIS). Gonzales-Ormeno, Pablo Guillermo; Petrilli, Helena Maria; Schoen, Claudio Geraldo. 2006-01-01
The metastable phase diagram of the body-centered-cubic-based ordering equilibria in the Fe-Al system has been calculated by the cluster expansion method, through the combination of the full-potential linear augmented plane wave and cluster variation methods. The results are discussed with reference to the effect of including the spin polarization of Fe in the thermodynamic model.

12. Investigation on U-O-Na, Pu-O-Na and U,Pu-O-Na phase diagrams
International Nuclear Information System (INIS). Pillon, S. 1989-03-01
The thermochemical interaction between the nuclear fuel (uranium and plutonium mixed oxides) and the sodium has been investigated, in particular the three phase diagrams U-O-Na, Pu-O-Na and U,Pu-O-Na. High-temperature neutron diffraction, microcalorimetry and powder X-ray diffraction were used for the characterization of the compounds synthesized. This study allowed to complete the knowledge about each of these diagrams and to measure some physical and thermal properties of the compounds. The limits on the modelization of the fuel-sodium interaction are discussed from the results of the UO2-Na reaction. [fr]

13. Gauge/gravity duality.
From quantum phase transitions towards out-of-equilibrium physics
International Nuclear Information System (INIS). Ngo Thanh, Hai. 2011-01-01
In this dissertation we use gauge/gravity duality to investigate various phenomena of strongly coupled field theories. Of special interest are quantum phase transitions, quantum critical points, transport phenomena of charges, and the thermalization process of a strongly coupled medium. The systems studied in this thesis might be used as models for describing condensed matter physics in a superfluid phase near the quantum critical point, and the physics of the quark-gluon plasma (QGP), a deconfined phase of QCD which has recently been created at the Relativistic Heavy Ion Collider (RHIC). Moreover, we follow the line of considering different gravity setups whose dual field descriptions show interesting phenomena of systems in thermal equilibrium, slightly out of equilibrium, and far from equilibrium. We first focus on systems in equilibrium and construct holographic superfluids at finite baryon and isospin charge densities. For that we use two different approaches: the bottom-up approach with a U(2) Einstein-Yang-Mills theory with back-reaction, and the top-down approach with a D3/D7 brane setup with two coincident D7-brane probes. In both cases we observe phase transitions from a normal to a superfluid phase at finite and also at zero temperature. In our setup, the gravity duals of superfluids are Anti-de Sitter black holes which develop vector hair. Studying the order of the phase transitions at zero temperature, in the D3/D7 brane setup we always find a second-order phase transition, while in the Einstein-Yang-Mills theory, depending on the strength of the back-reaction, we obtain a continuous or first-order transition. We then move to systems which are slightly out of equilibrium.
Using the D3/D7 brane setup with N_c coincident D3-branes and N_f coincident D7-brane probes, we compute transport coefficients associated with massive N=2 supersymmetric hypermultiplet fields propagating through an N=4 SU(N_c) super Yang-Mills plasma in the limit N_f << N_c. Introducing a baryon

14. Gauge/gravity duality. From quantum phase transitions towards out-of-equilibrium physics
Energy Technology Data Exchange (ETDEWEB). Ngo Thanh, Hai. 2011-05-02
(Abstract identical to item 13.)

15. Transport and Phase Equilibria Properties for Steam Flooding of Heavy Oils
Energy Technology Data Exchange (ETDEWEB). Gabitto, Jorge; Barrufet, Maria. 2002-11-20
The objectives of this research included experimental determination and rigorous modeling and computation of phase equilibrium diagrams, volumetric, and transport properties of hydrocarbon/CO2/water mixtures at pressures and temperatures typical of steam injection processes for thermal recovery of heavy oils.

16. Modelling of phase diagrams and thermodynamic properties using Calphad method - Development of thermodynamic databases
Czech Academy of Sciences Publication Activity Database. Kroupa, Aleš. 2013-01-01
Roč. 66, JAN (2013), s. 3-13. ISSN 0927-0256. R&D Projects: GA MŠk(CZ) OC08053. Institutional support: RVO:68081723. Keywords: Calphad method; phase diagram modelling; thermodynamic database development. Subject RIV: BJ - Thermodynamics. Impact factor: 1.879, year: 2013

17.
A development of multi-species mass transport model considering thermodynamic phase equilibrium
DEFF Research Database (Denmark). Hosokawa, Yoshifumi; Yamada, Kazuo; Johannesson, Björn. 2008-01-01
In this paper, a multi-species mass transport model, which can predict time-dependent variation of pore solution and solid-phase composition due to mass transport into hardened cement paste, has been developed. Since most of the multi-species models established previously, based on the Poisson-Nernst-Planck theory, did not involve the modeling of chemical processes, it has been coupled to a thermodynamic equilibrium model in this study. By this coupling, the multi-species model could simulate many different behaviours in hardened cement paste, such as: (i) variation in solid-phase composition when using different types of cement, (ii) physicochemical evaluation of steel corrosion initiation behaviour by calculating the molar ratio of chloride ion to hydroxide ion [Cl]/[OH] in pore solution, and (iii) complicated changes of solid-phase composition caused...

18. Thermodynamic Calculations of Ternary Polyalcohol and Amine Phase Diagrams for Thermal Energy Storage Materials
Science.gov (United States). Shi, Renhai
Organic polyalcohol and amine globular molecular crystal materials such as pentaglycerine (PG, (CH3)C(CH2OH)3), tris(hydroxymethyl)aminomethane (TRIS, (NH2)C(CH2OH)3), 2-amino-2-methyl-1,3-propanediol (AMPL, (NH2)(CH3)C(CH2OH)2), and neopentylglycol (NPG, (CH3)2C(CH2OH)2) can be considered potential phase change materials (PCMs) for thermal energy storage (TES) applications such as waste heat recovery, solar energy utilization, energy saving in buildings, and thermal management of electronic devices, in which latent heat and sensible heat can be reversibly stored or released through solid-state phase transitions over a range of temperatures.
In order to understand the polymorphism of the phase transitions of these organic materials and provide more choices for materials design for TES, binary systems have been studied to lower the temperature of the solid-state phase transition for specific applications. To the best of our knowledge, the study of ternary systems of these organic materials is limited. Based on this motivation, four ternary systems, PG-TRIS-AMPL, PG-TRIS-NPG, PG-AMPL-NPG, and TRIS-AMPL-NPG, are proposed in this dissertation. Firstly, thermodynamic assessment with the CALPHAD method is used to construct Gibbs energy functions in a thermodynamic database for these four materials, based on available experimental results from X-ray diffraction (XRD) and differential scanning calorimetry (DSC). The phase stability and thermodynamic characteristics of these four materials calculated from the present thermodynamic database with the CALPHAD method match well the present experimental results from XRD and DSC. Secondly, the six related binary phase diagrams PG-TRIS, PG-AMPL, PG-NPG, TRIS-AMPL, TRIS-NPG, and AMPL-NPG are optimized with the CALPHAD method in Thermo-Calc software based on available experimental results, in which the substitutional model is used and the excess Gibbs energy is expressed with the Redlich-Kister formalism. The

19. Using CCT Diagrams to Optimize the Composition of an As-Rolled Dual-Phase Steel
Science.gov (United States). Coldren, A. Phillip; Eldis, George T. 1980-03-01
A continuous-cooling transformation (CCT) diagram study was conducted for the purpose of optimizing the composition of a Mn-Si-Cr-Mo as-rolled dual-phase (ARDP) steel. The individual effects of chromium, molybdenum, and silicon on the allowable cooling rates were determined. On the basis of the CCT diagram study and other available information, an optimum composition was selected.
Data from recent mill trials at three steel companies, involving steels with compositions in or near the newly recommended range, are presented and compared with earlier mill trial data. The comparison shows that the optimized composition is highly effective in making the steel's properties more uniform and reproducible in the as-rolled condition.

20. Phase diagram, correlation gap, and critical properties of the Coulomb glass
Science.gov (United States). Palassini, Matteo; Goethe, Martin. 2009-03-01
We investigate the lattice Coulomb glass model in three dimensions via extensive Monte Carlo simulations. (1) No evidence for an equilibrium glass phase is found down to very low temperatures, contrary to mean-field predictions, although the correlation length increases rapidly near T = 0. (2) The single-particle density of states near the Coulomb gap satisfies the scaling law g(e,T) = T^λ f(e/T) with λ ≈ 2.2. (3) A charge-ordered phase exists at low disorder. The phase transition from the fluid to the charge-ordered phase is consistent with the Random Field Ising universality class, which shows that the interaction is effectively screened at moderate temperature. Results from nonequilibrium simulations will also be briefly discussed. Reference: M. Goethe and M. Palassini, arXiv:0810.1047
http://en.wikipedia.org/wiki/User:Snood1205
# User:Snood1205

Name: This user's name is Eli.

- 8: This Wikipedian joined Wikipedia 8 years, 6 months, 29 days ago as of September 17, 2014.
- 46%: This user has been a Wikipedian for 45.7% of their life.
- CWRU: This user attends or attended Case Western Reserve University.
- Thou/Ye: This user wants to resurrect the T–V distinction in English.
- y'all: This user thinks y'all serves a useful purpose as a second-person plural pronoun, and would like to see y'all use it more often.
- Ain't: This user believes that ain't is a proper word to use in place of a contraction of a verb and a pronoun. Ain't that right?
- This editor is a Burba and is entitled to display this First Book of Wikipedia.
- This editor is a Novice Editor and is entitled to display this Service Badge.
- Subj: This user prefers that the subjunctive mood be used. Were this user you, he would use it.
- to¦go: This user chooses to sometimes use split infinitives.
- whom: This user insists upon using whom wherever it is called for, and fixes the errors of whomever he sees.
- less & fewer: This user understands the difference between less & fewer.
- Latin plurals ("Data is are..."): This user uses "data", "media", "memoranda", "criteria", and "agenda" as the plurals of "datum", "medium", "memorandum", "criterion", and "agendum".
- This user is of Ashkenazi Jewish ancestry.
- This user is male, and as such, he would prefer to be addressed and/or referred to with masculine pronouns.
- 13+: This user is a teenager.
- This user is an undergraduate student double majoring in Mathematics and Electrical Engineering.
- ;  This user's favorite punctuation is semicolons; he or she likes using them a lot.
- tuff, doe, chru, tought: This user thinks English spelling reform is dumb, since we'll have to do it again anyway in a few hundred years, and besides, no writing system is telling me how to talk.
- Hebr-N: This user has a native-like understanding of the Cursive form of the Hebrew alphabet.
- math-N: This user can contribute with fluent mathematical skills.
- $\int\limits_a^b$: This user knows the difference between an integral and an antiderivative; and so should you!
- Erdős 3: This user has an Erdős number of 3.
- ft & m: This American user is equally at ease with both metric units and U.S. customary units.
- $0.\bar{9} = 1$: This user knows that 0.9... (repeating) is exactly 1 and can prove it, but wishes that other people could understand it the way he does.
- $\sum_{n=1}^{\infty}\frac{n^x}{x!}$: This user understands capital sigma and pi notation; and you should too!
- This user is a Conservative Jew.
- MATH: This user's favorite subject is Mathematics.
- ACT: This user obtained a 34 on the ACT.
- PhD: This user aspires to be a doctoral student.
- This user is interested in Africa.
- This user is from Pennsylvania.
- PHI: This user is a fan of the Philadelphia Flyers.
- NYY: This user is a fan of the New York Yankees.
- NYG: This user is a fan of the New York Giants.

Hello, I am Eli from the United States of America. I began editing articles in 2006.

Wikipedia:Babel
- en: This user is a native speaker of English.
- fr-4: This user speaks French at a level comparable to a native speaker. (Cet utilisateur parle français à un niveau comparable à la langue maternelle.)
- he-1: This user is able to contribute with a basic level of Hebrew. (משתמש זה מסוגל לתרום ברמה בסיסית של עברית.)

## Vandalism information
Wikipedia vandalism information: moderate to high level of vandalism. 4.3CVS / 5.5RPM according to DefconBot, 01:00, 17 September 2014 (UTC)

## Places I've Been
States visited; home; current residence. Countries visited; home.
http://scp-wiki.wikidot.com/scp-1968
SCP-1968
rating: +117

Item #: SCP-1968

Object Class: Keter

Special Containment Procedures: SCP-1968 is to be secured in a bunker 300 m underground, accessible only by a single elevator requiring positive action at both the top and bottom of the shaft to operate. Armed guards are to be present at both ends. In case of incursion from within or without, the elevator shaft is to have an explosive self-destruct activated, rendering it impassable. In the event of an incursion, guards must be considered expendable.

Description: SCP-1968 appears in its inactive state to be a bronze torus of unknown composition. It has a major diameter of 320 cm and a minor diameter of 90 cm. It is marked with raised features or glyphs, the presumption being that they act as control surfaces. It is difficult to photograph or visually inspect the artifact, as it appears to bend light. Mild, fluctuating gravitational effects have also been observed. It has proven impossible to take a sample of the artifact; spectrographic attempts have proven inconclusive. Although not particularly heavy (weighing ~14 kg), inertial and angular momentum studies suggest that neutronium [1] (in vanishingly small quantities) may be present in the body of the mechanism.

SCP-1968 demonstrates its anomalous properties when it is handled by a human being. When moderate force is applied to it, it will begin to deform in unpredictable ways, its material composition will appear to change, and it will become animated, surrounding the subject in convolutions and undulating increasingly faster. Its primary effect will manifest itself when an unpredictable threshold is met, after which the artifact will return to its original state. At this point, the subject will have had their memories altered. They will no longer agree with the historical record, often profoundly. Their self-reported personal history will be at odds with Foundation personnel records. As a consequence, they will often assume a posture of agitation and paranoia.
The more pronounced the deformation of the artifact, the more divergent their memories will be. It is theorized that the glyphs, via means as of yet unknown, control the degree of deformation and its resultant effects.

Recovery Log: SCP-1968 was recovered in late 2001 from a core sample extracted ██ km deep during a petrochemical survey near Zackenburg, Greenland. Based on the depth from which it was recovered, along with corroborating paleoatmospheric readings, the artifact is estimated to be $31 \pm 2.3$ million years old. Foundation personnel intercepted the radio transmission of its discovery and, owing to its unusual nature and age, moved to secure the artifact. Class B amnestics were administered to the personnel in Greenland, along with those individuals at the governing authority in Denmark who had been made aware of its discovery. Once on site, it was discovered that one of the geological engineers had been placed under a 72-hour psychiatric hold after violently assaulting a colleague and behaving in a manner consistent with the Foundation test subject (see below). It is presumed that they had handled the artifact.

Classified: The unauthorized viewing of the following material is prohibited without the consent of a majority of the O5-level administrators. Failure to adhere to this directive will result in termination, as well as the termination of any other personnel made aware of this material. Note: this directive is rescinded in the event of an imminent CK-class event.

page revision: 112, last edited: 26 Nov 2013 05:58
http://umj.imath.kiev.ua/article/?lang=en&article=5621
Trigonometric widths of classes of periodic functions of many variables
Derev’yanko N. V.

Abstract. We obtain exact-order estimates for the trigonometric widths of the classes $B^{\Omega}_{p\theta}$ of periodic functions of many variables in the space $L_q$ for some relations between the parameters $p$ and $q$.

English version (Springer): Ukrainian Mathematical Journal 64 (2012), no. 8, pp. 1185-1198.

Citation example: Derev’yanko N. V. Trigonometric widths of classes of periodic functions of many variables // Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1041-1052.
https://socratic.org/questions/597cb33511ef6b3a58b187e4
Chemistry

# Question #187e4

Jul 29, 2017

The unknown nuclide is nitrogen-13.

#### Explanation:

The thing to keep in mind about nuclear equations is that mass and charge must be conserved. Before doing anything else, grab a Periodic Table and look up the atomic number of boron, $\text{B}$, and the atomic number of helium, $\text{He}$.

Now, you know that

$${}_{\phantom{1}5}^{10}\text{B} + {}_{2}^{4}\text{He} \rightarrow {}_{Z}^{A}\text{?} + {}_{0}^{1}\text{n}$$

In order to find the identity of the unknown nuclide, you must take into account that

$$10 + 4 = A + 1 \quad \to \quad \text{conservation of mass}$$
$$\phantom{1}5 + 2 = Z + 0 \quad \to \quad \text{conservation of charge}$$

You should end up with

$$14 = A + 1 \implies A = 13$$
$$\phantom{1}7 = Z$$

The unknown element is nitrogen, $\text{N}$, because $Z = 7$ is the atomic number of nitrogen. The unknown nuclide is nitrogen-13 because $A = 13$ is the mass number of nitrogen-13.

The complete nuclear equation will look like this:

$${}_{\phantom{1}5}^{10}\text{B} + {}_{2}^{4}\text{He} \rightarrow {}_{\phantom{1}7}^{13}\text{N} + {}_{0}^{1}\text{n}$$

According to this nuclear equation, when a boron-10 nucleus is bombarded with a helium-4 nucleus, also known as an alpha particle, $\alpha$, a nitrogen-13 nucleus and a neutron, ${}_{0}^{1}\text{n}$, are produced.
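As a quick cross-check (not part of the original answer), the same bookkeeping can be scripted in a few lines of Python, treating each nuclide as a (mass number, atomic number) pair:

```python
# Solve for the unknown nuclide in 10/5 B + 4/2 He -> A/Z ? + 1/0 n
# by enforcing conservation of mass number (A) and charge (Z).

reactants = [(10, 5), (4, 2)]   # boron-10, helium-4
known_products = [(1, 0)]       # the emitted neutron

# Totals on each side must match, so the unknown picks up the difference.
A = sum(a for a, _ in reactants) - sum(a for a, _ in known_products)
Z = sum(z for _, z in reactants) - sum(z for _, z in known_products)

# Element symbols indexed by atomic number (only the few needed here)
symbols = {5: "B", 6: "C", 7: "N", 8: "O"}

print(A, Z, symbols[Z])  # 13 7 N, i.e. nitrogen-13
```

The same two-line subtraction works for any reaction with a single unknown species.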
http://www.cdmjinc.com/docs/74qdx1m.php?tag=what-is-c-in-chemistry-equation-4bbe45
# what is c in chemistry equation

where $$\Delta \bar{H}$$ and $$\Delta \bar{V}$$ are the molar changes in enthalpy (the enthalpy of fusion in this case) and volume, respectively, between the two phases in the transition.

In the following equation, note that there is a 3 in front of molecular hydrogen and a 2 in front of ammonia. Now we can do our calculations in one step instead of two. The zj may be incorporated[7][8]

A chemical reaction rearranges the constituent atoms of the reactants to create different substances as products. Fe(s) + S(s) → FeS(s). The plus sign indicates that iron reacts with sulfur. The reason there are so many names is because more than one law is involved. In this lesson, you will learn how chemical equations are used to represent chemical reactions. The only thing in this equilibrium which isn't a solid is the carbon dioxide.

Use a piece of paper and derive the Clausius-Clapeyron equation so that you can get the form:

\begin{align} \Delta H_{sub} &= \dfrac{ R \ln \left(\dfrac{P_{273}}{P_{268}}\right)}{\dfrac{1}{268 \;K} - \dfrac{1}{273\;K}} \nonumber \\[4pt] &= \dfrac{8.3145 \ln \left(\dfrac{4.560}{2.965} \right)}{ \dfrac{1}{268\;K} - \dfrac{1}{273\;K} } \nonumber \\[4pt] &= 52,370\; J\; mol^{-1}\nonumber \end{align}

Often, it's important to know the physical state of a reactant or product. To make a beautiful cake, you need to add the correct amounts and proportions of eggs, flour, sugar, oil, and flavorings; likewise, a chemical equation is a short-hand way to represent the components and proportions of a chemical reaction.
Alternatively, you might have to calculate equilibrium concentrations from a given value of Kc and given starting concentrations. It is really important to write down the equilibrium reaction whenever you talk about an equilibrium constant. Ionic equations are used for single and double displacement reactions that occur in aqueous solutions. The enthalpy of sublimation is $$\Delta{H}_{sub}$$. There is another equilibrium constant called Kp which is more frequently used for gases. There is a deviation from the experimental value because the enthalpy of vaporization varies slightly with temperature.

Write a chemical reaction that represents these proportions of the elements. For example, in this chemical equation, the subscripted 2 indicates that 2 atoms of hydrogen make up molecular hydrogen, and there is one atom of nitrogen and 3 atoms of hydrogen in ammonia.

Note that the order of the temperatures in Equation \ref{2} matters, as the Clausius-Clapeyron equation is often written with a negative sign (and switched order of temperatures):

$\ln \left( \dfrac{P_1}{P_2} \right) = - \dfrac{\Delta H_{vap}}{R} \left( \dfrac{1}{T_1}- \dfrac{1}{T_2} \right) \label{2B}$

Example $$\PageIndex{1}$$: Vapor Pressure of Water.
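The numerical step in the sublimation calculation above is easy to sanity-check in Python. This is a sketch (not from the original page); `delta_h` is a hypothetical helper name:

```python
import math

R = 8.3145  # gas constant, J mol^-1 K^-1

def delta_h(p1, t1, p2, t2):
    """Enthalpy of transition (J/mol) from two (pressure, temperature)
    points, via the integrated Clausius-Clapeyron equation."""
    return R * math.log(p2 / p1) / (1.0 / t1 - 1.0 / t2)

# Sublimation of ice, using the vapour pressures quoted above:
# 2.965 torr at 268 K and 4.560 torr at 273 K (pressure units cancel
# inside the logarithm, so no conversion is needed).
dH_sub = delta_h(2.965, 268.0, 4.560, 273.0)
print(dH_sub)  # about 5.24e4 J/mol, matching the 52,370 J/mol worked out above
```

Because only the ratio of pressures enters, the result is insensitive to the pressure unit, but the temperatures must be in kelvin.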
https://www.physicsforums.com/threads/designing-a-toy-rocket-to-be-launched-by-a-spring.70683/
# Designing a toy rocket to be launched by a spring

1. Apr 9, 2005

You're designing a toy rocket to be launched by a spring. The launching apparatus has room for a spring that can be compressed 14 cm, and the rocket's mass is 65 g. If the rocket is to reach 35 m altitude, what should be the spring constant?

$$U_o = 1/2*kx^2$$
$$K_o = 0$$

Well, work is being stored, so K = 0 and U holds the spring energy, right?

$$U_f = mgh$$
$$K_f = 0$$

U_f = mgh because of the gravitational potential energy, right? And K_f = 0 because it's at rest right when it reaches 35 m? I'm not sure if K_f = 0 or not, because I don't really know if the rocket is at its max height when it reaches 35 m.

Well, setting them equal to each other...

$$1/2*kx^2 = mgh$$

and solving for k, I get 3.1, but the book gets 2.3 kN/m. What am I doing wrong?

2. Apr 9, 2005

### xanthym

Your work is correct except for UNITS. Recalculate ALL quantities in MKS (meter, kg, sec) units, and you'll obtain the book answer.

Last edited: Apr 9, 2005
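Following xanthym's hint, here is a quick sketch of the same calculation with all quantities converted to MKS units (g = 9.81 m/s² is assumed):

```python
# Energy balance (1/2) k x^2 = m g h, solved for k, in MKS units.
x = 0.14      # spring compression in m (14 cm)
m = 0.065     # rocket mass in kg (65 g)
h = 35.0      # target altitude in m
g = 9.81      # gravitational acceleration in m/s^2

k = 2 * m * g * h / x**2
print(round(k))  # ~2277 N/m, i.e. the book's 2.3 kN/m
```

Mixing cm and g without converting is what produces the stray 3.1 in the original attempt.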
https://www.physicsforums.com/threads/simple-pendulum-with-friction-ode.639877/
# Homework Help: Simple pendulum with friction ODE

1. Sep 29, 2012

### S_Flaherty

I'm trying to figure out how to find the general solution for a simple pendulum with friction:

y'' + ky' + (g/L)y = 0

I know how to find the solution for a simple pendulum without friction, y'' = -(g/L)y, which leads to y = A cos(sqrt(g/L) x).

So far I have: y'' + ky' + (g/L)y = 0; substituting m = y' and w^2 = g/L,

m^2 + km + w^2 = 0, so m = [-k ± sqrt(k^2 - 4(1)(w^2))]/2

... I'm stuck here because I can't remember how to reduce this so I can find the homogeneous equation for y.

2. Sep 29, 2012

### Staff: Mentor

I don't think there is an exact solution; that's why people tend to solve its motion numerically via computer. You could try making the amplitude of the frictionless pendulum (A) change over time but not the period, and see if it works.

3. Sep 29, 2012

### voko

This is a linear ODE with constant coefficients. You solve it by first solving its characteristic equation, which in this case is $\lambda^2 + k\lambda + g/L = 0$, giving $\lambda = \frac {-k \pm \sqrt{k^2 - 4g/L}}{2}$. If the roots are different, then the solutions are given by $y = Ae^{\frac {-k + \sqrt{k^2 - 4g/L}}{2}t} + Be^{\frac {-k - \sqrt{k^2 - 4g/L}}{2}t}$. Note that the roots are typically complex, so A and B are also complex. By choosing them properly, you can transform the solution to $y = e^{-(k/2)t} \left(A'\cos \left(\sqrt{g/L - (k/2)^2}\, t\right) + B'\sin \left(\sqrt{g/L - (k/2)^2}\, t\right)\right)$, where the constants are real.

4. Sep 29, 2012

### HallsofIvy

I'm confused as to what you mean by "homogeneous equation". In differential equations there are two uses of the word "homogeneous", one of which applies only to first order equations and so does not apply here. The other is that a linear differential equation of higher order is "homogeneous" if and only if every term involves y or a derivative of y; in other words, there is no function of x only. What you have here is a homogeneous equation!
The 'characteristic equation' of this differential equation* is $r^2+ kr+ (g/L)= 0$. By the quadratic formula, that has solution $r= (-k\pm\sqrt{k^2- 4g/L})/2$. We can write the general solution as $Ce^{[(-k+\sqrt{k^2-4g/L})/2]x}+ De^{[(-k-\sqrt{k^2-4g/L})/2]x}= e^{-kx/2}(Ce^{(\sqrt{k^2- 4g/L}/2)x}+ De^{-(\sqrt{k^2- 4g/L}/2)x})$

What that "really" is depends on whether $k^2- 4g/L$ is positive, negative or 0. If it is positive, then we simply have two decaying exponentials. If it is 0, we have critical damping: there is only a single root to the characteristic equation, and one of the solutions will be multiplied by x. If it is negative, that square root gives imaginary roots, so we will have $e^{-kx/2}$ times sine and cosine.

* We get that 'characteristic equation' by "assuming" a solution of the form $y= e^{rx}$, so that $y'= re^{rx}$ and $y''= r^2e^{rx}$. Then $y''+ ky'+ (g/L)y= r^2e^{rx}+ rke^{rx}+ (g/L)e^{rx}= (r^2+ kr+ (g/L))e^{rx}= 0$. Since $e^{rx}$ is never 0, we must have $r^2+ kr+ (g/L)= 0$.

Last edited by a moderator: Sep 29, 2012
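As a sanity check on the underdamped case, here is a small sketch (with assumed illustrative values g = 9.81, L = 1, k = 0.5) verifying by finite differences that $e^{-(k/2)t}(A\cos\omega t + B\sin\omega t)$ with $\omega=\sqrt{g/L-(k/2)^2}$ really satisfies y'' + ky' + (g/L)y = 0:

```python
import math

g, L, k = 9.81, 1.0, 0.5                  # assumed toy parameters, k^2 < 4g/L
w = math.sqrt(g / L - (k / 2) ** 2)       # damped angular frequency

def y(t, A=1.0, B=0.0):
    """Underdamped solution: e^{-(k/2)t} (A cos(wt) + B sin(wt))."""
    return math.exp(-(k / 2) * t) * (A * math.cos(w * t) + B * math.sin(w * t))

# Check the ODE residual y'' + k y' + (g/L) y with central finite differences.
h = 1e-5
for t in (0.3, 1.7, 4.2):
    y0, yp, ym = y(t), y(t + h), y(t - h)
    ypp = (yp - 2 * y0 + ym) / h**2       # numerical y''
    yprime = (yp - ym) / (2 * h)          # numerical y'
    residual = ypp + k * yprime + (g / L) * y0
    assert abs(residual) < 1e-4, residual
print("analytic form satisfies the ODE")
```

The same check fails if the sign under the square root is flipped, which is a quick way to catch that slip.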
http://eprint.iacr.org/2009/394/20100614:053845
## Cryptology ePrint Archive: Report 2009/394

Provably Secure Convertible Undeniable Signatures with Unambiguity

Le Trieu Phong and Kaoru Kurosawa and Wakaha Ogata

Abstract: This paper shows some efficient and provably-secure convertible undeniable signature schemes (with both selective conversion and all conversion), in the standard model and discrete logarithm setting. They further satisfy unambiguity, which is traditionally required for anonymous signatures. Briefly, unambiguity means that it is hard to generate a (message, signature) pair which is valid for two *different* public keys. In other words, our schemes can be viewed as anonymous signature schemes as well as convertible undeniable signature schemes. Besides other applications, we show that such schemes are very suitable for anonymous auctions.

Category / Keywords: public-key cryptography / Undeniable signature, selective/all conversion, discrete logarithm, standard model.

Publication Info: Full version of a paper accepted to SCN 2010

Date: received 12 Aug 2009, last revised 13 Jun 2010

Contact author: letrieu letrieuphong at gmail com

Available format(s): PDF | BibTeX Citation [ Cryptology ePrint archive ]
http://crypto.stackexchange.com/tags/zero-knowledge-proofs/hot?filter=year
# Tag Info

7

By Theorem 3 on page 15 of this paper, no secure-with-abort protocol for equality of long strings can be within 1/5 of fair. If there is a protocol for equality on a domain of size at least 3 which is secure against honest-but-curious adversaries, then oblivious transfer protocols exist. If oblivious transfer protocols exist, then there are protocols for ...

5

SRP does DH key exchange with authentication, and has the capability to also authenticate the server as well (though usually the server is authenticated by keeping the verifier secret). If the key is generated strictly from a password and salt, with the salt stored on the server, you can do a dictionary attack on the verifier (e.g. if the server is ...

4

Rough sketch, assuming Bob is standing next to you in the same room:

- Prepare cards with the correct numbers on them
- Lay down the cards according to the setup, face up
- Lay down the remaining cards with the correct solution, face down, so that Bob can't see them.

Now you let Bob choose one column, row or sector. You pick up the cards in that row, column ...

4

One could split both secrets into smaller parts, commit to the parts and "gradually" open the commitments to each other, so that neither party is ever more than one such part ahead of the other. For example, let the secret be a big number split into bits. With an additively homomorphic bit commitment scheme, the other party could verify that bit commitments correspond ...

4

This cannot be done. It is provably impossible. In order to explain this in technical terms, what you are looking for is a FAIR protocol to compute equality of long random strings (I added the latter since it adds a constraint and so in theory could make it easier). In any case, if I had such a protocol, then I could toss a fair unbiased coin. Here is the ...
4

Even following your edits, there's still some confusion about honest verifier zero knowledge and plain-old (i.e., "possibly malicious verifier") zero knowledge, which is a much stronger property. Your description of HVZK is essentially correct, but with the following clarifications: A 3-move protocol between a prover P and a verifier V for a language ...

4

If one-way functions exist, then there is a distribution over graphs (or SAT formulas, or ...) having the property you're asking for. In short, just put the OWF through the Cook-Levin reduction. In a little more detail, Cook-Levin transforms the NP witness-finding question "what is a preimage of $y = f(x)$?" (for random unknown $x$) into the NP ...

3

Answering the question in your title (and not addressing your proposed alternative, which I don't quite understand): there is a zero knowledge proof of password protocol, "SRP", which is fast and effective. SRP does not seem to have been given as wide publicity as it should get. Having implemented it, and being an advocate of its use, I don't really understand ...

3

Having a client (e.g. your web browser) use zero-knowledge proofs to authenticate itself to a server only makes sense if the server knows about the client's public key in advance, and if the client keeps the same private key forever. So you could have the client side generate a keypair when you register your account, and the server records your public key ...

3

There are two answers. One, go non-interactive with the Fiat-Shamir transform. This requires the Random Oracle Model (ROM) to analyse, but the ROM is standard enough in cryptography and ROM proofs have been used in practice for long enough that this shouldn't worry you. It gets you full ZK, curiously enough for the exact same reason that plain Schnorr is ...

3

Yes, it is possible. Actually, any statement in NP can be proven in zero knowledge.
This means that if something can be proven by releasing some information, it is possible to prove the same without releasing any information, i.e. in zero knowledge.

3

$ax^2+bx+c=0$ is the general expression of a quadratic equation in one variable. Here, there is more than one variable. You may want to look into how the degree of a multivariate polynomial is defined.

3

The initial idea of Fiat and Shamir was to eliminate the interaction in public coin protocols (note that public coin means that the random choices of the verifier are made public) and was used to convert three-move public coin identification schemes into conceptually simple signature schemes (it has later been proven by Pointcheval and Stern that under the ...

3

I assume you are familiar with $P$ and $NP$. Also, my knowledge of SNARKs is based mostly on the work of Parno et al.; other work may differ in some fine details. So, a SNARK is a succinct non-interactive argument of knowledge. Leaving the "knowledge" part aside for the moment, let's look at "plain" succinct non-interactive arguments (called SNARGs in the ...

3

A zero-knowledge proof is a protocol by which the Prover demonstrates to the Verifier that he knows the solution to a given problem, without giving the Verifier any additional information about the solution -- that is, no information that the Verifier could not already obtain alone. In the case of the discrete logarithm, the y value is not part of what the ...

3

The motivation, to me, is that in reality you can consider any router on the internet to be successfully executing an "intruder-in-the-middle" attack just by forwarding messages unchanged. After a successful execution of the identification scheme, Bob knows that someone on the channel is Alice, which is all the protocol was hoping to achieve. It was ...

2

Without a sign the verifier learns that the number he received is a QR modulo n. Whether a number is a QR is a hard problem, as he does not know the factors of n.
2

In the context of interactive proof systems (including zero-knowledge proofs), completeness means the same as the term correctness as used for many other (interactive) cryptographic schemes or protocols. I guess that's mainly due to historical reasons (there are even some people that use correctness instead of completeness in the context of zero-knowledge proofs). ...

2

A straightforward way to prove this, when you can prove AND as well as OR statements about discrete logarithms, is to take all the $K=\binom{M}{N}$ subsets $A_i=\{A_{i_1},\ldots,A_{i_N}\}$ with $N$ elements of points from the set of your $M$ points and prove the statement $PK\{(\alpha_1,\ldots,\alpha_N): \bigvee_{j\in K} \big( \bigwedge_{A_{j_i}\in A_j}$ ...

2

The common reference string in NIZK does not have to be uniformly distributed. It is to be sampled from whatever distribution the NIZK protocol specifies. However, the common random string in NIZK does have to be uniformly distributed, and the setup strings in NIZK also have to be uniformly distributed.

2

I believe a zero knowledge proof that $-1$ is a quadratic nonresidue would accomplish that. If we know that $n$ has two prime factors, and that $n \equiv 1 \pmod{4}$, then $n$ is either a product of two primes both $1 \bmod 4$, or two primes both $3 \bmod 4$. If it were the former, then $-1$ is a QR modulo $p$, and $-1$ is a QR modulo $q$, and hence $-1$ ...

2

This has some issues, with both soundness and zero-knowledge. The issue with zero-knowledge is that an eavesdropper who knows $L$ and overhears legitimate traffic can compromise the secret quite easily. While factoring is hard, taking a GCD is very efficient. That means that given $M=pr$ and $L=pq$, an eavesdropper Eve can efficiently compute $\gcd(M,L)=p$. ...

2

You are on the right track. However, as Ricky Demer points out in the comments, your suggestion would not work because the input is encrypted with different public keys.
To fix this you need to use the properties of the threshold-encryption scheme. In a threshold-encryption scheme the players run a key-generation protocol in order to generate a common ...

2

There is quite a bit of confusion in your question. First, differentiate between the real and ideal models. The adversary in the ideal model sends the adversary's input and gets its output (and can also sometimes determine if the honest party gets output, depending on the model). We often call the ideal adversary a "simulator", since this is how we build the ...

2

The probabilistic nature is not specific to special honest-verifier zero-knowledge; rather, that's what zero-knowledge is about. With zero-knowledge you want to formulate that such an interactive proof does not leak any information besides the validity of the claim, as it is efficiently simulatable, meaning that real and simulated transcripts are not ...

2

Yes, it's okay. This is actually mentioned in passing in the SRP 6 design paper. Previous versions used a random $u$, where an attacker who saw (or could predict) it before revealing $A$ could compute $A = g^a v^{-u}$ and use this to effectively cancel out the long-term secret. With $u$ derived from a hash, even if the attacker saw $B$, the dependence of $u$ ...

1

They both symmetrically encrypt their keys by themselves in an algorithm (or AES with enough iterations) such that it takes minutes, even hours to complete (this gives ek1). Then they do the same thing again (encrypt ek1 by itself) (this gives ek2) and send ek2 to the other person when they both say they are done. If they don't align, both parties then send ek1 to ...

1

I'm new here, so I'm not sure about the best way to hold this discussion. So, I am adding a different answer to relate why my proof sketch shows the impossibility of the problem in this question, versus Ricky's proof above that the protocol in this paper (page 16) is impossible.
The answer is very closely connected to technical details of how you define and ...

1

Yes. The verifier(s) need(s) to know a statistically binding commitment to $\Psi$.

1

My guess is, responses $\hat x_{(g,i)}..\hat x_{(1,i)}$ ($s$ in the example) are computed modulo a group order that is not available to verifiers of the statement claimed. The challenge difference is always one ($1$) while rewinding for binary ($0$/$1$) challenges, and it is not expected to be one for "large" challenges. Dividing by a non-one (in other words, ...
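Several answers above mention Schnorr-style proofs of knowledge of a discrete logarithm and the Fiat-Shamir transform. Here is a toy interactive sketch (tiny illustrative parameters of my own choosing, not secure; a real scheme uses a large prime-order group):

```python
import secrets

# Toy Schnorr identification sketch. g has order q = 11 in Z_23^*.
p, q, g = 23, 11, 2
x = 7                        # prover's secret
y = pow(g, x, p)             # public key y = g^x mod p

def prove_and_verify():
    r = secrets.randbelow(q)         # prover's random nonce
    t = pow(g, r, p)                 # commitment
    c = secrets.randbelow(q)         # verifier's challenge (public coin)
    s = (r + c * x) % q              # prover's response
    # Verifier checks g^s == t * y^c (mod p)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

print(all(prove_and_verify() for _ in range(100)))  # True
```

The Fiat-Shamir transform mentioned above would replace the verifier's random challenge c by a hash of the commitment t (and the message being signed), making the proof non-interactive.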
http://www.heikkiarponen.net/2013_09_01_archive.html
## Saturday, September 14, 2013

### Funny statistics in stock market data

# Market data structure functions

I recently got some minute-level stock market data from The Bonnot Gang for some data analytic (and stat arb design) purposes, when I noticed some funny behavior in the data structure function. Now the concept of a structure function may not be very widely known among quants/data analysts/economists, so here's a definition: suppose there's a time series $X_t$. The structure function of $X_t$ is defined as

$S_n(\tau) \equiv \mathbb{E}\left(|X_{t+\tau}-X_t|^n\right)$

where for a given sample of data you just replace the ensemble expectation $\mathbb{E}(\cdot)$ by the sample mean, $\frac{1}{N}\sum_{t=0}^{N}(\cdot)$. These types of structure functions have been studied for some time now in finance, in the context of similarities between financial markets and hydrodynamic turbulence. I think it all started in 1996 with the paper Turbulent cascades in foreign exchange markets by Ghashghaie et al. They computed the structure functions for some FX market data and found a scaling relation $S_n(\tau) \propto \tau^{\xi_n}$, where $\xi_n$ is a concave function of $n$, implying multiscaling in FX markets, similarly to hydrodynamic turbulence (BTW their conclusions about the result were a bit out there, but I guess the data analysis is still good).

So I did some of my own data analysis with the Bonnot Gang data (I hope it's not bad data!). Here are a few plots of the structure functions, first for $n=1$, then $n=3$ (which is close to linear, i.e. $\xi_3 \approx 1$, as in turbulence), and then $n=10$. Clearly you can't fit a power law to all of this, but there seem to be clear power-law regimes divided at about 6, 18, 60 and 180 minutes! I don't know the reason for this, but if I had to guess, I'd say it's because of traders/algorithms operating w.r.t. different data timeframes... or maybe it's because of the finite tick size...
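The definition above is easy to implement. Here is a sketch of the sample structure function, run on a synthetic random walk rather than real market data: for Brownian motion $S_n(\tau) \sim \tau^{n/2}$, i.e. simple monoscaling with $\xi_n = n/2$, which makes a handy baseline against the multiscaling behavior described above.

```python
import numpy as np

def structure_function(x, taus, n):
    """Sample structure function S_n(tau) = mean |x[t+tau] - x[t]|^n."""
    return np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** n) for tau in taus])

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(200_000))   # random-walk stand-in for prices
taus = np.array([1, 2, 4, 8, 16, 32, 64])
s2 = structure_function(x, taus, 2)

# Log-log slope estimate of xi_2: should come out close to 1 for a random walk,
# since S_2(tau) is just the variance of the tau-step increment, i.e. tau.
xi2 = np.polyfit(np.log(taus), np.log(s2), 1)[0]
print(round(xi2, 2))
```

Deviations of the fitted exponents from $n/2$ on real data, and kinks in the log-log plot like the 6/18/60/180-minute breaks above, are exactly what this kind of fit would surface.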
Anyway, I don't have time to get to the bottom of this, but maybe someone else will... so if you see this stuff in a paper someday, you saw it first here!! ;) Written with StackEdit. Try it out, it's awesome!! You can do MathJax and sync everything in Google Drive or Dropbox and publish directly to Blogger, Wordpress, Tumblr etc.!
https://gowers.wordpress.com/2012/09/06/edp26-three-generalizations/
EDP26 — three generalizations This short post is designed as a possible way in to EDP for anyone who might be interested in participating but daunted by the idea of reading large amounts of material. One of the natural strategies for proving EDP is to try to formulate and prove stronger statements. At first that sounds paradoxical: isn’t it even harder to prove a stronger statement? But the answer to that question is often no. To give a slightly silly example, suppose you were asked to prove that for every $c>0$ there exists $N$ such that for every $n\geq N$ if $n$ is odd and has at least $c\log n$ prime factors (counted with multiplicity), then $2^{\phi(n)}\equiv 1$ mod $n$, where $\phi$ is Euler’s totient function. You could make the problem easier by proving Euler’s theorem, that $a^{\phi(n)}\equiv 1$ mod $n$ for every $n$ and every $a$ that is coprime to $n$. You wouldn’t have as many hypotheses to use, but that’s good, since they can’t be used. Perhaps a better and more relevant example is when you generalize the class of numbers you are working with so as to allow a wider set of methods. For instance, suppose you want to prove that the largest possible product of three positive integers that add to 300 is at most $10^6$. If you replace positive integers by positive reals, then you suddenly have available methods that you didn’t have before — for example, you could use compactness plus a lemma that says that if any two numbers are not equal then you can increase the product by replacing both of them by their average. (I’m not saying that’s the easiest proof — just that it’s a proof that you can’t do without first generalizing the statement.) In this post I want to mention three strengthenings of EDP. One of them I find interesting but not promising as a way of proving EDP, for reasons that I will explain. The other two look to me also very interesting and much more promising. 
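The product-of-three toy claim is easy to confirm by brute force (a quick sketch; the compactness-plus-averaging proof above is of course the one that needed the generalization to reals):

```python
# Check: the largest product of three positive integers summing to 300
# is 100 * 100 * 100 = 10**6.
best = max(a * b * (300 - a - b)
           for a in range(1, 299)
           for b in range(1, 300 - a))
print(best)  # 1000000
```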
All of them have been mentioned already, but the point of this post is to collect them in one convenient place.

Restricting the allowable common differences. I know of no $\pm 1$-sequence that has bounded discrepancy on all HAPs with common difference either a prime or a power of 2. The sequence $1,1,-1,-1,1,1,-1,-1,\dots$ has bounded discrepancy on all HAPs of prime common difference, but if you allow powers of 2 as well, then no periodic construction can work, since if the period $m$ is $2^ka$ for some odd integer $a$, then a HAP of common difference $2^k$ will have a bias of at least 1 in each interval of length $m$, for parity reasons, and this bias will gradually accumulate. One can defeat powers of 2 by using the Morse sequence $1,-1,-1,1,-1,1,1,-1,-1,1,1,-1,1,-1,-1,1,-1,\dots$ (if you haven't seen it, then it's a nice exercise to see how it is generated), but that has unbounded discrepancy on HAPs of odd prime common difference. (I can't remember exactly why.)

Why do I not think that this potential strengthening of EDP is likely to be useful for attacking the problem? Well, one promising feature of EDP is that it seems to generalize to sequences taking other kinds of values, such as complex numbers of modulus 1 or even unit vectors in an arbitrary Hilbert space. However, something that I've only just noticed (though others may have spotted it ages ago) is that if one restricts to, say, prime-power common differences, then even the complex version of the problem becomes false. A very simple counterexample is the sequence $\omega,\omega^2,\omega^3,\dots$, where $\omega=\exp(\pi i/3)$ (that is, a primitive sixth root of 1). Then the sum along any HAP with common difference $d$ that isn't a multiple of 6 will be a sum of a GP with common ratio $\exp(d\pi i/3)\ne 1$, and will therefore be (uniformly) bounded. In particular, this is true of HAPs with prime-power common difference.
This simple example also shows that a certain real generalization of EDP is false when you restrict the common differences in this way: you cannot prove unbounded discrepancy for sequences that take values in the set $\{-2,-1,1,2\}$. The counterexample is simply twice the real part of the example above: the sequence $1,-1,-2,-1,1,2$ repeated over and over again. So if (and I think it's a big if) EDP is true for HAPs with common differences that are either primes or powers of 2, then any proof must make pretty strong use of the fact that the sequence is a $\pm 1$-valued sequence. This rules out a lot of promising techniques, so it appears to make the problem harder rather than easier.

In my earlier post I mentioned what I called the non-symmetric vector-valued EDP. I have subsequently realized (though perhaps a better word is "remembered" since I must have been sort of aware of this at some point) that it is equivalent to another discrepancy statement that I now find more appealing. The statement is the following.

Conjecture. Let $f$ be a function from $\mathbb{N}^2$ to $\mathbb{R}$ and suppose that $f(n,n)=1$ for every $n\in\mathbb{N}$. Then for every real number $C$ there exist HAPs $P$ and $Q$ such that $|\sum_{m\in P,n\in Q}f(m,n)|\geq C$.

If we take a function $g:\mathbb{N}\to\{-1,1\}$ and define $f(m,n)=g(m)g(n)$, then for every $C$ the above conjecture, if true, gives us HAPs $P$ and $Q$ such that $|\sum_{m\in P}g(m)||\sum_{n\in Q}g(n)|\geq C$, which proves that the discrepancy of $g$ is at least $C^{1/2}$. So the above conjecture implies EDP. But the class of functions of the form $g(m)g(n)$ with $g:\mathbb{N}\to\{-1,1\}$ is a very small subset of the class of all functions $f$ that take the value 1 along the diagonal, so this conjecture is very much stronger than EDP.
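The counterexample sequence $1,-1,-2,-1,1,2$ is easy to check numerically; this sketch confirms bounded HAP sums for every common difference that is not a multiple of 6 (in particular all prime powers), and linear growth for difference 6:

```python
# Partial HAP sums of the period-6 sequence 1, -1, -2, -1, 1, 2 (1-indexed).
PERIOD = [1, -1, -2, -1, 1, 2]

def s(n):
    return PERIOD[(n - 1) % 6]

def max_hap_sum(d, N):
    """max |s(d) + s(2d) + ... + s(md)| over all m with md <= N."""
    total, worst = 0, 0
    for j in range(d, N + 1, d):
        total += s(j)
        worst = max(worst, abs(total))
    return worst

worst = max(max_hap_sum(d, 10_000) for d in range(1, 100) if d % 6 != 0)
print(worst)                    # bounded: the sum over each period is 0
print(max_hap_sum(6, 10_000))   # grows linearly, since s(6j) = 2 for every j
```

The boundedness is exactly the GP argument above in real form: for d not divisible by 6, the subsequence s(d), s(2d), ... is periodic with zero sum over each period.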
It is also equivalent to the statement that for every $c>0$ there exists a diagonal matrix $D$ of trace at least 1 that can be written as a linear combination $\sum_i\lambda_iu_i\otimes v_i$, where for each $i$ $u_i$ and $v_i$ are characteristic functions of HAPs $P_i$ and $Q_i$ and $\sum_i|\lambda_i|\leq c$. (This is the statement that I discussed at length in my previous post.)

In one direction this is easy: if a decomposition of that kind exists, then $\sum_nf(n,n)D(n,n)$ equals the trace of $D$ and is therefore at least 1, but it also equals $\sum_i\lambda_i\sum_{(m,n)\in P_i\times Q_i}f(m,n)$, which is at most $c\max_i|\sum_{(m,n)\in P_i\times Q_i}f(m,n)|$ in absolute value. It follows that there is some $i$ such that $|\sum_{(m,n)\in P_i\times Q_i}f(m,n)|\geq c^{-1}$.

In the other direction, if no such decomposition of a diagonal matrix exists, then the Hahn-Banach theorem gives us a linear functional that separates the class of diagonal matrices with trace at least 1 from the class of linear combinations of HAP products defined above. It is easy to check that this functional must be given by a matrix $(f(m,n))$ with a constant diagonal, such that the value along the diagonal is at least 1 and such that $|\sum_{(m,n)\in P\times Q}f(m,n)|\leq c^{-1}$ for any two HAPs $P$ and $Q$.

The conjecture is so much stronger than EDP that I think it would be a mistake just to assume that it is true. My guess is that it is true, but I would be very interested if there was a counterexample (even if, like the complex sequence above, it is disappointingly simple — in fact, if there is a counterexample, then it seems quite likely that it will be fairly simple). And if there isn't a counterexample, then the fact that it is so much stronger a conjecture than EDP does this time make me think that the result might be easier to prove than EDP itself.
Until very recently, I had been mainly interested in the dual version of the question (that is, the question about decomposing diagonal matrices), but now it seems to me that the matrix discrepancy question is worth thinking about directly. It is a clean question, and it has the big advantage over the original EDP question that it does not restrict values to the set $\{1,-1\}$, so a number of methods can be used that cannot be used directly for EDP. For instance, linear programming can be used to get experimental results: Sasha Nikolov may be going to look into this. Given any matrix $f$, one can find vectors $v_m$ and $w_n$ such that $f(m,n)=\langle v_m,w_n\rangle$, so this matrix question is actually trivially equivalent to the non-symmetric vector-valued question I had formulated earlier. But expressing it in terms of vectors makes it harder to think about rather than easier, and that is what had previously put me off thinking about the question directly. There is one class of matrices that is, I think, particularly worth mentioning. If EDP is true but this matrix question is false, then the best candidates for counterexamples to the matrix problem are probably matrices of high rank, and an obvious class of matrices that tend to have high rank is matrices that are constant on diagonals (that is, Toeplitz matrices). Suppose, then, that our matrix $f(m,n)$ is defined to be $g(m-n)$ for some function $g:\mathbb{Z}\to\mathbb{R}$ such that $g(0)=1$. Then $\sum_{m\in P,n\in Q}f(m,n)=\sum_xg(x)P*(-Q)(x)$, where by $P*(-Q)(x)$ I mean the number of ways of writing $x$ as $y-z$ with $y\in P$ and $z\in Q$. So we have a class of functions that I’ll call HAP convolutions, and we’d like to show that if $g:\mathbb{Z}\to\mathbb{R}$ is any function with $g(0)=1$ and $C$ is any constant, then there exists a HAP convolution $\phi$ such that $\langle g,\phi\rangle\geq C$.
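The identity behind the definition of a HAP convolution is easy to verify numerically. The following sketch (code and the choice of $g$ are mine) checks that $\sum_{m\in P,n\in Q}g(m-n)=\sum_xg(x)\,P*(-Q)(x)$ for a pair of HAPs.

```python
# A quick numerical check of the identity
#   sum_{m in P, n in Q} g(m - n) = sum_x g(x) * (P * (-Q))(x),
# where (P * (-Q))(x) counts representations x = y - z with y in P, z in Q.

from collections import Counter

def hap(d, k):
    return [d * i for i in range(1, k + 1)]

def hap_convolution(P, Q):
    """The function x -> #{(y, z) in P x Q : y - z = x}."""
    return Counter(y - z for y in P for z in Q)

# An arbitrary g with g(0) = 1, as in the Toeplitz set-up above.
def g(x):
    return 1.0 if x == 0 else ((-1) ** x) / (1 + abs(x))

P, Q = hap(4, 15), hap(6, 10)
lhs = sum(g(m - n) for m in P for n in Q)
phi = hap_convolution(P, Q)
rhs = sum(g(x) * c for x, c in phi.items())
assert abs(lhs - rhs) < 1e-9
```

A linear-programming experiment of the kind suggested above would then ask how cheaply the delta function at 0 can be written as a combination of such `hap_convolution` functions.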
This is true if and only if there is an efficient way of writing the function that is 1 at 0 and 0 everywhere else as a linear combination of HAP convolutions. This is another question that could be investigated using linear programming, and perhaps since it concerns functions of just one variable we could get more extensive results than we could for the general matrix problem. The modular conjecture. In his most recent guest post, Gil Kalai reformulates EDP as a question about sums mod $p$. EDP is trivially equivalent to the following assertion. Conjecture. For every $p$ there exists $n$ such that for every $\pm 1$ sequence $\epsilon$ of length $n$ and every $r$ there exists a HAP $P\subset\{1,2,\dots,n\}$ such that $\sum_{x\in P}\epsilon(x)\equiv r$ mod $p$. At first that doesn’t look like a very interesting reformulation, since it is too obviously equivalent to EDP. But what makes it interesting is that it has a very natural generalization that doesn’t have any obvious counterexamples. Stronger Conjecture. For every $p$ there exists $n$ such that for every sequence $s$ of length $n$ of non-zero numbers mod $p$ and every $r$ there exists a HAP $P\subset\{1,2,\dots,n\}$ such that $\sum_{x\in P}s(x)\equiv r$ mod $p$. In other words, we replace the condition that the sequence takes values $\pm 1$ by the much weaker condition that it is never zero. Gil calls this the modular conjecture. (He also presented it in a comment on a much earlier EDP post.) As Gil points out in his post, one can write down a polynomial that is identically zero (mod $p$) if and only if the modular conjecture is true for $n$. It is tempting to try to prove that it is zero by analysing its coefficients. More generally, this approach to the problem appears to open the door to a number of algebraic methods. If you want to prove EDP this way, you have to solve the conjecture for some non-zero $r$. (Since $p$ is prime, if you can show it for one non-zero $r$ then you’ve shown it for all.)
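For very small $p$ the stronger conjecture can be explored by brute force. The sketch below (my own code, feasible only for tiny $p$) searches for the longest sequence of non-zero residues mod $p$ all of whose HAP sums avoid a given value $r$; for $p=3$ it finds length 5 when $r=0$ but only length 1 when $r=1$.

```python
# Exhaustive depth-first search for the "stronger conjecture": the longest
# sequence of non-zero residues mod p whose HAP sums all avoid the value r.

def longest_avoiding(p, r, cap=30):
    best = 0

    def extend(seq):
        nonlocal best
        n = len(seq)
        best = max(best, n)
        if n >= cap:
            return
        for x in range(1, p):          # non-zero residues only
            candidate = seq + [x]
            m = n + 1
            ok = True
            for d in range(1, m + 1):  # only HAPs ending at m are new
                if m % d == 0:
                    if sum(candidate[i - 1] for i in range(d, m + 1, d)) % p == r:
                        ok = False
                        break
            if ok:
                extend(candidate)

    extend([])
    return best

print(longest_avoiding(3, 0), longest_avoiding(3, 1))
```

Even this tiny case shows the two problems behaving differently: avoiding a non-zero value is hampered by the singleton HAPs $\{n\}$, which already forbid one of the $p-1$ available values at every position.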
However, the problem with $r=0$ is interesting in its own right, and in particular it seems to be interestingly different from the problem with non-zero $r$. At the time of writing, I don’t see any way of modifying the EDP examples to obtain exponentially long sequences mod $p$ that avoid zero sums — it seems to me that the upper bound on the length could be significantly smaller. That would be interesting, as it would place constraints on what a proof could look like for non-zero $r$. A fourth strengthening. This isn’t really meant as part of the body of the post, but more of an afterthought. A strengthening of EDP that has been considered since very early on in the project is where you replace a $\pm 1$ sequence by a sequence of unit vectors in a Hilbert space. To be precise, one looks at the following statement. Conjecture. Let $v_1,v_2,\dots$ be a sequence of unit vectors in a Hilbert space and let $C$ be a real number. Then there exists a HAP $P$ such that $\|\sum_{n\in P}v_n\|\geq C$. There is something slightly curious about this conjecture, which is that it is very hard to see how having infinitely many dimensions to play with could help one find a counterexample. In some sense, if you use too many dimensions, then it ought to be the case that there will be a HAP $P$ such that the vectors $v_n$ with $n\in P$ are pointing in lots of different directions and therefore not cancelling. That makes me wonder whether one can prove that if there is a counterexample to the above conjecture, then there must be a counterexample for some finite-dimensional Hilbert space. Or if that is too much to ask, perhaps there might have to be a counterexample where all the $v_i$ live in some compact set. I think it would be very interesting to think about whether the vague intuition I have just expressed can be made precise. As it stands, it is of course not even close to a proper argument. (I should stress one aspect of what I am saying. 
If there is any vector sequence $(v_n)$ of bounded discrepancy, then one can arbitrarily modify $v_p$ for each prime $p$ and the sequence will still have bounded discrepancy. So I’m not suggesting that every bounded-discrepancy sequence lives, or almost lives, in finite dimensions, but that it can be used to construct one that does.) As I write that, another question occurs to me. For this one I don’t even have a vague intuitive argument. Let’s suppose that we can find a counterexample $f$ to the matrix discrepancy question earlier. Suppose also that $f(m,n)$ takes the form $\langle v_m,w_n\rangle$ for some (not necessarily unit) vectors $v_m$ and $w_n$ in a Hilbert space. Must there be an example where the $v_m$ and $w_n$ lie in a finite-dimensional Hilbert space, or at least in a compact subset of a Hilbert space? A combined generalization. A second afterthought is that the matrix question has a modular version that might be of some interest. Let $p$ be a prime and let $f$ be a function from $\mathbb{N}^2$ to $\mathbb{Z}_p$ with $f(n,n)=1$ for every $n$. Must the sums of $f$ on products $P\times Q$ take all possible values mod $p$? What if we merely ask for $f$ to take non-zero values on the diagonal? If the strong modular conjecture is false, then we can turn a counterexample $(x_n)$ into a diagonal matrix in the obvious way and we get a counterexample to the second question. Indeed, suppose that $f(n,n)=x_n$ for every $n$ and $f(m,n)=0$ when $m\ne n$. Then $\sum_{m\in P,n\in Q}f(m,n)=\sum_{n\in P\cap Q}x_n$, which avoids some value, since $P\cap Q$ is a HAP. But with matrices there are many more ways of trying to avoid particular values, so the strong modular matrix conjecture looks like a much stronger statement. Again there are two cases — avoiding 0 and avoiding a non-zero value — of which only the second obviously implies EDP. 24 Responses to “EDP26 — three generalizations” 1. 
gowers Says: I have a suggestion for how one might attempt to get a handle on the matrix question in either its real or its mod-$p$ form in the special case where we insist that the HAPs have the same common difference. Let’s imagine trying to build a counterexample: that is, an $N\times N$ matrix (with $N$ arbitrarily large) such that the sum over any product $P\times Q$ of HAPs $P$ and $Q$ of the same common difference has absolute value at most $C$ (or avoids $t$ mod $p$). We might attempt to do this by finding an ordering $a_1,\dots,a_N$ of the numbers from 1 to $N$ such that every number in that ordering precedes all its factors. We could then attempt to define the matrix by choosing its values at points $(x,y)$ with highest common factor $a_1$, then points with highest common factor $a_2$, and so on. If we pursued such a strategy, then what would we have to do when we got to $a_k$? We could decide that we would worry only about sums over $P\times Q$ where the common differences of $P$ and $Q$ were both $a_k$. If we did this, we wouldn’t mess up any of our earlier work, since we would be choosing values at points $(x,y)$ that were not both multiples of any predecessor of $a_k$ (since all factors of $a_k$ appear after $a_k$ in the ordering). So far so good, but the problem is that the number of points where we get to choose values is smaller than the number of HAP products that we are trying to deal with. For example, when we get to $a_N$, which equals 1, we have chosen the values of the matrix for all non-coprime pairs $(x,y)$. That leaves us approximately $6N^2/\pi^2$ values to choose, since that is roughly the number of coprime pairs, but the number of sums we need to worry about is $N^2$, since that is the number of products of HAPs of common difference 1. However, in the mod-$p$ version, it’s not clear that we can’t deal with several HAPs at once.
For example, if I have numbers $a_1,\dots,a_m$ and want to choose $x$ such that none of $a_1x,\dots,a_mx$ is congruent to $r$ mod $p$, I can do it easily if $m<p$. So here is a question that could be helpful. Suppose you have defined a function $f:\{1,2,\dots,N\}^2\to\mathbb{Z}_p$ on all pairs $(x,y)$ that have a non-trivial common factor. Can you choose the values of $f$ at coprime pairs in such a way that the sum of $f$ over any set of the form $\{1,2,\dots,m\}\times\{1,2,\dots,n\}$ is not equal to $r$? 2. gowers Says: Let me try to answer the question at the end of the previous comment in a trivial way. I don’t expect to succeed, but would like to know what goes wrong. I think it will be easier if I ask a more abstract question. Suppose I’ve got sets $A_1,\dots,A_m\subset\{1,2,\dots,n\}$ (where $m$ and $n$ are not the same as the $m$ and $n$ above) and numbers $t_1,\dots,t_m$ and I want to find a sequence $x_1,\dots,x_n$ of numbers mod $p$ such that $\sum_{i\in A_j}x_i$ is never congruent to $t_j$ mod $p$. Is there a simple sufficient condition that allows me to choose such a sequence? I’m hoping for something a bit like Hall’s condition for finding a perfect matching, but unlike with that problem I think it is unrealistic to ask for a simple sufficient condition that is also necessary. A very simple sufficient condition is this: that for each $i$ the number of sets $A_j$ with maximum element $i$ is at most $p-1$. If that is the case, then let’s suppose we have chosen $x_1,\dots,x_r$ in such a way that the conclusion we want is true for all sets $A_j$ that are contained in $\{1,2,\dots,r\}$. There are at most $p-1$ sets with maximum element $r+1$, so we can choose $x_{r+1}$ so as to ensure that none of the sums over those sets takes a value we don’t want it to. (Proof: for each set, there is one bad value of $x_{r+1}$, so at most $p-1$ bad values in all.)
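The proof just sketched is constructive, and the construction is short enough to write out. The following is my own illustration (function names and the example family are mine): under the hypothesis that at most $p-1$ sets have any given maximum element, a greedy pass chooses the $x_i$ one at a time.

```python
# Greedy realization of the argument above: if for each i at most p-1 of
# the sets A_j have maximum element i, then x_1, ..., x_n mod p can be
# chosen one at a time so that the sum over A_j never equals t_j.  When
# we reach i, each set with maximum element i rules out one value of x_i.

def greedy_avoid(n, p, sets_with_targets):
    x = [None] * (n + 1)            # 1-indexed; x[0] unused
    for i in range(1, n + 1):
        bad = set()
        for A, t in sets_with_targets:
            if max(A) == i:
                partial = sum(x[a] for a in A if a < i) % p
                bad.add((t - partial) % p)   # the one value to avoid
        choices = [v for v in range(p) if v not in bad]
        assert choices, "condition violated: too many sets with max = i"
        x[i] = choices[0]
    return x[1:]

# Example with p = 3: at most two sets have any given maximum element.
p, n = 3, 6
family = [({1, 2}, 0), ({2}, 1), ({1, 3}, 2), ({2, 3}, 0),
          ({4}, 1), ({1, 4}, 2), ({5, 6}, 0), ({2, 4, 6}, 1)]
x = greedy_avoid(n, p, family)
for A, t in family:
    assert sum(x[a - 1] for a in A) % p != t
```

Note that the values chosen at the "free" positions are allowed to be 0; it is only the sums that are constrained, which matches the situation at the coprime pairs in the question above.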
That condition can be generalized in a trivial way: it is enough for it to hold for some total ordering of the ground set. So now let’s ask whether we can find a permutation that does the job when the ground set is the set of all coprime pairs $(x,y)\in\{1,2,\dots,N\}^2$ and the sets are intersections of this set with sets of the form $I_{m,n}=\{1,2,\dots,m\}\times\{1,2,\dots,n\}$. We would like to find a total ordering $(x_1,y_1),(x_2,y_2),\dots$ such that for each $r$ the number of sets $I_{m,n}$ that contain $(x_r,y_r)$ and no $(x_s,y_s)$ with $s>r$ is at most $p-1$. Is this possible? The answer would be trivially no if the number of coprime pairs were less than $p^{-1}N^2$, but that is not the case, and becomes less and less the case as $p$ gets bigger. However, there are little problematic places, such as rows where one of the coordinates has lots of small prime factors, which leads to long strings of points that are not coprime pairs. I don’t yet see whether that kind of problem makes the answer obviously no, or whether it might be possible to get round it in a clever way. 3. gowers Says: So far that isn’t particularly reminiscent of Hall’s theorem. But what if we ask for conditions under which we can be sure that an appropriate ordering of the ground set exists? If there were a neat characterization, then it might conceivably look a bit like Hall’s condition. One small remark is that if all the sets in the set system have size 2, then we are looking for an ordering of the vertices of a graph such that each vertex is joined to at most $p-1$ earlier vertices. That is a well-known condition: such a graph is said to be $(p-1)$-degenerate. So we are looking at a certain hypergraph generalization of degeneracy. Another small remark is that without the coprimeness condition the problem is easy: just take any ordering of $\{1,2,\dots,N\}^2$ that extends the partial order where $(x,y)\leq(z,w)$ if and only if $x\leq z$ and $y\leq w$.
Then the only $I_{m,n}$ that has $(x,y)$ as its last element in this ordering is $I_{x,y}$. So that system is 1-degenerate. Suppose we totally order just the coprime pairs in a way that extends the coordinate-domination order. That alone doesn’t work. Suppose, for example, that the coprime pair $(m!,m!+1)$ comes after every other coprime pair $(u,v)$ with $u\leq m!$ and $v\leq m!+m$, which the coordinate-domination order does not forbid (note that the pairs $(m!,m!+j)$ with $2\leq j\leq m$ are not coprime, since $\gcd(m!,m!+j)=\gcd(m!,j)=j$). Then the point $(m!,m!+1)$ is the maximal coprime pair belonging to all the intervals $I_{m!,m!+j}$ with $2\leq j\leq m$. So if $m>p$, then the condition is violated. More generally, suppose we have an $m\times m$ square of points $(x,y)$ such that none of them is a coprime pair. Is there any hope of finding an ordering that works? The answer is not obviously no. Suppose, for instance, that we take the complement of the set of all points in the square $\{(x,y):u<x\leq u+m,\,v<y\leq v+m\}$ and try to order this complement in such a way that the degeneracy condition holds. Hmm … I’ve changed my mind. I think the answer is obviously no. Suppose that the maximal pair in our ordering that belongs to the set $I_{u+m,v+m}$ is the pair $(a,b)$. Then it is the maximal pair in any set $I_{x,y}$ such that $a\leq x\leq u+m$ and $b\leq y\leq v+m$. Since $(a,b)$ does not belong to the square above, there are at least $m$ such sets $I_{x,y}$. • gowers Says: OK, the failure above is good news, because it shows that there isn’t a trivial just-do-it counterexample to the strong modular matrix EDP. Before I discuss what that teaches us, let me briefly remark that the simple argument just given points to an obvious necessary condition for an ordering to exist: it must be the case that every set $A\in\mathcal{A}$ contains at least one point that is in at most $p-1$ of the other sets in $\mathcal{A}$ that are subsets of $A$.
If that fails, then trivially there is no ordering of the points such that the maximal element of $A$ is the maximal element of at most $p-1$ other sets in $\mathcal{A}$. But that’s not particularly relevant, because an ordering of that kind does not exist. So if we are determined to find a counterexample, then either we have to worry about big rectangles of points that do not contain any coprime pairs, or we have to ensure somehow that the values we have already chosen in those rectangles make choosing the coprime pairs easier than it would normally be, or we have to give up on the idea of dealing with HAPs with common difference $d$ before we deal with HAPs with common difference a factor of $d$. None of these options seems very palatable. In short, finding a counterexample looks hard. 4. Alec Edgington Says: A trivial observation about the ‘stronger’ modular conjecture for $r=0$: to find an exponentially long sequence modulo $p$ that avoids zero sums along HAPs it would be natural to look for a multiplicative function whose partial sums avoided zero. • Alec Edgington Says: If I’ve understood the comments that Gil linked to from earlier posts, it seems that, at least for $p=5$, avoiding zero is easier than avoiding a non-zero value. (One can have sequences at least as long as 155 in the former case, but no longer than 83 in the latter.) • gowers Says: One reason I thought that avoiding zero might be harder than avoiding a non-zero value is that finding a multiplicative sequence of that kind seems quite hard. The techniques we have used up to now tend to rely heavily on the fact that a sum of small numbers is small. But if you want to avoid zero, then you can’t use that. You can use the fact that a big number plus a small number is big, but then you have to have a bigger supply of big numbers than it seems possible to get. I’m therefore quite surprised by what you say about $p=5$, but perhaps it is the law of small numbers striking. 
• Jason Conti Says: I was going to add that it may not necessarily be easier to avoid 0 than 1, since with zero we have more symbols to work with: {1, 2, 3, 4} for 0 mod 5 but only {2, 3, 4} for 1 mod 5. But I tried avoiding 0 with only {2, 3, 4} and I actually get an example of length 149 (which I think is maximal): 2, 2, 2, 2, 3, 2, 3; 2, 3, 3, 2, 2, 3, 3; 2, 2, 3, 3, 2, 3, 3 2, 2, 2, 3, 3, 2, 3; 2, 2, 3, 2, 3, 3, 2; 3, 3, 2, 2, 3, 2, 3 3, 2, 3, 2, 3, 2, 4; 3, 3, 3, 2, 2, 2, 3; 3, 2, 3, 2, 3, 3, 2 2, 3, 3, 2, 3, 2, 2; 3, 3, 2, 3, 2, 2, 3; 2, 2, 3, 3, 4, 3, 3 2, 3, 2, 2, 3, 3, 2; 2, 3, 3, 2, 2, 4, 2; 2, 3, 2, 3, 2, 3, 3 2, 2, 2, 4, 2, 2, 3; 4, 3, 3, 2, 3, 3, 2; 2, 3, 3, 2, 3, 2, 2 3, 2, 3, 3, 2, 3, 4; 2, 3, 3, 2, 2, 4, 2; 2, 3, 2, 3, 3, 2, 2 3, 2 5. gowers Says: It occurs to me that I haven’t completely killed off the simple approach I was trying to use to obtain a counterexample to the strong modular matrix conjecture in the case of HAPs with the same common difference. I’ve shown that the simple approach fails if you insist on choosing the value at $(a,b)$ before you choose the value at $(c,d)$ if the highest common factor of $c$ and $d$ strictly divides the highest common factor of $a$ and $b$. But what if you don’t insist on that? The question then becomes the following. Let $\mathcal{A}$ be the set of all products $P\times Q$, where $P$ and $Q$ are HAPs and subsets of $\{1,2,\dots,N\}$. Does there exist an ordering of $\{1,2,\dots,N\}^2$ minus the diagonal such that with respect to that ordering each point $(x,y)$ is the maximal element of at most $p-1$ sets in $\mathcal{A}$? Let me try to rephrase that question as a question about a certain bipartite graph, because I think it will have a dual formulation. The bipartite graph has two sets of vertices: points in $\{1,2,\dots,N\}^2$ and sets in $\mathcal{A}$, with an edge joining a point to a set whenever the point is an element of the set. 
In the abstract, we have a bipartite graph with vertex sets $X$ and $Y$, and we would like to find an ordering of $X$ such that each $x\in X$ is the maximal neighbour of at most $s=p-1$ vertices in $Y$. Here is a sufficient condition for such an ordering to exist. Suppose we have an ordering of the vertices not of $X$ but of $Y$. We could order the vertices of $X$ by saying that $x_1<x_2$ if the minimal $y\in Y$ that is joined to $x_1$ is smaller than the minimal $y\in Y$ that is joined to $x_2$. In other words, we go through the vertices of $Y$ in order and write down any neighbours that we have not yet written down. (In cases where several new neighbours appear at once, we choose the order arbitrarily.) Under this ordering, how many elements of $Y$ have maximum neighbour $x$? Well, let’s write the elements of $Y$ in order as $y_1,\dots,y_n$. Then $x$ will first appear as a new neighbour of some $y_i$. It will need to be the largest new neighbour. And then we will need the neighbours of $y_{i+1},\dots,y_{j-1}$ all to be vertices of $X$ that have already appeared as neighbours of earlier points in $Y$. And finally $y_j$ will have a neighbour that has not yet appeared. Under those and only those circumstances, $x$ will be the maximal neighbour of $j-i$ vertices of $Y$. So if we want to avoid that, then we need to find an ordering of the vertices of $Y$ such that as we write down their neighbours, there is never a string of more than $s$ vertices in $Y$ without a new neighbour appearing. Going back to our specific bipartite graph, that tells us that we would like to find an ordering of the HAP products $P\times Q$ such that if we look at the unions of the first $k$ products, there is never a string of more than $s$ of these unions without a new element appearing. Can that be done? It is crucial that the common differences should be the same, or there will be too many HAP products and the answer is trivially no.
But if the common differences have to be the same, then the number of HAP products is roughly $N^2+(N/2)^2+(N/3)^2+\dots$ and is therefore at most $CN^2$ for some absolute constant $C$. So the existence of such an ordering is not ruled out on trivial grounds. So far I have established that if such an ordering exists, then it will sometimes be necessary for a HAP with common difference $a$ to come earlier than a HAP with common difference $b$ even when $b$ is a non-trivial multiple of $a$. That’s worrying — in general, it seems much better to put smaller sets earlier in the ordering — but it doesn’t by itself rule out the existence of the ordering, especially as the problematic places (that is, large rectangles full of points that all have coordinates with non-trivial common factors) appear to be quite rare, and the rectangles are small compared with how far out they are from the origin. Here is the kind of thing that we might try to do. Suppose $R$ is one of these troublesome rectangles. What worries us is that when we come to HAP products with small common differences (I’m thinking about common differences of 1 as I write this) and maximal points (that is, top right-hand corners) in $R$, we may find that we have already chosen lots of other HAP products with maximal points in $R$, so new points will not easily appear. To counter this, we could simply identify the HAP products with top right-hand corners in factor-rich rectangles and label them “troublesome”. 
Then we could do a preliminary ordering of the non-troublesome HAP products and insert the troublesome ones in at various points, in such a way that each troublesome product appears only after all its points have already appeared (so it won’t mess up anything that has gone on earlier, but also won’t have a new element) and also in such a way that the troublesome products are reasonably spread out, so that the longest interval of sets that don’t contribute a new point is not much longer after the insertions than it was before. 6. gowers Says: I’ve just noticed that my argument that the ordering of coprime pairs doesn’t exist was wrong. So I’m going to go back to that question, but think about it in its dual formulation. That is, I’d like to find an ordering of the rectangles $I_{m,n}=\{1,2,\dots,m\}\times\{1,2,\dots,n\}$ such that when you take partial unions you add at least one new point that’s a coprime pair for every $s$ new rectangles that you include in the union. Let’s suppose that we’ve chosen an initial segment of the ordering, and let’s suppose that we’ve done it in such a way that if $a\leq c$ and $b\leq d$ then $I_{a,b}$ is not chosen after $I_{c,d}$. Then the union will be a down-set (that is, a set of points such that if $(c,d)$ belongs to the set and $a\leq c$ and $b\leq d$, then $(a,b)$ belongs to the set) and the next rectangle we pick must be a minimal element of its complement. One way we could choose our ordering would result from time to time in the union itself being a rectangle. If we did that, then we would have very little choice about which points to pick next, since the complement would have only two minimal elements. But if we’re a bit smarter, we can aim for something more like a triangle — say, the set of all $(x,y)$ such that $x+y\leq r$ — in which case, the complement will have lots of minimal elements. Here’s another example of a tricky situation that could arise. Suppose that we decide to fill up diagonal by diagonal. 
Things will go OK for a while, but at some point we will try to fill up a diagonal such as $x+y=m!$ for some large $m$. If $x+y=m!$ then $x$ and $y$ are coprime if and only if $x$ has no prime factor less than or equal to $m$. But the proportion of $x$ that satisfy this condition tends to 0 as $m$ tends to infinity, so there will be some diagonals that contain very few coprime pairs. And we can also have several consecutive diagonals with very few coprime pairs: what it takes is several consecutive numbers each of which has many small prime factors. This happens, by the Chinese remainder theorem and the fact that the sum of the reciprocals of the primes diverges. We just have to pick several disjoint sets of primes all of which have large sums of reciprocals and then pick an integer $m$ such that $m$ is a multiple of all the primes in the first set, $m+1$ is a multiple of all the primes in the second set, and so on. But $m$ has to be very large for us to be able to do this, and there is absolutely no obligation to fill up diagonal by diagonal. If you think of the boundary of the down-set you have filled up so far as a kind of curve that is gradually moving away from the origin, then you can make sure that when a dangerous set of diagonals comes along, the curve doesn’t cross them in too parallel a way. Of course, there are plenty of other constraints of a similar kind, and the difficulty is trying to work out how to satisfy all of them at the same time. It’s a tough one because we can’t hope to use some nice description of the set of coprime pairs: I don’t see any alternative to identifying a property that this set has that is sufficient for the expanding curve to be able to pick up points in the set on a regular basis. This comment is getting a bit long, so I’ll move to a new one. • gowers Says: I’m not sure what got into my head there — the argument I said was wrong was right.
So quite a bit of what I’ve written in response to its supposed wrongness is not all that relevant to anything. 7. gowers Says: Again it feels helpful to think about an abstract version of the problem. Because of the constraint I’m putting on the ordering, $(m,n)$ is always the unique new element of the union contributed by the set $I_{m,n}$. So what I’m looking for is a total ordering of $\{1,2,\dots,N\}^2$ that extends the obvious partial ordering and has the property that any consecutive $s$ points in the total ordering (where $s$ is independent of $N$) include a coprime pair. In the abstract, I can ask for conditions that allow me to extend a partial ordering to a total ordering in such a way that any $s$ consecutive elements in the total ordering contain an element from some prescribed set. Is there a nice sufficient condition for this? 8. gowers Says: To answer that last question, it seems a good idea to try to find some trivial necessary conditions for such a total ordering to exist. If the partial order doesn’t make any comparisons at all, then a necessary and sufficient condition is obviously that the prescribed set has size at least $N/s$ (give or take integer parts). If the partial order is non-trivial, then we’ll need some kind of relationship between the set and the partial order. But what? Since I’m looking for a necessary condition, I should try to think of a fairly general situation where it is not possible to find a total ordering with the desired property. One situation would be if you can find points $x<y$ such that at least $m$ points are obliged to lie between $x$ and $y$, and fewer than $m/s$ points that are allowed to lie between $x$ and $y$ belong to the prescribed set. This is a rather strong-seeming condition. A point $z$ is obliged to lie between $x$ and $y$ if $x<z<y$, whereas a point $w$ is allowed to lie between $x$ and $y$ if neither $w\leq x$ nor $w\geq y$ holds. Typically there will be many more points of the second kind than the first.
Thus, this observation gives rise to only a very weak necessary condition for the ordering to exist, which makes it rather unlikely to be sufficient — though of course there are some beautiful combinatorial theorems where apparently weak necessary conditions do turn out to be sufficient. 9. gowers Says: A trivial sufficient condition is that there is a partition of the poset $X$ into antichains $X_1,\dots,X_k$ such that if $i<j$ then no point in $X_i$ is greater than any point in $X_j$ and such that at least $|X_i|/s$ points in $X_i$ belong to the prescribed set. That is perhaps the natural generalization of the comment about the trivial poset. • gowers Says: I also wanted to try to find an algorithm that would fail only under certain circumstances that one could hope didn’t happen. Let $A$ be the prescribed set. The rough idea would be as follows. We pick the total ordering $x_1<x_2<\dots$ element by element. For each $k$ let $Y_k$ be the complement of $\{x_1,\dots,x_k\}$. Then we try to pick the points $x_1,x_2,\dots$ in such a way that $|Y_k\cap A|/|Y_k|$ stays as large as possible. It’s not obvious that that is the right quantity to preserve. The reason I go for it is that I think of the process as “saving points of $A$ for when the going gets tough”. If you find yourself approaching a region where lots of points don’t belong to $A$, you want to have a number of points of $A$ waiting to be chosen, so that you can intersperse them amongst the points not in $A$ as you continue to choose your sequence. If you have chosen a point of $A$ recently (by which I mean more recently than $s$ steps ago) and have a choice between picking a point in $A$ and picking a point not in $A$, could there ever be a disadvantage in picking the point not in $A$? I don’t see how there could, but I haven’t quite seen a formal argument for that.
If there is no disadvantage, then we can assume that the algorithm has a kind of reverse-greedy property: you pick points of $A$ only if either that’s all you’ve got in front of you or if not doing so leads to a sequence of $s$ consecutive points not in $A$. 10. gowers Says: I think I may have made a mistake earlier. Suppose I have a total ordering of the HAP products in such a way that a new element appears at least every $s$ steps. If I order the elements according to when they first appear in one of the HAP products, is it really true that each element can be a maximum of at most $s$ of the HAP products? 11. gowers Says: Let me go back to the abstract question about bipartite graphs. Suppose you have a bipartite graph with vertex sets $X$ and $Y$ and a total ordering on $X$ such that for every $x\in X$ there are at most $s$ vertices $y\in Y$ such that $x$ is the maximal neighbour of $y$. I’d like to characterize this condition in a more $Y$-focused way. That is, I’d like conditions on the neighbourhoods of the vertices in $Y$ that are necessary and sufficient for such an ordering to exist. A sufficient condition is that $Y$ can be partitioned into sets $Y_1,\dots,Y_m$ of size at most $s$ such that no neighbourhood of any vertex in $Y_i$ is contained in the union of the neighbourhoods of the vertices in $Y_1\cup\dots\cup Y_{i-1}$. If such a partition exists, then let $X_i$ be the set of vertices that are neighbours of a vertex in $Y_i$ but not of any vertex in $Y_j$ for any $j<i$. If we order the vertices in $X$ in such a way that if $j<i$ then every vertex in $X_j$ precedes every vertex in $X_i$, then a vertex in $X_i$ can only be a maximal neighbour of $y$ if $y\in Y_i$. (Proof: it isn’t a neighbour at all if $y\in Y_j$ for some $j<i$, and if $j>i$ then by assumption $y$ has a neighbour that is not joined to any of $Y_1,\dots,Y_i$, so it is greater than all vertices in $X_i$ in the ordering.) Since the $Y_i$ have size at most $s$, we are done.
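The sufficiency proof just given is constructive: order $X$ block by block. Here is a small sketch of it (the graph and all names are my own example, with blocks of size $s=1$ satisfying the condition because each $y$ brings a fresh neighbour).

```python
# Sketch of the sufficient condition above: given a partition of Y into
# blocks of size <= s such that no neighbourhood of a vertex in Y_i lies
# inside the union of the neighbourhoods of earlier blocks, ordering X
# block by block makes each x the maximal neighbour of at most s vertices.

from collections import Counter

def order_X(nbrs, blocks):
    """nbrs[y] = set of X-vertices joined to y; blocks = partition of Y."""
    order, seen = [], set()
    for block in blocks:
        new = sorted(set().union(*(nbrs[y] for y in block)) - seen)
        order.extend(new)           # X_i: neighbours of Y_i not seen before
        seen.update(new)
    return order

# Small example: vertex y_i brings the new X-vertex i, so blocks of size 1
# (s = 1) satisfy the condition.
nbrs = {0: {0}, 1: {0, 1}, 2: {1, 2}, 3: {0, 2, 3}}
blocks = [[0], [1], [2], [3]]
order = order_X(nbrs, blocks)
pos = {x: i for i, x in enumerate(order)}

max_nbr_counts = Counter(max(nbrs[y], key=pos.get) for y in nbrs)
s = max(len(b) for b in blocks)
assert all(c <= s for c in max_nbr_counts.values())
```

For the HAP-product graph the hard part is, of course, exhibiting such a partition at all; the code only checks the bookkeeping of the proof on a toy instance.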
This condition is also trivially necessary, since if an ordering $x_1,\dots,x_n$ satisfies the original condition, then we can define $Y_i$ to be the set of all $y\in Y$ with maximal element $x_i$ and the sets $Y_i$ satisfy this condition. In the case of HAP products, we want to partition the set $\Sigma$ of all such products into sets $\Sigma_1,\dots,\Sigma_m$ of size at most $s$ in such a way that no $P\times Q$ in $\Sigma_i$ is contained in the union of all the sets in $\Sigma_1\cup\dots\cup\Sigma_{i-1}$. 12. gowers Says: I feel like returning to EDP itself, and the diagonal decomposition approach where one is allowed to use products of HAPs that may have different common differences (since that would prove EDP and it's not clear that the result is even true if you insist on the same common difference). In general, it seems that the reason it is difficult to come up with a counterexample to EDP is that eventually you start facing too many constraints all at once. That happens because some numbers are contained in lots of HAPs. It occurs to me that the reason we have trouble proving anything is that we are trying to find decompositions in a place where they don't exist, and what we should be doing is looking at some long interval in which every number has lots of small factors. Exactly what that means is not quite clear, and one might perhaps loosen it to almost all numbers having lots of small factors, but let me try to give a clearer idea. An observation we made a long time ago was that instead of looking at HAPs we could instead look at shifted HAPs. The way it would work is that if $n$ is a prime power $p^k$ then we choose an integer $a_n$ and let $A_n$ be the residue class of all integers congruent to $a_n$ mod $n$, making sure that $A_p\supset A_{p^2}\supset\cdots$. Then for general $n=p_1^{r_1}\dots p_k^{r_k}$ we let $A_n$ be the intersection of the sets $A_{p_i^{r_i}}$.
The reason we can do this is that if we manage to prove a large discrepancy for these shifted HAPs, then we can use the Chinese remainder theorem to find an integer $N$ such that $N+m$ is divisible by $n$ if and only if $m\in A_n$. In other words, “near $N$ the HAPs look like shifted HAPs”. It occurs to me now that if we do this then we may find it easier to find a decomposition, because we won’t be near the incredibly unusual number 0, which is a multiple of everything. If we shift all HAPs randomly (subject to the constraints), then a typical number will belong to $A_n$ for lots of small $n$. The sort of reason this might be useful is that if we take lots of products $A_n\times A_m$, then we can hope to cover the pairs $(x,y)$ much more evenly than if we use HAPs and have to say things like that if $x$ is a prime then it is contained in only two HAPs. • Alec Edgington Says: Thinking in terms of these shifted HAPs seems to be easier if we restrict ourselves to HAPs with square-free common difference, since then we just need to choose an arbitrary residue class modulo $p$ for each $p$. I don’t remember if we ever thought much about this strengthening of EDP — it isn’t quite the same as Gil’s ‘square-free EDP’, where the sequence is set to zero at non-square-free numbers — or if we rejected it as false? • gowers Says: The sequence 1,1,-1,-1,1,1,-1,-1,… is a counterexample for square-free EDP, so unfortunately we can’t make that simplification. • Alec Edgington Says: Ah, good point. • gowers Says: I should add that it’s a point that I too missed recently (or had perhaps forgotten about) and that Gil put me right on. For the complex version of the EDP you can’t prove anything unless every $m$ divides the common difference of one of your HAPs, since if $m$ doesn’t then you can rotate through the $m$th roots of unity. 
In the real case, you can’t use a periodic example if for every $k$ you have HAPs with common difference $2^k$, so it’s less clear what happens, but since the promising approaches (apart from the modular conjecture) would yield the complex case as well, it looks difficult to prove anything without a pretty rich set of HAPs. 13. plm Says: Actually Tim, I find your example quite deep. It is not at all obvious that what makes competition or school problems difficult is sometimes the same as what makes research problems difficult: misleading hints or context. Because in the case of problems there is the psychological aspect of “game against the problem poser” (e.g. is he trying to trap students?).
https://gamedev.stackexchange.com/questions/120222/unexpected-execution-order-in-unity-unet
# Unexpected Execution Order in Unity (uNet)

I made a little game in Unity. It works fine in the Editor, but if I run the build version I get a NullReferenceException. The problem only appears when the build-version client hosts the game.

• No problems if the build version plays as client only or server only.
• No problems when played in the Editor.

I have a class (CharacterSpawner) that is meant to run on the server only.

public static CharacterSpawner instance;
...
public void Awake() {
    instance = this;
    print("Character Spawner Awake");
}

[Server]
public void SpawnCharacter(NetworkConnection owner) {
    GameObject character = Instantiate(characterPrefab, randomSpawnPoint, Quaternion.identity) as GameObject;
    NetworkServer.SpawnWithClientAuthority(character, owner);
}

I have another class (Player) that should send a Command to the server in order to spawn a character right at the beginning.

void Start() {
    if (isLocalPlayer) {
        CmdSpawnCharacter();
    }
}

[Command]
public void CmdSpawnCharacter() {
    CharacterSpawner.instance.SpawnCharacter(connectionToClient); // throws NullReferenceException
}

Here, instance is the only thing that could cause the NullReferenceException, because connectionToClient would not cause one even if it were null. This means the instance is not set yet. The only explanation I can think of is that the Start method in the Player script is called before the Awake method in the CharacterSpawner script, because if I call CmdSpawnCharacter like this:

Invoke("CmdSpawnCharacter", 0.5f);

it works perfectly, but that is not the way I want to do this. I even changed the priorities of the scripts, but without success.

QUESTION: Why does Start() get called before Awake()? This does not line up with the docs: http://docs.unity3d.com/Manual/ExecutionOrder.
http://www.whxb.pku.edu.cn/CN/10.3866/PKU.WHXB20040812
### Synthesis and Gas Sensitive Properties of Mixed-Ligand Complex Ni(mnt)(bipyO2) with Big Conjugate System

Fu Tie-Xiang; Tao Jun; Li Dan

1. Department of Chemistry, Changsha University of Science & Technology, Changsha 410077
• Received: 2004-02-16  Revised: 2004-04-05  Published: 2004-08-15
• Corresponding author: Fu Tie-Xiang

Abstract: A mixed-ligand complex Ni(mnt)(bipyO2) with a large conjugate system was synthesized by the reaction of bipyO2 (2,2′-bipyridine-1,1′-dioxide), mnt2- (maleonitriledithiolate) and NiCl2. Elemental analysis, IR, UV, molar conductivity and TG-DTA determinations showed the complex to be electroneutral with a square-planar structure. It is a thermostable brown solid at temperatures below 335 °C. The ultraviolet absorption spectra of the complex in DMSO solution exhibited intensive absorption bands at 255~290 nm (π-π* in bipyO2 and mnt), 310~340 nm (n-π*, metal cation to ligands), 370~420 nm (π-π* in mnt) and 440~520 nm; the latter was assigned to a π-π* transition of the large conjugate system interlocking the metal ion and the ligands. The study of sensitive properties showed the complex to be a good ammonia-sensitive material. A sensor manufactured using the complex has good sensitivity and selectivity to ammonia. When the working voltage is 10 V and the NH3 concentration is less than 2.6 mmol·L-1, the output voltage of the sensor varies linearly in response to the change of NH3 concentration (Fig.3). The average recovery rate of the sensor is 100.2% (Table 4). The response time is about 20 s, and the return time is about 45 s. The sensor could be used in quantitative analysis of trace ammonia.
http://fa.bianp.net/tag/scipy.html
fa.bianp.net

In this post I compare several implementations of Logistic Regression. The task was to implement a Logistic Regression model using standard optimization tools from scipy.optimize and compare them against state-of-the-art implementations such as LIBLINEAR. In this blog post I'll write down all the implementation details of this model, in the hope that not only the conclusions but also the process will be useful for future comparisons and benchmarks.

Function evaluation

We consider the case in which the decision function is an affine function, i.e., $f(x) = \langle x, w \rangle + c$ where $w$ and $c$ are parameters to estimate. The loss function for the $\ell_2$-regularized logistic regression, i.e. the function to be minimized, is $$\mathcal{L}(w, \lambda, X, y) = - \frac{1}{n}\sum_{i=1}^n \log(\phi(y_i (\langle X_i, w \rangle + c))) + \frac{\lambda}{2} w^T w$$ where $\phi(t) = 1 / (1 + \exp(-t))$ is the logistic function, $\frac{\lambda}{2} w^T w$ is the regularization term and $X, y$ is the input data, with $X \in \mathbb{R}^{n \times p}$ and $y \in \{-1, 1\}^n$. However, this formulation is not great from a practical standpoint: even for not unlikely values of $t$ such as $t = -100$, $\exp(100)$ will overflow, assigning the loss an (erroneous) value of $+\infty$.
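To see the failure concretely, here is a quick NumPy sketch of mine (not from the post) comparing the naive evaluation with the stable case analysis that the post derives next:

```python
import numpy as np

def log_phi(t):
    # stable evaluation of log(phi(t)) = -log(1 + exp(-t)),
    # splitting on the sign of t so exp never receives a large argument
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    pos = t > 0
    out[pos] = -np.log1p(np.exp(-t[pos]))
    out[~pos] = t[~pos] - np.log1p(np.exp(t[~pos]))
    return out

t = np.array([-1000.0])
with np.errstate(over='ignore', divide='ignore'):
    naive = np.log(1.0 / (1.0 + np.exp(-t)))  # exp(1000) overflows -> log(0) = -inf
print(naive)       # [-inf]
print(log_phi(t))  # [-1000.]
```

Both branches agree with the naive formula wherever the latter does not overflow; only the extreme arguments differ.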
For this reason 1, we evaluate $\log(\phi(t))$ as $$\log(\phi(t)) = \begin{cases} - \log(1 + \exp(-t)) & \text{ if } t > 0 \\ t - \log(1 + \exp(t)) & \text{ if } t \leq 0 \end{cases}$$ The gradient of the loss function is given by \begin{aligned} \nabla_w \mathcal{L} &= \frac{1}{n}\sum_{i=1}^n y_i X_i (\phi(y_i (\langle X_i, w \rangle + c)) - 1) + \lambda w \\ \nabla_c \mathcal{L} &= \frac{1}{n}\sum_{i=1}^n y_i (\phi(y_i (\langle X_i, w \rangle + c)) - 1) \end{aligned} Similarly, the logistic function $\phi$ used here can be computed in a more stable way using the formula $$\phi(t) = \begin{cases} 1 / (1 + \exp(-t)) & \text{ if } t > 0 \\ \exp(t) / (1 + \exp(t)) & \text{ if } t \leq 0 \end{cases}$$ Finally, we will also need the Hessian for some second-order methods, which is given by $$\nabla_w ^2 \mathcal{L} = \frac{1}{n} X^T D X + \lambda I$$ where $I$ is the identity matrix and $D$ is a diagonal matrix given by $D_{ii} = \phi(y_i w^T X_i)(1 - \phi(y_i w^T X_i))$. In Python, these functions can be written as

import numpy as np

def phi(t):
    # logistic function, returns 1 / (1 + exp(-t))
    idx = t > 0
    out = np.empty(t.size, dtype=float)
    out[idx] = 1. / (1 + np.exp(-t[idx]))
    exp_t = np.exp(t[~idx])
    out[~idx] = exp_t / (1. + exp_t)
    return out

def loss(x0, X, y, alpha):
    # logistic loss function, returns Sum{-log(phi(t))}
    w, c = x0[:X.shape[1]], x0[-1]
    z = X.dot(w) + c
    yz = y * z
    idx = yz > 0
    out = np.zeros_like(yz)
    out[idx] = np.log(1 + np.exp(-yz[idx]))
    out[~idx] = (-yz[~idx] + np.log(1 + np.exp(yz[~idx])))
    out = out.sum() / X.shape[0] + .5 * alpha * w.dot(w)
    return out

def gradient(x0, X, y, alpha):
    # gradient of the logistic loss with respect to (w, c)
    w, c = x0[:X.shape[1]], x0[-1]
    z = X.dot(w) + c
    z = phi(y * z)
    z0 = (z - 1) * y
    grad_w = X.T.dot(z0) / X.shape[0] + alpha * w
    grad_c = z0.sum() / X.shape[0]
    return np.concatenate((grad_w, [grad_c]))

Benchmarks

I tried several methods to estimate this $\ell_2$-regularized logistic regression. There is one first-order method (that is, one that only makes use of the gradient and not of the Hessian), Conjugate Gradient, whereas all the others are Quasi-Newton methods.
The methods I tested are:

• CG = Conjugate Gradient as implemented in scipy.optimize.fmin_cg
• TNC = Truncated Newton as implemented in scipy.optimize.fmin_tnc
• BFGS = Broyden–Fletcher–Goldfarb–Shanno method, as implemented in scipy.optimize.fmin_bfgs.
• L-BFGS = Limited-memory BFGS as implemented in scipy.optimize.fmin_l_bfgs_b. Contrary to the BFGS algorithm, which is written in Python, this one wraps a C implementation.
• Trust Region = Trust Region Newton method 3. This is the solver used by LIBLINEAR that I've wrapped to accept any Python function in the package pytron

To ensure the most accurate results across implementations, all timings were collected by callback functions that were called from the algorithm on each iteration. Finally, I plot the maximum absolute value of the gradient (= the infinity norm of the gradient) with respect to time. The synthetic data used in the benchmarks was generated as described in 2 and consists primarily of the design matrix $X$ being Gaussian noise; the vector of coefficients is drawn also from a Gaussian distribution and the explained variable $y$ is generated as $y = \text{sign}(X w)$. We then perturb the matrix $X$ by adding Gaussian noise with covariance 0.8. The number of samples and features was fixed to $10^4$ and $10^3$ respectively. The penalization parameter $\lambda$ was fixed to 1. In this setting variables are typically uncorrelated and most solvers perform decently. Here, the Trust Region and L-BFGS solvers perform almost equally well, with Conjugate Gradient and Truncated Newton falling slightly behind. I was surprised by the difference between BFGS and L-BFGS; I would have thought that when memory is not an issue both algorithms should perform similarly. To make things more interesting, we now make the design slightly more correlated. We do so by adding a constant term of 1 to the matrix $X$ and appending a column vector of ones to this matrix to account for the intercept.
These are the results: Here, we already see that second-order methods dominate over first-order methods (well, except for BFGS), with Trust Region clearly dominating the picture but with TNC not far behind. Finally, if we force the matrix to be even more correlated (we add 10. to the design matrix $X$), then we have: Here, the Trust-Region method has the same timing as before, but all other methods have become substantially worse. The Trust Region method, unlike the others, is surprisingly robust to correlated designs. To sum up, the Trust Region method performs extremely well for optimizing the Logistic Regression model under different conditionings of the design matrix. The LIBLINEAR software uses this solver and thus has similar performance, with the sole exception that the evaluation of the logistic function and its derivatives is done in C++ instead of Python. In practice, however, due to the small number of iterations of this solver I haven't seen any significant difference.

1. A similar development can be found in the source code of LIBLINEAR, and is probably also used elsewhere.
2. "A comparison of numerical optimizers for logistic regression", P. Minka, URL
3. "Newton's Method for Large Bound-Constrained Optimization Problems", Chih-Jen Lin, Jorge J. Moré, URL

SciPy contains two methods to compute the singular value decomposition (SVD) of a matrix: scipy.linalg.svd and scipy.sparse.linalg.svds. In this post I'll compare both methods for the task of computing the full SVD of a large dense matrix. The first method, scipy.linalg.svd, is perhaps the best known and uses the linear algebra library LAPACK to handle the computations. This implements the Golub-Kahan-Reinsch algorithm 1, which is accurate and highly efficient with a cost of O(n^3) floating-point operations 2. The second method is scipy.sparse.linalg.svds, and despite its name it works fine also for dense arrays.
This implementation is based on the ARPACK library and consists of an iterative procedure that finds the SVD decomposition by reducing the problem to an eigendecomposition of dot(X.T, X). This method is usually very effective when the input matrix X is sparse or only the largest singular values are required. There are other SVD solvers that I did not consider, such as sparsesvd or pysparse.jdsym, but my points for the sparse solver probably hold for those packages too, since they both implement iterative algorithms based on the same principles. When the input matrix is dense and all the singular values are required, the first method is usually more efficient. To support this statement I've created a little benchmark: timings for both methods as a function of the size of the matrices. Notice that we are in a case that is clearly favorable to linalg.svd: after all, sparse.linalg.svds was not created with this setting in mind, it was created for sparse matrices or dense matrices with some special structure. We will see however that even in this setting it has interesting advantages. I'll create random square matrices with different sizes and plot the timings for both methods. For the benchmarks I used SciPy v0.12 linked against Intel Math Kernel Library v11. Both methods are single-threaded (I had to set OMP_NUM_THREADS=1 so that MKL does not try to parallelize the computations). [code] Lower timings are better, so this gives scipy.linalg.svd as the clear winner. However, this is just part of the story. What this graph doesn't show is that this method is winning at the price of allocating a huge amount of memory for temporary computations. If we now plot the memory consumption for both methods under the same settings, the story is completely different. [code] The memory requirements of scipy.linalg.svd scale with the number of dimensions, while for the sparse version the amount of allocated memory is constant.
Notice that we are measuring the amount of total memory used; it is thus natural to see a slight increase in memory consumption, since the input matrix is bigger on each iteration. For example, in my applications, I need to compute the SVD of a matrix for which the needed workspace does not fit in memory. In cases like this, the sparse algorithm (sparse.linalg.svds) can come in handy: the timing is just a factor worse (but I can easily parallelize jobs) and the memory requirements for this method are peanuts compared to the dense version.

1. Calculating the singular values and pseudo-inverse of a matrix, Golub, Gene H., Kahan, William, 1965, JSTOR
2. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes, Plassman, Gerald E. 2005 PDF

In scipy's development version there's a new function closely related to the QR-decomposition of a matrix and to the least-squares solution of a linear system. What this function does is to compute the QR-decomposition of a matrix and then multiply the resulting orthogonal factor by another arbitrary matrix. In pseudocode:

def qr_multiply(X, Y):
    Q, R = qr(X)
    return dot(Q.T, Y)

but unlike this naive implementation, qr_multiply is able to do all this without explicitly computing the orthogonal Q matrix, resulting both in memory and time savings. In the following picture I measured the memory consumption as a function of time of running this computation on a 1000 x 1000 matrix X and a vector Y (full code can be found here): It can be seen that not only is qr_multiply almost twice as fast as the naive approach, but also that the memory consumption is significantly reduced, since the orthogonal factor is never explicitly computed. Credit for implementing the qr_multiply function goes to Martin Teichmann.
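The product Q^T Y is exactly what appears in the QR route to least squares. Here is a minimal NumPy sketch of mine of the naive version, forming Q explicitly (which is precisely the cost qr_multiply avoids); the data is synthetic and noiseless, so the coefficients are recovered exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
beta_true = rng.standard_normal(5)
y = X @ beta_true                      # noiseless system, so recovery is exact

# naive route: form Q explicitly, then solve the triangular system R beta = Q^T y
Q, R = np.linalg.qr(X)                 # thin QR: Q is 100x5, R is 5x5
beta = np.linalg.solve(R, Q.T @ y)

assert np.allclose(beta, beta_true)
```

With qr_multiply, the intermediate `Q.T @ y` would be obtained directly from the factorization routine instead of materializing Q.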
Ridge coefficients for multiple values of the regularization parameter can be elegantly computed by updating the thin SVD decomposition of the design matrix:

import numpy as np
from scipy import linalg

def ridge(A, b, alphas):
    """
    Return coefficients for regularized least squares

        min ||A x - b||^2 + alpha ||x||^2

    Parameters
    ----------
    A : array, shape (n, p)
    b : array, shape (n,)
    alphas : array, shape (k,)

    Returns
    -------
    coef : array, shape (p, k)
    """
    U, s, Vt = linalg.svd(A, full_matrices=False)
    d = s / (s[np.newaxis, :] ** 2 + alphas[:, np.newaxis])
    return np.dot(d * U.T.dot(b), Vt).T

This can be used to efficiently compute its regularization path, that is, to plot the coefficients as a function of the regularization parameter. Since the bottleneck of the algorithm is the singular value decomposition, computing the coefficients for other values of the regularization parameter basically comes for free. A variant of this algorithm can then be used to compute the optimal regularization parameter in the sense of leave-one-out cross-validation, and is implemented in scikit-learn's RidgeCV (for which Mathieu Blondel has an excellent post). This optimal parameter is denoted with a vertical dotted line in the following picture; full code can be found here.

Update: a fast and stable norm was added to scipy.linalg in August 2011 and will be available in scipy 0.10

Last week I discussed with Gael how we should compute the euclidean norm of a vector a using SciPy. Two approaches suggest themselves: either calling scipy.linalg.norm(a) or computing sqrt(a.T a). But as I learned later, both have issues. Note: I use single-precision arithmetic for simplicity, but similar results hold for double-precision.

Overflow and underflow

Both approaches behave terribly in presence of big or small numbers.
Take for example an array with a single entry:

In [0]: a = np.array([1e20], dtype=np.float32)
In [1]: a
Out[1]: array([1.00000002e+20], dtype=float32)
In [2]: scipy.linalg.norm(a)
Out[2]: inf
In [3]: np.sqrt(np.dot(a.T, a))
Out[3]: inf

That is, both methods return Infinity. However, the correct answer is 10^20, which fits comfortably within single-precision range. Similar examples can be found where numbers underflow.

Stability

Again, scipy.linalg.norm has a terrible behavior in what concerns numerical stability. In presence of different magnitudes, severe cancellation can occur. Take for example an array with 10 000 as its first entry, followed by 10 000 ones:

a = np.array([1e4] + [1]*10000, dtype=np.float32)

In this case, scipy.linalg.norm will discard all the ones, producing

In [3]: linalg.norm(a) - 1e4
Out[3]: 0.0

when the correct answer is 0.5. In this case $\sqrt{a^T a}$ has a much nicer behavior, since the results of a dot product in single precision are accumulated using double precision (but if double precision is used, results won't be accumulated using quadruple precision):

In [4]: np.sqrt(np.dot(a.T, a)) - 1e4
Out[4]: 0.5

BLAS BLAS BLAS ...

The BLAS function nrm2 does automatic scaling of parameters, rendering it more stable and tolerant to overflow. Luckily, scipy provides a mechanism to call some BLAS functions:

In [5]: nrm2, = scipy.linalg.get_blas_funcs(('nrm2',), (a,))

Using this function, no overflow occurs (hurray!)

In [95]: a = np.array([1e20], dtype=np.float32)
In [96]: nrm2(a)
Out[96]: 1.0000000200408773e+20

and stability is greatly improved

In [99]: nrm2(a) - 1e4
Out[99]: 0.49998750062513864

Update: as of scipy 0.10, this function is used by scipy.linalg.norm.

Timing

Computing the 2-norm of an array is a very cheap operation, thus computations are usually dominated by external factors, such as latency of memory access or overhead in the Python/C layer.
Experimental benchmarks on an array of size 10^7 show that nrm2 is marginally slower than $\sqrt{a^T a}$, because scaling has a cost, but it is also more stable and less prone to overflow and underflow. They also show that scipy.linalg.norm is the slowest (and numerically worst!) of all. The timings were:

• $\sqrt{a^T a}$: 0.02
• BLAS nrm2(a): 0.02
• scipy.linalg.norm(a): 0.16
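For reference, the scaling trick used by nrm2 is easy to reproduce in pure NumPy. Below is a sketch of mine (not the actual BLAS implementation, which rescales incrementally as it scans the array):

```python
import numpy as np

def stable_norm(a):
    # divide by the largest magnitude before squaring, so that the
    # intermediate squares can neither overflow nor underflow
    m = np.max(np.abs(a))
    if m == 0:
        return 0.0
    return float(m) * float(np.sqrt(np.sum((a / m) ** 2)))

print(stable_norm(np.array([1e20], dtype=np.float32)))  # finite, approximately 1e20
print(stable_norm(np.array([3.0, 4.0])))                # 5.0
```

On the single-entry example above this returns a finite value where the naive square-and-sum overflows, and on the 10 000-ones example (in double precision) it recovers the 0.5 that scipy.linalg.norm loses.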
https://web.math.rochester.edu/events/single/2190
# Dynamical systems workgroup ## Parabolic Mandelbrot Set Vanessa Matus de la Parra Friday, May 28th, 2021 8:45 AM - 10:00 AM https://rochester.zoom.us/j/94149347003 In this talk, we will introduce the moduli space $$\mathcal{M}_2$$ of rational functions of degree 2, and study the curve $$Per_1(1)\subset\mathcal{M}_2$$ of conjugacy classes of quadratic rational maps having a parabolic fixed point with multiplier equal to 1. We can define its Mandelbrot set $$M_1$$ to be the connectedness locus of Julia sets in this family. Milnor conjectured that this set is homeomorphic to the Mandelbrot set described in the previous two talks, which has been shown by Petersen & Roesch.
https://cs.stackexchange.com/questions/50185/how-does-hassins-algorithm-for-the-restricted-shortest-path-work
# How does Hassin's algorithm for the Restricted Shortest Path work?

I'm studying the Approximation For Restricted Shortest Path Problem paper and don't understand what he is doing. In particular, I wonder why it is important that one computes upper and lower bounds $UB$ and $LB$ on $OPT$, so that their ratio is below some constant. The problem is to find the least-cost path from node $1$ to node $n$ where the cumulative delay (each edge has a delay/transit time in addition to cost) is no more than some $T$. For the restricted shortest path problem, the exact solution can be found in pseudopolynomial time $\mathcal{O}(|E| \cdot OPT)$, like this. Let $g_j(c)$ be the time of the quickest path from node $1$ to node $j$ with cumulative cost $\le c$. \begin{align*} g_1(c) & = 0, & c = 0,\ldots,OPT, \\ g_j(0) & = \infty, & j = 2,\ldots,n, \\ g_j(c) & = \min\left\{g_j(c-1), \min_{k \mid c_{kj} \le c}\left\{g_k(c-c_{kj}) + t_{kj}\right\}\right\}, & j = 2,\ldots,n; ~ c= 1,\ldots,OPT \end{align*} So Hassin constructs a $TEST(k)$ procedure which says $YES$ if $OPT \ge k$ and $NO$ if $OPT < k(1 + \epsilon)$. He then proceeds to pick initial bounds $UB$ and $LB$ and uses the $TEST$ to refine them until $\frac{UB}{LB} = 2$. He then computes the final solution with the exact algorithm, but where each edge cost $c_{ij}$ is scaled as $\hat{c}_{ij} = \lfloor\frac{c_{ij} (n-1)}{LB\epsilon}\rfloor$. So what I would like to know is how this application of the exact algorithm is actually polynomial. I suppose that the optimal solution in this modified graph is somehow bounded, but I do not see where the ratio of $2$ comes into play.
I have one theory now, which is that the final application of the exact algorithm on the scaled costs has complexity $\mathcal{O}\left(|E| \cdot OPT\frac{(n-1)}{LB\epsilon}\right) \subseteq \mathcal{O}\left(|E| \cdot 2LB\frac{(n-1)}{LB\epsilon}\right)=\mathcal{O}\left(|E| \cdot 2\frac{(n-1)}{\epsilon}\right)=\mathcal{O}\left(|E|\frac{n-1}{\epsilon}\right)$, which is then polynomial in $n$ and $1/\epsilon$ (since $|E| \le n^2$). Can anybody confirm this?
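For concreteness, the pseudopolynomial recurrence for $g_j(c)$ can be transcribed directly. The following is a Python sketch with my own names (not from Hassin's paper), assuming positive integer edge costs and nodes numbered $1,\dots,n$; `budget` plays the role of the cost range $0,\dots,OPT$:

```python
import math

def min_cost_within_delay(n, edges, T, budget):
    # edges: dict mapping (k, j) -> (cost c_kj, delay t_kj)
    # g[j][c] = minimal delay of a path from node 1 to node j of cost <= c
    INF = math.inf
    g = [[INF] * (budget + 1) for _ in range(n + 1)]
    for c in range(budget + 1):
        g[1][c] = 0
    for c in range(1, budget + 1):
        for j in range(2, n + 1):
            best = g[j][c - 1]
            for (k, jj), (ckj, tkj) in edges.items():
                # g[k][c - ckj] was finalized in an earlier c-iteration
                if jj == j and ckj <= c:
                    best = min(best, g[k][c - ckj] + tkj)
            g[j][c] = best
    # cheapest cost budget whose quickest path respects the delay bound T
    for c in range(budget + 1):
        if g[n][c] <= T:
            return c
    return None
```

For example, with edges `{(1, 2): (1, 5), (2, 3): (1, 5), (1, 3): (1, 100)}`, a delay bound of 50 forces the two-edge path of cost 2, while a bound of 200 admits the direct edge of cost 1. Running the same table on the scaled costs $\hat{c}_{ij}$ with a budget of $\mathcal{O}(n/\epsilon)$ is exactly the step whose complexity the question is about.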
https://stats.stackexchange.com/questions/18596/statistical-significance-of-betas-in-linear-regression/105165
# Statistical significance of betas in linear regression

This question is similar to another question I recently posted, but I have a follow-on. In classical linear regression, we have $$\hat{\beta } \sim N(\beta,(X^{T}X)^{-1}\sigma^2).$$ Using this, one builds individual hypothesis tests of the significance of the coefficients, as done in the book by Tibshirani et al. My questions are twofold:

1) The book talks about a combined hypothesis built by proving that $$(\hat{\beta }-\beta)^T(X^{T}X)^{-1}(\hat{\beta }-\beta) \sim \sigma^2\chi_{p+1}^2.$$ I don't see how this formula can be derived from the equation I wrote above. I do see that $$\hat{\beta }-\beta \sim N(0,(X^{T}X)^{-1}\sigma^2).$$ How do we take the $X^TX$ matrix out and prove the above? I would be grateful if someone could outline the steps.

2) My second question is: how do we think about building the hypothesis? Do we build individual coefficient hypotheses, or is it a good idea to view everything together? In other words, what are the pros and cons of the two styles of hypotheses, the individual-coefficient one versus viewing everything together as per the above equation? Can we have an example of building a combined hypothesis? I am guessing that most statistical packages don't really take into account the correlation between different $\beta$ which is encoded in the matrix $X^TX$. Please clarify; any help will be much appreciated.

• Presumably, in the second formula, $\le$ is a typo for $\sim$ and $p+1$ is the number of parameters (including the constant), right? – whuber Nov 18 '11 at 18:48

• Yes, and yes. Can you make sense of it, though? Where does the second formula come from? – bgbgh Nov 18 '11 at 19:42

• The left hand side in (1) is a sum of squares of $p+1$ normal variables. By definition, a chi-squared distribution describes a sum of squares of $p+1$ standard normal variables.
– whuber Nov 18 '11 at 20:30

• Factor it: because it is symmetric positive-definite, you can write it as $X^TX = UU^T$ for an invertible $p+1$ by $p+1$ matrix $U$. – whuber Nov 18 '11 at 20:39

• You would benefit greatly from working an actual problem. Why don't you fit a line through the points $(0,2),(3,4),(4,8),(7,10)$. (I chose this for the easy arithmetic.) You should compute $U = \{\{2,0\},\{7,5\}\}$. Then you might see where the correlation appears. – whuber Nov 18 '11 at 21:37

1) I do not have the book with me so I cannot check the original, but there is a typo in the first formula as given, in that it should be $$(\hat\beta-\beta)^T(X^TX)(\hat\beta-\beta)\sim \sigma^2\chi^2_{p+1}$$ with no inverse for the $(X^T X)$ matrix. This is a consequence of the following: if $x\sim\mathcal{N}_p(0,\Sigma)$, then $$Ax\sim\mathcal{N}_q(0,A\Sigma A^T)$$ for any $(q,p)$ matrix $A$. Thus, taking one symmetric version of the square root of $(X^TX)$, i.e. $V$ such that $V^TV=(X^TX)$ and $V(X^TX)^{-1}V^T=I_{p+1}$, using for instance the eigenbasis and eigenvalues, you get that $$V(\hat\beta-\beta)\sim\mathcal{N}_{p+1}(0,\sigma^2I_{p+1})$$ and $$(\hat\beta-\beta)^TV^TV(\hat\beta-\beta)=(\hat\beta-\beta)^T(X^TX)(\hat\beta-\beta)\sim \sigma^2\chi^2_{p+1}.$$

2) Again, I do not have the book so cannot guess what the author means by "building an hypothesis". The natural approach is to have an exogenous question about the significance of one group of variables and to test it by the corresponding chi-square test, using the corresponding submatrix of $(X^TX)^{-1}$. For instance, testing for $\beta_1=\beta_2=0$ leads to $$(\hat\beta_{1:2}-\beta_{1:2})^T\left[(X^TX)^{-1}_{1:2,1:2}\right]^{-1}(\hat\beta_{1:2} -\beta_{1:2})\sim \sigma^2\chi^2_{2}.$$

In response to 2): Recall that linear regression is a conditional mean. Therefore, an "individual coefficient" hypothesis for the $j$-th coefficient is an hypothesis about $\mathbb{E}[Y|X_j]$.
An hypothesis about "everything together" is an hypothesis about $\mathbb{E}[Y|X_1,X_2,...,X_J]$. Therefore your hypotheses are always, in a sense, conditional on each other. An hypothesis about a single coefficient is kind of like a marginal hypothesis, "averaged over" values of the other predictors. An hypothesis about everything together is a joint hypothesis.

For that reason, hypotheses about individual coefficients based on pairwise relationships tend not to translate to good joint hypotheses. This is where the first-semester surprise comes from, where you fit two univariate regressions that have significant coefficients, but when you put them together in a multiple regression, or add a third predictor, or an interaction, they are both nonsignificant. Better yet is when they only become significant when you add the interaction. Bonus points if the interaction itself is non-significant.

Unless I misunderstood you, and you're asking about how to test an existing model. For that I defer to David Giles at his blog. The punch line is that you probably shouldn't test individual coefficients unless you have a substantive reason for doing so; it's a fantastic post that everyone who ever plans to use multiple regression should read in full.

It's also not meaningful to talk about correlation between $\beta_j$s outside of a Bayesian context (although I got into a debate with another poster on a related subject). Correlation between $\hat{\beta}_j$s is different, and correlation between $X_j$s is different still. All statistics packages take the former "into account" because, well, they explicitly compute $\mathbb{V}[\hat{\beta}]$, and standard error is computed using the main diagonal of that matrix, which is what you typically use to test hypotheses.
The latter isn't a big deal in principle, except that highly correlated predictors will "steal" the magnitudes of their coefficients from each other, particularly if they are on different measurement scales. It's very often good practice to re-scale and center your variables. And if they are perfectly correlated, you don't have a full-rank $X$ matrix and regression is mathematically impossible. If they are very highly correlated, this is theoretically okay, but it will make your computer very unhappy and you will get numerical issues trying to invert $X^TX$.

I'd personally recommend not learning regression from Tibshirani & company, at least not at first. I have great respect for them and I hold dear my copy of Elements of Statistical Learning, but as a machine learning book it takes a very... machine-like approach to regression that in my opinion doesn't admit the kind of thinking needed to build a meaningful parametric model. My background is in economics, so I'll invariably recommend Wooldridge's Introductory Econometrics: A Modern Approach for what I think is a much more organic and intuitive approach to regression. There's a lot of stuff in there you don't need to know if you aren't working with, say, survey data, but there's nothing in there you don't want to know. Seeing regression built up from statistical principles, as well as from the geometric/algebraic principles you get in Elements, is important for understanding it.

As for the question about evaluating the hypothesis by "viewing everything together", consider the use of BIC or AIC to compare different models, rather than running significance tests. It's more computationally intensive, and it makes an asymptotic assumption (so it doesn't apply to small datasets), but it can compare different kinds of models (e.g. non-linear ones), including models whose variables are subsets of each other.
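The distributional fact in 1) is easy to check numerically: if $z\sim\mathcal{N}_p(0,\Sigma)$, then $z^T\Sigma^{-1}z\sim\chi^2_p$, whose mean is $p$. A stdlib-only simulation sketch, using a made-up $2\times2$ covariance (not anything from the thread):

```python
import math
import random

random.seed(0)

# If z ~ N_2(0, Sigma), then z^T Sigma^{-1} z ~ chi-square with 2 d.f.
# Made-up covariance Sigma = [[2, 1], [1, 2]];
# its inverse is (1/3) * [[2, -1], [-1, 2]].
def draw_quadratic_form():
    u1, u2 = random.gauss(0, 1), random.gauss(0, 1)
    # z = L u, where L is the Cholesky factor of Sigma (Sigma = L L^T)
    z1 = math.sqrt(2.0) * u1
    z2 = u1 / math.sqrt(2.0) + math.sqrt(1.5) * u2
    # quadratic form q = z^T Sigma^{-1} z
    return (2 * z1 * z1 - 2 * z1 * z2 + 2 * z2 * z2) / 3.0

qs = [draw_quadratic_form() for _ in range(100_000)]
print(round(sum(qs) / len(qs), 2))  # sample mean, close to 2 (the d.f.)
```

The same mechanism (a full-rank linear map turning correlated normals into standard ones) is exactly the $V$ of Xi'an's answer above.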
http://mathhelpforum.com/trigonometry/115293-tough-trigonometric-function-question.html
# Thread: Tough Trigonometric Function Question!! 1. ## Tough Trigonometric Function Question!! Hi, I'm not sure how to approach this question. Q: A Ferris wheel has a diameter of 56m, and one revolution took 2.5 min to complete. Riders could see Niagara Falls if they were higher than 50m above the ground. Sketch cycles of a graph that represents the height of a rider above the ground, as a function of time, if the rider gets on at a height of 0.5m at t=0 min. Then determine the time intervals when the rider could see Niagara Falls. I have the graph sketched and the resulting function I got is y= 28sin(4pi/5(x+pi/2))+28. If a rider gets on at 0.5m at t=0, the equation would be y=28sin(4pi/5(x+pi/2))+28.5? Where do I go from here? Can someone please show me what I have to do? The answers are: 0.98min<t<1.52min, 3.48min<t<4.02min, 5.98min<t<6.52min. Thanks!!! 2. Pretty good attempt. 1) One error. You need y = 1/2 when x = 0 and you don't have that. A little experimentation shows that the phase shift should be +5/8. 2) Solve away. You will need a calculator. Set the whole expression equal to 50 and find the two solutions in the first trip around. Other solutions should follow by 2.5 minutes. I get 0.97335 and 1.52665. It appears I agree with the given solution. 3. Hello MATHDUDE2 Originally Posted by MATHDUDE2 Hi, I'm not sure how to approach this question. Q: A Ferris wheel has a diameter of 56m, and one revolution took 2.5 min to complete. Riders could see Niagara Falls if they were higher than 50m above the ground. Sketch cycles of a graph that represents the height of a rider above the ground, as a function of time, if the rider gets on at a height of 0.5m at t=0 min. Then determine the time intervals when the rider could see Niagara Falls. I have the graph sketched and the resulting function I got is y= 28sin(4pi/5(x+pi/2))+28. If a rider gets on at 0.5m at t=0, the equation would be y=28sin(4pi/5(x+pi/2))+28.5? Where do I go from here? 
Can someone please show me what I have to do? The answers are: 0.98min<t<1.52min, 3.48min<t<4.02min, 5.98min<t<6.52min. Thanks!!! Your second equation is better than the first, assuming that the rider gets on at the lowest point of the cycle, for this is now $y = 0.5$, whereas it is $y = 0$ in your first equation. You have correctly used the period of 2.5 to give the factor $\frac{4\pi}{5}$, but the 'phase angle' that gives the initial value is not correct. So if we write: $y = 28 \sin \Big(\frac{4\pi}{5}t+c\Big)+28.5$ When $t = 0, y = 0.5$, and hence: $0.5=28\sin c+28.5$ $\Rightarrow -1=\sin c$ $\Rightarrow c=-\frac{\pi}{2}$ (or any equivalent angle) So the function is $y = 28 \sin \Big(\frac{4\pi}{5}t-\frac{\pi}{2}\Big)+28.5$ So you now need to solve $50 = 28 \sin \Big(\frac{4\pi}{5}t-\frac{\pi}{2}\Big)+28.5$ $\Rightarrow 28 \sin \Big(\frac{4\pi}{5}t-\frac{\pi}{2}\Big)= 21.5$ $\Rightarrow \sin \Big(\frac{4\pi}{5}t-\frac{\pi}{2}\Big)=\frac{21.5}{28}$ $\Rightarrow \frac{4\pi}{5}t-\frac{\pi}{2}=0.8755, \pi - 0.8755, 2\pi +0.8755, ...$ Can you complete it now?
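Grandad's closing step can be finished numerically; a small sketch, with constants taken straight from the thread's model $y = 28 \sin \bigl(\frac{4\pi}{5}t-\frac{\pi}{2}\bigr)+28.5$:

```python
import math

# Solve 50 = 28*sin(4*pi/5 * t - pi/2) + 28.5 within the first revolution;
# later intervals repeat every 2.5 min (the period).
w = 4 * math.pi / 5                      # angular frequency (rad/min)
phi = math.asin(21.5 / 28.0)             # ~0.8755 rad, as in the thread
t1 = (phi + math.pi / 2) / w             # rising through 50 m
t2 = (math.pi - phi + math.pi / 2) / w   # falling back through 50 m
for k in range(3):
    print(round(t1 + 2.5 * k, 2), round(t2 + 2.5 * k, 2))
# prints roughly 0.97 1.53 / 3.47 4.03 / 5.97 6.53,
# matching the book answers up to rounding
```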
https://pypi.org/project/inelegant/0.1.0/
Inelegant, a directory of weird helpers for tests.

## Project description

"Inelegant" is a set of not very elegant tools to help testing. So far there are seven packages:

- inelegant.net: the most important tools are the waiter functions. inelegant.net.wait_server_down() will block until a port on a host stops accepting connections, and inelegant.net.wait_server_up() will block until a port on the host is ready to receive data. There is also inelegant.net.Server, which sets up a very dumb SocketServer.TCPServer subclass for testing.
- inelegant.finder: contains the inelegant.finder.TestFinder class. It is a unittest.TestSuite subclass that makes the task of finding test cases and doctests way less annoying.
- inelegant.module: with inelegant.module.create_module(), one can create fully importable Python modules. inelegant.module.installed_module() will create and then remove the importable module. There are other related functions.
- inelegant.process: home of inelegant.process.Process, a nice multiprocessing.Process subclass that makes starting, stopping and communicating with a function in another process easy enough.
- inelegant.fs: tools for file system operations; most notably, context managers that revert such operations on exit. So one can "cd" into a directory and be back in the original one, create a temporary file and have it automatically deleted after the context, and the same with temporary directories.
- inelegant.dict: provides the temp_key() context manager. It adds a key to a dictionary and, once its context is done, removes the key.
- inelegant.toggle: provides the Toggle class, used to create flags that enable global behaviors. A toggle is, indeed, something you would rather avoid but may need.

## Project details

Uploaded source
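The dictionary helper is simple enough to sketch from its description alone. Note this is a from-scratch illustration of the idea, not the library's actual code, and the signature temp_key(d, key, value) is an assumption:

```python
from contextlib import contextmanager

@contextmanager
def temp_key(d, key, value):
    """Add key -> value to dict d; remove the key when the context ends.
    (Hypothetical re-implementation for illustration; assumes the key
    is not already present in d.)"""
    d[key] = value
    try:
        yield d
    finally:
        del d[key]

config = {"host": "localhost"}
with temp_key(config, "debug", True):
    print("debug" in config)  # True inside the context
print("debug" in config)      # False afterwards, even after exceptions
```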
http://tex.stackexchange.com/questions/167290/why-do-different-fonts-have-different-point-sizes
# Why do different fonts have different point sizes?

My "general" question is:

- Why do different fonts have different definitions of a "point"?

Or in a concrete case:

- Why is Arial 12pt bigger than Times New Roman 12pt?

The fact that the definition of point size changes from one font to another confuses me. General or theoretical answers are welcome.

PS: Although this question is not directly (La)TeX-related, at the same time it is, because, of course, (La)TeX uses fonts.

EDIT AND POSSIBLE ANSWER: Here is a sample document with words of 20 repeated copies of each Latin character, in Times (upper line) and Arial (just below it):

We can see that 50% of letters, especially vowels, are bigger in Arial, 35% are of similar size, and 15% are bigger in Times. But these differences in size are only real in width, since width changes the number of characters which can fit in a line (and in a page). As for the vertical space consumed by both fonts, the x-height of Arial is bigger than that of Times (the x-height is the band that letters such as p and b have in common), but the size of the x-height plus ascender (which b and h have in common) or of the x-height plus descender (which p and q have in common) is the same in both fonts.

In general, the "point size" of a font is the x-height plus the ascender area's size plus the descender area's size, and the point size says nothing and gives no restrictions about the width of each character. So the only reason Arial is bigger than Times is its width. Any other size difference depends only on the visual style of the font design (in this case, the fact that Arial has a bigger x-height).

Finally, how large is a point? Does every font use the same definition of "point size"? In principle, the size of a point is 0.35278 mm, and thus 12pt (a pica) is 12 × 0.35278 = 4.23333 mm. This point size is supposed to be a standard one, and it seems both Arial and Times New Roman use this definition of point (tested with a ruler on the screen :P).
Although, in general, I don't know if every modern font really respects it; according to some of the comments and other Internet sources, this seems not to be the case.

- The Wikipedia article is an OK place to start. But I don't think it is really a *TeX question, though it is admittedly of interest to people who use fonts.... – jon Mar 23 '14 at 18:54

- I think it's misleading to say that the point size is x-height + ascender area's size + descender area's size. You can define it to be so (and a font editor may show it that way) but that does not necessarily relate to actual glyph shapes: there may be no glyphs in the font with ascenders or descenders that tall. Also, the final comment about the definition of a point isn't really font-specific but rather system-specific: TeX's pt is not quite the same as a PostScript point (which is a bp in TeX units), but if you ask for fonts at 12pt or 12bp they all scale by the same relative amount. – David Carlisle Mar 24 '14 at 17:27

- Also the site works best if you don't edit the answer into the question but leave the question as a question and put the answer in an answer. (It is OK to self-answer and to accept your own answer) – David Carlisle Mar 24 '14 at 17:28

- @DavidCarlisle Ok. Currently, it is a "possible answer". If I get a more rigorous or correct answer, I will post it as an answer. – Peregring-lk Mar 24 '14 at 17:45

This is the complete rewrite of my answer; my original, misleading one is kept at the end.

Setting a glyph at 12pt means that the glyph is scaled so it fits into a vertical space measuring 12pt (in any of the systems). The question is how to determine the vertical size of the glyph. The vertical metrics are defined font-wide and, in the case of Arial and Times, they are the same: the height of a glyph in these fonts is divided into 2048 design units, of which 1638 are above and 410 below the baseline. In both cases, the theoretical descender and ascender heights are positioned at the same points.
What's different in the two cases is how the image and the white space of the glyph (which is an important part of it) are distributed. In the following image you see the b's of Times and Arial (screenshot from Sortsmill Editor). The top and bottom horizontal lines and the vertical lines mark the bounding box of the glyph; the horizontal line in between is the baseline. Setting the glyph at 12pt means that the distance between the top and the bottom horizontal line measures 12pt (in whatever system you choose).

The most important difference is the much lower x-height in Times (916 vs. 1062). But the ascender height in the actual images of the glyphs also differs (1422 vs. 1466), an impression increased by the sloped tops of ascenders in Times, which make its letters appear even smaller. The third difference is the smaller upper-case height (1356 vs. 1466), but much more important is the contrast between thick vertical and thin horizontal strokes in Times, whereas in Arial there's almost no contrast. What's more, Arial's stems are in fact a little bit thicker than Times' (180 vs. 166). This makes Times appear lighter and therefore increases the impression of it being smaller. Finally, Arial's design is a bit broader; Times is a very narrow design, so you have one more factor that increases said impression.

The first version of the answer:

First, Arial's 12pt doesn't differ from Times' 12pt as long as both are measured within the same system (either pica, didot or big point). What you ask about is the optical appearance of different fonts at the same nominal size. It's essentially a question of how the glyph's image is positioned within its box. In lead type you have the lead body on which the image is cast. That image might have a different vertical size depending on the font's design, for instance being smaller to leave more space atop for diacritical marks. Also, the height of the baseline might be shifted upwards so the descenders can be longer.
The same goes for digital type (in fact, in lead type there were some baseline standardizations which no longer exist in digital type). Arial's uppercase letters, for instance, take up more vertical space than Times'. Also, the x-height (the height of letters that only reach the midline, like the letter "x") in Arial is much higher than in Times, which adds to the impression of a bigger size.

- Are you sure it is only an optical question?: s11.postimg.org/x25b9nctr/Arial_Times.png – Peregring-lk Mar 24 '14 at 11:10

- Both fonts have the same "size" when you compare vertical space. Arial has a bigger x-height, but both ascenders (of the letter 'b') reach the same point, and the space between lines is also the same; but the horizontal size is bigger in Arial than in Times, and the number of characters per line differs between them. – Peregring-lk Mar 24 '14 at 11:33

There are actually different definitions of the point unit. These days most systems mean "PostScript points", 1/72 in (which TeX calls big point, bp). However, that isn't really the issue here. The size is essentially an arbitrary number assigned by the font designer: there is no universal definition of what is measured by the "point size" of a font. It's not the height of M or ( or any other testable thing. All you can really say is that, for a given font, it is probably bigger at 12pt than it is at 10pt.

- So, if a font designer gives you some font, with an arbitrary size, and says "this font has a standard size of 12pt", you have no alternative but to accept it, even if that size is ridiculously tiny or huge?
– Peregring-lk Mar 23 '14 at 18:58

- In TeX you can change it, in various ways, as you can apply scaling while loading the font: so (in LaTeX, for example) you can specify that when the user goes \fontsize{10pt}{12pt}\selectfont then you get what the designer called 10pt, or that multiplied by any number (which you can set per font). This is useful when setting up packages mixing fonts for sans serif and serif (say) to make the x-height (the height of lower case letters) match more closely – David Carlisle Mar 23 '14 at 19:03

- @Peregring-lk -- And with fontspec, there is the Scale option; e.g., \setsansfont[Scale=MatchUppercase]{Arial}, which would scale Arial so it matched (using the 'uppercase' size) whatever you've loaded as your \setmainfont. Scale can also be set to MatchLowercase or to some number. – jon Mar 23 '14 at 19:15

- IIRC the point size was originally the height of the lead used in letterpress printing; the actual size on that piece of lead didn't need to fit exactly, as long as the alphabet looked balanced. – hugovdberg Mar 23 '14 at 19:35
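The unit arithmetic behind the question's "possible answer" is easy to pin down: an inch is exactly 25.4 mm, TeX's pt is 1/72.27 in, and the PostScript point (TeX's bp) is 1/72 in. A quick check:

```python
# Point-unit arithmetic: TeX pt = 1/72.27 in, PostScript point (bp) = 1/72 in.
IN_MM = 25.4                 # millimetres per inch, exact
pt_mm = IN_MM / 72.27        # TeX point in mm
bp_mm = IN_MM / 72.0         # PostScript/big point in mm
print(round(pt_mm, 5))       # 0.35146
print(round(bp_mm, 5))       # 0.35278 -- the value quoted in the question
print(round(12 * bp_mm, 4))  # 4.2333 -- twelve points, one (PostScript) pica
```

So the 0.35278 mm measured with the on-screen ruler is the PostScript point, not TeX's slightly smaller pt.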
http://math.stackexchange.com/questions/171731/solving-the-recurrence-tn-tn-12-1
# Solving the recurrence $t(n)=(t(n-1))^2 + 1$

I am trying to solve the following recurrence relation: \begin{align*} t(1) & = 1, \\ t(n) & =(t(n-1))^2 + 1. \end{align*} I need to prove that $t(n)= k^{2^{n}}$ for some constant $k$. What is the value of $k$? How would I go about doing it? Thanks!

- If you mean that $t(n)=(t(n-1))^2+1$, then the formula $k^{2n}$ is not correct, as a calculation of the first few terms will show. So there may be a typo. And if you mean asymptotically equal, that doesn't work either: the function as given grows far faster than $k^{2n}$. – André Nicolas Jul 17 '12 at 0:52

- Yes, it is $t(n)=(t(n-1))^2+1$, and the formula is $k^{2^n}$. – oscar Jul 17 '12 at 0:54

- @oscar Even after editing, what you are trying to prove is false. Let $n=1$. Then we have $k^2=1$, so $k=\pm 1$. But this would imply $t(n)=1$ for all $n$, which is trivially false. – Alex Becker Jul 17 '12 at 1:13

- Are you familiar with proof by induction in general? Do you know how to start such a proof? – Arkamis Jul 17 '12 at 2:56

- @EdGorcenski I don't know which formula to use in the induction: $T(n)=T(n-1)^2 + 1$ or $T(n)=k^{2^n}$. – oscar Jul 17 '12 at 3:03

Since $t(n)\geqslant1$, $t(n+1)+1=t(n)^2+2\leqslant(t(n)+1)^2$ for every $n\geqslant1$. Iterating and using the initial condition $t(1)+1=2$, one gets $t(n)+1\leqslant2^{2^{n-1}}$, hence $t(n)\lt2^{2^{n-1}}$ for every $n\geqslant1$.

On the other hand, $t(n+1)\gt t(n)^2$ for every $n\geqslant1$. Iterating and using the initial condition $t(2)=2$, one gets $t(n)\gt2^{2^{n-2}}$ for every $n\geqslant2$.

For every $n\geqslant2$, $a^{2^n}\lt t(n)\lt b^{2^n}$ with $a=\sqrt[4]{2}$ and $b=\sqrt{2}$.

Conjecture: $\log_2\log_2 t(n)=n-\kappa+o(1)$ for some $1\leqslant\kappa\lt2$.

Edit: The OEIS page suggested by @Gerry Myerson asserts that $\kappa$ exists and provides a numerical value equivalent to $\kappa=1.7668768^-$.
- Since T(n) = k^(2^n), T(1) = 1 = k^(2^1) = k^2 Hence, k = 1 or k = -1, which doesn't make sense at all. You sure you got the question right? - If $T(n)=T(n-1)^2+1$ => $T(n)=(T(n-1))^2+1$ If $T(n)=k^{2^n}$ for some k, $(T(n-1))^2+1 = ({k^{{2^{n-1}}})^2}+1 = k^{2^n}+1$ which can not be equal to $k^{2^n}=T(n)$ What's wrong in these steps? - I think we all know $t(n)$ can't equal $k^{2^n}$. We've moved on to noting that it is asymptotic to $k^{2^n}$, for an appropriate choice of $k$. – Gerry Myerson Jul 17 '12 at 13:15 As noted in one of the answers before (and of course in your own answer), it is in fact better than 'just' asymptotic; the relation is so tight that one can find a $k$ such that $t(n) = \left\lfloor k^{2^n}\right\rfloor$ for all $n$. – Steven Stadnicki Sep 11 '12 at 3:33 In fact this recurrence belongs to a linear shifting version of http://mathworld.wolfram.com/QuadraticMap.html#eqn3. So according to http://mathworld.wolfram.com/QuadraticMap.html#eqn3, this recurrence has the analytical solution when $n$ is any natural number: $t(n)=\biggl[e^{2^{n-1}\sum\limits_{k=1}^\infty2^{-k}\ln\Bigl(1+\frac{1}{y_k^2}\Bigr)}\biggr]$ For $n$ is any complex number, I still have no idea about its analytical solution. -
http://mathhelpforum.com/calculus/77321-improper-integrals.html
# Math Help - improper integrals

1. ## improper integrals

$\int e^{2x}dx$ over $[-\infty,0]$

I set this up as a limit as the lower bound goes to $-\infty$, subbing in $a$ for $-\infty$. Then I evaluated the definite integral, getting $\frac{e^{2x}}{2}$. I got $(1/2)-0 = 1/2$. Did I do this right??

2. Originally Posted by saiyanmx89

Correct.

3. Originally Posted by saiyanmx89

$\lim_{a \to -\infty} \int_a^0 e^{2x} \, dx$

$\lim_{a \to -\infty} \left[\frac{1}{2}e^{2x}\right]_a^0$

$\lim_{a \to -\infty} \left[\frac{1}{2} - \frac{1}{2}e^{2a}\right] = \frac{1}{2}$
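For anyone who wants a numerical sanity check of the $1/2$: truncating the lower limit at $a=-50$ discards a tail of only $e^{-100}/2$, so a simple midpoint rule gets essentially the exact value:

```python
import math

# Approximate the integral of e^(2x) over (-inf, 0], truncated at a = -50.
a, b, n = -50.0, 0.0, 200_000
h = (b - a) / n
total = h * sum(math.exp(2 * (a + (i + 0.5) * h)) for i in range(n))
print(round(total, 6))  # 0.5
```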
https://ckunte.net/2015/ea
Chain stiffness

Position Mooring (DnV-OS-E301), DnV’s Offshore Standard, has undergone three revisions in six years — a sign of how busy things are w.r.t. mooring systems, and of the urgent need for updates as the industry rapidly learns from experience and innovates on demand. The part of interest to me is the recipe it offers to calculate elasticity for stud-less chains. These, the standard acknowledges, are courtesy of Vicinay Cadenas, the world-renowned chain maker from Spain.

Studless chains: effective elastic modulus, E ($N/m^2$)

| Grade | 2000–8 | 2010–15 |
|-------|--------|---------|
| R3 | (8.37 - 0.0305⋅d)⋅10^10 | (5.40 - 0.0040⋅d)⋅10^10 |
| R4 | (7.776 - 0.01549⋅d)⋅10^10 | (5.45 - 0.0025⋅d)⋅10^10 |
| R5 | – do – | (6.00 - 0.0033⋅d)⋅10^10 |

where the chain link diameter, d, is in mm, and A is the combined cross-sectional area of the two legs of a (common) chain link.

So, here’s the story of change in one graph that you do not get to see in standards or when comparing revisions. A graph of chain diameter plotted against its corresponding axial stiffness is always handy for a mooring engineer, because anyone with an elementary understanding of structural mechanics knows that stiffness is essential in controlling displacements (vessel excursions or offsets). The one that jumps right out is, of course, the R3 grade stud-less chain, as it falls free from its previous value. It’s good to remember that R3 has been with us longer than the newer grades, which essentially arose out of demand from vessels like the FLNG. Seriously, what just happened there?

Code: ea.py for plotting chain diameter v. axial stiffness:

```python
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
# ea.py -- 2018 ckunte
import numpy as np
import matplotlib.pyplot as plt

def main():
    d = np.linspace(60, 200)  # Chain diameter (mm)
    E = [
        (5.6 - 0.0 * d) * 1E10,      # R3/R4/R5 Gr. stud (2018)
        (5.4 - 0.004 * d) * 1E10,    # R3 Gr. studless (2018)
        (5.45 - 0.0025 * d) * 1E10,  # R4 Gr. studless (2018)
        (6.0 - 0.0033 * d) * 1E10,   # R5 Gr. studless (2018)
    ]  # Elasticities for studless chains
    lbl = [
        'Stud chain R3/R4/R5 (2018)',
        'Studless chain R3 (2018)',
        'Studless chain R4 (2018)',
        'Studless chain R5 (2018)',
    ]  # labels
    for i, j in zip(E, lbl):
        # plot: E vs. chain dia.
        ax1 = plt.subplot(211)
        plt.plot(d, i, label=j, linewidth=2)
        plt.setp(ax1.get_xticklabels(), visible=False)
        # plot: EA vs. chain dia.
        EA = i * 2 * np.pi * (d / 1E3)**2 / (4 * 1E6)  # (MN)
        ax2 = plt.subplot(212)
        plt.plot(d, EA, label=j, linewidth=2)
        plt.setp(ax2.get_xticklabels(), fontsize=12)
    ax1.legend(loc=0)
    ax2.legend(loc=0)
    ax1.set_ylabel('Effective elastic modulus, E (N/m$^2$)')
    plt.ylabel('Axial stiffness, EA (MN)')
    plt.xlabel('Chain diameter, d (mm)')
    ax1.grid(True)
    ax2.grid(True)
    plt.show()

if __name__ == '__main__':
    main()
```
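For a quick numeric sanity check of the 2010–15 recipe without plotting, here is a small standalone helper (my own sketch; the function names and the 127 mm diameter are illustrative choices, not from the post or the standard):

```python
import math

def studless_E(d_mm, grade="R4"):
    """Effective elastic modulus (N/m^2) per the 2010-15 recipe tabulated above."""
    coef = {"R3": (5.40, 0.0040), "R4": (5.45, 0.0025), "R5": (6.00, 0.0033)}
    a, b = coef[grade]
    return (a - b * d_mm) * 1e10

def axial_stiffness_MN(d_mm, grade="R4"):
    """EA (MN), taking A as the combined area of two legs of a common link."""
    area_m2 = 2 * math.pi * (d_mm / 1e3) ** 2 / 4
    return studless_E(d_mm, grade) * area_m2 / 1e6

print(axial_stiffness_MN(127))  # e.g. a 127 mm R4 studless chain, approx. 1300 MN
```

This mirrors the EA expression inside ea.py, so it can be used to spot-check individual points on the plotted curves.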
http://en.wikipedia.org/wiki/Ramanujan's_congruences
# Ramanujan's congruences

In mathematics, Ramanujan's congruences are some remarkable congruences for the partition function p(n). The Indian mathematician Srinivasa Ramanujan discovered the following:

• $p(5k+4)\equiv 0 \pmod 5$
• $p(7k+5)\equiv 0 \pmod 7$
• $p(11k+6)\equiv 0 \pmod {11}.$

## Background

In his 1919 paper (Ramanujan, 1919), he gave proofs of the first two congruences using the following identities (in q-Pochhammer symbol notation):

$\sum_{k=0}^\infty p(5k+4)q^k=5\frac{(q^5)_\infty^5}{(q)_\infty^6}$

$\sum_{k=0}^\infty p(7k+5)q^k=7\frac{(q^7)_\infty^3}{(q)_\infty^4}+49q\frac{(q^7)_\infty^7}{(q)_\infty^8}.$

He then stated that "It appears there are no equally simple properties for any moduli involving primes other than these".[1]

After Ramanujan died in 1920, G. H. Hardy extracted proofs of all three congruences from an unpublished manuscript of Ramanujan on p(n) (Ramanujan, 1921). The proof in this manuscript employs Eisenstein series.

In 1944, Freeman Dyson defined the rank function and conjectured the existence of a crank function for partitions that would provide a combinatorial proof of Ramanujan's congruences modulo 11. Forty years later, George Andrews and Frank Garvan successfully found such a function, and proved the celebrated result that the crank simultaneously "explains" the three Ramanujan congruences modulo 5, 7 and 11.

Extending results of A. O. L. Atkin, Ken Ono proved in 2000 that there are such Ramanujan congruences modulo every integer coprime to 6. For example, his results give

• $p(4063467631k+30064597)\equiv 0\pmod{31}.$

Later Ono conjectured that the elusive crank also satisfies exactly the same types of general congruences. This was proved by his Ph.D. student Karl Mahlburg in his 2005 paper Partition Congruences and the Andrews–Garvan–Dyson Crank, linked below. This paper won the first Proceedings of the National Academy of Sciences Paper of the Year prize.
A conceptual explanation for Ramanujan's observation was finally discovered in January 2011[2] by considering the Hausdorff dimension of the following $P$ function in the l-adic topology:

$P_l(b;z) := \sum_{n=0}^\infty p\left(\frac{l^bn+1}{25}\right)q^{\frac{n}{24}}.$

It is seen to have dimension 0 only in the cases where l = 5, 7 or 11, and since the partition function can be written as a linear combination of these functions,[3] this can be considered a formalization and proof of Ramanujan's observation.

In 2001, S. Weaver gave an effective algorithm for finding congruences of the partition function, and tabulated 76,065 congruences.[4] This was extended in 2012 by F. Johansson to 22,474,608,014 congruences,[5] one large example being

$p(28995244292486005245947069k + 28995221336976431135321047) \equiv 0 \pmod{29}.$
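The three congruences at the top of the article are easy to verify empirically. Here is a minimal sketch (not from the article) that generates partition numbers with Euler's pentagonal number recurrence and checks the congruences over a small range:

```python
def partitions(n_max):
    """Partition numbers p(0..n_max) via Euler's pentagonal number recurrence."""
    p = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2  # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = -1 if k % 2 == 0 else 1  # alternating signs (-1)^(k-1)
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partitions(200)
# Ramanujan's congruences hold for every k in range:
assert all(p[5 * k + 4] % 5 == 0 for k in range(40))
assert all(p[7 * k + 5] % 7 == 0 for k in range(28))
assert all(p[11 * k + 6] % 11 == 0 for k in range(18))
```

This is of course only a finite check, not a proof; the proofs sketched above rest on the q-series identities and Eisenstein series.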
http://mathoverflow.net/questions/22075/connectedness-of-random-distance-graph-on-integers
# Connectedness of random distance graph on integers

This is not my field; a friend needs the answer to the following question. Suppose we have a decreasing probability function $p: \mathbb{N} \rightarrow [0,1]$ such that $\sum_n p(n) = \infty$. Take the graph where we connect two integers at distance $d$ with probability $p(d)$. Will this graph be connected with probability one?

I see that if the sum is convergent, then we almost surely have an isolated vertex (unless $p(1)=1$), so this would be "sharp". One possible approach would be to take the path that starts from the origin and, if it is at $n$ after some steps, next goes to the smallest number that is bigger than $n$ and is connected to $n$, and to show that this path has a positive density with probability one. Is this second statement true?

- Shelah has a series of papers titled Zero-one laws for graphs with edge probability decaying with distance. I haven't read any of them, but the title looks promising. – François G. Dorais Apr 21 '10 at 19:42
- (I came across this by pure accident. Apologies if it turns out to be a dead end.) – François G. Dorais Apr 21 '10 at 19:44
- This is true but slightly non-trivial (I mean the main question). If you haven't found this result in Shelah's papers or somewhere else already, I'll post the proof tomorrow. Now it is too late (and the comment window is too narrow...) – fedja Apr 24 '10 at 3:12
- oh, I started to believe that it might be false; at least I am pretty sure that the path approach (also suggested in didier's answer) fails if p grows very slowly. So please, post the sketch of the proof! – domotorp Apr 24 '10 at 10:13

All right. Here goes, as promised. We shall work with a big circle containing a huge number $N$ of points and a sequence of probabilities $p_1,\dots,p_L$ such that $\sum_j p_j=P$ is large (so we never connect points at distance greater than $L$, but connect points at distance $d\le L$ with probability $p_d$).
If $N\gg L$ and $p_j<1$ for all $j$, the probability of a connected path going around the entire circle is extremely small, so the problem is essentially equivalent to the one on the line. I chose the circle just to make averaging tricks technically simple (otherwise one would have to justify some exchanges of limits, etc.).

Fix $\delta>0$. Our aim will be to show that, with probability at least $1-2\delta$, we have $\sum_{j\in E_0} p_{|j|}\ge P$, where $E_0$ is the connected component of $0$ and integers are understood modulo $N$, provided that $P>P(\delta)$. This, clearly, implies the problem (just consider the connected component of $0$ in the subgraph with even vertices only; whatever it is, the edges going from odd vertices to even vertices are independent of it, so we get $0$ joined to $1$ with probability $1$ in the limiting line case with an infinite sum of probabilities).

We shall call a point $x$ good if $\sum_{y\in E_x}p_{|y-x|}\ge P$. We shall call a connected component $E$ with $m$ points good if at least $(1-\delta)m$ of its points are good.

Fix $m$. Let's estimate the average number of points lying in the bad components. To this end, we need to sum, over all bad $m$-point subsets $E$, the probabilities of the events that the subgraph with the set of vertices $E$ is connected and that there are no edges going from $E$ elsewhere, and then multiply this sum by $m$. For each fixed $E$ these two events are independent and, since $E$ is bad, there are at least $\delta m$ vertices in $E$ for which the probability of not being connected to a vertex outside $E$ is at most $e^{-P}$ (the total sum of probabilities of edges emanating from a vertex is $2P$, and only the sum $P$ can be killed by $E$). Thus, the second event has probability at most $e^{-\delta P m}$ for every bad $E$, and it remains to estimate the sum of the probabilities of being connected. We shall expand this sum to all $m$-point subsets $E$.
Now, the probability that a subgraph with $m$ vertices is connected does not exceed the sum, over all trees with the set of vertices $E$, of the probabilities that such trees are present in the graph. Thus, we can sum the probabilities of all $m$-vertex trees instead.

We need an efficient way to parametrize all $m$-trees. To this end, recall that each tree admits a route that goes over each edge exactly twice. Moreover, when constructing a tree, in this route one needs to specify only the new edges; the returns are defined uniquely as the last edge traversed only once up to that moment. Thus, each $m$-tree can be encoded as a starting vertex and a sequence of $m-1$ integer numbers (steps to the new vertex) interlaced with $m-1$ return commands. For instance, (7;3,2,return,-4,return,return) encodes the tree with vertices 7, 10, 12, 6 and the edges 7--10, 10--12, 10--6. Well, I feel a bit stupid explaining this all to a combinatorist like you...

Now, when we sum over all such encodings, we effectively get $N$ (possibilities for the starting vertex) times the sum of the products of probabilities over all sequences of $m-1$ integers, multiplied by the number of possible encoding schemes telling us the positions of the return commands (actually a bit less, because not all sequences of integers result in a tree). Since there are fewer than $4^{m-1}$ encoding schemes, we get $4^{m-1}(2P)^{m-1}$ as a result.

Thus the expected number of bad $m$-components is at most $N\cdot 4^{m-1}(2P)^{m-1}e^{-\delta Pm}$. Even if we multiply by $m$ (which is not really necessary, because each tree is counted at least $m$ times according to the choice of the root) and add up over all $m\ge 1$, we still get less than $\delta N$ if $P$ is large. Now we see that the expected number of bad points is at most $2\delta N$ (on average at most $\delta N$ points lie in the bad components, and the good components cannot contain more than $\delta N$ bad points by their definition).
Due to rotational symmetry, we conclude that the probability of each particular point being bad is at most $2\delta$. The end.

- Nice trick, I think this is a perfect solution. – domotorp Apr 24 '10 at 14:22

At least for some sequences $(p(n))$, the resulting graph is almost surely connected.

To show that the vertices $1$ and $N$ are linked by a path of open edges, build an auxiliary Markov chain $(x_n,y_n)_n$ as follows. Start from $x_0=1$ and $y_0=N$. If $x_n < y_n$, set $y_{n+1}=y_n$ and replace $x_n$ by $x_{n+1}=x_n+k$ with probability $q(k)$. Likewise, if $x_n > y_n$, set $x_{n+1}=x_n$ and replace $y_n$ by $y_{n+1}=y_n+k$ with probability $q(k)$. Choose for $q(\cdot)$ the distribution of the least integer $k\ge1$ such that the edge $(x,x+k)$ is open in the graph, for any $x$; that is, $q(k)=p(k)(1-p(k-1))\cdots(1-p(1))$. The fact that the series $\sum_kp(k)$ diverges ensures that (indeed, is equivalent to the fact that) the measure $q$ has total mass $1$.

Now, if $x_n=y_n$ for at least one integer $n$, then the vertices $1$ and $N$ are in the same connected component. It happens that the process $(z_n)_n$ defined by $z_n=|x_n-y_n|$ is an irreducible Markov chain, and in some cases one can show that $(z_n)$ is recurrent. For instance, if $(z_n)$ has integrable steps and if its drift at $z$ is uniformly negative for large enough values of $z$, Foster's criterion indicates that indeed $(z_n)$ is recurrent. An example of this case is when $p(n)=p$ for every $n$, with $p$ in $(0,1)$. Then $E(z_{n+1}|z_n=z)-z\to-1/p$ when $z\to\infty$, hence $(z_n)$ hits $0$ almost surely. This implies that there exists a path from $1$ to $N$ in the graph, almost surely, for every $N\ge2$. If $E(z_{n+1}|z_n=z)$ is infinite (for instance if $p(n)=1/(n+1)$ for every $n\ge1$), more work is needed.

- I think you follow an approach similar to the one I suggested, taking the path I described, except that you do it for two starting points at the same time; this seems to be a better idea.
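The auxiliary chain in the answer above is easy to simulate in the constant case $p(n)=p$, where the gap $q(\cdot)$ is geometric with success probability $p$. The sketch below is my own code with illustrative parameters, not part of the thread:

```python
import random

def chains_meet(N, p, rng, max_steps=100000):
    """Run the auxiliary chain for constant p(n) = p: advance the lagging
    endpoint by a geometric(p) jump (the gap to the next open edge)."""
    x, y = 1, N
    for _ in range(max_steps):
        if x == y:
            return True
        k = 1
        while rng.random() >= p:  # sample least k with edge (x, x+k) open
            k += 1
        if x < y:
            x += k
        else:
            y += k
    return False

hits = sum(chains_meet(100, 0.5, random.Random(s)) for s in range(100))
print(hits, "of 100 runs met")  # the negative drift keeps z_n = |x_n - y_n| recurrent
```

With $p=1/2$ the walkers meet in essentially every run, illustrating the recurrence argument; for slowly decaying $p(n)$ the steps lose integrability and, as the answer notes, more work is needed.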
I do not see why $x_n=y_n$ would be equivalent to $1$ and $N$ being in the same component, but it is surely sufficient. Unfortunately I am really not an expert and I don't understand the notions that you use after. What is an integrable step, recurrent, what does Foster's criterion say? Could you please give a reference where I can find these? – domotorp Apr 22 '10 at 8:14

- 1) You are solving the problem of whether there is a path between 1 and N that consists of two monotone subpaths; this is not exactly the same as whether there is a path between 1 and N. 2) If $\sum_{j<n} p(j)$ grows slower than $\log(n)$, the steps of $z$ are not integrable. – Thorny Apr 22 '10 at 11:17
- @Thorny: You are right about your 1), thanks. Answer edited. – Did Apr 22 '10 at 18:59
- I was also referring to this when I wrote that I don't see the equivalence. Could anyone please give a reference to these? They were neither on Wikipedia nor on MathWorld, and at other places even the statement was too complicated without knowing a bunch of other things. – domotorp Apr 22 '10 at 19:52
- @domotorp: About recurrence/transience, you might try section 1.5 of the book available at statslab.cam.ac.uk/~james/Markov/. "(Lyapunov-)Foster criteria" is a loose name for a variety of drift conditions ensuring the recurrence or the transience of a given Markov chain; some of them are described at math.ucsd.edu/~pfitz/downloads/courses/spring05/math280c/… – Did Apr 22 '10 at 20:22

Well, as it stands isn't the answer No? Just take $p(n) = 1$ if $n$ is even and $0$ if $n$ is odd. The graph will have at least two components consisting of the even and odd integers.

EDIT: retracted. Sorry. This is not (and cannot be made) decreasing. Missed that requirement.

- p is supposed to be decreasing. – Gjergji Zaimi Apr 21 '10 at 19:00
- True. My mistake. – Alon Amit Apr 21 '10 at 19:08
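For intuition (my own illustration, not from the thread), one can sample the graph on a finite window and test connectivity with a union-find. With $p(d)=\min(1, c/d)$ the edge-probability series diverges for any $c>0$, though a finite truncation can only suggest, not prove, the almost-sure statement:

```python
import random

def connected_sample(N=200, c=1.0, seed=0):
    """Sample the distance graph on {0,...,N-1} with edge probability
    p(d) = min(1, c/d) and test connectivity with a union-find."""
    rng = random.Random(seed)
    parent = list(range(N))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < min(1.0, c / (j - i)):
                parent[find(i)] = find(j)  # union the two components
    return len({find(i) for i in range(N)}) == 1

# With c = 0.5 the series still diverges; count connected samples:
print(sum(connected_sample(c=0.5, seed=s) for s in range(20)), "of 20 connected")
```

The quadratic edge loop keeps this honest but slow for large windows; it is meant only as a toy check of the question's setup.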
https://leconjugueur.lefigaro.fr/conjugaison/anglais/shall+corrupt.html
# Conjugation of the English verb SHALL CORRUPT

Regular verb: corrupt - corrupted - corrupted
French translation: corrompre - dépraver

## Affirmative

Simple form
I shall corrupt
you shall corrupt
he shall corrupt
we shall corrupt
you shall corrupt
they shall corrupt

V-ing form
I shall be corrupting
you shall be corrupting
he shall be corrupting
we shall be corrupting
you shall be corrupting
they shall be corrupting

Perfect
I shall have corrupted
you shall have corrupted
he shall have corrupted
we shall have corrupted
you shall have corrupted
they shall have corrupted

Perfect V-ing
I shall have been corrupting
you shall have been corrupting
he shall have been corrupting
we shall have been corrupting
you shall have been corrupting
they shall have been corrupting

## Negative

Simple form
I shall not corrupt
you shall not corrupt
he shall not corrupt
we shall not corrupt
you shall not corrupt
they shall not corrupt

V-ing form
I shall not be corrupting
you shall not be corrupting
he shall not be corrupting
we shall not be corrupting
you shall not be corrupting
they shall not be corrupting

Perfect
I shall not have corrupted
you shall not have corrupted
he shall not have corrupted
we shall not have corrupted
you shall not have corrupted
they shall not have corrupted

Perfect V-ing
I shall not have been corrupting
you shall not have been corrupting
he shall not have been corrupting
we shall not have been corrupting
you shall not have been corrupting
they shall not have been corrupting

## Interrogative

Simple form
shall I corrupt?
shall you corrupt?
shall he corrupt?
shall we corrupt?
shall you corrupt?
shall they corrupt?

V-ing form
shall I be corrupting?
shall you be corrupting?
shall he be corrupting?
shall we be corrupting?
shall you be corrupting?
shall they be corrupting?

Perfect
shall I have corrupted?
shall you have corrupted?
shall he have corrupted?
shall we have corrupted?
shall you have corrupted?
shall they have corrupted?

Perfect V-ing
shall I have been corrupting?
shall you have been corrupting?
shall he have been corrupting?
shall we have been corrupting?
shall you have been corrupting?
shall they have been corrupting?

## Negative interrogative

Simple form
shall I not corrupt?
shall you not corrupt?
shall he not corrupt?
shall we not corrupt?
shall you not corrupt?
shall they not corrupt?

V-ing form
shall I not be corrupting?
shall you not be corrupting?
shall he not be corrupting?
shall we not be corrupting?
shall you not be corrupting?
shall they not be corrupting?

Perfect
shall I not have corrupted?
shall you not have corrupted?
shall he not have corrupted?
shall we not have corrupted?
shall you not have corrupted?
shall they not have corrupted?

Perfect V-ing
shall I not have been corrupting?
shall you not have been corrupting?
shall he not have been corrupting?
shall we not have been corrupting?
shall you not have been corrupting?
shall they not have been corrupting?
http://aux.planetmath.org/locallycompactquantumgroupsuniformcontinuity
# locally compact quantum groups: uniform continuity

Defines: left uniformly continuous functions on a locally compact group; uniformly continuous functionals on the Fourier algebra

Keywords: locally compact quantum groups; uniform continuity; amenability; co-amenability; invariant mean; locally compact quantum group; multiplier; amenable lcg

Synonym: uniform continuity over topological groups associated with Hopf algebras

Type of Math Object: Example
Major Section: Reference
https://gmatclub.com/forum/how-many-digits-are-there-in-the-product-192607.html
How many digits are there in the product 2^23*5^24*7^3?

Math Expert
Joined: 02 Sep 2009
Posts: 54493

How many digits are there in the product 2^23*5^24*7^3? [#permalink] 03 Feb 2015, 09:53

Difficulty: 35% (medium). Question Stats: 70% (01:24) correct, 30% (01:41) wrong, based on 214 sessions.

How many digits are there in the product 2^23*5^24*7^3?

A. 24
B. 25
C. 26
D. 27
E. 28

Kudos for a correct solution.

Intern
Joined: 19 Sep 2014
Posts: 21
Concentration: Finance, Economics
GMAT Date: 05-05-2015

Re: How many digits are there in the product 2^23*5^24*7^3? [#permalink] 03 Feb 2015, 11:41

Bunuel wrote:
How many digits are there in the product 2^23*5^24*7^3?
A. 24
B. 25
C. 26
D. 27
E. 28

Seems like a tricky question, but I hope that I have been able to crack it! Here's my solution:

$$(2^{23})*(5^{24})*(7^{3}) = (2^{23})*(5^{23})*(5)*(7)*(7)*(7)$$

$$(2^{23})*(5^{23})*(5)*(7)*(7)*(7) = ((2*5)^{23})*(5)*(7)*(7)*(7)$$

$$((2*5)^{23})*(5)*(7)*(7)*(7) = (10^{23})*(35)*(49)$$

From this step onwards it is probably possible to estimate the number of digits by approximating $$(10^{23})*(35)*(49)$$ to $$(10^{23})*(35)*(50)$$! But, just to make sure: $$(10^{23})*(35)*(49) = (10^{23})*(35)*(50-1)$$....
Therefore $$(10^{23})*(1750 - 35)$$, which can be simplified to $$(10^{23})*(1715)$$.

$$(10^{23})*(1715)$$ should have exactly 27 digits! I think the answer is D!

Please consider giving me KUDOS if you felt this post was helpful and correct! Or please enlighten me (in case my answer's incorrect) so that I can learn and improve from my mistakes! Thanks.

General Discussion

Manager
Joined: 15 Aug 2013
Posts: 53

Re: How many digits are there in the product 2^23*5^24*7^3? [#permalink] 03 Feb 2015, 18:00

Well, 2^23 * 5^24 * 7^3 can be simplified to 5 * 7^3 * 10^23.

Now either we can multiply 5 and 343 (7^3 = 343) and check, or intuitively we can easily see that 7^3 will definitely be greater than 200 (7*7 = 49, and we have one more 7 to multiply, which would roughly take us to 280+, if you don't remember that 7^3 = 343)... What matters here is that it will surely give me a 3 digit number, which when multiplied by 5 will give me no more than a 4 digit number. (We already saw that the number is greater than 200, so it is definitely a 4 digit number and not 3.)

Hence we can say that on simplification we get (4 digit number) * 10^23.

This will give me a 4 digit number followed by 23 zeroes, and hence the number of digits will be 27.

Ans: D

Intern
Joined: 24 Jan 2014
Posts: 35
Location: France
GMAT 1: 700 Q47 V39
GPA: 3
WE: General Management (Advertising and PR)

Re: How many digits are there in the product 2^23*5^24*7^3? [#permalink] 03 Feb 2015, 22:40

Hi Bunuel,

We can simplify to: 2^23*5^23*5*7^3 = (2*5)^23*5*343 = 10^23*1715.

We can see a pattern in the powers of 10: 10^1 has 2 digits, 10^2 has 3 digits, ..., 10^23 has 24 digits.

If we simplify 10^1*1715 = 17150, we see that 1715 adds 3 digits to any power of 10, so (a number with 24 digits, from 10^23) * (a number that adds 3 digits) = 27 digits.

CORRECT RESPONSE D
Math Expert
Joined: 02 Sep 2009
Posts: 54493

Re: How many digits are there in the product 2^23*5^24*7^3? [#permalink] 09 Feb 2015, 04:57

VERITAS PREP OFFICIAL SOLUTION:

The key to this problem is rearranging the math to play to your strengths. You should feel comfortable multiplying 2s by 5s to get 10s, so if you extract 2^23*5^23, you can visualize that number: 10^23, which is a 1 followed by 23 zeroes. Then you're left with 5^1*7^3, which you could either multiply out (not fun but not impossible, either) or again repackage to (5)(7) * (7)(7), which is 35 * 49. That is close enough to 35 * 50 that you can quickly see that that number will have 4 digits, so your final number will be those 4 digits followed by 23 zeroes, for a total of 27 digits.

Manager
Joined: 25 May 2016
Posts: 84
Location: Singapore
Concentration: Finance, General Management
GMAT 1: 620 Q46 V30

Re: How many digits are there in the product 2^23*5^24*7^3? [#permalink] 24 Jun 2016, 02:58

kdatt1991 wrote:
(solution quoted above)

Great explanation. +1 Kudos

Director
Joined: 20 Feb 2015
Posts: 795
Concentration: Strategy, General Management

Re: How many digits are there in the product 2^23*5^24*7^3? [#permalink] 24 Jun 2016, 04:02

2^23*5^23*5*7^3
or, 10^23*5*343
= 10^23*1715
= 27 digits

Intern
Joined: 08 Jun 2011
Posts: 18

Re: How many digits are there in the product 2^23*5^24*7^3? [#permalink] 08 Jul 2017, 09:40

To find the number of digits, the first thing that comes to mind is to do the multiplication and see for ourselves. But here, looking at the mammoth factors (2^23 * 5^24 * 7^3), we know it is not possible, or at least very time consuming.

So, the next thing that comes to mind: we know that
10^2 = 100 = 3 digits
10^3 = 1000 = 4 digits
In fact, 10^n will have n+1 digits.

Now, I see this because I see a lot of 2's and 5's in the product. Hence, let us try to simplify:
2^23 * 5^24 * 7^3 = (2*5)^23 * 5 * 7^3 = 10^23 * 5 * 7^3 = 24 digits + whatever we get from the rest.

Let us solve the rest: 7^3 will surely give me a 3 digit number, if you do not know that 7^3 = 343. Multiplying by 5 will give me a 4 digit number.

This will give me a 4 digit number followed by 23 zeroes, and hence the number of digits will be 27.

Ans: D

Senior Manager
Joined: 15 Jan 2017
Posts: 351

Re: How many digits are there in the product 2^23*5^24*7^3?
[#permalink] 19 Aug 2017, 01:36

I did a slightly more lengthy approach (but for someone still struggling with Quant, maybe useful):
- initial amount: 2^23 x 5^24 x 7^3
- least number of digits: 2*5*7 = 70 → 2 digits, leaving me with 70 x 2^22 x 5^23 x 7^2
- next number: 70 x 70 x 2^21 x 5^22 x 7
- next number: 343000 x 2^20 x 5^21 (from this step on, only zeroes get added)
- we now have 3430000 (7 digits) x 2^19 x 5^20 → the 2^19 x 5^19 pair adds 19 more zeroes, leaving a single factor of 5
- multiplying out, 343 x 5 = 1715, so the product is 1715 followed by 23 zeroes: 4 + 23 = 27 digits in total
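As a side note for anyone who wants to verify the arithmetic above by brute force: Python's arbitrary-precision integers make the check a couple of one-liners (a quick sketch, obviously not a method usable on test day):

```python
import math

# The product simplifies to 1715 followed by 23 zeroes,
# so it should have 4 + 23 = 27 digits.
n = 2**23 * 5**24 * 7**3
print(len(str(n)))  # 27

# The rule quoted above: 10^k has k+1 digits; more generally, a positive
# integer n has floor(log10(n)) + 1 digits.
print(math.floor(math.log10(n)) + 1)  # 27
```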
http://www.sciencemadness.org/talk/viewthread.php?tid=70960
Sciencemadness Discussion Board » Fundamentals » Reagents and Apparatus Acquisition

Subject: Anyone interested in purchasing rhodium or ruthenium sponge? Your budget is the limit!

Melgar
International Hazard
Posts: 1706 Registered: 23-2-2010 Location: NYC
Mood: Aromatic

A friend of mine was having a hard time finding rhodium and ruthenium sponge in small quantities, so he went for very large quantities; much more than he needed, in fact. Rhodium and ruthenium sponge rarely even show up on eBay. We haven't settled on a pricing model yet, but you guys would obviously get a discount that buyers on eBay would not. Prices would probably be based on the metals' spot prices plus a percentage, plus shipping. Discounts would be available for larger orders. The first few buyers would get discounts too, assuming they're regular forum members and can vouch for the quality once they receive it.

For rhodium, it can be dissolved in either a NaHSO4 fused salt or oleum. There are probably other ways too, but since rhodium sulfate is its most-used, most stable salt, most methods require oxidation with sulfur trioxide in some form. Ruthenium can be dissolved in bleach, which causes it to form RuO4, similar to osmium.
(It's right above Os on the periodic table) Since this stuff is toxic, and you probably don't want to lose it as a gas anyway, what I did when I was trying to isolate the finely-divided metal was to use bleach under a layer of cyclohexane, then add the ruthenium metal, which would sink to the bottom. RuO4 is heavy and nonpolar and so dissolves readily in nonpolar solvents, although its half-life isn't very long, so it would eventually form a black suspension in the cyclohexane. The suspension would eventually form clumps and settle back down to the bleach layer, where it would immediately be reoxidized due to its high surface area. So as long as there was active bleach in the bottom layer, the suspension would remain in the top layer. The whole system did give off bubbles, but we determined that they were most likely O2, as they did not form metallic coatings on glass like RuO4 did, and when collected, could cause a smouldering splint to relight. We also didn't notice any significant oxidation of aliphatic hydrocarbons as a solvent for RuO4, although we did learn that THF was a very poor solvent choice. I'm curious what that reaction product was (could very well have just been a reaction with the bleach), but it was heading quickly for thermal runaway, so we just quenched it with sodium thiosulfate and dumped it into the organic waste container. Anyway, considering how much of a pain in the ass it was for him to get these metals, this could be quite an opportunity for those of you interested in the chemistry of these elements. Pricing would be by the gram, and at least at first, we'll just give out quotes until we establish a pricing formula. It'll be less than double the spot price for sure though. Cryolite. Hazard to Others Posts: 217 Registered: 28-6-2016 Location: CA Member Is Online Mood: No Mood Ruthenium tetroxide is capable of oxidizing ethers to the corresponding diacids. In the case of THF, the product was likely succinic acid. 
Seeing as these are both very expensive but very useful catalytic metals, I am interested in how the pricing of low quantities will be. How much would, say, 1 gram cost?

[Edited on 23-11-2016 by Cryolite.]

j_sum1
Super Moderator
Posts: 3573 Registered: 4-10-2014 Location: Oz
Mood: Struggling with inter-job inertia.

Can it simply be ampouled and sold as an element sample?

A little shameless self-promotion: You are welcome to tour my newly-completed lab. Channel is growing. Browse some others if you feel inclined.

careysub
International Hazard
Posts: 1339 Registered: 4-8-2014 Location: Coastal Sage Scrub Biome
Mood: Lowest quantum state

Or simply put in a little jar and sold as an element sample? As noble metals, putting them in an ampoule is not too critical; I would be happy with cheaper, faster access sans ampoule. Besides, if I want to do a little chemistry with a sample I would have to break the ampoule anyway.

About that which we cannot speak, we must remain silent. -Wittgenstein
Some things can never be spoken / Some things cannot be pronounced / That word does not exist in any language / It will never be uttered by a human mouth

Fleaker
International Hazard
Posts: 1217 Registered: 19-6-2005
Mood: nucleophilic

Depending on the particle size of the sponge, it may go into refluxing HBr or HCl/Cl2 over the course of a week or two. I should emphasize that RuO4 isn't formed by adding Ru to bleach--it's a mellow reaction. However, if one takes Ru sponge and dissolves it in hypochlorite, warms it and acidifies that liquor, there certainly will be RuO4 formed because of the chlorine formed from the chlorate content in the bleach (especially after heating it). Brauer has a good discussion on RuO4. I've never had it explode on me and always attributed such claims to explosive chlorine-oxygen compounds if the ruthenate is prepared from hypochlorite.
Woelen has an outstanding page on the chemistry of ruthenium in aqueous solutions here: http://woelen.homescience.net/science/chem/exps/ruthenium/in...

"Kid, you don't even know just what you don't know." --The Dark Lord Sauron

Melgar

Ruthenium will bubble pretty vigorously in swimming-pool 10% sodium hypochlorite solution, and if you don't use an inert organic solvent to trap the RuO4, it'll form a metallic layer all over the inside of the glass. I imagine the ruthenium probably catalyzes the decomposition of NaOCl to NaCl and O2 to an extent, but there is definitely a reaction that produces significant amounts of RuO4 without any external heat source, even with a solid ingot of ruthenium. The reaction is exothermic, but the ruthenium suspension becomes visible almost immediately, indicating RuO4 formation. According to this video, if the NaOH concentration isn't very high, RuO4 can form. Apparently this is the case with commercial hypochlorite solutions: https://youtu.be/H7Ng4sOVkns

Woelen probably added the NaOH deliberately, to avoid this reaction happening, but using cyclohexane or heavy mineral oil (cheaper, and more viscous, so bubbles and metal stay suspended for longer) traps the RuO4 for long enough that it decomposes back to metallic Ru before it can escape into the air. I only ever did this reaction outside, but if someone did it indoors, obviously a fume hood would be necessary.

I didn't realize that the spot price of ruthenium is only like $40 an ounce, making it by far the cheapest platinum group metal. So I guess it doesn't make sense to sell it per gram. And with rhodium prices below platinum prices, now he's thinking of holding onto the rhodium as an investment. Still waiting to hear back from him as far as how (or if) he wants to sell it.
Fleaker

Have you personally seen this to be the case? I can't say that my experience with Ru in hypochlorite has been entirely the same. For one, I have not seen any deposition of RuO2/Ru on any glassware when preparing Ru/Al2O3 catalyst samples for ICP-OES, but admittedly I try and keep the glassware clean. The fizzling in the case of 50 g 5% Ru/Al2O3 into 200 mL 12.5% w/v NaOCl isn't that vigorous, but it is definitely exothermic. I just don't see much RuO4 being produced in alkaline conditions such as would be encountered with strong bleach. RuO4 has a pretty ozone-ish smell; I don't think RuO4 or OsO4 smell at all like chlorine or chlorine dioxide, and I've never smelled either in alkaline situations, although all work I ever did with it has been under a good draft. I believe almost all of the gaseous species produced is simple oxygen; it may be catalytically produced, but it may also be produced by the production of sodium chlorate as the temperature rises.

One of the first steps after reduction of any RuO2 to metal for a Ru/Al2O3 catalyst is to put it all into 1.5 M NaOH, stir, and run chlorine through it. After that's done, it's filter pressed and boiled down before acidification. The boiling makes sodium chlorate out of the hypochlorite. When that concentrate is slowly acidified, the RuO4 distills with Cl2. At the end it needs to be boiled to get the last of the RuO4 out of solution. I never noticed much difference in Ru concentrations on samples prepped with NaOCl vs KOH/KNO3 if the samples were properly pre-reduced to Ru with H2. If they weren't reduced, then the bleach would not recover the Ru effectively but the fusion would. I do not know how much RuO4 could be formed in strong, alkaline hypochlorite solutions but I imagine while it might be possible, it probably isn't much.
Maybe some RuO4 is formed at pH 8-9 in OCl- but I have never noticed it in practice and would be hesitant to say that it would be so much as to be analytically meaningful. As an aside, my 12.5% w/v NaOCl has a pH of 12.5 per the meter (it bleaches papers, obviously). It's been my experience that Os is much easier to oxidize than Ru (heat and with other oxidants).

And yes, Ru, Rh, even Pt, all are quite undervalued right now. Good time to acquire, in my honest opinion. If your friend doesn't want to sell any, let me know and I can get you fixed up with whichever ones you want. Also, I can make them into salts for you if you desire, but there would be a minimum order on account of the tedium and time to set up tube furnaces and such.

careysub

Does anyone know of a source that sells ruthenium anywhere near the spot price? Silver, gold, platinum, and palladium are available in "investment" form on eBay, for example, at not too large a mark-up from spot (165% of spot for platinum), but the cheapest ruthenium seems to be $15/g (1100% of spot). It is priced as if it were $300/troy oz. Ruthenium "investment" items do not exist on eBay.

violet sin
International Hazard
Posts: 1180 Registered: 2-9-2012 Location: :14,15,9,20,1,3,15,12
Mood: Humaneize

Alibaba. I had an offer to get Ru @ 250g/970$ US, shipping included. Not a spectacular deal, but the best I had found. Could scare up the contact info if you like this evening. They were nice and courted a prospective buyer with patience.
careysub

Unfortunately I cannot afford to save that much money.

Maroboduus
National Hazard
Posts: 257 Registered: 14-9-2016 Location: 26 Ancho Street
Mood: vacant

Quote: Originally posted by careysub
Does anyone know of a source that sells ruthenium anywhere near the spot price? Silver, gold, and platinum and palladium are available in "investment" form on eBay for example at not too large a mark-up from spot (165% of spot for platinum), but the cheapest ruthenium seems to be $15/g (1100% of spot). It is priced as if it was $300/troz Ruthenium "investment" items do not exist on eBay.

Yes, as a matter of fact I DO know somebody who sells it at a most reasonable markup from the spot price.

Melgar

Quote: Originally posted by Fleaker
Have you personally seen this to be the case? I can't say that my experience with Ru in hypochlorite has been entirely the same.

Yes. My friend put a ruthenium ingot directly into bleach, and it coated the inside of his flask with a metallic layer. Then he put a beaker over the flask, and it coated the inside of that with a metallic layer too, as well as part of the table. This was before I got there, otherwise I would have advised against doing that, but I did see the metallic coating all over his glass. When I did it, I put a nonpolar layer on top of the bleach, then added the ingot. There was a lot of bubbling, the hypochlorite layer became bright yellow, and soon, a black, finely-divided particulate appeared in the nonpolar layer.
As the reaction proceeded, the particulate gradually formed loose clumps, then slowly sank to the bottom of the nonpolar layer. The hypochlorite layer eventually lost its yellow color, at which point the metal particulate would start drifting back into the aqueous layer, and the bubbling would stop. Since this was a ruthenium ingot, not sponge, my best guess as to what happened is that the reaction of NaOH with ruthenium used up the NaOH in some localized areas near the ingot, causing the pH to drop in those areas enough to allow some RuO4 formation. The metallic layer on his glass was not thick, and not much ruthenium would need to be deposited to make a metallic layer visible. When I captured the gas in a nonpolar layer, it made the entire layer black, but the amount of metal that was actually suspended in that layer was very small. Does that seem like a plausible explanation?

As far as sourcing these metals for close to their spot price, my friend said he got his through Crystal Bay Trading, formerly pm-connect.com, now located at https://platinumtradingonline.com/. You have to call them to arrange a purchase though, presumably because of the massive amount of attempted fraud in the precious metals and jewelry industries.

careysub

Quote: Originally posted by Melgar
... The metallic layer on his glass was not thick, and not much ruthenium would need to be deposited to make a metallic layer visible.

Indeed not. The "absorption" (i.e. not transmission) path, reducing transmission by a factor of 1/e (to 36.8%), in a metal is about 4 nm; a high-quality aluminum mirror coating, much thicker than necessary to reflect efficiently for long service life, is 100 nm. A 10 nm layer would look like a mirror.
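For anyone who wants to play with those attenuation figures: a small sketch, assuming simple exponential attenuation with the quoted 4 nm 1/e length (real thin films also reflect and interfere, so this is only an order-of-magnitude toy, not thin-film optics):

```python
import math

DELTA_NM = 4.0  # 1/e attenuation length quoted above for a typical metal

def transmission(thickness_nm):
    """Fraction of light transmitted through a metal film, ignoring reflection."""
    return math.exp(-thickness_nm / DELTA_NM)

for t in (4, 10, 100):
    print(f"{t:>3} nm film: {transmission(t):.2e} transmitted")
```

At 10 nm only about 8% of the light gets through, consistent with the claim that such a layer already looks mirror-like; at 100 nm transmission is utterly negligible.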
Your proposed mechanism (I gather, formation of volatile RuO4 that gets reduced on the surfaces it contacts) looks like a convincing mechanism to me. I never heard of this phenomenon before, but it is interesting to me as an amateur telescope maker. People use the Brashear process (with tin salt pre-treatment) to silver mirrors at home. A benzotriazole over-coating is often used to protect the short-lived silver coating. Aluminizing requires expensive vacuum chamber set-ups (but I know people working on that too). Ruthenium coating using RuO4 vapor is something I have never heard proposed. I wonder how it would fare with a benzotriazole coating.

I did a little Googling on ruthenium tetroxide toxicity, since it is similar to the well-known, very toxic and volatile osmium tetroxide, but apparently there is little information on toxicity. Since ruthenium is produced in much larger quantities than osmium, and is much cheaper, I wonder if that is an indicator that it is considerably less toxic, since otherwise reports of toxic incidents would be in the literature.

[Edited on 29-11-2016 by careysub]

Melgar

Ruthenium tetroxide is supposed to be pretty hazardous, but it's hard to get it into that state, and it doesn't stay like that for long. IIRC, osmium oxidizes more readily, which is what makes it more dangerous. Considering they use nickel carbonyl in industry, I can't imagine ruthenium tetroxide is worse than that.

If it's any help, the layer was quite dark and not extremely uniform; more the color of iron or lead than silver. The precipitate in the nonpolar layer was even darker (but then, so are silver nitrate stains).
It was pretty hard to get off though, so I guess that's good news! The particles might not be a good size for forming a reflective coating, and I'd imagine that a gas is less easy to control than a liquid. But it would be a whole lot heavier than air, so you could use that to your advantage. I wonder if you could apply a static charge to the glass, or if that would do anything useful? Presumably the RuO4 goes through a few steps as it decomposes, possibly forming ions?

careysub

I found a paper on the reflectivity of ruthenium, and while it is good in the IR spectrum it is not outstanding in the visual range, about 70%. This is about the same as the reflectivity of fresh speculum metal, which preceded the use of silvered glass (invented by Liebig). Both silver and speculum metal tarnished terribly, so if ruthenium is more resistant, it might have a long-term advantage.

One would expect an actual coating process to be carefully controlled, using an enclosed coating chamber with still air and appropriate geometry. Vapor deposition is after all how aluminum coating works, though in a vacuum. But the poor initial reflectivity takes the wind out of the sails on this idea.

Fleaker

Quote: Originally posted by Melgar
Quote: Originally posted by Fleaker
Have you personally seen this to be the case? I can't say that my experience with Ru in hypochlorite has been entirely the same.

Yes. My friend put a ruthenium ingot directly into bleach, and it coated the inside of his flask with a metallic layer.
Then he put a beaker over the flask, and it coated the inside of that with a metallic layer too, as well as part of the table. This was before I got there, otherwise I would have advised against doing that, but I did see the metallic coating all over his glass. When I did it, I put a nonpolar layer on top of the bleach, then added the ingot. There was a lot of bubbling, the hypochlorite layer became bright yellow, and soon, a black, finely-divided particulate appeared in the nonpolar layer. As the reaction proceeded, the particulate gradually formed loose clumps, then slowly sank to the bottom of the nonpolar layer. The hypochlorite layer eventually lost its yellow color, at which point the metal particulate would start drifting back into the aqueous layer, and the bubbling would stop. Since this was a ruthenium ingot, not sponge, my best guess as to what happened is that the reaction of NaOH with ruthenium used up the NaOH in some localized areas near the ingot, causing the pH to drop in those areas enough to allow some RuO4 formation. The metallic layer on his glass was not thick, and not much ruthenium would need to be deposited to make a metallic layer visible. When I captured the gas in a nonpolar layer, it made the entire layer black, but the amount of metal that was actually suspended in that layer was very small. Does that seem like a plausible explanation? As far as sourcing these metals for close to their spot price, my friend said he got his through Crystal Bay Trading, formerly pm-connect.com, now located at https://platinumtradingonline.com/. You have to call them to arrange a purchase though, presumably because of the massive amount of attempted fraud in the precious metals and jewelry industries. Seems plausible to me but I don't have any Ru ingot that I'd do that to (my pieces are ebeam pieces and were gifts). I do have a bunch of sponge but that might not work due to surface area phenomena like you advised. 
I never observed anything like that in my experience with strong, fresh bleach in the lab, so I assume it's not possible or I'd be dead by now. In process, when recovering from a supported catalyst, we usually just keep the whole reactor cold and open up a cylinder of chlorine while it stirs.

I can't remember if I ever put up a photo of the "Bubble Wrap of Death". I had some few hundred milligrams of OsO4 that someone in their infinite wisdom decided to put into a screw-top bottle. Even electrical-taped shut, the inside of the safe and the bubble wrap it was in (as well as the bottle label) were coated with a shiny metallic black sheen. I disposed of it upon seeing that. It's very nasty. Both of them are.

The sheening and mirroring of glass is something every PGM has done for me in one fashion or the other when reducing them in solution. Os and Ru will darken the distilling tube, flask, and receiver, and even fingerprints on the flask, if the glassware wasn't cleaned very well with something strong (Caro's acid, then follow with ammonia and water). Concentrated sulfuric acid serves as the joint lubricant. If it stays clear with sulfuric, usually that means no organics and the oxides are fine. They all smell very strongly. The carbonyl I think is much worse.

I know Alan at Crystal Bay. Good guy and he could probably provide any of those materials (particularly the Ru), really knows his stuff. I imagine his son is running it now though. I haven't talked to him in a year or two but last I recall he was taking it easy. Mind you, I can also provide Ru sponge to you. I don't refine much of it as it's not worth the time at this time. Os occasionally, but it's more expedient to buy it when needed for a customer. Usually I get Ru from Ru/Al2O3 catalysts or from it on titanium or niobium anodes. Last catalyst I saw was 5% Ru/Al2O3 and was about 5500 lbs dry. Unless it's practically given away for free, it's just not worth it at $40/oz.
Careysub, there are people who use weird Ru-Os alloy targets for some sort of sputtering application! I know a lot of people don't like to ebeam melt Os because it gets all over the melt chamber and stinks it up when it's opened! I do not know what they form when they deposit on dirty surfaces, but I think the reduction product is most likely the dioxide, not the metal.

careysub

Apparently Crystal Bay has a 50 troy oz minimum for purchasing ruthenium. I am sure the price is good, but you need to tie up a couple of thousand dollars in ruthenium.

Fleaker

I would be willing to distribute smaller quantities of pure Ru sponge to members. I am in the process of converting my accumulated Ru into Ru sponge now that the market is rapidly moving.
https://math.stackexchange.com/questions/3829469/finite-field-extension-of-mathbbr-why-must-it-admit-an-element-such-that
# Finite field extension of $\mathbb{R}$, why must it admit an element such that $x^2+1=0$

Proof understanding. The proposition is as follows, and I read the proof, but I am not so certain regarding a particular point the proof has made:

Any finite extension of $$\mathbb{R}$$ is at most degree $$2$$.

Proof: Suppose the field extension $$\mathbb{F}$$ is non-trivial and thus there must exist $$\alpha\in\mathbb{F}\setminus\mathbb{R}.$$ Since the extension is finite, $$\alpha$$ must be algebraic over $$\mathbb{R}$$. In particular its minimal polynomial must be quadratic since $$\alpha$$ is not in $$\mathbb{R}.$$ Hence there must exist an element $$x\in\mathbb{F}$$ such that $$x^2+1=0.$$ [The rest of the proof is quite understandable.]

My question is why is it guaranteed that such $$x$$ must exist? I get that the minimal polynomial must be of the form $$m_\alpha(x)=(x-p)(x-\overline{p})$$ for some $$p\in\mathbb{C}\setminus\mathbb{R}$$ but does that do much?

• Are you confused about why it is quadratic or why you can find an element satisfying $x^2+1=0$? – user208649 Sep 17, 2020 at 7:27
• @TokenToucan Hello, I am more confused about the latter; why can we always find $x$ such that $x^2+1=0$ Sep 17, 2020 at 7:30

Since $$\mathbb{C}$$ is algebraically closed, every finite extension of $$\mathbb{R}$$ embeds in $$\mathbb{C}$$. The degree of $$\mathbb{C}$$ over $$\mathbb{R}$$ is two, so every finite extension of $$\mathbb{R}$$ has degree at most two; a non-trivial one therefore has degree exactly two, and by equality of degrees it must equal $$\mathbb{C}$$, hence contain an $$x$$ such that $$x^2+1=0.$$

Another proof: If the minimal polynomial is $$(x-\alpha)(x-\bar{\alpha})$$ then $$\alpha - \bar{\alpha}=2\operatorname{Im}(\alpha)i$$ and since $$2\operatorname{Im}(\alpha)\in \mathbb{R}$$ we know that $$\frac{2\operatorname{Im}(\alpha)i }{2\operatorname{Im}(\alpha)} = i$$ is in $$\mathbb F$$ (by closure).
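As a quick numerical sanity check of the second argument, Python's built-in complex numbers will do (a sketch; `alpha` here is an arbitrary non-real number standing in for the root of the minimal polynomial):

```python
# For any non-real alpha, (alpha - conj(alpha)) / (2 * Im(alpha)) should
# be i, i.e. an element whose square is -1.
alpha = 3.0 - 2.5j                                  # arbitrary non-real "alpha"
x = (alpha - alpha.conjugate()) / (2 * alpha.imag)  # (-5j) / (-5) = 1j
print(x, x**2)  # 1j (-1+0j)
```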
The minimal polynomial of $$\alpha$$ is of the form $$x^2+\beta x+\gamma$$. Since it is irreducible over $$\Bbb R$$, $$\beta^2-4\gamma<0$$. You know then that $$\alpha^2+\beta\alpha+\gamma=0$$. In other words,$$\left(\alpha+\frac\beta2\right)^2+\gamma-\frac{\beta^2}4=0.$$So, take$$x=\frac{\alpha+\frac\beta2}{\sqrt{\gamma-\frac{\beta^2}4}}$$and then$$x^2=\frac{\left(\alpha+\frac\beta2\right)^2}{\gamma-\frac{\beta^2}4}=-1.$$In other words, $$x^2+1=0$$.

• Hi Jose, thank you for your answer! I am just thinking would your solution potentially assume that $\mathbb{F}\subset\mathbb{C}$? This is because looking at $x=\frac{\alpha-\beta/2}{\sqrt{\gamma-\beta/4}}$, it could be the case that the denominator is actually an imaginary number. Henceforth, for $x\in\mathbb{F}$ wouldn't we need $\mathbb{F}\subset\mathbb{C}$ as an assumption at the start? Sep 17, 2020 at 8:26
• My answer has nothing to do with $\Bbb C$ and, since $\beta^2-4\gamma<0$, it is clear that $\gamma-\frac{\beta^2}4>0$. So, no, $\sqrt{\gamma-\frac{\beta^2}4}$ cannot be a complex non-real number. Sep 17, 2020 at 8:47
• Ohh right, your answer is exactly what I am looking for! By the way, there were some typos in your answer, in particular you missed the square in $\beta$ Sep 17, 2020 at 8:55
• Right you are! I have edited my answer. Thank you. Sep 17, 2020 at 9:53

If you know $$\mathbb C$$ is algebraically closed, then we may assume $$\mathbb F$$ is embedded in $$\mathbb C$$, and in this way view $$\alpha$$ as being an element of $$\mathbb C$$. That means you can write $$\alpha = a+bi$$ with $$a,b$$ real. The only time $$\alpha$$ is not in $$\mathbb R$$ is when $$b\neq 0$$ and so $$i = \frac{\alpha - a}{b}$$ will satisfy $$x^2 + 1 = 0$$.

• Thank you for your great answer. However though, if I could take your first paragraph as an assumption then would I be right in saying that the proposition becomes trivial?
I guess what I am trying to say is if we can assume $\mathbb{F}$ is embedded in $\mathbb{C}$, since $\mathbb{C}$ is a degree $2$ extension of $\mathbb{R}$ then automatically the proposition at the very beginning is proved. Hence there will no longer need to find $x$ such that $x^2+1=0$ Sep 17, 2020 at 7:52 • Yes, I suppose it might. However, you can prove that $\mathbb C$ is algebraically closed without using your proposition, so the argument would not be circular, at least. – user208649 Sep 17, 2020 at 7:57 • Actually, in the proof you wrote, I'm not sure how you'd know that $\alpha$ is quadratic over $\mathbb R$ without already knowing that $\mathbb C$ is the algebraic closure of $\mathbb R$ or something very nearly equivalent. – user208649 Sep 17, 2020 at 7:59 • You are right to be honest, this proof is not very well written I must say Sep 17, 2020 at 8:09
http://brunettoziosi.com/posts/pdf-manipulation/
Contents

PDF manipulation

It happens, from time to time, that I find myself looking for a way to manipulate PDFs on the fly. For example, I want to print them two pages per sheet, or to extract a few pages, or to shrink the size of the file without degrading the quality. Here are a few tricks I collected and posted here so that I'm able to find them again.

The first trick comes from here and assumes you have pdfjam installed. This is how you can produce a pdf with two pages per sheet:

pdfjam --nup 2x1 infile.pdf --landscape --outfile outfile.pdf

Booklet

You can also print your pdf file as a booklet. This means that the pages of your file are shuffled (and placed two per sheet) so that you can join them with a clip, some glue or strings in the middle, just like a real book. The command is:

pdfbook --short-edge infile.pdf

pdfbook is part of pdfjam.

Extract (or join) pages

If you need to extract some pages from your pdf file you can just run

pdftk infile.pdf cat <first_page>-<last_page> output outfile.pdf

To join pdf files, instead, run

pdftk infile1.pdf infile2.pdf infile3.pdf cat output outfile.pdf

Obviously you need pdftk.

Shrink pdf file size

Sometimes a pdf grows in size for no (apparent) reason. It is possible to shrink it by reducing it to pdf defaults. You will need gs. The command you need to run is

gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/default -dNOPAUSE -dQUIET -dBATCH -sOutputFile=outfile.pdf infile.pdf

It can be quite difficult to remember, so you can create a bash alias for a function doing it for you. In .bashrc add

alias pdfdefault='function _pdfdefault() { gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/default -dNOPAUSE -dQUIET -dBATCH -sOutputFile="$2" "$1"; }; _pdfdefault'

(note the space between the output option and the input file, so that gs sees them as separate arguments) and run . ~/.bashrc before running pdfdefault infile.pdf outfile.pdf.

Two pages per sheet with latex

It is possible, if you are writing something with LaTeX, to produce a pdf with two pages per sheet without needing to run pdfjam.
In this case just add, at the beginning of your latex document:

\usepackage{pgfpages}
\pgfpagesuselayout{2 on 1}[a4paper,landscape]
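For completeness, a minimal document using this layout might look like the following (a sketch, assuming a standard pdflatex toolchain):

```latex
\documentclass{article}
\usepackage{pgfpages}
% Place two logical pages on each physical (landscape A4) sheet.
\pgfpagesuselayout{2 on 1}[a4paper,landscape]
\begin{document}
First page.
\newpage
Second page.
\end{document}
```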
https://brilliant.org/problems/odd-and-even-2/
# Odd and Even

If $m$ is odd and $n$ is even, which of the following is definitely an even number?

A. $\ m^2 + n$
B. $\ m^2 + mn$
C. $\ m^2+n^2$
D. $\ m+ n^2$
E. $\ mn + n^2$
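As a sanity check (an illustrative addition to the problem — note that running it reveals the answer), one can test the parity of each option over a range of odd $m$ and even $n$:

```python
# Check which expressions come out even for every sampled odd m and even n.
exprs = {
    "A": lambda m, n: m*m + n,
    "B": lambda m, n: m*m + m*n,
    "C": lambda m, n: m*m + n*n,
    "D": lambda m, n: m + n*n,
    "E": lambda m, n: m*n + n*n,
}
odd = range(-9, 10, 2)     # odd integers -9, -7, ..., 9
even = range(-8, 9, 2)     # even integers -8, -6, ..., 8
always_even = [k for k, f in exprs.items()
               if all(f(m, n) % 2 == 0 for m in odd for n in even)]
print(always_even)  # ['E']
```

This only samples a finite range, of course; the parity argument (odd² is odd, odd + even is odd, even + even is even) is what actually settles the question.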
https://joomla.stackexchange.com/questions/28531/dpdocker-and-joomla
# DPDocker and Joomla I'm new to Docker. I tested https://github.com/Digital-Peak/DPDocker. I was able to implement things quickly with the component https://github.com/Digital-Peak/DPAttachments. Now I'm in the process of converting one of my components. I am currently creating symlinks with https://robo.li/. I wanted to keep it that way. I use this component for the first tests: https://github.com/astridx/boilerplate astrid@ubuntu:~/git/DPDocker/boilerplate$vendor/bin/robo map ~/git/DPDocker/DPDocker/webserver/www/j4/ [Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/language/de-DE/pkg_foos.ini","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//language/de-DE/pkg_foos.ini"] [Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/language/de-DE/pkg_foos.sys.ini","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//language/de-DE/pkg_foos.sys.ini"] [Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/language/en-GB/pkg_foos.ini","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//language/en-GB/pkg_foos.ini"] [Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/language/en-GB/pkg_foos.sys.ini","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//language/en-GB/pkg_foos.sys.ini"] [Filesystem\DeleteDir] Deleted /home/astrid/git/DPDocker/DPDocker/webserver/www/j4//api/components/com_foos... [Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/api/components/com_foos","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//api/components/com_foos"] [Filesystem\DeleteDir] Deleted /home/astrid/git/DPDocker/DPDocker/webserver/www/j4//components/com_foos... 
[Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/components/com_foos","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//components/com_foos"] [Filesystem\DeleteDir] Deleted /home/astrid/git/DPDocker/DPDocker/webserver/www/j4//plugins/webservices/foos... [Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/plugins/webservices/foos","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//plugins/webservices/foos"] [Filesystem\DeleteDir] Deleted /home/astrid/git/DPDocker/DPDocker/webserver/www/j4//templates/facile... [Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/templates/facile","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//templates/facile"] [Filesystem\DeleteDir] Deleted /home/astrid/git/DPDocker/DPDocker/webserver/www/j4//administrator/components/com_foos... [Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/administrator/components/com_foos","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//administrator/components/com_foos"] [Filesystem\DeleteDir] Deleted /home/astrid/git/DPDocker/DPDocker/webserver/www/j4//media/com_foos... [Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/media/com_foos","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//media/com_foos"] [Filesystem\DeleteDir] Deleted /home/astrid/git/DPDocker/DPDocker/webserver/www/j4//modules/mod_foo... [Filesystem\FilesystemStack] symlink ["/home/astrid/git/DPDocker/boilerplate/src/modules/mod_foo","/home/astrid/git/DPDocker/DPDocker/webserver/www/j4//modules/mod_foo"] astrid@ubuntu:~/git/DPDocker/boilerplate$
https://bibbase.org/network/publication/bohr-mochizuki-of-1992-classificationandpredictionofproteinsidechainsbyneuralnetworktechniques
CLASSIFICATION AND PREDICTION OF PROTEIN SIDE-CHAINS BY NEURAL NETWORK TECHNIQUES. Bohr, H.; Mochizuki, K.; and Wolynes, P. G. World Scientific, 1992.

@article{Bohr:tr,
  author  = {Bohr, H. and Mochizuki, K. and Wolynes, P. G.},
  title   = {{CLASSIFICATION AND PREDICTION OF PROTEIN SIDE-CHAINS BY NEURAL NETWORK TECHNIQUES}},
  journal = {World Scientific},
  year    = {1992}
}
https://www.physicsforums.com/threads/pumping-a-cylindrical-tank.202742/
# Pumping a cylindrical tank

1. Dec 5, 2007

### kuahji

A storage tank is a right circular cylinder 20 ft long and 8 ft in diameter with its axis horizontal. If the tank is half full of olive oil weighing 57 lb/ft^3, find the work done in emptying it through a pipe that runs from the bottom of the tank to an outlet that is 6 ft above the top of the tank.

My work: I set the bottom of the tank at the point (0,0) & then drew a circle with a radius of 4.

$$dV=2x(20)\,dy$$

then solved the circle's equation for x, $$x=\sqrt{16-y^2}$$

$$dV=40\sqrt{16-y^2}\,dy$$

$$F(y)=57(40)\sqrt{16-y^2}\,dy$$

10-y should be the distance the work must cover, so

$$W=2280\int(10-y)\sqrt{16-y^2}\,dy$$

Then I distributed the (10-y):

$$W=22800\int\sqrt{16-y^2}\,dy - 2280\int y\sqrt{16-y^2}\,dy$$

For part two, I set $$u=16-y^2$$ & got

$$W=22800\int\sqrt{16-y^2}\,dy + 1140\int y\sqrt{16-y^2}\,dy$$

This is the part where I get lost. I did it a bit differently from the solutions manual, but at this point the manual shows inserting $$4\pi$$ as follows: $$22800(4\pi)+ 1140\int y\sqrt{16-y^2}\,dy$$ (evaluated from 0 to -4); I tried from (0 to 4) in my solution. I don't understand why the solutions manual is doing this particular step. It says it's the area of a semicircle, but can anyone explain why I use it in this problem & where the integral disappears to?

2. Dec 5, 2007

### Shooting Star

The problem can be solved easily in a different way if you calculate the position of the centre of mass of the oil by integration. After that, work done = Mgh, where M is the total mass and h the height the CM has to rise. This would be the physicist's approach.

3. Dec 5, 2007

### kuahji

Wouldn't you have to calculate it for a 3d object though? I don't have those skills yet. So far all I've dealt with was thin plates, 2d objects.

4. Dec 6, 2007

### Shooting Star

Yes, for 3d objects, but you have to do that anyway. And because of symmetry, the integration would only be for 2d.
I don't notice any value of g in your calcs? Also, I'm not very sure what you are trying to do. Let me know if you need more help, but only after explaining what method you are following.

5. Dec 6, 2007

### kuahji

The value of g should already be in the 57 lb, as weight. Guess I'm kind of lost myself, but up until there I have everything the solution manual has.

6. Jul 9, 2008

### dudicuff

Hi everyone. I have a very similar problem to the one kuahji posted, but my problem is a storage tank completely full of oil. Would I then calculate using (6+8-y), i.e. (14-y), or would I use the same (10-y)? Also, wouldn't my integral be from -4 to 4 (or 2*[0 to 4])? Thanks for any help!
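For readers who want a numerical cross-check of the setup (an addition, not from the thread; it restates the geometry with the circle centred at the origin, oil filling $y\in[-4,0]$, and every slice lifted to the outlet at $y=10$):

```python
import math

# W = 2280 * ∫_{-4}^{0} (10 - y) sqrt(16 - y^2) dy
# where 2280 = 57 lb/ft^3 * 40 (the 2x * 20 ft strip factor) and the outlet
# sits 6 ft above the top of the tank at y = 4, hence y = 10.
def integrand(y):
    return (10.0 - y) * math.sqrt(16.0 - y * y)

# Midpoint rule on a fine grid.
n = 100_000
a, b = -4.0, 0.0
h = (b - a) / n
W = 2280.0 * h * sum(integrand(a + (i + 0.5) * h) for i in range(n))

# Closed form: 10 * (quarter-circle area 4*pi) plus 64/3 from the y-term.
W_exact = 2280.0 * (40.0 * math.pi + 64.0 / 3.0)
print(round(W), round(W_exact))  # both ≈ 335153 ft·lb
```

The $4\pi$ the manual inserts is exactly the quarter-circle area $\int_{-4}^{0}\sqrt{16-y^2}\,dy$; that is where that integral "disappears" to.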
https://www.transtutors.com/questions/nicole-thinks-that-her-business-nicole-s-getaway-spa-ngs-is-doing-really-well-and-sh-2560581.htm
# Nicole thinks that her business, Nicole's Getaway Spa (NGS), is doing really well and she is plan...

Nicole thinks that her business, Nicole's Getaway Spa (NGS), is doing really well and she is planning a large expansion. With such a large expansion, Nicole will need to finance some of it using debt. She signed a one-year note payable with the bank for $45,000 with a 6 percent interest rate. The note was issued October 1, 2014; interest is payable semiannually; and the end of Nicole's accounting period is December 31.

Required: Prepare the journal entries required from the issuance of the note until its maturity on September 30, 2015, assuming that no entries are made other than at the end of the accounting period, when interest is payable, and when the note reaches its maturity. (If no entry is required for a transaction/event, select "No Journal Entry Required" in the first account field. Do not round intermediate calculations.)

Transaction list:

1. Record the borrowing of $45,000.
2. Record the accrued interest on December 31, 2014.
3. Record the interest payment on March 31, 2015, assuming no interest accrual has been recorded since December 31, 2014.
4. Record the interest payment on September 30, 2015, assuming no interest accrual has been recorded since the payment on March 31, 2015.
5. Record the repayment of the note on its maturity date.
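As a cross-check on the amounts involved (an addition, not part of the exercise), straight-line interest on the note works out as follows:

```python
principal = 45_000
rate_pct = 6  # percent per year

# Integer arithmetic keeps the dollar amounts exact.
annual_interest = principal * rate_pct // 100   # interest for a full year
semiannual_payment = annual_interest // 2       # paid Mar 31 and Sep 30
accrued_dec31 = annual_interest * 3 // 12       # Oct 1 - Dec 31 accrual

print(annual_interest, semiannual_payment, accrued_dec31)  # 2700 1350 675
```

So the December 31 adjusting entry accrues $675 of interest payable, each semiannual payment is $1,350 (of which $675 was already accrued at the first payment), and the maturity entry repays the $45,000 principal.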
https://scicomp.stackexchange.com/questions/7139/fenics-how-to-interpolate-data-at-vertices-of-3d-cells/7144
# FEniCS : How to interpolate data at vertices of (3D) cells?

I am trying to get an interpolation function $f$ (in 3D) at all vertices of cells. I extract all vertices of cells and then I assign the value to each vertex: if it's in a sphere of radius $R$, then I assign the value, say 3.91. If it's outside the sphere, then I assign the value 0. I got it running without any error message, but when I calculated the function value at points that are not vertices, it does not give me either 3.91 or 0. Am I doing something wrong here? Here is a part of my code.

Extract vertices of all cells, then I export these points and use other software to assign a value to each point (say 3.91 for points inside the sphere, 0 for points outside):

coor = mesh.coordinates()
numpy.savetxt('meshforE.txt', coor)

I get the values at all vertices and then interpolate this to function f:

qvalues2 = numpy.loadtxt('qdata.txt')
V = FunctionSpace(mesh, "CG", 1)
f = Function(V)
f.vector()[:] = qvalues2

then I read points (xp, yp) on the $z=0$ plane and evaluate the function at these points:

with open('xpdata.txt') as g: print "xp[1]=", xp[1]
with open('ypdata.txt') as h: print "yp[2]=", yp[2]
for i in range(len(xp)):
    g_in[i] = f(xp[i], yp[i], 0.0)

Now when I plot $g_i$, it does not look like what it should be, i.e. constant (3.91) inside the circle $R=50$, and 0 outside. Any help would be appreciated.

Defining a custom Expression and then interpolating that into your function space should do the trick. Check out:

from dolfin import *

mesh = UnitSquareMesh(20, 20)

class CharCircle(Expression):
    def eval(self, value, x):
        xm0 = x[0] - 0.5
        xm1 = x[1] - 0.5
        if xm0*xm0 + xm1*xm1 < 0.4**2:
            value[0] = 1.0
        else:
            value[0] = 0.0

V = FunctionSpace(mesh, 'CG', 1)
u = Function(V)
u.interpolate(CharCircle())
plot(u)
interactive()

• My mesh is 3D though. And I am using an external mesh generator, so I don't see how your trick would be useful. – Paul S.
May 15 '13 at 4:03

I assume that qvalues2 has some computed values at the vertices. You cannot directly assign these to your dof vector, as the dofs do not follow vertex numbering. You could however try:

vertex_to_dof = V.dofmap().vertex_to_dof_map(mesh)
f.vector()[:] = qvalues2[vertex_to_dof]

• Thank you for your answer. Yes, qvalues2 has computed values at vertices. I tried your code but it gave me this error: vertex_to_dof = V.dofmap().vertex_to_dof_map(mesh) AttributeError: 'GenericDofMap' object has no attribute 'vertex_to_dof_map' – Paul S. May 9 '13 at 23:28
• What version of dolfin do you have? I think that function was added between 1.1.0 and 1.2.0. It should have been backported to the 1.1.1 release. – Hake May 10 '13 at 19:12
• I am using FEniCS version 1.1.0. Do I need to upgrade, and if I do, how do I do that? – Paul S. May 14 '13 at 1:31

You are using piecewise linear functions (V = FunctionSpace(mesh, "CG", 1)), so if you are evaluating the function at a point which is not a vertex, you will get a linear interpolation. You don't say anything about the mesh you are using, but if it's not curvilinear, I would not assume that points on that circle coincide with points on element boundaries (where the interpolant would be constant).

• The mesh I am using is from Gmsh and I converted it. The problem is that for points inside the sphere (not on the boundary), my data is all the same (constant = 3.91), but when I computed the interpolated function at points that are not vertices, I didn't get 3.91; I got other numbers between 0 and 3.91. – Paul S. May 9 '13 at 23:37

If you have the information of your function on all vertices, a representation with a FunctionSpace of continuous Galerkin degree 1 is only exact at those vertices. Evaluating it at any other point gives the linear interpolation between the points. That's why you observe values different from 3.91 in between.
If you want any evaluation within the sphere to be exactly equal to the assigned constant, you need the information of the function on each cell, for example by counting a cell as inside the sphere when all of its vertices are inside the sphere. Then you can represent the function with discontinuous Galerkin degree 0. An evaluation at any point then gives the expected value.
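The cell-classification rule suggested above can be sketched without FEniCS (the names and the tiny mesh here are purely illustrative): mark a cell inside when every one of its vertices passes the sphere test, and assign one constant per cell, in the spirit of a DG0 representation.

```python
def inside_sphere(p, center=(0.0, 0.0, 0.0), radius=50.0):
    """True if point p lies strictly inside the sphere."""
    return sum((a - c) ** 2 for a, c in zip(p, center)) < radius ** 2

def cell_values(cells, vertices, value_in=3.91, value_out=0.0):
    """One constant per cell: a cell counts as 'inside' only if
    all of its vertices lie inside the sphere."""
    out = []
    for cell in cells:
        if all(inside_sphere(vertices[v]) for v in cell):
            out.append(value_in)
        else:
            out.append(value_out)
    return out

# Tiny illustration: two tetrahedra, one fully inside, one with a far vertex.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (100, 0, 0)]
cells = [(0, 1, 2, 3), (1, 2, 3, 4)]
print(cell_values(cells, verts))  # [3.91, 0.0]
```

The resulting per-cell array is what one would then load into a DG0 function's dof vector.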
http://bdaugherty.tripod.com/KeySkills/ratios.html
## Ratios and Proportion

### Introduction

Ratios occur in mixing things - such as concrete, which is made of cement, sand and gravel in a definite ratio. For example, a ratio of 1:3:4 would mean that no matter what volume of concrete you have, 1 part is cement, 3 parts is sand and 4 parts is gravel. An alternative way of stating this is to say that $\frac{1}{8}$ is cement, $\frac{3}{8}$ is sand and $\frac{4}{8}$ $\left(\mbox{i.e. }\frac{1}{2}\right)$ is gravel.

The aspect ratio, which is commonly used in describing the width to length ratios of aircraft wings, is also commonly seen nowadays describing the ratio of width to height of a TV screen. "Old-style" TV screens feature a 4:3 (1.33:1) aspect ratio, but newer widescreen TVs have a 16:9 (1.78:1) ratio; and most feature films are shot in at least a 1.85:1 ratio.

### Simplification of Ratios

Conventionally ratios are stated as whole numbers, or at least decimal numbers (as used for some examples in the Introduction). Whole number ratios would normally be stated in their lowest terms. A ratio of the form 3 : 6 : 15 can be reduced to 1 : 2 : 5 by dividing every number by 3 (note the similarity with reducing fractions to their lowest terms).

On the other hand, if a ratio was stated as $\frac{1}{3},\ \ \frac{1}{6},\ \ \frac{1}{2}$ (which is much less common) then it would be usual to multiply every term by an appropriate number to achieve a ratio stated in whole numbers only. Here we could multiply every term by 6 to get 2 : 1 : 3.

To go back to the decimal representation mentioned in the first paragraph, you can see from the examples in the Introduction that when this is used, at least one of the ratios is 1. This is the crux of the representation: the ratios have been divided through so that the lowest ratio is 1, and the others are either 1 or more than 1. For example, the ratio 2 : 3 : 15 could be represented as 1 : 1.5 : 7.5 by dividing the original ratios through by 2.
You do come across ratios of this type where the numbers are quoted as decimals less than 1, contravening the rule I have just given. Mathematically, there is nothing wrong with this, but hopefully you can see that this type of representation would not be as easy to understand.

Example

Of 200 customers of a cafe, 80 ordered tea and 120 ordered coffee. What is the ratio of tea drinkers to coffee drinkers?

State the given figures as a ratio (in the right order) 80 : 120 and then reduce this to its lowest terms if applicable. Here we can see that both sides can be divided by 40, to give 2 : 3.

Quick Quiz

Simplify these ratios, keeping them in integer form

$1) \ \ 3 : 6 : 9$
$2) \ \ 24 : 48 : 60$
$3) \ \ 3 : 27 : 36$
$4) \ \ 4 : 8 : 16$

Represent these ratios in decimal form

$5) \ \ 4 : 7 : 12$
$6) \ \ 5 : 7 : 24$

### Calculation

Ratios are closely related to fractions; for example, if two items ( A and B ) are connected in the ratio 3 : 4 then

• A will constitute $\frac{3}{7} \ \mbox{of the whole}$
• B will constitute $\frac{4}{7} \ \mbox{of the whole}$

Likewise, if three quantities are related in the ratio 1 : 9 : 15 then the quantities will constitute

$\frac{1}{25}, \ \ \frac{9}{25},\ \ \mbox{and } \frac{15}{25} \left(= \frac{3}{5}\right)\ \mbox{of the whole, respectively}$

Example

If a line 10 cms long is to be divided in the ratio 3 : 5 then, stated as fractions of the whole, the two lengths will be

$\frac{3}{8} \ : \ \frac{5}{8}$

The two required lengths are then

$\frac{3}{8} \times 10 = \frac{30}{8} = 3.75\ \mbox{cm}$

$\frac{5}{8} \times 10 = \frac{50}{8} = 6.25\ \mbox{cm}$

Example

If £ 4.50 is to be divided between three people in the ratio 4 : 5 : 6 then, stated as fractions of the whole, the three amounts will be

$\frac{4}{15} : \frac{5}{15} \left(=\frac{1}{3}\right) : \frac{6}{15}$

So the required monetary amounts are then

$\frac{4}{15} \times 4.50 = \frac{18}{15} = £ 1.20$

$\frac{1}{3} \times 4.50 = \frac{4.5}{3} = £ 1.50$

$\frac{6}{15} \times 4.50 = \frac{27}{15}
= £ 1.80$

### Proportions

Consider a question like

A car travels 300 kilometres on 35 litres of petrol. How far will it travel on 54 litres?

To explain how to do this in words -

divide 300 by 35 to find how far it will travel on 1 litre
multiply this figure by 54 to find how far it will travel on 54 litres

If needs be, you can start off doing calculations of this type in these two stages, but once you get more practised, you can start to do it in one step. For example, the calculation for the above question would be

$300 \times \frac{54}{35}$ = 462.9 (to 1 d.p.)

A car travels 300 kilometres on 35 litres of petrol. How much petrol would be needed for a journey of 369 kilometres?

Similar logic would be needed, but applied differently. To explain how to do this in words -

divide 35 by 300 to find how many litres would be required to travel 1 kilometre (obviously this would be quite a bit less than 1 litre)
multiply this figure by 369 to find how much is needed to travel 369 kilometres

As before, you can start off doing calculations of this type in these two stages, but once you get more practised, you can start to do it in one step. For example, the calculation for this question would be

$35 \times \frac{369}{300}$ = 43.05 litres

Quick Quiz

1) 5 kgs of potatoes cost £ 2.20 - how much will 28 kgs cost?
2) If 34 items cost £ 45.67, how much will 83 items cost?
3) If a train takes 3.4 hours to travel 980 kilometres (obviously not a British train), how long will it need to travel 1 200 kilometres?
4) If a machine produces 34 items in 5 minutes, how many will it produce in 34 minutes?
5) If 56 items cost £ 120, how many can I get for £ 84?

### Inverse Proportion

Some problems have an inverse proportion. Very common ones involve workers doing a particular job - the more workers you have, the less time the job will take (assuming an ideal situation where all workers produce exactly the same, at the same rate).
For problems like these, the technique would be the opposite to that used for 'proportions' previously. There you carried out a two-stage operation, first dividing and then multiplying. For inverse proportion, a two-stage operation is involved, first multiplying and then dividing. Example 4 workers build a wall in 12 days. How long would it take 7 workers ? To explain specifically how to do this in words - •  multiply 12 by 4 to find how long it would take 1 worker •  divide this figure by 7 to find how long it would take 7 workers As explained in other sections, with practise you can conflate these two steps into one line of calculation $4 \times \frac{12}{7} = 6.9\ \mbox{(to 1 d.p.)}$ Example 20 workers produce 3 000 articles in 15 days. How long would it take 13 workers ? Note that the figure of 3 000 articles does not enter into the calculation (previously the details about the wall that the workers were building did not enter into the calculation, apart from knowing that it had been finished). To explain specifically how to do this in words - •  multiply 15 by 20 to find how long it would take 1 worker •  divide this figure by 13 to find how long it would take 13 workers Conflating these two steps into one line of calculation $15 \times \frac{20}{13} = 23\ \mbox{(to nearest no. of whole days)}$ Quick Quiz 1) If 12 workers take 11 days to harvest a crop, how long will 15 workers take? 2) When a set amount of fruit is distributed equally to 7 people, each receive 2.3 kgs. How much would each receive if the same amount had been divided among 12 people? 3) If it takes 12 workers 2 hours to dig a hole, how long would 7 workers take? ## Past Exam Questions #### Jake is making 55 biscuits for the playgroup Christmas party. He has a recipe for 20 biscuits, which requires: 150g margarine 150g sugar 1 egg 300g self-raising flour 50g ground almonds 1. How much flour will he need to make exactly 55 biscuits? • A    413g • B    825g • C    900g • D    1 650g 2. 
What is the ratio of ground almonds to sugar to self-raising flour in the recipe? • A    3 : 6 : 1 • B    6 : 1 : 3 • C    1 : 3 : 6 • D    1 : 6 : 3 #### 1. The border for the top of the walls costs £ 3.97 per metre. Which estimate is most accurate for the total cost of the border? • A     (8 + 3) x 4 = £ 44 • B     (8 + 3 + 8 + 3) = £ 22 • C     (8 + 3) x 2 x 4 = £ 88 • D     (8 x 3) x 4 = £ 96 #### 2.    3.2 litres of paint are needed to decorate the bedroom. (1 litre = 1 000cm3). The decorator mixes 3 paint colours together. The amounts of paint are in the ratio: 15 parts of Honey Yellow to 12 parts of Aztec Orange to 5 parts of Ravishing Red. How much Ravishing Red will she need? • A     500cm3 • B     720cm3 • C     1 067cm3 • D     1 200cm3 • A     3:8 • B     3:5 • C     5:8 • D     5:3 • A     6% • B     12% • C     25% • D     125% • A     70 000 • B     80 000 • C     90 000 • D     10 000 #### Here are the scores that 20 people get for a test 8, 7, 5, 6, 9 4, 5, 7, 2, 1 6, 1, 9, 2, 1 5, 9, 7, 3, 8 • A     3% • B     15% • C     25% • D     75% • A     1:4 • B     2:5 • C     3:2 • D     1:3
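The arithmetic used throughout these sections - reducing a ratio to lowest terms, and the two-stage unitary method for direct and inverse proportion - can be sketched in a few lines. This is an illustrative addition, not part of the original page:

```python
from functools import reduce
from math import gcd

def simplify(ratio):
    """Reduce a whole-number ratio to its lowest terms."""
    g = reduce(gcd, ratio)
    return tuple(r // g for r in ratio)

def scale(known_qty, known_value, new_qty):
    """Direct proportion: divide down to one unit, then multiply up."""
    return known_value / known_qty * new_qty

def inverse_scale(known_qty, known_value, new_qty):
    """Inverse proportion: multiply up to the total, then divide out."""
    return known_value * known_qty / new_qty

print(simplify((3, 6, 15)))               # (1, 2, 5)
print(round(scale(35, 300, 54), 1))       # 462.9 km on 54 litres
print(scale(20, 300, 55))                 # 825.0 g of flour for 55 biscuits
print(round(inverse_scale(4, 12, 7), 1))  # 6.9 days with 7 workers
```

The last three calls reproduce the worked petrol example, the flour question from the biscuit recipe, and the wall-building example respectively.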
https://lms.ibu.edu.ba/course/info.php?id=43
### EEE 203 Electromagnetic Field Theory Vector Analysis. Electrostatic and Magnetostatic forces and fields in vacuum and in material bodies. Energy and potential. Steady electric current and conductors. Dielectric properties of materials. Boundary conditions for electrostatic and magnetostatic fields. Poisson's and Laplace's Equations. Magnetic circuits and inductance.
http://www.onemathematicalcat.org/Math/Algebra_II_obj/loans_and_investments.htm
LOANS AND INVESTMENTS

Loan and investment problems offer great applications of recursive sequences.

This section requires calculations that cannot reasonably be done by hand. Many calculators can work with recursive sequences; there are also forms on this page that you can use to do the calculations, and there are instructions on using WolframAlpha to work with recursive sequences.

## PAYING BACK A LOAN

Suppose you are borrowing $\$22{,}000$. Interest is being charged at an annual rate of $5\%$. You plan to pay back $\$250$ each month; this payment goes to both interest and principal.

(a)   Find the interest owed in the first month.
(b)   Write a recursive formula where $u_n$ gives the amount owed after $n$ months.
(c)   Then, find the amount owed after one year of payback.
(d)   Find the total principal paid in the first year.
(e)   Find the total interest paid in the first year.
(f)   Putting it all together—the brief solution.

SOLUTION

(a) Find the interest owed in the first month.

The annual interest rate is $5\% = 0.05$. The monthly interest rate is $\frac{0.05}{12}$. The interest owed in the first month is $(\$22{,}000)(\frac{0.05}{12}) = \$91.67$.

Notice that your monthly payment must be at least $\$91.67$ to cover the interest. If you were to pay exactly $\$91.67$ the first month, then your debt remains $\$22{,}000$. If you were to pay $\$91.67$ each month for the next twenty years, you'd still owe $\$22{,}000$ at the end of those twenty years. Obviously, you want to make sure you pay more than this each month, so the amount of debt decreases.

You've decided to pay $\$250$ each month, so with this first payment you reduce your debt by $\$250 - \$91.67 = \$158.33$. That means you now owe $\$22{,}000 - \$158.33 = \$21{,}841.67$, which will accrue a smidgeon less interest for the next month.
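The first-month arithmetic above can be checked in a few lines. This is a hedged Python sketch (my addition, not part of the lesson); amounts are rounded to cents as in the text:

```python
principal = 22_000.00
annual_rate = 0.05
payment = 250.00

interest = round(principal * annual_rate / 12, 2)   # 91.67
to_principal = round(payment - interest, 2)         # 158.33
balance = round(principal - to_principal, 2)        # 21841.67
print(interest, to_principal, balance)
```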
Look at the pattern for the amount you owe at the end of the first month. Dollar signs and commas are suppressed, to make it a bit easier on the eyes: $$\overset{\text{amt owed at beginning}}{\overbrace{\strut 22000}} + \overset{\text{interest accrued for the month}}{\overbrace{\strut (22000)(\frac{0.05}{12})}} - \overset{\text{monthly payment}}{\overbrace{\strut \ \ 250\ \ }} = \overset{\text{amount owed after one month}}{\overbrace{\strut 21841.67}}$$ Now we'll introduce the notation. Let $\,u_0\,$ denote the amount owed at time zero; that is, the initial debt. Let $\,u_1\,$ denote the amount owed after $\,1\,$ month (i.e., after $\,1\,$ payment). Rewriting the formula above using this notation gives: $$\overset{\text{amt owed at beginning}}{\overbrace{\strut \ \ u_0\ \ }} + \overset{\text{interest accrued for the month}}{\overbrace{\strut (u_0)(\frac{0.05}{12})}} - \overset{\text{monthly payment}}{\overbrace{\strut \ \ 250\ \ }} = \overset{\text{amount owed after one month}}{\overbrace{\strut\ \ u_1\ \ }}$$ Rewrite the equation from right-to-left and factor, to get: $$u_1 = (1 + \frac{0.05}{12})u_0 - 250$$ (b) Write a recursive formula where $\,u_n\,$ gives the amount owed after $\,n\,$ months. The pattern above gets applied month after month after month. Take the prior amount owed:   $\,u_{n-1}\,$ Add in the interest accrued on this amount:   $\,u_{n-1}(\frac{0.05}{12})\,$ Subtract off your payment:   $\,250\,$ Thus, the amount owed after $\,n\,$ months is: $$\,u_n = (1 + \frac{0.05}{12})u_{n-1} - 250\,,\ \ \text{ for } n\ge 1$$ (c) Find the amount owed after one year of payback. Note:   For this part of the problem, you need a calculator that does recursion; or, you can use the javascript form at right. Fill in the fields with the desired amounts. Then, click the button to compute $\,u_n\,$. The initial (sample) values are from this current example. The amount owed after one year (twelve months) is $\,u_{12}\,$. 
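If you don't have a recursion-capable calculator, the recurrence is also easy to iterate directly. Here is a small Python sketch (my addition, not part of the lesson), which reproduces the value quoted below:

```python
u = 22_000.0                  # u_0: the initial debt
monthly_rate = 0.05 / 12
for month in range(1, 13):    # compute u_1 through u_12
    u = (1 + monthly_rate) * u - 250
print(round(u, 2))            # amount owed after one year: 20055.85
```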
From a calculator, or from the form at right:   $u_{12} = 20055.85$

Thus, you owe $\$20{,}055.85$ after one year.

The form at right computes the amount owed after $n$ payments on a loan payback with equal monthly payments:

• You are borrowing $u_0$ (in dollars). That is, $u_0$ is the amount owed at time zero (the start of the loan).
• Interest is being charged at an annual interest rate of $i\%$. For example, if the interest rate is $5\%$, then $i = 5$. That is, $i$ does not include the percent sign.
• You are paying back $\$B$ each month. For example, if you pay back $\$250$, then $B = 250$. That is, $B$ does not include the dollar sign.
• $u_n$ is the amount owed (principal plus interest) after $n$ payments: $$u(n) = u_n = \bigl(1 + \frac{i/100}{12}\bigr)u_{n-1} - B\quad\text{for}\ n = 1,2,3,\ldots$$

## The amount owed after $n$ payments on a loan payback with equal monthly payments

[interactive form: enter $u_0$, $i$, $B$, and $n$ to compute $u_n$]

(d) Find the total principal paid in the first year.

The total principal paid in the first year is:   $\$22{,}000 - \$20{,}055.85 = \$1944.15$

(e) Find the total interest paid in the first year.

How much money did you actually send to the bank in the first year? Twelve payments of $\$250$ each:   $(12)(\$250) = \$3000$. Of this, you know from part (d) that $\$1944.15$ went to principal—that is, reduced your debt. The remainder is interest—your payment for the privilege of using the bank's money for the year.

Thus, the total interest paid in the first year is:   $(12)(\$250) - \$1944.15 = \$1055.85$

(f) Putting it all together—the brief solution.
Here's the brief solution:

(a) The interest owed in the first month is:   $(\$22{,}000)(\frac{0.05}{12}) = \$91.67$

(b) The recursive formula is:
$u_0 = 22000$
$u_n = (1 + \frac{0.05}{12})u_{n-1} - 250\,$, for $n\ge 1$

(c) From the calculator or form:   $u_{12} = \$20{,}055.85$

(d) The total principal paid in the first year is:   $\$22{,}000 - \$20{,}055.85 = \$1944.15$

(e) The total interest paid in the first year is:   $(12)(\$250) - \$1944.15 = \$1055.85$

## SAVING FOR THE FUTURE

You are saving for the future. Your initial deposit is $\$4100$. Interest is being earned at an annual rate of $5\%$, compounded monthly. You will contribute an additional $\$120$ each month.

(a)   Find the interest earned in the first month.
(b)   Write a recursive formula where $u_n$ gives the amount saved (principal plus interest) after $n$ months.
(c)   Then, find the amount saved (principal plus interest) after $7$ years.
(d)   Find the total amount of money you contributed (principal only) during these $7$ years.
(e)   Find the total interest earned during these $7$ years.

SOLUTION:

(a) Find the interest earned in the first month.

The interest earned in the first month is:   $(\$4100)(\frac{0.05}{12}) = \$17.08$

(b) Write a recursive formula where $u_n$ gives the amount saved (principal plus interest) after $n$ months.

The recursive formula is:
$u_0 = 4100$
$u_n = (1 + \frac{0.05}{12})u_{n-1} + 120\,$, for $n \ge 1$

(c) Find the amount saved (principal plus interest) after $7$ years.

Note that $7$ years is $7(12) = 84$ months. From the calculator, or from the form at right:   $u_{84} = 17853.39$

Thus, you have saved (principal plus interest) $\$17{,}853.39$ after $7$ years.

The form at right computes the amount saved (principal plus interest) after $n$ equal monthly payments:

• Your initial deposit is $u_0$ (in dollars). That is, $u_0$ is the amount saved at time zero (the start of your savings program).
• Interest is being earned at an annual rate of $i\%$, compounded monthly. For example, if the interest rate is $5\%$, then $i = 5$. That is, $i$ does not include the percent sign.
• You are contributing an additional $\$C$ each month. For example, if you contribute $\$120$, then $C = 120$. That is, $C$ does not include the dollar sign.
• $u_n$ is the amount saved (principal plus interest) after $n$ monthly contributions: $$u(n) = u_n = \bigl(1 + \frac{i/100}{12}\bigr)u_{n-1} + C\quad\text{for}\ n = 1,2,3,\ldots$$

## The amount saved after $n$ equal monthly payments

[interactive form: enter $u_0$, $i$, $C$, and $n$ to compute $u_n$]

(d) Find the total amount of money you contributed (principal only) during these $7$ years.

The total amount of money you contributed (principal only) during these $7$ years is:   $\$4100 + 7(12)(\$120) = \$14{,}180.00$

(e) Find the total interest earned during these $7$ years.
The total interest earned during these $7$ years is:   $\$17{,}853.39 - \$14{,}180.00 = \$3{,}673.39$

## Using WolframAlpha to work with Recursive Sequences

Find $u_n$ for $n = 3\cdot 52$, if:
$u_0 = 10000$
$u_n = (1 + \frac{0.075}{52})u_{n-1} - 20\,$, for $n \ge 1$

• Step 1: Put the information into WolframAlpha: u(0) = 10000, u(n) = (1 + 0.075/52)*u(n-1) - 20
• Step 2: WolframAlpha verifies your input. As part of your answer, you get a recurrence equation solution, which is a nonrecursive description of the sequence.
• Step 3: If you click on the recurrence equation solution, then WolframAlpha puts it in the input box for you. This is much safer than typing it in yourself!
• Step 4: Change the value of $n$ to the desired value. If the value of $n$ involves computations, then (to ensure correct order of operations) put it inside parentheses.
• Step 5: Scroll down to the decimal approximation.

So, if you borrow $\$10{,}000$ at a $7.5\%$ annual interest rate and pay back $\$20$ per week, then after $3$ years you will still owe $\$9025.14$.

Master the ideas from this section. When you're done practicing, move on to: the Compound Interest Formula.
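As a cross-check on the WolframAlpha workflow, the same weekly recurrence can be iterated directly. A Python sketch (my addition, not part of the original page); the final figure agrees with the quoted amount to within rounding:

```python
u = 10_000.0                  # u_0: amount borrowed
weekly_rate = 0.075 / 52
for week in range(3 * 52):    # 3 years of weekly payments
    u = (1 + weekly_rate) * u - 20
print(round(u, 2))            # approximately 9025.14
```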
http://cpr-mathph.blogspot.com/2013/06/13061561-matthias-gorny.html
## A Curie-Weiss Model of Self-Organized Criticality : The Gaussian Case    [PDF] Matthias Gorny We try to design a simple model exhibiting self-organized criticality, which is amenable to a rigorous mathematical analysis. To this end, we modify the generalized Ising Curie-Weiss model by implementing an automatic control of the inverse temperature. With the help of exact computations, we show that, in the case of a centered Gaussian measure with positive variance $\sigma^{2}$, the sum $S_n$ of the random variables has fluctuations of order $n^{3/4}$ and that $S_n/n^{3/4}$ converges to the distribution $C \exp(-x^{4}/(4\sigma^4))\,dx$ where $C$ is a suitable positive constant. View original: http://arxiv.org/abs/1306.1561
https://instantcertcredit.com/courses/6/lesson/380
### Assignments: Unfinished Assignment Study Questions for Lesson 30 ### Lesson Objectives: - Solve by graphing. - Solve by substitution or elimination method. - Use a system of two linear equations to solve an applied problem. A system of equations consists of two or more equations considered simultaneously. The corresponding unknowns have the same values. In order to find all the unknown numbers in a system of equations, we must have as many equations as there are unknown numbers. Solving a system of equations graphically: Each point at which the graphs intersect is a solution to both equations. Solve graphically: x+y = 3 and 2x+y = 0. Start by solving for y in each equation. So in the first equation, we have y = 3-x, and in the second equation, we have y = -2x. Now you can plug these equations into your calculator, under y=. And use your calculator to determine the point of intersection. The graphs intersect at a single point, (-3, 6). So (-3, 6) is the solution of the system of equations. Now let's check our solution. Plug (-3, 6) into our first equation, x+y = 3. So we get (-3)+(6) = 3; 3 = 3, a true statement. Now let's check it with the second equation. So we have 2x+y = 0, or 2(-3)+(6) = 0. This gives us -6+6, or 0=0, another true statement. So (-3, 6) is the solution. Solving a system of equations algebraically: Systems of equations are solved by combining the equations so as to obtain a single equation with one unknown number. And you can accomplish this by substitution or elimination. In substitution, we solve for a variable in one of the equations, and substitute the value of that variable into the second equation. In elimination, we use addition or subtraction to eliminate one of the variables. And then we solve for the remaining variable and substitute the value for that variable into either of the original equations. Solve using the substitution method. x+y = 10 and 2x-3y = -2. We'll call the first equation, 1, and the second equation, 2. 
Now let's solve 1 for x. So we get x = 10-y. Then we substitute 10-y in for x in equation 2. So we get 2(10-y)-3y = -2. So now we have an equation in one variable, which we know how to solve. Let's start by distributing 2 into the parentheses. So we get 20-2y-3y = -2. And then go ahead and combine the -2y and the -3y; so we get -5y. And now if we subtract 20 from both sides, we get -5y = -22. And then divide both sides by -5. So we get y = 22/5. So now we substitute 22/5 in for y in either of our equations. This is called back substitution. So if we plug it into equation 1, we have x+22/5 = 10. Now subtract 22/5 from both sides, and we have x = 10-22/5. And we can make common denominators by multiplying the top and bottom of 10/1 by 5. So we have 50/5 - 22/5. Now we can just subtract the numerators, 50-22, and get 28/5. So x = 28/5. So the solution for the system of equations is the ordered pair (28/5, 22/5). And now we can check our solution by plugging into both of our equations. So if we plug into 1, we have 28/5+22/5 = 10. This gives us 50/5 = 10, or 10 = 10, a true statement. Now if we plug into equation 2, we have 2(28/5)-3(22/5) = -2. Let's go ahead and divide everything by 2. So we get 28/5-3/2(22/5) = -1. Now 2 goes into 22, 11 times, so we have 28/5 minus 3(11/5), which is 33/5, is equal to -1. So now we can just subtract the numerators; 28-33 = -5. So we have -5/5 = -1, or -1 = -1, another true statement. So this verifies that (28/5, 22/5) is the solution to our system of equations. Solve the following system of equations using the elimination method. x+3y = 8 and x-3y = -5. Since the coefficients of y in both of our equations is 3 and -3, we can eliminate y by adding our equations. So if we go ahead and add them, we're left with 2x = 3, and we get x = 3/2. Now backsubstitute 3/2 for x in either of our equations. Let's go ahead and plug into our first equation. So we have 3/2+3y = 8. And now if we subtract 3/2 from both sides, we have 3y = 8-3/2. 
Now 8/1 can be multiplied by 2 in the numerator and denominator so that we get 16/2. So we have 16/2-3/2 = 3y. Now we can subtract the numerators, 16-3 = 13, so we have 13/2 = 3y. And if we multiply both sides by 1/3, the 3's cancel on the right, and we have y = 13/6. So the solution to our system of equations is the ordered pair, (3/2, 13/6).

There are three types of systems. The last question we did was called a Consistent Independent System, because there was exactly one solution, or one common point between our two equations. If you were to graph the equations, you would see that we have two intersecting lines. Another type of system is called Inconsistent Independent. This is a graph of two parallel lines. There are no common points and no solutions. And finally, there's a system called Consistent Dependent. These are lines that are identical. There are infinitely many common points, and infinitely many solutions.

A riverboat takes 2 hours to travel downstream 55 kilometers. It takes 3 hours to travel 60 kilometers upstream. What is the speed of the boat and the speed of the current? We'll need to translate our situation using mathematical language by creating more than one equation with more than one variable. So notice that we're given a time and a distance. Time and distance are related by the following equation: "distance" = "rate"*"time". Now, downstream, we have that the riverboat takes 2 hours to travel 55 kilometers. And upstream, we have that the boat takes 3 hours to travel 60 kilometers. Now let's go ahead and let the speed of the boat be represented by b, and the speed of the current be represented by c. Since the boat will be traveling with the current, downstream, the rate is b+c. And since the boat will be traveling against the current, upstream, the rate will be b-c. So our system of equations is 55 = (b+c)*2 and 60 = (b-c)*3.
Now let's go ahead and simplify both of our equations, by dividing by 2 on both sides in our first equation, and dividing by 3 on both sides in our second equation. So we have 27.5 = b+c, and then we have 20 = b-c. Let's use the elimination method. Notice that we have a positive c, and a -c here. So if we go ahead and add our equations, the c's will cancel, and we're left with 47.5 = 2b, or 23.75 = b, if we divide both sides by 2. Now let's backsubstitute 23.75 into either one of our equations. I'll use our first equation, 27.5 = b+c. So we have 27.5 = (23.75)+c. And then if we subtract both sides by 23.75, then we're left with c = 3.75. So the speed of the boat is 23.75 kilometers per hour, and the speed of the current is 3.75 kilometers per hour.
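The elimination steps for the riverboat problem can be mirrored in a few lines of Python (an illustrative sketch, not part of the lesson):

```python
# Riverboat system from the lesson: b + c = 27.5, b - c = 20,
# where b = boat speed and c = current speed (km per hour).
downstream = 55 / 2   # divide 55 = (b + c)*2 by 2
upstream = 60 / 3     # divide 60 = (b - c)*3 by 3

b = (downstream + upstream) / 2   # adding the equations eliminates c
c = downstream - b                # back-substitute into b + c = 27.5
print(b, c)                       # 23.75 3.75
```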
http://stat511.cwick.co.nz/labs/lab-5.html
Stat 411/511

Goals

This lab is relatively short; use any spare time to get started on your data analysis.

• Using the tapply function to find summaries by group
• Log transform practice

For the purposes of this lab, we'll use the data you are using for Data Analysis #1. You don't need any of the material here to complete Data Analysis #1, but it may help when formulating your own question about the data.

tapply

Quite often we want summary statistics calculated within some groups. We've already seen a strategy for finding the average, standard deviation and sample size for certain groups: we used subset to get observations that correspond to one group, then calculated our summary using that subset. For example, if we wanted to know the average number of bedrooms for households that rent, we could do:

If we also wanted the average number of bedrooms for households that own free and clear, we would repeat the process with some modifications:

But there are another two categories in own, and doing this soon becomes tiring! Luckily there is an easier way, the tapply function. tapply (short for table apply) takes three arguments: a numeric vector you want to summarise, a factor vector that describes the categories to summarise within, and a function to do the summarising, for example:

It's easier to read what's happening from right to left. We want to take the mean in each category of acs$own of the acs$bedrooms variable. We can save typing the acs$ part by using the with function,

We can use the same idea to find the sample standard deviations for each group. To get the number of observations for each group,

But be aware that if there are missing values this can be dangerous. There aren't any here, but a safer way to count observations is

Can you find the mean electricity cost by the decade the house was built? Can you find the mean and median income of the husband by whether the household has internet or not?
Log transform practice The dataset email_sample is a random sample of 100 spam emails and 100 non-spam (i.e. Ham) emails from the dataset emails in the openintro package. For each email the number of characters in the email is recorded. (Actually num_char was the number of characters in thousands, hence the decimals.) • Examine the histogram of the data on the raw and transformed scale. Do the assumptions seem better met on the log scale? • Try writing a statistical summary from the results of the t-test. You can compare your answers to this summary.
http://clay6.com/qa/8569/a-card-is-drawn-at-random-from-a-pack-of-52-playing-cards-what-is-the-proba
# A card is drawn at random from a pack of 52 playing cards.What is the probability that the card drawn is neither a spade nor a queen? This question has multiple parts. Therefore each part has been answered as a separate question on Clay6.com
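The probability can be checked by direct enumeration of the deck. A short Python sketch (my addition, not from Clay6): a card is "neither a spade nor a queen" if it avoids all 13 spades and the 3 remaining queens, leaving 36 of the 52 cards.

```python
from fractions import Fraction

suits = ["spades", "hearts", "diamonds", "clubs"]
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
deck = [(rank, suit) for suit in suits for rank in ranks]   # 52 cards

# Keep cards that are neither a spade nor a queen.
favourable = [card for card in deck if card[1] != "spades" and card[0] != "Q"]

print(Fraction(len(favourable), len(deck)))   # 9/13
```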
http://etsf.polytechnique.fr/node/1427
# Anomalous Angular Dependence of the Dynamic Structure Factor near Bragg Reflections: Graphite

Title: Anomalous Angular Dependence of the Dynamic Structure Factor near Bragg Reflections: Graphite
Publication Type: Palaiseau Article
Acknowledgements: ETSF-I3, ANR ETSF-France
DOI: 10.1103/PhysRevLett.101.266406
Authors: Hambach, R; Giorgetti, C; Hiraoka, N; Cai, YQ; Sottile, F; Marinopoulos, A-G; Bechstedt, F; Reining, L
Year of Publication: 2008
Journal: Physical Review Letters
Volume: 101
URL: http://link.aps.org/abstract/PRL/v101/e266406
Keywords: paper

Abstract: The electron energy-loss function of graphite is studied for momentum transfers q beyond the first Brillouin zone. We find that near Bragg reflections the spectra can change drastically for very small variations in q. The effect is investigated by means of first-principles calculations in the random phase approximation and confirmed by inelastic x-ray scattering measurements of the dynamic structure factor $S(q,\omega)$. We demonstrate that this effect is governed by crystal local field effects and the stacking of graphite. It is traced back to a strong coupling between excitations at small and large momentum transfers.
https://matheducators.stackexchange.com/questions/14114/how-to-resolve-the-new-definition-of-subtraction-and-division-seen-in-college-al/14126
# How to resolve the new definition of subtraction and division seen in college algebra? Here's the foundational thing that irritates me the most when teaching college algebra. Up through the secondary level, I think that instructors and students are trained to understand subtraction and division in terms of the inverse operation. Focusing on division here, if one asked "Why is $6/2 = 3$?", then one would most likely say it's because: $$3 \times 2 = 6$$ But in every college-level algebra book I've seen, a different definition is given (and this goes for any texts in remedial elementary algebra, intermediate algebra, college algebra, etc.). Specifically, such books begin with the accepted "properties of real numbers", which are basically a restatement of the axioms for a field. In particular, one of the basic axioms is the existence of inverses: e.g., for multiplication, for any $a \ne 0$, there exists a value $1/a$ such that $a \times 1/a = 1$. (This is already problematic because students at this level are not yet familiar with statements involving existential quantifiers.) Thereafter, division is defined this way: $a/b$ means $a \times 1/b$. Of course, that's exactly what we see for a definition in most abstract algebra texts. But then technically this commits us to justifying "Why is $6/2 = 3$?" with something like the following chain of reasoning from the axioms: $$6/2 = 6 \times 1/2 = (3 \times 2) \times 1/2 = 3 \times (2 \times 1/2) = 3 \times 1 = 3$$ Which I'm pretty sure no one actually ever does. Rather, they continue to use the secondary-school justification, even though this is technically out-of-synch (although, obviously, provably consistent with) our starting textbook axiom-properties. 
Furthermore: When radicals are defined in the college algebra text, then the definition will once again look like the understanding of inverses from secondary-school subtraction and division (so it is additionally irritating to have these definitions and justifications out-of-synch with each other). In summary: Advantages of the secondary-school definition: (1) it's what students are familiar with, (2) it provides shorter justifications, (3) it better lays the groundwork for the definition of radicals. Advantages of the standard college-algebra definition: (1) it complies with any standard textbook, and (2) it synchronizes with standard abstract algebra definitions. So I go back-and-forth about this proud nail every semester. It seems like there would be more advantages to redefining subtraction and division as per the customary secondary-school rules, and thus smooth the way for student entry and understanding of the course; but the labor of going off-book and rewriting everything always deters me. What is the best resolution to this problem? • Does $6/3$ denote a rational number, or an expression involving a division "operation"? Which textbook does the course employ? May 29 '18 at 15:22 • One further question: does your division apply to reals or only to integers? If the latter, are you essentially trying to show that $\Bbb R$ (or any field containing $\Bbb Z)$ also contains $\Bbb Q$ (up to ring isomorphism)? This follows immediately from the universal properties of fraction fields but the idea is so simple that its essence can be explained to a bright high-school student. If this is of interest let me know and I will elaborate in an answer. May 30 '18 at 3:03 • @Number: As the question says, this is about the definition of the division (and subtraction) operation. This could be in the context of a half-dozen or more college-level algebra texts that I've seen. I don't think the other question is relevant. 
May 30 '18 at 18:51 • So the scope of your question includes division in $\Bbb R$ (or any field), not simply integer division? Are you essentially asking if there are pedagogical (dis)advantages of using alternative field (or group) axioms that replace inversion by division (or negation by subtraction in the additive case)? And the motivation for such is to clarify the relationship with fraction fields (or their additive analog = difference groups)? May 30 '18 at 20:03 • Normal definition makes more sense to me too. But does it really even come up though? How much time are you spending on arithmetic? Can't you gloss over it and move on in the course? Maybe it just bugs you. May 31 '18 at 7:07 Here is what the CCSS have to say when division and fractions are being introduced in grade 3: http://www.corestandards.org/Math/Content/3/introduction/ 1. Students develop an understanding of the meanings of multiplication and division of whole numbers through activities and problems involving equal-sized groups, arrays, and area models; multiplication is finding an unknown product, and division is finding an unknown factor in these situations. For equal-sized group situations, division can require finding the unknown number of groups or the unknown group size. Students use properties of operations to calculate products of whole numbers, using increasingly sophisticated strategies based on these properties to solve multiplication and division problems involving single-digit factors. By comparing a variety of solution strategies, students learn the relationship between multiplication and division. 2. Students develop an understanding of fractions, beginning with unit fractions. Students view fractions in general as being built out of unit fractions, and they use fractions along with visual fraction models to represent parts of a whole. Students understand that the size of a fractional part is relative to the size of the whole. 
For example, 1/2 of the paint in a small bucket could be less paint than 1/3 of the paint in a larger bucket, but 1/3 of a ribbon is longer than 1/5 of the same ribbon because when the ribbon is divided into 3 equal parts, the parts are longer than when the ribbon is divided into 5 equal parts. Students are able to use fractions to represent numbers equal to, less than, and greater than one. They solve problems that involve comparing fractions by using visual fraction models and strategies based on noticing equal numerators or denominators.

In other words, division is (as you suggest) being defined as an inverse to multiplication. Since the CCSS defines multiplication in a way which distinguishes the two factors ($$A \times B$$ is the number of units in $$A$$ groups, if each group has $$B$$ units), there are actually TWO distinct definitions of division which must be unified through the commutative property of multiplication:

The "How many groups" definition of division: We define $$A \div B$$ to be the number $$C$$ which solves the multiplication problem $$C \times B = A$$. For example $$12 \div 4 = 3$$ since if we want a certain number of groups, each containing 4 units, to give us 12 units total, then we must have 3 of those groups.

The "How many units in each group" definition of division: We alternatively define $$A \div B$$ to be the number $$C$$ which solves the multiplication problem $$B \times C = A$$. For example $$12 \div 4 = 3$$ since if we want 4 groups, each containing a certain number of units, to give us 12 units total, then we must have 3 units in each group.

The definition of $$\frac{A}{B}$$ of a unit is to take the unit amount, split it into $$B$$ equal sized parts to obtain the unit fraction $$\frac{1}{B}$$. Then $$\frac{A}{B}$$ is defined to be equal to $$A$$ of these parts of size $$\frac{1}{B}$$.
For instance, the definition of $$\frac{4}{3}$$ of a pound would be to take $$1$$ pound, split it into three equal sized pieces each called $$\frac{1}{3}$$ pound. Then four of these pieces is $$\frac{4}{3}$$ pound.

According to these definitions, there is no direct link between $$A \div B$$ and $$\frac{A}{B}$$. However, we can argue their equality using the definitions, both "intuitively" and more formally.

From an intuitive "how many units in each group" perspective, $$A \div B$$ could be thought of as the answer to the question: "I have A cupcakes, and B people to share them with. What fraction of a cupcake will each person receive?". One way to answer this is to split each cupcake into $$B$$ parts. Now I can give $$\frac{1}{B}$$ to each person from each of the $$A$$ cupcakes, yielding $$\frac{A}{B}$$ cupcakes for each person. Thus $$A \div B = \frac{A}{B}$$.

From an intuitive "how many groups" perspective, $$A \div B$$ could be thought of as the answer to the question: "I have A pounds of flour. It takes B pounds to make one recipe. How many recipes can I make?". One way to answer this question is to split each recipe into $$B$$ equal parts. Then it takes $$1$$ pound of flour to make $$\frac{1}{B}$$ recipes. Since I have $$A$$ pounds of flour, I can make $$\frac{A}{B}$$ recipes.

From a more formal/algebraic perspective we might make the following analogous definitions and proofs:

Let $$A,B \in \mathbb{R}$$ with $$B \neq 0$$. We define $$A \div B$$ as the real number $$C$$ so that $$B \times C = A$$. If we were being really formal, I suppose existence and uniqueness of this number would need to be addressed. I have never seen any elementary text address the uniqueness part.

Let $$A, B \in \mathbb{R}$$ with $$B \neq 0$$. Define $$\frac{A}{B} = AB^{-1}$$, where $$B^{-1}$$ is the multiplicative inverse of $$B$$. Note that this corresponds to our intuitive treatment of fractions.
$$\frac{1}{B}$$ was defined as the number such that $$B$$ of them yields $$1$$: aka it was defined as the multiplicative inverse of $$B$$.

Now to check that $$A \div B = \frac{A}{B}$$, we just need to check that $$\frac{A}{B}$$ satisfies the definition of the quotient: \begin{align*} B \times \frac{A}{B} &= B \times (A \times \frac{1}{B})\\ &= A \times (B \times \frac{1}{B})\\ &= A \times 1\\ &= A \end{align*}

This whole discussion will fly way over the head of almost any student though. Ideally the teacher can be aware of these issues, so that they can design tasks which target the intuitive development of these ideas. There is a tension between the logical development of the ideas, which has a strict progression from the definitions, and the desired end state, in which the intuitions, understandings, and equivalences are so strong that it is easy to forget what originally implied what. You want to build number and operation sense which is so strong that these ideas are all applied intuitively at the subconscious level. I am not sure how to resolve this tension. We want a really big "The following are all equivalent" statement to be living (implicitly) in the mind of each student, but we cannot get there without a logical progression.

• Thanks a bunch for writing this, it's very helpful! As you can tell from my question, I hadn't previously dug into Common Core (or anything else) to pinpoint exactly what the status of these concepts there is, nor exactly what grade level. I'm picking this as the selected answer at this time, because I think it's far and away the most clarifying. Nov 8 at 14:24

• That said, I'll continue to think about how this informs how I present those classes (re: last 2 paragraphs).
E.g.: in my college algebra courses, I actually look for opportunities like this to show the axiomatic method and proof-creation process, so as to get that mental model in front of students as soon as possible in the sequence (I kind of feel that's what college math should be for). For a remedial arithmetic or algebra class, that would be a dicier proposition (although those courses are supposedly being eliminated at my institution). Nov 8 at 14:27

But then technically this commits us to justifying "Why is $6/2 = 3$?" with something like the following chain of reasoning from the axioms: $$6/2 = 6 \times 1/2 = (3 \times 2) \times 1/2 = 3 \times (2 \times 1/2) = 3 \times 1 = 3$$ Which I'm pretty sure no one actually ever does.

I am reminded of Principia Mathematica famously taking hundreds of pages to prove that $1+1=2$. I'm pretty sure no-one actually ever does that either. (Actually, I doubt that anyone at secondary-school level, much less college level, bothers to justify $6/2=3$ at all).

Nevertheless, since this troubles your conscience, the best resolution to an apparent mis-match between an old, familiar system and a new system is surely to prove their equivalence and then to work in whichever system is most convenient for the task at hand. So if you prove the lemma $a / b = c \iff a = b \times c$ you can then apply your secondary-school justification without any twinges of conscience. And if you explicitly teach the principle of proving equivalence and then working in the more convenient system, you're doing your students a big favour. I think it can be argued that that principle is as foundational to mathematical thinking as axiomatisation, and probably more so.

• For the students I have in these classes, it is definitely necessary to justify that $6/2 = 3$ numerous times each semester... if only as a warm-up to knowing how to check polynomial division, factoring, radicals, why division by zero is undefined, etc.
Likewise, the principle of proving bidirectional equivalence would be beyond them. May 28 '18 at 18:53

I'm very much with you concerning the problem. Striving for conceptual understanding and not merely exercising procedure-following, I actually do bother with the question of justifying why 6/2=3. But maybe one has to clarify what 'justify' in a school context actually means. To me, it doesn't mean to give a mathematically valid proof as in the Principia Mathematica, but merely an explanation that allows students to connect to other concepts and to model meaning; loosely speaking...

In the case you mention, I see the problem mainly in a change of concept: In middle school, students develop an understanding of inverse operation: they learn to see division as the inverse operation of multiplication. In high school or later in college, students should then develop an understanding of inverse element: they learn that there is a neutral element with respect to an operation, and that the meaning of an inverse element is that its action on (i.e. operation with) the element itself results in the neutral element.

This is the way that I read your line of reasoning. The first equal sign reads as an interpretation of what 'division' means: multiplication with the inverse element. In my experience, students often have a lot of trouble with this abstraction: to see as an object what they used to know as an operation.

Frankly I don't know what the best way is to approach this problem. That might depend much on the local context you encounter. But I do have some strategies as an approach. Said in advance, it anyway just takes time, and patience, and care... I explicitly discuss different models for the objects and the operations. For example, taking the number line as a model for the numbers (objects), what are models for the operations?
Addition and subtraction might be straightforward, but if you model multiplication as concatenation of "arrows", that lets you explain what $5 \times \frac{2}{3}$ is, but does not work for a calculation like $\frac{8}{7}\times \frac{2}{3}$. That works if you model multiplication as scaling (e.g. I use the following Geogebra-Applet as part of the many ways to explain why the product of two negative numbers yields a positive number as a result). You can model division in a similar way. Using explicit models I'm usually pretty successful in talking about the distinction between inverse operation and inverse element. I'm sure there are other ideas and better ways to address this problem with students; I would love to hear more about what people in this community think about this question.

The problem stems from effectively defining the division operator twice. You should begin by defining division on $R$ as usual:

For all $x, y, z \in R$ where $y\neq 0$, we have $x/y=z$ iff $x=y\cdot z$

So, $6/2=3$ iff $6 = 2\times 3$ by the definition of division on $R$. From the field axioms, we know that

For all $x\in R$ where $x\neq 0$, there exists $y\in R$ such that $x\cdot y=1$

Note that I do not make use of the division operator here. Now, we can prove:

For all $a\in R$ and $a\neq 0$, we have $1/a\in R$ and $a\cdot (1/a)=1$

Suppose $a\in R$ and $a\neq 0$. Then, from the field axioms, there must exist $b\in R$ such that $a\cdot b =1$. Applying the definition of division, we have $b=1/a$. Substituting, we have $a\cdot (1/a)=1$. We conclude as required that:

For all $a\in R$ and $a\neq 0$, we have $1/a\in R$ and $a\cdot (1/a)=1$

• So you think that it's profitable to go off-book with definitions in that way? Jun 2 '18 at 15:36

• @DanielR.Collins What else can you do when the book definition leads to confusion?
Jun 2 '18 at 15:41

• @DanielR.Collins See for example web.stanford.edu/~jchw/2015Math110Material/… Jun 2 '18 at 15:46

• @DanielR.Collins If you are stuck with the book definitions, it might help to use the $x^{-1}$ notation for the multiplicative inverse of $x$ and define $x/y=x\cdot y^{-1}$. Then derive the "definition" of division that I give above to establish a link to their high-school definition. Jun 2 '18 at 19:55

This isn't every possibility, but a rough sketch of approaches, rank-ordered by what I think will help the (weak, given they are taking this course in college) students the most. Which I believe is the more important objective. Not fussy details of math, or what is of most interest to the (more sophisticated) instructors. But helping the kids.

1. Avoid dwelling on the theoretical justifications (at all) and pursue a course of familiarization via practice. Yes, the kids need to learn the ability to manipulate symbols (like polynomial long division) as they do numbers, but this is best accomplished by light practice with the numbers, followed by extensive practice with x-containing expressions. Perhaps even juxtaposed (when they get to the symbols at least; it's OK if you do some arithmetic drill earlier, but doing a long division number problem, itself, followed by a polynomial one, is an effective way to build confidence and willingness). If you instead make them question things they already are familiar with (subtraction of numbers), or at least think they are, you will derail them. As others have pointed out, one could spend immense detail on 1+1, but this is not the right time for that. I would even wager that it is hard to learn that level of abstract material without some base in the topic first (why it is easier not to have to learn Rudin before conventional calculus).

2. Teach the definition you (and they) are comfortable with, the conventional one. But then move to significant calculational practice to build familiarity.
(The way to learn quantum mechanics is to do particle-in-the-box problems, not axiom-dwelling.) In some cases, you really need a sort of familiarity with the material before dwelling on definitions, different axioms, or notations [like bra-ket] is pedagogically manageable. (There's a 1.5, where you glide over it and move to practice very fast.)

3. Teach the book's definition. Then practice. (There's probably a 2.5 where you glide over it and emphasize practice, though.)

4. Do both, then practice. This has the disadvantage of spending way too much time on fussy details and bogging the kids down, when they need familiarization with manipulating x-expressions.

Note, I'm not saying little edge cases are irrelevant (like stuff the "divided by zero" hawks watch for), but even there they are better dealt with after or during the course of building basic familiarity/competence. Not pre-emptively. That is too much cognitive load and not progressive enough.
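The lemma $a / b = c \iff a = b \times c$ invoked in one of the answers above is also easy to spot-check numerically. A small sketch, assuming exact rational arithmetic via Python's `fractions` module so floating-point rounding cannot confound the check (the `via_inverse` helper is a hypothetical name, not from any answer):

```python
import random
from fractions import Fraction

random.seed(0)

def via_inverse(a, b):
    # college-algebra definition: a / b := a * b^{-1}
    return a * (1 / b)

checked = 0
for _ in range(1000):
    a = Fraction(random.randint(-50, 50), random.randint(1, 20))
    b = Fraction(random.randint(1, 50), random.randint(1, 20))  # b != 0
    c = via_inverse(a, b)
    # inverse-operation characterization: c is the number with b * c = a
    assert b * c == a
    checked += 1
```

A spot check is not a proof, of course, but it makes the equivalence of the two definitions concrete over a large random sample.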
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 52, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8811226487159729, "perplexity": 557.778341112181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358323.91/warc/CC-MAIN-20211127223710-20211128013710-00503.warc.gz"}
https://worldwidescience.org/topicpages/m/multiresolution+hough+transform.html
#### Sample records for multiresolution Hough transform

1. Neutrosophic Hough Transform

Directory of Open Access Journals (Sweden)

Ümit Budak 2017-12-01

Full Text Available Hough transform (HT) is a useful tool for both pattern recognition and image processing communities. In the view of pattern recognition, it can extract unique features for description of various shapes, such as lines, circles, and ellipses. In the view of image processing, a dozen applications can be handled with HT, such as lane detection for autonomous cars, blood cell detection in microscope images, and so on. As HT is a straightforward shape detector in a given image, its shape detection ability is low in noisy images. To alleviate its weakness on noisy images and improve its shape detection performance, in this paper, we propose neutrosophic Hough transform (NHT). As was proved earlier, neutrosophy theory based image processing applications were successful in noisy environments. To this end, the Hough space is initially transferred into the NS domain by calculating the NS membership triples (T, I, and F). An indeterminacy filtering is constructed where the neighborhood information is used in order to remove the indeterminacy in the spatial neighborhood of neutrosophic Hough space. The potential peaks are detected based on thresholding on the neutrosophic Hough space, and these peak locations are then used to detect the lines in the image domain. Extensive experiments on noisy and noise-free images are performed in order to show the efficiency of the proposed NHT algorithm. We also compared our proposed NHT with traditional HT and fuzzy HT methods on a variety of images. The obtained results showed the efficiency of the proposed NHT on noisy images.

2.
Feature Extraction Using the Hough Transform

OpenAIRE

Ferguson, Tara; Baker, Doran 2002-01-01

This paper contains a brief literature survey of applications and improvements of the Hough transform, a description of the Hough transform and a few of its algorithms, and simulation examples of line and curve detection using the Hough transform.

3. Guaranteed convergence of the Hough transform

Science.gov (United States)

Soffer, Menashe; Kiryati, Nahum 1995-01-01

The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem to a problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a-priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a-priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a-priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum.
An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.

4. Hough transform search for continuous gravitational waves

International Nuclear Information System (INIS)

Krishnan, Badri; Papa, Maria Alessandra; Sintes, Alicia M.; Schutz, Bernard F.; Frasca, Sergio; Palomba, Cristiano 2004-01-01

This paper describes an incoherent method to search for continuous gravitational waves based on the Hough transform, a well-known technique used for detecting patterns in digital images. We apply the Hough transform to detect patterns in the time-frequency plane of the data produced by an earth-based gravitational wave detector. Two different flavors of searches will be considered, depending on the type of input to the Hough transform: either Fourier transforms of the detector data or the output of a coherent matched-filtering type search. We present the technical details for implementing the Hough transform algorithm for both kinds of searches, their statistical properties, and their sensitivities.

5. Parallel Monte Carlo Search for Hough Transform

Science.gov (United States)

Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R. 2017-10-01

We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images.
Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection, for example, into one of optimization of the peak in a vote counting process for cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have a reduced effectiveness in detection in the presence of noise. Our first contribution consists in an evaluation of the use of a variation of the Radon Transform as a means of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.

6. Mobile robot motion estimation using Hough transform

Science.gov (United States)

Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu 2018-05-01

This paper proposes an algorithm for estimation of mobile robot motion. The geometry of surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of space geometry in any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space.
In the straight-line parameter space, the problems of estimating rotation, scaling, and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. The specific feature of the algorithm presented is its robustness to noise and outliers, inherited from the Hough transform. The prototype of the system of mobile robot orientation is described.

7. Detecting circumscribed lesions with the Hough transform

Energy Technology Data Exchange (ETDEWEB)

Groshong, B.R; Kegelmeyer, W.P., Jr 1996-01-11

We have designed and implemented a circumscribed lesion detection algorithm, based on the Hough Transform, which will detect zero or more approximately circular structures in a mammogram over a range of radii from a few pixels to nearly the size of the breast. We address the geometrical behavior of peaks in Hough parameter space (x, y, r) both for the true radius of a circular structure in the image (r = r_o), and for the parameter r as it passes through this radius. In addition, we evaluate peaks in Hough parameter space by re-analyzing the underlying mammogram in the vicinity of the circular disk indicated by the peak. Discs suggested by the resulting peaks are accumulated in a feature image, scaled by a measure of their quality. These results are then rectified with respect to image contrast extremes and average value. The result is a feature with a continuously scaled pixel-level output which suggests the likelihood that a pixel is located inside a circular structure, irrespective of the radius of the structure and overall mammogram contrast. These features are evaluated with fast qualitative and quantitative performance metrics which permit circumscribed lesion detection features to be initially evaluated without a full end-to-end classification experiment.

8.
Lane detection using Randomized Hough Transform

Science.gov (United States)

Mongkonyong, Peerawat; Nuthong, Chaiwat; Siddhichai, Supakorn; Yamakita, Masaki 2018-01-01

According to the report of the Royal Thai Police between 2006 and 2015, lane changing without awareness is one of the most common accident causes. To solve this problem, many methods are considered. The Lane Departure Warning System (LDWS) is considered to be one of the potential solutions. LDWS is a mechanism designed to warn the driver when the vehicle begins to move out of its current lane. LDWS contains many parts including lane boundary detection, driver warning, and lane marker tracking. This article focuses on the lane boundary detection part. The proposed lane boundary detection detects the lines of the image from the input video and selects the lane marker of the road surface from those lines. The Standard Hough Transform (SHT) and Randomized Hough Transform (RHT) are considered in this article. They are used to extract lines of an image. SHT extracts the lines from all of the edge pixels. RHT extracts only the lines voted for by point pairs randomly picked from the edge pixels. The RHT algorithm reduces the time and memory usage when compared with SHT. Increasing the threshold value in RHT will raise the vote limit for a line that has a high possibility of being the lane marker, but it also consumes time and memory. To compare SHT and RHT with different threshold values, 500 frames of input video from the front car camera are processed. The accuracy and the computational time of RHT are similar to those of SHT in the result of the comparison.

9. Road Detection by Using a Generalized Hough Transform

Directory of Open Access Journals (Sweden)

Weifeng Liu 2017-06-01

Full Text Available Road detection plays a key role in remote sensing image analytics. The Hough transform (HT is one very typical method for road detection, especially for straight-line road detection.
Although many variants of the Hough transform have been reported, it is still a great challenge to develop a low-complexity, time-saving Hough transform algorithm. In this paper, we propose a generalized Hough transform (i.e., Radon transform) implementation for road detection in remote sensing images. Specifically, we present a dictionary learning method to approximate the Radon transform. The proposed approximation method treats a Radon transform as a linear transform, which then facilitates parallel implementation of the Radon transform for multiple images. To evaluate the proposed algorithm, we conduct extensive experiments on the popular RSSCN7 database for straight road detection. The experimental results demonstrate that our method is superior to the traditional algorithms in terms of accuracy and computing complexity.

10. Hough transform methods used for object detection

International Nuclear Information System (INIS)

Qussay A Salih; Abdul Rahman Ramli; Md Mahmud Hassan Prakash 2001-01-01

The Hough transform (HT) is a robust parameter estimator of multi-dimensional features in images. The HT is an established technique which evidences a shape by mapping image edge points into a parameter space. The HT is a technique used to isolate curves of a given shape in an image. The classical HT requires that the curve be specified in some parametric form and, hence, is most commonly used in the detection of regular curves. The HT has been generalized so that it is capable of detecting arbitrary curved shapes. The main advantage of this transform technique is that it is very tolerant of gaps in the actual object boundaries. After the classical HT for the detection of lines, we will indicate how it can be applied to the detection of arbitrary shapes. Sometimes the straight-line HT is efficient enough to detect features such as artificial curves.
The HT is an established technique for extracting geometric shapes based on the duality between the points on a curve and their parameters. This technique has been developed for extracting simple geometric shapes such as lines, circles and ellipses as well as arbitrary shapes. The HT provides robustness against discontinuous or missing features: points or edges are mapped as individual votes into a partitioned parameter (Hough) space, where peaks denote the feature of interest, which may be represented in a non-analytic, tabular form. The main drawback of the HT technique is its computational requirement, which has an exponential growth of memory space and processing time as the number of parameters used to represent a primitive increases. For this reason, most research on the HT has focused on reducing the computational burden of extracting arbitrary shapes under more general transformations. An overview of the detection methods is included; image processing programs are frequently required to detect objects, for example for particle classification in an industrial setting, using standard line-detection algorithms. 11. Locating An IRIS From Image Using Canny And Hough Transform Directory of Open Access Journals (Sweden) Poorvi Bhatt 2017-11-01 Full Text Available Iris recognition, a relatively new biometric technology, has great advantages, such as variability, stability and security; thus it is among the most promising technologies for high-security environments. The system proposed here is a simple design, implemented to find the iris in an image using the Hough Transform algorithm. A Canny edge detector is used to obtain an edge image as input to the Hough Transform. To convey the general idea of the Hough Transform, the Hough Transform for circles is also implemented. An analysis of the 3-D accumulator array for the peaks of the inner and outer circles has been performed. Finally, some suggestions are made to improve the system, and its performance is discussed. 12.
An improved Hough transform-based fingerprint alignment approach CSIR Research Space (South Africa) Mlambo, CS 2014-11-01 Full Text Available An improved Hough Transform based fingerprint alignment approach is presented, which improves computing time and memory usage with accurate alignment parameter (rotation and translation) results. This is achieved by studying the strengths... 13. Generalized Hough Transform for Object Classification in the Maritime Domain Science.gov (United States) 2015-12-01 The GHT algorithm is used to generate a representation of the object as a Hough coordinate table. The table is then reformatted to a contour map 14. Human eye localization using the modified Hough transform Czech Academy of Sciences Publication Activity Database Dobeš, M.; Martínek, J.; Skoupil, D.; Dobešová, Z.; Pospíšil, Jaroslav 2006-01-01 Roč. 117, - (2006), s. 468-473 ISSN 0030-4026 Institutional research plan: CEZ:AV0Z10100522 Keywords: human eye localization * modified Hough transform * eye iris and eyelid shape determination Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.585, year: 2006 15. Circle Hough transform implementation for dots recognition in braille cells Science.gov (United States) Jacinto Gómez, Edwar; Montiel Ariza, Holman; Martínez Sarmiento, Fredy Hernán. 2017-02-01 This paper shows a technique based on CHT (Circle Hough Transform) to achieve optical Braille recognition (OBR).
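For a known dot radius, the circle Hough voting behind such dot-detection work can be sketched as follows (an illustrative toy of the CHT principle, not the paper's code; names are ours):

```python
import math

def circle_hough_centre(edge_pixels, radius, n_angles=90):
    """Circle Hough Transform with a known radius: each edge pixel votes
    for every candidate centre (a, b) lying at distance `radius` from it;
    the true centre is where the vote circles of all pixels intersect,
    i.e. the accumulator peak."""
    acc = {}
    for x, y in edge_pixels:
        for i in range(n_angles):
            t = 2 * math.pi * i / n_angles
            a = round(x - radius * math.cos(t))
            b = round(y - radius * math.sin(t))
            acc[(a, b)] = acc.get((a, b), 0) + 1
    return max(acc, key=acc.get)
```

With an unknown radius the accumulator gains a third dimension (a, b, r), which is the 3-D accumulator mentioned in the iris entry above.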
Unlike other papers on the same topic, this one uses the Hough Transform for the recognition and transcription of Braille cells, showing CHT to be an appropriate technique for handling the various non-systematic factors that can affect the process, such as the type of paper on which the text to be transcribed is printed, lighting conditions, input image resolution, and flaws arising from the capture process, which is performed with a scanner. Tests are performed on a local database containing text generated by sighted people and some transcripts by blind people, with the support of the National Institute for Blind People (INCI, for its Spanish acronym) in Colombia. 16. The fuzzy Hough Transform-feature extraction in medical images International Nuclear Information System (INIS) Philip, K.P.; Dove, E.L.; Stanford, W.; Chandran, K.B.; McPherson, D.D.; Gotteiner, N.L. 1994-01-01 Identification of anatomical features is a necessary step for medical image analysis. Automatic methods for feature identification using conventional pattern recognition techniques typically classify an object as a member of a predefined class of objects, but do not attempt to recover the exact or approximate shape of that object. For this reason, such techniques are usually not sufficient to identify the borders of organs when individual geometry varies in local detail, even though the general geometrical shape is similar. The authors present an algorithm that detects features in an image based on approximate geometrical models. The algorithm is based on the traditional and generalized Hough Transforms but includes notions from fuzzy set theory. The authors use the new algorithm to roughly estimate the actual locations of boundaries of an internal organ, and from this estimate, to determine a region of interest around the organ.
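The fuzzy-set idea, smearing each vote over neighbouring accumulator cells with a membership weight so that approximately circular borders still produce one clear peak, can be sketched as follows (our own simplified illustration, not the authors' algorithm):

```python
import math

def fuzzy_hough_circle_centre(edge_pixels, radius, spread=1, n_angles=90):
    """Circle Hough voting with fuzzy vote smearing: each vote is spread
    over neighbouring accumulator cells with triangular membership
    weights, so boundaries that only approximately fit the model still
    give a clear peak instead of many fragmented ones."""
    acc = {}
    for x, y in edge_pixels:
        for i in range(n_angles):
            t = 2 * math.pi * i / n_angles
            a0 = round(x - radius * math.cos(t))
            b0 = round(y - radius * math.sin(t))
            for da in range(-spread, spread + 1):
                for db in range(-spread, spread + 1):
                    w = (1 - abs(da) / (spread + 1.0)) * \
                        (1 - abs(db) / (spread + 1.0))
                    key = (a0 + da, b0 + db)
                    acc[key] = acc.get(key, 0.0) + w
    return max(acc, key=acc.get)
```

The smearing makes the estimate deliberately rough, which matches the paper's use of the fuzzy HT only to find a region of interest before a finer boundary search.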
Based on this rough estimate of the border location, and the derived region of interest, the authors find the final estimate of the true borders with other image processing techniques. The authors present results that demonstrate that the algorithm was successfully used to estimate the approximate location of the chest wall in humans, and of the left ventricular contours of a dog heart obtained from cine-computed tomographic images. The authors use this fuzzy Hough Transform algorithm as part of a larger procedure to automatically identify the myocardial contours of the heart. This algorithm may also allow for more rapid image processing and clinical decision making in other medical imaging applications. 17. Circular defects detection in welded joints using circular hough transform International Nuclear Information System (INIS) Hafizal Yazid; Mohd Harun; Shukri Mohd; Abdul Aziz Mohamed; Shaharudin Sayuti; Muhamad Daud 2007-01-01 Conventional radiography is a common non-destructive testing method which relies on manual image interpretation. The interpretation is very subjective and depends much on the inspector's experience and working conditions. It is therefore useful to have a pattern recognition system to assist the human interpreter in evaluating the quality of the radiograph sample, especially radiographic images of welded joints. This paper describes a system to detect circular discontinuities present in the joints. The system combines two different algorithms: a separability filter to identify the best object candidates and the Circular Hough Transform to detect the presence of circular shapes. Experimental results show promising recognition of circular discontinuities in radiographic images, with successful circle detection on 81.82-100% of the radiography films using a template movement of 10 pixels. (author) 18.
Magnetically aligned H I fibers and the Rolling Hough Transform Energy Technology Data Exchange (ETDEWEB) Clark, S. E.; Putman, M. E.; Peek, J. E. G. [Department of Astronomy, Columbia University, New York, NY (United States) 2014-07-01 We present observations of a new group of structures in the diffuse Galactic interstellar medium (ISM): slender, linear H I features we dub 'fibers' that extend for many degrees at high Galactic latitude. To characterize and measure the extent and strength of these fibers, we present the Rolling Hough Transform, a new machine vision method for parameterizing the coherent linearity of structures in the image plane. With this powerful new tool we show that the fibers are oriented along the interstellar magnetic field as probed by starlight polarization. We find that these low column density (N_HI ≃ 5×10^18 cm^-2) fiber features are most likely a component of the local cavity wall, about 100 pc away. The H I data we use to demonstrate this alignment at high latitude are from the Galactic Arecibo L-Band Feed Array H I (GALFA-H I) Survey and the Parkes Galactic All Sky Survey. We find better alignment in the higher resolution GALFA-H I data, where the fibers are more visually evident. This trend continues in our investigation of magnetically aligned linear features in the Riegel-Crutcher H I cold cloud, detected in the Southern Galactic Plane Survey. We propose an application of the RHT for estimating the field strength in such a cloud, based on the Chandrasekhar-Fermi method. We conclude that data-driven, quantitative studies of ISM morphology can be very powerful predictors of underlying physical quantities. 19.
Vanishing points detection using combination of fast Hough transform and deep learning Science.gov (United States) Sheshkus, Alexander; Ingacheva, Anastasia; Nikolaev, Dmitry 2018-04-01 In this paper we propose a novel method for vanishing point detection based on a convolutional neural network (CNN) approach and the fast Hough transform algorithm. We show how to determine a fast Hough transform neural network layer and how to use it in order to increase the usability of the neural network approach for the vanishing point detection task. Our algorithm includes a CNN with a sequence of convolutional and fast Hough transform layers. We build an estimator for the distribution of possible vanishing points in the image. This distribution can be used to find candidate vanishing points. We provide experimental results from tests of the suggested method using images collected from videos of road trips. Our approach shows stable results on test images with different projective distortions and noise. The described approach can be effectively implemented for mobile GPUs and CPUs. 20. Multiresolution signal decomposition transforms, subbands, and wavelets CERN Document Server Akansu, Ali N 1992-01-01 This book provides an in-depth, integrated, and up-to-date exposition of the topic of signal decomposition techniques. Application areas of these techniques include speech and image processing, machine vision, information engineering, High-Definition Television, and telecommunications. The book will serve as the major reference for those entering the field, instructors teaching some or all of the topics in an advanced graduate course and researchers needing to consult an authoritative source. The first book to give a unified and coherent exposition of multiresolutional signal decomposition 1.
A novel approach to Hough Transform for implementation in fast triggers Energy Technology Data Exchange (ETDEWEB) Pozzobon, Nicola, E-mail: nicola.pozzobon@pd.infn.it [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, via F. Marzolo 8, 35131 Padova (Italy); Dipartimento di Fisica ed Astronomia “G. Galilei”, Università degli Studi di Padova, via F. Marzolo 8, 35131 Padova (Italy); Montecassiano, Fabio [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, via F. Marzolo 8, 35131 Padova (Italy); Zotto, Pierluigi [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, via F. Marzolo 8, 35131 Padova (Italy); Dipartimento di Fisica ed Astronomia “G. Galilei”, Università degli Studi di Padova, via F. Marzolo 8, 35131 Padova (Italy) 2016-10-21 Telescopes of position-sensitive detectors are common layouts in charged-particle tracking, and programmable logic devices, such as FPGAs, represent a viable choice for the real-time reconstruction of track segments in such detector arrays. A compact implementation of the Hough Transform for fast triggers in High Energy Physics, exploiting a parameter reduction method, is proposed, targeting the reduction of the needed storage or computing resources in current or near-future state-of-the-art FPGA devices, while retaining high resolution over a wide range of track parameters. The proposed approach is compared to a Standard Hough Transform with particular emphasis on their application to muon detectors. In both cases, an original readout implementation is modeled. 2. A novel approach to Hough Transform for implementation in fast triggers International Nuclear Information System (INIS) Pozzobon, Nicola; Montecassiano, Fabio; Zotto, Pierluigi 2016-01-01 Telescopes of position-sensitive detectors are common layouts in charged-particle tracking, and programmable logic devices, such as FPGAs, represent a viable choice for the real-time reconstruction of track segments in such detector arrays.
A compact implementation of the Hough Transform for fast triggers in High Energy Physics, exploiting a parameter reduction method, is proposed, targeting the reduction of the needed storage or computing resources in current or near-future state-of-the-art FPGA devices, while retaining high resolution over a wide range of track parameters. The proposed approach is compared to a Standard Hough Transform with particular emphasis on their application to muon detectors. In both cases, an original readout implementation is modeled. 3. Multiresolution signal decomposition transforms, subbands, and wavelets CERN Document Server 2001-01-01 The uniqueness of this book is that it covers such important aspects of modern signal processing as block transforms from subband filter banks and wavelet transforms from a common unifying standpoint, thus demonstrating the commonality among these decomposition techniques. In addition, it covers such "hot" areas as signal compression and coding, including particular decomposition techniques and tables listing coefficients of subband and wavelet filters and other important properties. The field of this book (Electrical Engineering/Computer Science) is currently booming, which is, of course 4. Shift-, rotation-, and scale-invariant shape recognition system using an optical Hough transform Science.gov (United States) Schmid, Volker R.; Bader, Gerhard; Lueder, Ernst H. 1998-02-01 We present a hybrid shape recognition system with an optical Hough transform processor. The features of the Hough space offer a separate cancellation of distortions caused by translations and rotations. Scale invariance is also provided by suitable normalization. The proposed system extends the capabilities of Hough transform based detection from only straight lines to areas bounded by edges.
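The separate cancellation claimed here rests on a basic property of Hough space: rotating the input about the origin shifts the theta coordinate of a peak while leaving |rho| unchanged, and a translation changes only rho. A small check of the rotation part (our own illustration, not the paper's optical processor):

```python
import math

def line_peak(points, n_theta=180):
    """Strongest (rho, theta-index) Hough cell for a point set."""
    acc = {}
    for x, y in points:
        for i in range(n_theta):
            t = i * math.pi / n_theta
            key = (round(x * math.cos(t) + y * math.sin(t)), i)
            acc[key] = acc.get(key, 0) + 1
    return max(acc, key=acc.get)

def rotate(points, alpha):
    """Rotate a point set about the origin by angle alpha."""
    c, s = math.cos(alpha), math.sin(alpha)
    return [(c * x - s * y, s * x + c * y) for x, y in points]
```

Rotating a horizontal line (peak at theta = 90°, rho = 5) by -30° moves the peak to theta = 60° with the same rho, so a rotation acts as a pure shift along the theta axis of Hough space and can be normalized away independently of translation.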
A very compact optical design is achieved by a microlens array processor accepting incoherent light as direct optical input and realizing the computationally expensive connections in a massively parallel fashion. Our newly developed algorithm extracts rotation and translation invariant normalized patterns of bright spots on a 2D grid. A neural network classifier maps the 2D features via a nonlinear hidden layer onto the classification output vector. We propose initialization of the connection weights according to regions of activity specifically assigned to each neuron in the hidden layer using a competitive network. The presented system is designed for industrial inspection applications. Presently we have demonstrated detection of six different machined parts in real time. Our method yields very promising detection results of more than 96% correctly classified parts. 5. ANNSVM: A Novel Method for Graph-Type Classification by Utilization of Fourier Transformation, Wavelet Transformation, and Hough Transformation Directory of Open Access Journals (Sweden) Sarunya Kanjanawattana 2017-07-01 Full Text Available Image classification plays a vital role in many areas of study, such as data mining and image processing; however, serious problems collectively referred to as the curse of dimensionality have been encountered in previous studies as factors that reduce system performance. Furthermore, we also confront the problem of differing graph characteristics even among graphs of the same type. In this study, we propose a novel method of graph-type classification. Using our approach, we open up a new solution for high-dimensional images and address the problem of differing characteristics by converting graph images to one dimension with a discrete Fourier transformation and creating numeric datasets using wavelet and Hough transformations.
Moreover, we introduce a new classifier, which is a combination of artificial neural networks (ANNs) and support vector machines (SVMs), which we call ANNSVM, to enhance accuracy. The objectives of our study are to propose an effective graph-type classification method that includes finding a new data representation used for classification instead of two-dimensional images and to investigate which features make our data separable. To evaluate the method of our study, we conducted five experiments with different methods and datasets. The input dataset we focused on was a numeric dataset containing wavelet coefficients and outputs of a Hough transformation. From our experimental results, we observed that the highest accuracy was provided using our method with Coiflet 1, which achieved an accuracy of 0.91. 6. Implementation of the Hough transform for 3D recognition of the straight tracks in drift chambers International Nuclear Information System (INIS) Bel'kov, A.A. 2001-01-01 This work is devoted to the development of a method for 3D reconstruction of charged-particle straight tracks in tracking systems consisting of drift-chamber stereo layers. The method is based on the modified Hough transform, taking into account the measurements of drift distance. The proposed program realization of the method optimizes the event-processing time and provides stable performance of the algorithm and high efficiency of track recognition under large track occupancy of the detector as well as under high levels of noisy and dead channels 7. Implementation of the Hough Transform for 3D Recognition of the Straight Tracks in Drift Chambers CERN Document Server Belkov, A A 2001-01-01 This work is devoted to the development of a method for 3D reconstruction of charged-particle straight tracks in tracking systems consisting of drift-chamber stereo layers.
The method is based on the modified Hough transform, taking into account the measurements of drift distance. The proposed program realization of the method optimizes the event-processing time and provides stable performance of the algorithm and high efficiency of track recognition under large track occupancy of the detector as well as under high levels of noisy and dead channels. 8. Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform Science.gov (United States) Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S. 2017-12-01 In an iris recognition system, the accuracy of the localization of the inner and outer edges of the iris directly affects the performance of the recognition system, so iris localization is an important research topic. Our iris data contain eyelids, eyelashes, light spots and other noise, and the gray-level variation of the images is not obvious, so general iris localization methods are unable to locate the iris reliably. A method of iris localization based on the Canny operator and the gradient Hough transform is proposed. First, the images are pre-processed; then, using the gradient information of the images, the inner and outer edges of the iris are coarsely positioned with the Canny operator; finally, the gradient Hough transform is applied to achieve precise localization of the inner and outer edges of the iris. The experimental results show that our algorithm can localize the inner and outer edges of the iris well; it has strong anti-interference ability, can greatly reduce the localization time, and has high accuracy and stability. 9. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision Directory of Open Access Journals (Sweden) Nam Ling 2013-07-01 Full Text Available The Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from high execution time and resource requirements.
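Angle-level parallelism works because votes at different theta values never touch the same accumulator cells, so theta slices can be voted independently and merged without conflicts. A thread-based software sketch of that idea (the paper's design is an FPGA pipeline; this is only a software analogy with names of our own choosing):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def vote_theta_slice(edge_pixels, theta_indices, n_theta):
    """Vote into a private accumulator for one slice of theta indices."""
    acc = {}
    for x, y in edge_pixels:
        for i in theta_indices:
            t = i * math.pi / n_theta
            key = (round(x * math.cos(t) + y * math.sin(t)), i)
            acc[key] = acc.get(key, 0) + 1
    return acc

def parallel_hough(edge_pixels, n_theta=180, n_workers=4):
    """Split the theta range into disjoint slices, vote each slice in
    parallel, then merge the partial accumulators and take the peak."""
    slices = [range(w, n_theta, n_workers) for w in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(
            lambda s: vote_theta_slice(edge_pixels, s, n_theta), slices)
        acc = {}
        for part in partials:
            acc.update(part)  # slices are disjoint in theta: no collisions
    peak = max(acc, key=acc.get)
    return peak, acc[peak]
```

Because each worker owns a disjoint set of (rho, theta) cells, no locking is needed during voting; the hardware version exploits the same independence with parallel pipelines.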
Field Programmable Gate Arrays (FPGAs) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. 10. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision. Science.gov (United States) Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam 2013-07-17 The Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from high execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos.
A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. 11. Application of generalized Hough transform for detecting sugar beet plant from weed using machine vision method Directory of Open Access Journals (Sweden) A Bakhshipour Ziaratgahi 2017-05-01 Full Text Available Introduction Sugar beet (Beta vulgaris L.), the second most important source of the world's sugar after sugarcane, is one of the major industrial crops. The presence of weeds in sugar beet fields, especially at early growth stages, results in a substantial decrease in the crop yield. It is very important to efficiently eliminate weeds at early growing stages. The first step of precision weed control is accurate detection of the weeds' locations in the field. This operation can be performed by machine vision techniques. The Hough transform is one of the shape feature extraction methods for object tracking in image processing, basically used to identify lines or other geometrical shapes in an image.
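The GHT's table-driven matching can be illustrated in its simplest translation-only form: store offsets from each template edge point to a reference point, then let every image edge point vote with all offsets (a stripped-down sketch of the principle, without the usual gradient-angle indexing of the R-table; names are ours):

```python
def ght_detect(template_edges, image_edges, reference=(0, 0)):
    """Translation-only Generalized Hough Transform sketch: offsets from
    template edge points to a reference point form a (degenerate) R-table;
    every image edge point votes with each offset, and the accumulator
    peak locates the shape's reference point in the image."""
    r_table = [(reference[0] - ex, reference[1] - ey)
               for ex, ey in template_edges]
    acc = {}
    for x, y in image_edges:
        for dx, dy in r_table:
            cell = (x + dx, y + dy)
            acc[cell] = acc.get(cell, 0) + 1
    peak = max(acc, key=acc.get)
    return peak, acc[peak]
```

The full GHT indexes the offsets by edge gradient direction, which both prunes the voting and allows rotation/scale extensions of the pattern-matching principle described in the next entry.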
The generalized Hough transform (GHT) is a modified version of the Hough transform used not only for geometrical forms, but also for detecting any arbitrary shape. This method is based on a pattern matching principle that uses a set of vectors from feature points (usually object edge points) to a reference point to construct a pattern. By comparing this pattern with a stored pattern, the desired shape is detected. The aim of this study was to identify the sugar beet plant among some common weeds in a field using the GHT. Materials and Methods Images required for this study were taken at the four-leaf stage of sugar beet, the beginning of the critical period of weed control. A shelter was used to avoid direct sunlight and prevent leaves from shadowing each other. The obtained images were then introduced to the Image Processing Toolbox of the MATLAB programming software for further processing. Green and Red color components were extracted from the primary RGB images. In the first step, binary images were obtained by applying the optimal threshold on the G-R images. A comprehensive study of several sugar beet images revealed that there is a unique feature in sugar beet leaves which makes them differentiable from the weeds. The feature observed in all sugar beet plants at the four 12. Evolved Multiresolution Transforms for Optimized Image Compression and Reconstruction Under Quantization National Research Council Canada - National Science Library Moore, Frank 2005-01-01 ...) First, this research demonstrates that a GA can evolve a single set of coefficients describing a single matched forward and inverse transform pair that can be used at each level of a multiresolution... 13. EFFECTIVE MULTI-RESOLUTION TRANSFORM IDENTIFICATION FOR CHARACTERIZATION AND CLASSIFICATION OF TEXTURE GROUPS Directory of Open Access Journals (Sweden) S.
Arivazhagan 2011-11-01 Full Text Available Texture classification is important in applications of computer image analysis for characterization or classification of images based on local spatial variations of intensity or color. Texture can be defined as consisting of mutually related elements. This paper proposes an experimental approach for identification of the suitable multi-resolution transform for characterization and classification of different texture groups based on statistical and co-occurrence features derived from multi-resolution transformed sub-bands. The statistical and co-occurrence feature sets are extracted for various multi-resolution transforms such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Double Density Wavelet Transform (DDWT) and Dual Tree Complex Wavelet Transform (DTCWT), and then the transform that maximizes the texture classification performance for the particular texture group is identified. 14. Determination of mango fruit from binary image using randomized Hough transform Science.gov (United States) Rizon, Mohamed; Najihah Yusri, Nurul Ain; Abdul Kadir, Mohd Fadzil; bin Mamat, Abd. Rasid; Abd Aziz, Azim Zaliha; Nanaa, Kutiba 2015-12-01 A method of detecting mango fruit from an RGB input image is proposed in this research. The input image is processed to obtain a binary image using texture analysis and morphological operations (dilation and erosion). Then, the Randomized Hough Transform (RHT) method is used to find the best ellipse fit for each binary region. By using texture analysis, the system can detect mango fruits that partially overlap each other and mango fruits that are partially occluded by leaves. The combination of texture analysis and morphological operators can isolate partially overlapped fruits and fruits that are partially occluded by leaves. The parameters derived from the RHT method were used to calculate the center of the ellipse.
The center of the ellipse acts as the gripping point for the fruit-picking robot. As a result, the detection rate was up to 95% for fruit that is partially overlapped or partially covered by leaves. 15. Searching for continuous gravitational wave signals. The hierarchical Hough transform algorithm International Nuclear Information System (INIS) Papa, M.; Schutz, B.F.; Sintes, A.M. 2001-01-01 It is well known that matched filtering techniques cannot be applied for searching extensive parameter-space volumes for continuous gravitational wave signals. This is the reason why alternative strategies are being pursued. Hierarchical strategies are best at investigating a large parameter space when there exist computational power constraints. Algorithms of this kind are being implemented by all the groups that are developing software for analyzing the data of the gravitational wave detectors that will come online in the coming years. In this talk I will report on the hierarchical Hough transform method that the GEO 600 data analysis team at the Albert Einstein Institute is developing. The three-step hierarchical algorithm has been described elsewhere [8]. In this talk I will focus on some of the implementational aspects we are currently concerned with. (author) 16. Partial fingerprint identification algorithm based on the modified generalized Hough transform on mobile device Science.gov (United States) Qin, Jin; Tang, Siqi; Han, Congying; Guo, Tiande 2018-04-01 Partial fingerprint identification technology, which is mainly used in devices with small sensor areas such as cellphones, USB drives and computers, has attracted more attention in recent years with its unique advantages. However, owing to the lack of sufficient minutiae points, conventional methods do not perform well in this situation.
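Hough-style alignment, as used in such fingerprint work, casts registration as voting over transformation parameters: each (template point, query point) pairing votes for the rotation and translation that would superimpose them, and the strongest cell gives the alignment. A rigid 2-D sketch of the general idea (our own illustration, not the cited papers' algorithms):

```python
import math

def hough_align(template_pts, query_pts, angle_steps=36):
    """Rigid alignment by Hough voting: try quantized rotations; for each
    one, every (template, query) point pair votes for the translation
    (dx, dy) that would map the rotated query point onto the template
    point. The peak (angle, dx, dy) cell is the alignment estimate."""
    acc = {}
    for k in range(angle_steps):
        t = 2 * math.pi * k / angle_steps
        c, s = math.cos(t), math.sin(t)
        for qx, qy in query_pts:
            rx, ry = c * qx - s * qy, s * qx + c * qy  # rotated query point
            for tx, ty in template_pts:
                key = (k, round(tx - rx), round(ty - ry))
                acc[key] = acc.get(key, 0) + 1
    k, dx, dy = max(acc, key=acc.get)
    return 2 * math.pi * k / angle_steps, dx, dy
```

Correct correspondences all vote for the same (angle, dx, dy) cell while spurious pairings scatter, which is what makes the approach usable when too few minutiae exist for conventional matching.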
We propose a new fingerprint matching technique which utilizes ridges as features to deal with partial fingerprint images and combines the modified generalized Hough transform with a scoring strategy based on machine learning. The algorithm can effectively meet the real-time and space-saving requirements of resource-constrained devices. Experiments on an in-house database indicate that the proposed algorithm has excellent performance. 17. Hough transform used on the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor Science.gov (United States) Chia, Chou-Min; Huang, Kuang-Yuh; Chang, Elmer 2016-01-01 An approach to the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor (SHWS) is presented. The SHWS has a common problem, in that while measuring high-order wavefront distortion, the spots may leave their subapertures, which are used to restrict the displacement of the spots. This artificial restriction may limit the dynamic range of the SHWS. When using the SHWS to measure adaptive optics or aspheric lenses, the accuracy of the traditional spot-centroiding algorithm may be uncertain because the spots leave or cross the confined area of the subapertures. The proposed algorithm combines the Hough transform with an artificial neural network, which requires no confined subapertures, to increase the dynamic range of the SHWS. This algorithm is then explored in comprehensive simulations and the results are compared with those of the existing algorithm. 18. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy International Nuclear Information System (INIS) Qiu Wu; Yuchi Ming; Ding Mingyue; Tessier, David; Fenster, Aaron 2013-01-01 Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer.
Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to manual segmentation. The robustness of the proposed approach was tested by varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of the needle segmentation algorithm was 2 s for a 3D TRUS image with a size of 264 × 376 × 630 voxels. Conclusions: The proposed needle segmentation 19. Real-Time Straight-Line Detection for XGA-Size Videos by Hough Transform with Parallelized Voting Procedures.
Science.gov (United States) Guan, Jungang; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Mattausch, Hans Jürgen 2017-01-30 The Hough Transform (HT) is a method for extracting straight lines from an edge image. The main limitations of the HT for usage in actual applications are computation time and storage requirements. This paper reports a hardware architecture for HT implementation on a Field Programmable Gate Array (FPGA) with a parallelized voting procedure. The 2-dimensional accumulator array, namely the Hough space in parametric form (ρ, θ), for computing the strength of each line by a voting mechanism is mapped onto a 1-dimensional array with regular increments of θ. This Hough space is then divided into a number of parallel parts. The computation of (ρ, θ) for the edge pixels and the voting procedure for straight-line determination are therefore executable in parallel. In addition, a synchronized initialization of the Hough space further increases the speed of straight-line detection, so that XGA video processing becomes possible. The designed prototype system has been synthesized on a DE4 platform with a Stratix-IV FPGA device. In the application of road-lane detection, the average processing speed of this HT implementation is 5.4 ms per XGA frame at a 200 MHz working frequency.
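The (ρ, θ) voting that these hardware designs parallelize can be sketched in plain serial Python. The key property the FPGA architecture exploits is that each θ column of the accumulator is independent of the others, so the columns can be distributed over parallel voting units (the function name and bin counts below are our own illustration, not the paper's implementation):

```python
import math

def hough_lines(edge_pixels, rho_max, n_rho=64, n_theta=64):
    """Serial sketch of HT line voting in a (rho, theta) accumulator.
    Each theta column is computed independently, which is exactly what
    allows the parallel voting units described above."""
    acc = [[0] * n_theta for _ in range(n_rho)]
    for t in range(n_theta):  # in hardware, these columns run in parallel
        theta = math.pi * t / n_theta
        c, s = math.cos(theta), math.sin(theta)
        for x, y in edge_pixels:
            rho = x * c + y * s
            r = int((rho + rho_max) * (n_rho - 1) / (2 * rho_max))
            if 0 <= r < n_rho:
                acc[r][t] += 1
    return acc

# 20 collinear pixels on the horizontal line y = 5
pts = [(x, 5) for x in range(20)]
acc = hough_lines(pts, rho_max=30.0)
peak = max(max(row) for row in acc)  # all 20 points meet in one (rho, theta) bin
```

Splitting the outer θ loop across workers reproduces the "parallel parts" of the Hough space described in the abstract; the synchronized accumulator initialization corresponds to zeroing `acc` for each frame.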
20. Track recognition in 4 μs by a systolic trigger processor using a parallel Hough transform International Nuclear Information System (INIS) Klefenz, F.; Noffz, K.H.; Conen, W.; Zoz, R.; Kugel, A.; Maenner, R.; Univ. Heidelberg 1993-01-01 A parallel Hough transform processor has been developed that identifies circular particle tracks in a 2D projection of the OPAL jet chamber. The high-speed requirements imposed by the 8 bunch crossing mode of LEP could be fulfilled by computing the starting angle and the radius of curvature for each well defined track in less than 4 μs. The system consists of a Hough transform processor that determines well defined tracks, and a Euler processor that counts their number by applying the Euler relation to the thresholded result of the Hough transform. A prototype of a systolic processor has been built that handles one sector of the jet chamber. It consists of 35 × 32 processing elements that were loaded into 21 programmable gate arrays (XILINX). This processor runs at a clock rate of 40 MHz. It has been tested offline with about 1,000 original OPAL events.
No deviations from the off-line simulation have been found. A trigger efficiency of 93% has been obtained. The prototype together with the associated drift time measurement unit has been installed at the OPAL detector at LEP and 100k events have been sampled to evaluate the system under detector conditions. 2. Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis Science.gov (United States) Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan 2014-01-01 Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. PMID:25350502 4. Development of a Hough transformation track finder for time projection chambers International Nuclear Information System (INIS) Heinze, Isa 2013-12-01 The International Linear Collider (ILC) is a planned particle physics experiment. One of the two detector concepts is the International Large Detector (ILD) concept, for which a time projection chamber is foreseen as the main tracking device. In the ILD the particle flow concept is followed, which leads to special requirements for the detector. Especially for the tracking system a very good momentum resolution is required. Several prototypes were built to prove that it is possible to build a TPC which fulfills the requirements for a TPC in the ILD. One is the Large Prototype, with which different readout technologies currently under development are tested. In parallel, reconstruction software is developed for the reconstruction of Large Prototype data. In this thesis the development of a track finding algorithm based on the Hough transformation is described.
It can find curved tracks (with magnetic field) as well as straight tracks (without magnetic field). This package was mainly developed for Large Prototype testbeam data but was also tested on Monte Carlo simulations of tracks in the ILD TPC. Furthermore, an analysis of testbeam data regarding the single point resolution is presented. The data were taken with the Large Prototype and a readout module with GEM (gas electron multiplier) amplification. For the reconstruction of these data the software package mentioned above was used. The single point resolution is directly related to the momentum resolution of the detector; thus a good single point resolution is needed to achieve a good momentum resolution. 5. Robust Detection of Moving Human Target in Foliage-Penetration Environment Based on Hough Transform Directory of Open Access Journals (Sweden) P. Lei 2014-04-01 Full Text Available Attention has been focused on robust moving human target detection in the foliage-penetration environment, which presents a formidable task in a radar system because foliage is a rich scattering environment with complex multipath propagation and time-varying clutter. Generally, multiple-bounce returns and clutter are additionally superposed on direct-scatter echoes. They obscure the true target echo and lead to a poor-quality time-range image, making target detection particularly difficult. Consequently, an innovative approach is proposed to suppress clutter and mitigate multipath effects. In particular, a clutter suppression technique based on range alignment is first applied to suppress the time-varying clutter and the unstable antenna coupling. Then an entropy-weighted coherent integration (EWCI) algorithm is adopted to mitigate the multipath effects. In consequence, the proposed method reduces the clutter and ghosting artifacts considerably.
Based on the resulting high-quality image, the target trajectory is detected robustly and the radial velocity is estimated accurately with the Hough transform (HT). Experimental results on real data are provided to verify the proposed method. 6. Automatic detection of karstic sinkholes in seismic 3D images using circular Hough transform International Nuclear Information System (INIS) Parchkoohi, Mostafa Heydari; Farajkhah, Nasser Keshavarz; Delshad, Meysam Salimi 2015-01-01 More than 30% of hydrocarbon reservoirs are reported in carbonates that mostly include evidence of fractures and karstification. Generally, the detection of karstic sinkholes prognosticates good-quality hydrocarbon reservoirs, where looser sediments fill the holes penetrating the hard limestone and the overburden pressure on the infill sediments is mostly tolerated by their sturdier surrounding structure. They are also useful for detecting erosional surfaces in seismic stratigraphic studies and imply a possible relative sea-level fall at the time of their establishment. Karstic sinkholes are identified straightforwardly by using seismic geometric attributes (e.g. coherency, curvature), in which lateral variations are much more emphasized with respect to the original 3D seismic image. Seismic interpreters then rely on their visual skills and experience in detecting roughly round objects in seismic attribute maps. In this paper, we introduce an image processing workflow to enhance selective edges in seismic attribute volumes stemming from karstic sinkholes and finally locate them in a high-quality 3D seismic image by using the circular Hough transform. Afterwards, we present a case study from an on-shore oilfield in southwest Iran, in which the proposed algorithm is applied and karstic sinkholes are traced. (paper)
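For a known target radius, the circular Hough transform used in this kind of round-object detection reduces to a simple centre-voting scheme: every edge pixel votes for all candidate centres lying one radius away from it, and the accumulator peak marks the circle centre. A minimal sketch (function names, grid choices and the synthetic data are ours; gradient information is ignored for brevity):

```python
import math
from collections import Counter

def circular_hough_centers(edge_pixels, radius, n_angles=90):
    """Each edge pixel votes for every integer-grid centre lying
    `radius` away from it; the accumulator peak is the circle centre."""
    votes = Counter()
    for x, y in edge_pixels:
        for a in range(n_angles):
            t = 2 * math.pi * a / n_angles
            votes[(round(x - radius * math.cos(t)),
                   round(y - radius * math.sin(t)))] += 1
    return votes

# synthetic ring of radius 10 centred on (30, 40)
ring = [(round(30 + 10 * math.cos(2 * math.pi * k / 60)),
         round(40 + 10 * math.sin(2 * math.pi * k / 60))) for k in range(60)]
votes = circular_hough_centers(ring, radius=10)
(cx, cy), _ = votes.most_common(1)[0]
```

In practice the radius is usually unknown, so the accumulator gains a third dimension over a range of radii; the voting logic per radius is unchanged.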
7. Automated Spatiotemporal Analysis of Fibrils and Coronal Rain Using the Rolling Hough Transform Science.gov (United States) 2017-09-01 A technique is presented that automates the direction characterization of curvilinear features in multidimensional solar imaging datasets. It is an extension of the Rolling Hough Transform (RHT) technique presented by Clark, Peek, and Putman (Astrophys. J. 789, 82, 2014), and it excels at rapid quantification of spatial and spatiotemporal feature orientation even for applications with a low signal-to-noise ratio. It operates on a pixel-by-pixel basis within a dataset and reliably quantifies orientation even for locations not centered on a feature ridge, which is used here to derive a quasi-continuous map of the chromospheric fine-structure projection angle. For time-series analysis, a procedure is developed that uses a hierarchical application of the RHT to automatically derive the apparent motion of coronal rain observed off-limb. Essential to the success of this technique is the formulation presented in this article for the RHT error analysis, as it provides a means to properly filter results. 9. Generalized Hough transform based time invariant action recognition with 3D pose information Science.gov (United States) Muench, David; Huebner, Wolfgang; Arens, Michael 2014-10-01 Human action recognition has emerged as an important field in the computer vision community due to its large number of applications such as automatic video surveillance, content-based video search and human-robot interaction. In order to cope with the challenges that this large variety of applications present, recent research has focused more on developing classifiers able to detect several actions in more natural and unconstrained video sequences. The invariance-discrimination tradeoff in action recognition has been addressed by utilizing a Generalized Hough Transform. As a basis for action representation we transform 3D poses into a robust feature space, referred to as pose descriptors. For each action class a one-dimensional temporal voting space is constructed.
Votes are generated by associating pose descriptors with their position in time relative to the end of an action sequence. Training data consist of manually segmented action sequences. In the detection phase valid human 3D poses are assumed as input, e.g. originating from 3D sensors or monocular pose reconstruction methods. The human 3D poses are normalized to gain view-independence and transformed into (i) relative limb-angle space to ensure independence of non-adjacent joints or (ii) geometric features. In (i) an action descriptor consists of the relative angles between limbs and their temporal derivatives. In (ii) the action descriptor consists of different geometric features. In order to circumvent the problem of time-warping we propose to use a codebook of prototypical 3D poses which is generated from sample sequences of 3D motion capture data. This idea is in accordance with the concept of equivalence classes in action space. Results of the codebook method are presented using the Kinect sensor and the CMU Motion Capture Database. 10. Implementation of an automated assessment system for the Winston-Lutz test based on the generalized Hough transform Energy Technology Data Exchange (ETDEWEB) Martin-Viera Cueto, J. A.; Moreno Saiz, C.; Benitez Villegas, E. M.; Fernandez Canadillas, M. J.; Caballero Lucena, E.; Cantero Carrillo, M. 2013-07-01 A software tool based on the generalized Hough transform has been implemented to automate the evaluation of the WL test. This method provides a quantitative evaluation of the test. It also eliminates the subjectivity of the evaluator, which introduces an uncertainty of 0.3 mm. (Author)
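The one-dimensional temporal voting space of the action-recognition entry above can be illustrated with a toy codebook: each observed pose descriptor votes for candidate action end-frames through its stored temporal offsets, and the vote peak identifies both the action class and its end time (the descriptors, offsets and frame numbers below are invented for illustration):

```python
from collections import defaultdict

# toy codebook: pose descriptor -> list of (action, frames_before_action_end),
# as would be learned from manually segmented training sequences
codebook = {
    "arm_up":   [("wave", 3), ("wave", 1)],
    "arm_side": [("wave", 2)],
    "crouch":   [("jump", 2)],
}

def vote_action_end(observed, codebook):
    """GHT-style 1-D temporal voting: each observed pose votes for
    candidate (action, end_frame) hypotheses via its stored offsets."""
    votes = defaultdict(int)
    for frame, pose in observed:
        for action, offset in codebook.get(pose, []):
            votes[(action, frame + offset)] += 1
    return votes

obs = [(10, "arm_up"), (11, "arm_side"), (12, "arm_up")]
votes = vote_action_end(obs, codebook)
best = max(votes, key=votes.get)  # consistent votes pile up on one end frame
```

Here three consistent observations agree on a "wave" ending at frame 13, while stray hypotheses receive only single votes.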
11. Invisible data matrix detection with smart phone using geometric correction and Hough transform Science.gov (United States) Sun, Halit; Uysalturk, Mahir C.; Karakaya, Mahmut 2016-04-01 Two-dimensional data matrices are used in many different areas to provide quick and automatic data entry to computer systems. Their most common usage is to automatically read labeled products (books, medicines, food, etc.) and recognize them. In Turkey, alcoholic beverages and tobacco products are labeled and tracked with invisible data matrices for public safety and tax purposes. In this application, since the data matrices are printed on a special paper with a pigmented ink, they cannot be seen in daylight. When red LEDs are utilized for illumination and the reflected light is filtered, invisible data matrices become visible and can be decoded by special barcode readers. Owing to the physical dimensions and price of these readers, and the special training required to use them, cheap, small-sized and easily carried domestic mobile invisible-data-matrix reader systems need to be delivered to every inspector in the law enforcement units. In this paper, we first developed an apparatus attached to a smartphone, consisting of a red LED light and a high-pass filter. We then developed an algorithm to process the images captured by the smartphone and to decode all the information stored in the invisible data matrix. The proposed algorithm mainly involves four stages. In the first step, the data matrix code is processed by the Hough transform to find the "L"-shaped pattern. In the second step, the borders of the data matrix are found by using convex hull and corner detection methods. Afterwards, the distortion of the invisible data matrix is corrected by a geometric correction technique and the size of every module is fixed in a rectangular shape. Finally, the invisible data matrix is scanned line by line along the horizontal axis to decode it.
Based on results obtained from real test images of invisible data matrices captured with a smartphone, the proposed algorithm shows high accuracy and a low error rate. 12. Hough transform for clustered microcalcifications detection in full-field digital mammograms Science.gov (United States) Fanizzi, A.; Basile, T. M. A.; Losurdo, L.; Amoroso, N.; Bellotti, R.; Bottigli, U.; Dentamaro, R.; Didonna, V.; Fausto, A.; Massafra, R.; Moschetta, M.; Tamborra, P.; Tangaro, S.; La Forgia, D. 2017-09-01 Many screening programs use mammography as the principal diagnostic tool for detecting breast cancer at a very early stage. Despite the efficacy of mammograms in highlighting breast diseases, the detection of some lesions is still doubtful for radiologists. In particular, the extremely minute and elongated salt-like particles of microcalcifications are sometimes no larger than 0.1 mm, yet they represent approximately half of all cancers detected by means of mammograms. Hence the need for automatic tools able to support radiologists in their work. Here, we propose a computer-assisted diagnostic tool to support radiologists in identifying microcalcifications in full (native) digital mammographic images. The proposed CAD system consists of a pre-processing step, which improves contrast and reduces noise by applying the Sobel edge detection algorithm and a Gaussian filter, followed by a microcalcification detection step performed by exploiting the circular Hough transform. The procedure's performance was tested on 200 images from the Breast Cancer Digital Repository (BCDR), a publicly available database. The automatically detected clusters of microcalcifications were evaluated by skilled radiologists, who assessed the validity of the correctly identified regions of interest as well as the system error in cases of missed clustered microcalcifications.
The system performance was evaluated in terms of sensitivity and false positives per image (FPi) rate, and is comparable to state-of-the-art approaches. The proposed model was able to accurately predict the microcalcification clusters, obtaining performances (sensitivity = 91.78% and FPi rate = 3.99) which compare favorably to other state-of-the-art approaches. 13. Tracking with the Hough transformation for the central drift chamber of the GSI 4π experiment International Nuclear Information System (INIS) Best, D. 1993-02-01 The adaptive Hough Transformation (AHT) treated in this thesis is a method to localize the peaks in the Hough field without calculating the background in detail. It applies an intelligent histogramming and search strategy. It uses a small accumulator and decomposes the parameter region that is currently of interest into a few intervals, into which the HT maps the hits. The information in the accumulator is then used to redefine the parameter region, so that interesting regions can be studied with higher resolution. The iteration continues until the parameters are determined with the desired resolution. On average 4-7 iterations are necessary in order to obtain the center coordinates of a circular track to within 1 mm. It was shown that the AHT extends the tracking possibilities to very high track densities. The time consumption for 100 tracks with track and vertex fitting lies in the range of 4-5 seconds. In this region of track multiplicities the method thereby proves superior to local procedures, because it is not confronted with combinatorial difficulties. The track and point removal efficiency remains above 95%, and the double-track resolution at 1%. The dominant majority of the particle tracks is almost completely reconstructed. (orig./HSI) [de] 14. Multi-resolution inversion algorithm for the attenuated radon transform KAUST Repository Barbano, Paolo Emilio; Fokas, Athanasios S.
2011-01-01 We present a FAST implementation of the Inverse Attenuated Radon Transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed 15. First all-sky upper limits from LIGO on the strength of periodic gravitational waves using the Hough transform International Nuclear Information System (INIS) Abbott, B.; Adhikari, R.; Agresti, J.; Anderson, S.B.; Araya, M.; Armandula, H.; Asiri, F.; Barish, B.C.; Barnes, M.; Barton, M.A.; Bhawal, B.; Billingsley, G.; Black, E.; Blackburn, K.; Bork, R.; Brown, D.A.; Busby, D.; Cardenas, L.; Chandler, A.; Chapsky, J. 2005-01-01 We perform a wide parameter-space search for continuous gravitational waves over the whole sky and over a large range of values of the frequency and the first spin-down parameter. Our search method is based on the Hough transform, which is a semicoherent, computationally efficient, and robust pattern recognition technique. We apply this technique to data from the second science run of the LIGO detectors and our final results are all-sky upper limits on the strength of gravitational waves emitted by unknown isolated spinning neutron stars on a set of narrow frequency bands in the range 200-400 Hz. The best upper limit on the gravitational-wave strain amplitude that we obtain in this frequency range is 4.43×10^-23 16. ATMS software: Fuzzy Hough Transform in a hybrid algorithm for counting the overlapped etched tracks and orientation recognition International Nuclear Information System (INIS) Khayat, O.; Ghergherehchi, M.; Afarideh, H.; Durrani, S.A.; Pouyan, Ali A.; Kim, Y.S. 2013-01-01 A computer program named ATMS, written in MATLAB and running with a friendly interface, has been developed for the recognition and parametric measurement of etched tracks in images captured from the surface of Solid State Nuclear Track Detectors.
The program, using image analysis tools, counts the number of etched tracks and, depending on the current working mode, classifies them according to their radii (small object removal) or their axes (non-perpendicular or non-circular etched tracks), their mean intensity value and their orientation through the minor and major axes. Images of the detectors' surfaces are input to the code, which generates text and figure files as output, including the number of counted etched tracks with the associated track parameters, histograms and a figure showing the edge and center of the detected etched tracks. The ATMS code runs hierarchically in calibration, testing and measurement modes to demonstrate reliability, repeatability and adaptability. The Fuzzy Hough Transform is used for the estimation of the number of etched tracks and their parameters, providing results even in cases where overlap and arbitrary orientation occur. The ATMS code is finally converted to a standalone file, which makes it able to run outside the MATLAB environment. - Highlights: ► Presenting a novel code named ATMS for nuclear track measurements. ► Execution in three modes for generality, adaptability and reliability. ► Using the Fuzzy Hough Transform for overlap detection and orientation recognition. ► Using the DFT as a filter for noise removal in track images. ► Processing noisy track images and demonstration of the presented code 17. Multi-resolution inversion algorithm for the attenuated radon transform KAUST Repository Barbano, Paolo Emilio 2011-09-01 We present a FAST implementation of the Inverse Attenuated Radon Transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed by combining a memory-efficient implementation of the analytical inversion formula (AIF [1], [2]) with a wavelet-based version of a recently discovered regularization technique [3].
The paper introduces all the main aspects of the new AIF, as well as numerical experiments on real and simulated data. These display a substantial improvement in reconstruction quality when compared to linear or iterative algorithms. © 2011 IEEE. 18. W-transform method for feature-oriented multiresolution image retrieval Energy Technology Data Exchange (ETDEWEB) Kwong, M.K.; Lin, B. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div. 1995-07-01 Image database management is important in the development of multimedia technology, since an enormous number of digital images is likely to be generated within the next few decades as computers, television, VCRs, cable, telephone and various imaging devices become integrated. Effective image indexing and retrieval systems are urgently needed so that images can be easily organized, searched, transmitted, and presented. Here, the authors present a local-feature-oriented image indexing and retrieval method based on Kwong and Tang's W-transform. Multiresolution histogram comparison is an effective method for content-based image indexing and retrieval. However, most recent approaches perform multiresolution analysis for whole images but do not exploit the local features present in the images. Since the W-transform is featured by its ability to handle images of arbitrary size, with no periodicity assumptions, it provides a natural tool for analyzing local image features and building indexing systems based on such features. In this approach, the histograms of the local features of images are used in the indexing system. The system not only can retrieve images that are similar or identical to the query images but also can retrieve images that contain features specified in the query images, even if the retrieved images as a whole might be very different from the query images. The local-feature-oriented method also provides a speed advantage over the global multiresolution histogram comparison method.
The feature-oriented approach is expected to be applicable to managing large-scale image systems such as video databases and medical image databases. 19. Layering extraction from subsurface radargrams over Greenland and the Martian NPLD by combining wavelet analysis with Hough transforms Science.gov (United States) Xiong, Si-Ting; Muller, Jan-Peter 2017-04-01 Extracting lines from imagery is a solved problem in the field of edge detection. Unlike images taken by a camera, radargrams are a set of radar echo profiles, which record the wave energy reflected by subsurface reflectors at each location of the radar footprint along the satellite's ground track. The radargrams record where there is a dielectric contrast caused by different deposits and other subsurface features, such as facies and internal distributions like porosity and fluids. Among the subsurface features, layering is an important one, which reflects the sequence of seasonal or yearly deposits on the ground [1-2]. In the field of image processing, line detection methods, such as the Radon transform or Hough transform, are able to extract these subsurface layers from rasterised versions of the echograms. However, due to the attenuation of radar waves whilst propagating through geological media, radargrams sometimes suffer from intensity gradients and high background noise. These attributes of radargrams cause detection errors when conventional line detection methods are applied directly. In this study, we have developed a continuous wavelet analysis technique that is applied directly to the radar echo profiles in a radargram in order to detect segmented lines; a conventional line detection method, such as the Hough transform, can then be applied to connect these segmented lines.
This processing chain is tested using datasets from a radargram acquired by the Multi-channel Coherent Radar Depth Sounder (MCoRDS) on an airborne platform in Greenland and a radargram acquired by the SHAllow RADar (SHARAD) on board the Mars Reconnaissance Orbiter (MRO) [3] over the Martian North Polar Layered Deposits (NPLD). Keywords: Subsurface mapping, Radargram, SHARAD, Greenland, Martian NPLD, Subsurface layering, line detection References: [1] Phillips, R. J., et al. "Mars north polar deposits: Stratigraphy, age, and geodynamical response." Science 320.5880 (2008): 1182-1185. [2] Cutts 20. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping Directory of Open Access Journals (Sweden) Abhijeet Ravankar 2016-05-01 Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which the SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted by the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Point (ICP) algorithm, the performance of which degrades with an increasing number of points.
We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization. 1. A robust Hough transform algorithm for determining the radiation centers of circular and rectangular fields with subpixel accuracy Energy Technology Data Exchange (ETDEWEB) Du Weiliang; Yang, James [Department of Radiation Physics, University of Texas M D Anderson Cancer Center, 1515 Holcombe Blvd, Unit 94, Houston, TX 77030 (United States)], E-mail: wdu@mdanderson.org 2009-02-07 Uncertainty in localizing the radiation field center is among the major components that contribute to the overall positional error and thus must be minimized. In this study, we developed a Hough transform (HT)-based computer algorithm to localize the radiation center of a circular or rectangular field with subpixel accuracy. We found that the HT method detected the centers of the test circular fields with an absolute error of 0.037 ± 0.019 pixels. On a typical electronic portal imager with 0.5 mm image resolution, this mean detection error was translated to 0.02 mm, which was much finer than the image resolution. It is worth noting that the subpixel accuracy described here does not include experimental uncertainties such as linac mechanical instability or room laser inaccuracy. The HT method was more accurate and more robust to image noise and artifacts than the traditional center-of-mass method. Application of the HT method in Winston-Lutz tests was demonstrated to measure the ball-radiation center alignment with subpixel accuracy. Finally, the method was applied to quantitative evaluation of the radiation center wobble during collimator rotation. 2.
Localization of skeletal and aortic landmarks in trauma CT data based on the discriminative generalized Hough transform Science.gov (United States) Lorenz, Cristian; Hansis, Eberhard; Weese, Jürgen; Carolus, Heike 2016-03-01 Computed tomography is the modality of choice for poly-trauma patients to rapidly assess the skeletal and vascular integrity of the whole body. Often several scans with and without contrast medium or with different spatial resolution are acquired. Efficient reading of the resulting extensive set of image data is vital, since it is often time critical to initiate the necessary therapeutic actions. A set of automatically found landmarks can facilitate navigation in the data and enables anatomy-oriented viewing. Following this intention, we selected a comprehensive set of 17 skeletal and 5 aortic landmarks. Landmark localization models for the Discriminative Generalized Hough Transform (DGHT) were automatically created based on a set of about 20 training images with ground truth landmark positions. A hierarchical setup with 4 resolution levels was used. Localization results were evaluated on a separate test set, consisting of 50 to 128 images (depending on the landmark) with available ground truth landmark locations. The image data cover a large amount of variability caused by differences of field-of-view, resolution, contrast agent, patient gender and pathologies. The median localization error for the set of aortic landmarks was 14.4 mm and for the set of skeletal landmarks 5.5 mm. Median localization errors for individual landmarks ranged from 3.0 mm to 31.0 mm. The runtime performance for the whole landmark set is about 5 s on a typical PC. 3.
Multi-Resolution Wavelet-Transformed Image Analysis of Histological Sections of Breast Carcinomas Directory of Open Access Journals (Sweden) Hae-Gil Hwang 2005-01-01 Full Text Available Multi-resolution images of histological sections of breast cancer tissue were analyzed using texture features of Haar- and Daubechies-transform wavelets. Tissue samples analyzed were from ductal regions of the breast and included benign ductal hyperplasia, ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (CA). To assess the correlation between computerized image analysis and visual analysis by a pathologist, we created a two-step classification system based on feature extraction and classification. In the feature extraction step, we extracted texture features from wavelet-transformed images at 10× magnification. In the classification step, we applied two types of classifiers to the extracted features, namely a statistics-based multivariate (discriminant) analysis and a neural network. Using features from second-level Haar transform wavelet images in combination with discriminant analysis, we obtained classification accuracies of 96.67% and 87.78% for the training and testing sets (90 images each), respectively. We conclude that the best classifier of carcinomas in histological sections of breast tissue is a discriminant function using the texture features from the second-level Haar transform wavelet images. 4. Fourier-based quantification of renal glomeruli size using Hough transform and shape descriptors. Science.gov (United States) 2017-11-01 Analysis of glomeruli geometry is important in histopathological evaluation of renal microscopic images. Due to the shape and size disparity of even glomeruli of the same kidney, automatic detection of these renal objects is not an easy task. Although manual measurements are time consuming and at times are not very accurate, they are commonly used in medical centers.
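The wavelet texture features used in the breast-histology study above can be illustrated with a one-level 2-D Haar transform followed by subband energies. The toy patch, subband naming, and the energy feature below are illustrative assumptions, not the paper's exact feature set:

```python
def haar2d_level1(img):
    """One level of the 2-D Haar transform: pairwise averages (low-pass) and
    differences (high-pass) along rows, then along columns.
    `img` is a list of lists with even dimensions."""
    rows, cols = len(img), len(img[0])
    low, high = [], []
    for r in img:
        low.append([(r[i] + r[i + 1]) / 2 for i in range(0, cols, 2)])
        high.append([(r[i] - r[i + 1]) / 2 for i in range(0, cols, 2)])
    def cols_pass(mat):
        lo = [[(mat[j][i] + mat[j + 1][i]) / 2 for i in range(len(mat[0]))]
              for j in range(0, rows, 2)]
        hi = [[(mat[j][i] - mat[j + 1][i]) / 2 for i in range(len(mat[0]))]
              for j in range(0, rows, 2)]
        return lo, hi
    LL, LH = cols_pass(low)
    HL, HH = cols_pass(high)
    return LL, LH, HL, HH

def energy(band):
    """Mean squared coefficient: a standard wavelet texture feature."""
    n = sum(len(row) for row in band)
    return sum(v * v for row in band for v in row) / n

# a vertically striped 4x4 patch: the stripes show up as energy in the
# row-direction high-pass subband (HL in this naming)
patch = [[0, 8, 0, 8],
         [0, 8, 0, 8],
         [0, 8, 0, 8],
         [0, 8, 0, 8]]
LL, LH, HL, HH = haar2d_level1(patch)
features = [energy(b) for b in (LL, LH, HL, HH)]
```

A classifier (discriminant analysis or a neural network, as in the study) would then be trained on such per-subband feature vectors.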
In this paper, a new method based on the Fourier transform and a set of shape descriptors is proposed to detect these objects and their geometrical parameters. To reach this goal, a database of 400 randomly selected regions is used; 200 of these regions are parts of glomeruli and the other 200 do not belong to renal corpuscles. An ROC curve is used to decide which descriptor classifies the two groups better. The f_measure, which combines the tpr (true positive rate) and fpr (false positive rate), is also proposed to select the optimal threshold for the descriptors. A combination of three parameters (solidity, eccentricity, and the mean squared error of a fitted ellipse) provided the best result in terms of f_measure for distinguishing the desired regions. Then, the Fourier transform of the outer edges is calculated to form a complete curve out of the separated region(s). The generality of the proposed model is verified by cross validation, which resulted in a tpr of 94% and an fpr of 5%. Calculations of the glomerulus and Bowman's space using the algorithm are also compared with manual measurements by a renal pathologist, yielding errors of 5.9%, 5.4%, and 6.26% for the capsule area, Bowman's space, and glomerular area, respectively. Tests on different glomeruli with various shapes show the robustness and reliability of our method. Therefore, it could be used to characterize renal diseases and glomerular disorders by measuring morphological changes accurately and expeditiously. Copyright © 2017 Elsevier B.V. All rights reserved. 5. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech. Science.gov (United States) Campo, D.; Quintero, O. L.; Bastidas, M. 2016-04-01 We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition.
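Descriptor-threshold selection from tpr and fpr, as described in the glomeruli study above, can be sketched as a scan over candidate thresholds. The paper's exact f_measure formula is not reproduced here; the geometric-mean score sqrt(tpr * (1 - fpr)) below is a stand-in assumption for any score that rewards high tpr and low fpr:

```python
import math

def best_threshold(pos_scores, neg_scores):
    """Scan every observed score as a threshold (score >= t => positive)
    and keep the threshold maximizing a tpr/fpr trade-off score."""
    best_t, best_f = None, -1.0
    for t in sorted(set(pos_scores + neg_scores)):
        tpr = sum(s >= t for s in pos_scores) / len(pos_scores)
        fpr = sum(s >= t for s in neg_scores) / len(neg_scores)
        f = math.sqrt(tpr * (1 - fpr))  # stand-in combination, an assumption
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f

# toy descriptor values: glomerular regions tend to score high, others low
pos = [0.9, 0.8, 0.75, 0.7]
neg = [0.4, 0.3, 0.55, 0.2]
t, f = best_threshold(pos, neg)
```

On this toy data the classes separate cleanly, so the scan finds a threshold with tpr = 1 and fpr = 0.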
Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1/Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients on which a set of features was extracted and analyzed statistically in order to differentiate emotional states. ANNs proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract emotional content in audio signals, yielding a high accuracy rate in the classification of emotional states without the need to use other kinds of classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger, and sadness, plus neutrality, for a total of seven states to identify. 6. Polar exponential sensor arrays unify iconic and Hough space representation Science.gov (United States) Weiman, Carl F. R. 1990-01-01 The log-polar coordinate system, inherent in both polar exponential sensor arrays and log-polar remapped video imagery, is identical to the coordinate system of its corresponding Hough transform parameter space. The resulting unification of iconic and Hough domains simplifies computation for line recognition and eliminates the slope quantization problems inherent in the classical Cartesian Hough transform. The geometric organization of the algorithm is more amenable to massively parallel architectures than that of the Cartesian version. The neural architecture of the human visual cortex meets the geometric requirements to execute 'in-place' log-Hough algorithms of the kind described here. 7.
Improved Hough search for gravitational wave pulsars International Nuclear Information System (INIS) 2006-01-01 We describe an improved version of the Hough transform search for continuous gravitational waves from isolated neutron stars assuming the input to be short segments of Fourier transformed data. The method presented here takes into account possible nonstationarities of the detector noise and the amplitude modulation due to the motion of the detector. These two effects are taken into account for the first stage only, i.e. the peak selection, to create the time-frequency map of our data, while the Hough transform itself is performed in the standard way 8. Variability Extraction and Synthesis via Multi-Resolution Analysis using Distribution Transformer High-Speed Power Data Energy Technology Data Exchange (ETDEWEB) Chamana, Manohar [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mather, Barry A [National Renewable Energy Laboratory (NREL), Golden, CO (United States) 2017-10-19 A library of load variability classes is created to produce scalable synthetic data sets using historical high-speed raw data. These data are collected from distribution monitoring units connected at the secondary side of a distribution transformer. Because of the irregular patterns and large volume of historical high-speed data sets, the utilization of current load characterization and modeling techniques is challenging. Multi-resolution analysis techniques are applied to extract the necessary components and eliminate the unnecessary components from the historical high-speed raw data to create the library of classes, which are then utilized to create new synthetic load data sets. A validation is performed to ensure that the synthesized data sets contain the same variability characteristics as the training data sets.
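Multi-resolution separation of a high-speed signal into a slow trend plus per-level "variability" components, as described in the load-data entry above, can be sketched with a multilevel Haar DWT. This is an illustrative stand-in, not NREL's actual processing chain:

```python
def haar_analysis(signal, levels):
    """Return (approximation, [details_level1, ..., details_levelN]).
    len(signal) must be divisible by 2**levels."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        avg = [(approx[i] + approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        dif = [(approx[i] - approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        details.append(dif)  # fine-scale "variability" at this level
        approx = avg         # remaining slow component
    return approx, details

def haar_synthesis(approx, details):
    """Invert haar_analysis exactly (perfect reconstruction)."""
    out = list(approx)
    for dif in reversed(details):
        nxt = []
        for a, d in zip(out, dif):
            nxt.extend([a + d, a - d])
        out = nxt
    return out

sig = [8.0, 6.0, 7.0, 9.0, 3.0, 1.0, 2.0, 4.0]
approx, details = haar_analysis(sig, 3)
rebuilt = haar_synthesis(approx, details)
```

Zeroing selected detail levels before synthesis is one simple way to "eliminate the unnecessary components" while keeping the rest of the signal intact.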
The synthesized data sets are intended to be utilized in quasi-static time-series studies for distribution system planning studies on a granular scale, such as detailed PV interconnection studies. 9. First evaluation of the CPU, GPGPU and MIC architectures for real time particle tracking based on Hough transform at the LHC International Nuclear Information System (INIS) V Halyo, V Halyo; LeGresley, P; Lujan, P; Karpusenko, V; Vladimirov, A 2014-01-01 Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on multi-core Intel i7-3770 and Intel Xeon E5-2697v2 CPUs, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi 7120 coprocessor. Preliminary time performance will be presented 10. First Evaluation of the CPU, GPGPU and MIC Architectures for Real Time Particle Tracking based on Hough Transform at the LHC CERN Document Server Halyo, V.; Lujan, P.; Karpusenko, V.; Vladimirov, A. 
2014-04-07 Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on a multi-core Intel Xeon E5-2697v2 CPU, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi... 11. Circular Hough transform diffraction analysis: A software tool for automated measurement of selected area electron diffraction patterns within Digital MicrographTM International Nuclear Information System (INIS) Mitchell, D.R.G. 2008-01-01 A software tool (script and plugin) for computing circular Hough transforms (CHT) in Digital Micrograph TM has been developed, for the purpose of automated analysis of selected area electron diffraction patterns (SADPs) of polycrystalline materials. The CHT enables the diffraction pattern centre to be determined with sub-pixel accuracy, regardless of the exposure condition of the transmitted beam or if a beam stop is present. Radii of the diffraction rings can also be accurately measured with sub-pixel precision. If the pattern is calibrated against a known camera length, then d-spacings with an accuracy of better than 1% can be obtained.
These measurements require no a priori knowledge of the pattern and very limited user interaction. The accuracy of the CHT is degraded by distortion introduced by the projector lens, and this should be minimised prior to pattern acquisition. A number of optimisations in the CHT software enable rapid processing of patterns; a typical analysis of a 1kx1k image taking just a few minutes. The CHT tool appears robust and is even able to accurately measure SADPs with very incomplete diffraction rings due to texture effects. This software tool is freely downloadable via the Internet 12. Circular Hough transform diffraction analysis: A software tool for automated measurement of selected area electron diffraction patterns within Digital Micrograph{sup TM} Energy Technology Data Exchange (ETDEWEB) Mitchell, D.R.G. [Institute of Materials and Engineering Science, ANSTO, PMB 1, Menai, NSW 2234 (Australia)], E-mail: drm@ansto.gov.au 2008-03-15 A software tool (script and plugin) for computing circular Hough transforms (CHT) in Digital Micrograph{sup TM} has been developed, for the purpose of automated analysis of selected area electron diffraction patterns (SADPs) of polycrystalline materials. The CHT enables the diffraction pattern centre to be determined with sub-pixel accuracy, regardless of the exposure condition of the transmitted beam or if a beam stop is present. Radii of the diffraction rings can also be accurately measured with sub-pixel precision. If the pattern is calibrated against a known camera length, then d-spacings with an accuracy of better than 1% can be obtained. These measurements require no a priori knowledge of the pattern and very limited user interaction. The accuracy of the CHT is degraded by distortion introduced by the projector lens, and this should be minimised prior to pattern acquisition. A number of optimisations in the CHT software enable rapid processing of patterns; a typical analysis of a 1kx1k image taking just a few minutes. 
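Ring-radius measurement of the kind the CHT tool performs can be approximated, once the pattern centre is known, by a radial distance histogram with a sub-pixel refinement step. This is a simplified stand-in for the circular Hough transform, not the Digital Micrograph tool itself; the bin width and refinement rule are assumptions:

```python
import math

def ring_radius(edge_points, center, bin_width=1.0):
    """Estimate a diffraction-ring radius: coarse histogram of point-to-centre
    distances, then sub-pixel refinement as the mean distance in the peak bin."""
    cx, cy = center
    dists = [math.hypot(x - cx, y - cy) for x, y in edge_points]
    bins = {}
    for d in dists:
        b = int(d / bin_width)
        bins[b] = bins.get(b, 0) + 1
    peak = max(bins, key=bins.get)
    in_peak = [d for d in dists if int(d / bin_width) == peak]
    return sum(in_peak) / len(in_peak)

# synthetic ring of radius 40.3 px around centre (64, 64); an incomplete
# ring (fewer angular samples) would work the same way
pts = [(64 + 40.3 * math.cos(2 * math.pi * a / 99),
        64 + 40.3 * math.sin(2 * math.pi * a / 99)) for a in range(99)]
r = ring_radius(pts, (64, 64))
```

Because the estimate averages over all edge points in the peak bin, it stays meaningful even when the ring is incomplete due to texture effects.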
The CHT tool appears robust and is even able to accurately measure SADPs with very incomplete diffraction rings due to texture effects. This software tool is freely downloadable via the Internet. 13. Signal and image multiresolution analysis CERN Document Server Ouahabi, Abdeldjalil 2012-01-01 Multiresolution analysis using the wavelet transform has received considerable attention in recent years by researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail with many inherent advantages, including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc. This book aims to provide a simple formalization and new clarity on multiresolution analysis, rendering accessible obscure techniques, and merging, unifying or completing 14. Design and application of discrete wavelet packet transform based multiresolution controller for liquid level system. Science.gov (United States) Paul, Rimi; Sengupta, Anindita 2017-11-01 A new controller based on the discrete wavelet packet transform (DWPT) for a liquid level system (LLS) has been presented here. This controller generates the control signal using node coefficients of the error signal, which reflect many implicit phenomena such as process dynamics, measurement noise and the effect of external disturbances. Through simulation results on the LLS problem, this controller is shown to perform faster than both the discrete wavelet transform based controller and a conventional proportional integral controller. Also, it is more efficient in terms of its ability to provide better noise rejection. To overcome the wind-up phenomenon caused by actuator saturation, an anti-wind-up technique is applied to the conventional PI controller and compared to the wavelet packet transform based controller. In this case too, the packet-based controller is found to be better than the others.
This work has also been extended to an analogous first-order RC plant as well as a second-order plant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved. 15. Collaborative Proposal: Transforming How Climate System Models are Used: A Global, Multi-Resolution Approach Energy Technology Data Exchange (ETDEWEB) Estep, Donald 2013-04-15 Despite the great interest in regional modeling for both weather and climate applications, regional modeling is not yet at the stage that it can be used routinely and effectively for climate modeling of the ocean. The overarching goal of this project is to transform how climate models are used by developing and implementing a robust, efficient, and accurate global approach to regional ocean modeling. To achieve this goal, we will use theoretical and computational means to resolve several basic modeling and algorithmic issues. The first task is to develop techniques for transitioning between parameterized and high-fidelity regional ocean models as the discretization grid transitions from coarse to fine regions. The second task is to develop estimates for the error in scientifically relevant quantities of interest that provide a systematic way to automatically determine where refinement is needed in order to obtain accurate simulations of dynamic and tracer transport in regional ocean models. The third task is to develop efficient, accurate, and robust time-stepping schemes for variable spatial resolution discretizations used in regional ocean models of dynamics and tracer transport. The fourth task is to develop frequency-dependent eddy viscosity finite element and discontinuous Galerkin methods and study their performance and effectiveness for simulation of dynamics and tracer transport in regional ocean models. These four projects share common difficulties and will be approached using a common computational and mathematical toolbox.
This is a multidisciplinary project involving faculty and postdocs from Colorado State University, Florida State University, and Penn State University along with scientists from Los Alamos National Laboratory. The completion of the tasks listed within the discussion of the four sub-projects will go a long way towards meeting our goal of developing superior regional ocean models that will transform how climate system models are used. 16. Probabilistic active recognition of multiple objects using Hough-based geometric matching features CSIR Research Space (South Africa) Govender, N 2015-01-01 Full Text Available be recognized simultaneously, and occlusion and clutter (through distracter objects) is common. We propose a representation for object viewpoints using Hough transform based geometric matching features, which are robust in such circumstances. We show how... 17. SU-G-IeP1-01: A Novel MRI Post-Processing Algorithm for Visualization of the Prostate LDR Brachytherapy Seeds and Calcifications Based On B0 Field Inhomogeneity Correction and Hough Transform Energy Technology Data Exchange (ETDEWEB) Nosrati, R [Ryerson University, Toronto, Ontario (Canada); Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); Soliman, A; Owrangi, A [Sunnybrook Research Institute, Toronto, Ontario (Canada); Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); Ghugre, N [Sunnybrook Research Institute, Toronto, Ontario (Canada); University of Toronto, Toronto, ON (Canada); Morton, G [Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); University of Toronto, Toronto, ON (Canada); Pejovic-Milic, A [Ryerson University, Toronto, Ontario (Canada); Song, W [Ryerson University, Toronto, Ontario (Canada); Sunnybrook Research Institute, Toronto, Ontario (Canada); Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); University of Toronto, Toronto, ON (Canada) 2016-06-15 Purpose: This study aims at developing an MRI-only workflow for post-implant dosimetry of
the prostate LDR brachytherapy seeds. The specific goal here is to develop a post-processing algorithm to produce positive contrast for the seeds and prostatic calcifications and differentiate between them on MR images. Methods: An agar-based phantom incorporating four dummy seeds (I-125) and five calcifications of different sizes (from sheep cortical bone) was constructed. Seeds were placed arbitrarily in the coronal plane. The phantom was scanned with a 3T Philips Achieva MR scanner using an 8-channel head coil array. Multi-echo turbo spin echo (ME-TSE) and multi-echo gradient recalled echo (ME-GRE) sequences were acquired. Due to minimal susceptibility artifacts around seeds, the ME-GRE sequence (flip angle=15; TR/TE=20/2.3/2.3; resolution=0.7×0.7×2 mm³) was further processed. The induced field inhomogeneity due to the presence of titanium-encapsulated seeds was corrected using a B0 field map. The B0 map was calculated from the ME-GRE sequence by computing the phase difference at two different echo times. Initially, the product of the first echo and the B0 map was calculated. The features corresponding to the seeds were then extracted in three steps: 1) the edge pixels were isolated using the “Prewitt” operator; 2) the Hough transform was employed to detect ellipses approximately matching the dimensions of the seeds; and 3) at the position and orientation of the detected ellipses an ellipse was drawn on the B0-corrected image. Results: The proposed B0-correction process produced positive contrast for the seeds and calcifications. The Hough transform based on the Prewitt edge operator successfully identified all the seeds according to their ellipsoidal shape and dimensions in the edge image. Conclusion: The proposed post-processing algorithm successfully visualized the seeds and calcifications with positive contrast and differentiated between them according to their shapes. Further 18.
On frame multiresolution analysis DEFF Research Database (Denmark) Christensen, Ole 2003-01-01 We use the freedom in frame multiresolution analysis to construct tight wavelet frames (even in the case where the refinable function does not generate a tight frame). In cases where a frame multiresolution does not lead to a construction of a wavelet frame we show how one can nevertheless... 19. A multiresolution model of rhythmic expectancy NARCIS (Netherlands) Smith, L.M.; Honing, H.; Miyazaki, K.; Hiraga, Y.; Adachi, M.; Nakajima, Y.; Tsuzaki, M. 2008-01-01 We describe a computational model of rhythmic cognition that predicts expected onset times. A dynamic representation of musical rhythm, the multiresolution analysis using the continuous wavelet transform, is used. This representation decomposes the temporal structure of a musical rhythm into time 20. EYE CONTROLLED SWITCHING USING CIRCULAR HOUGH TRANSFORM OpenAIRE Sagar Lakhmani 2014-01-01 The paper presents a hands-free interface to electrical appliances or devices. This technology is intended to replace conventional switching devices for use by the disabled. It is a new way to interact with the electrical or electronic devices that we use in our daily life. The paper illustrates how the movement of the eye cornea and blinking can be used for switching the devices. The basic circle detection algorithm is used to determine the position of the eye. Eye blinking is used... 1. Multiresolution signal decomposition schemes NARCIS (Netherlands) J. Goutsias (John); H.J.A.M. Heijmans (Henk) 1998-01-01 [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis 2.
A multiresolution method for solving the Poisson equation using high order regularization DEFF Research Database (Denmark) Hejlesen, Mads Mølholm; Walther, Jens Honore 2016-01-01 We present a novel high order multiresolution Poisson solver based on regularized Green's function solutions to obtain exact free-space boundary conditions while using fast Fourier transforms for computational efficiency. Multiresolution is achieved through local refinement patches and regularized Green's functions corresponding to the difference in the spatial resolution between the patches. The full solution is obtained utilizing the linearity of the Poisson equation enabling super-position of solutions. We show that the multiresolution Poisson solver produces convergence rates... 3. A multiresolution remeshed Vortex-In-Cell algorithm using patches DEFF Research Database (Denmark) Rasmussen, Johannes Tophøj; Cottet, Georges-Henri; Walther, Jens Honore 2011-01-01 We present a novel multiresolution Vortex-In-Cell algorithm using patches of varying resolution. The Poisson equation relating the fluid vorticity and velocity is solved using Fast Fourier Transforms subject to free space boundary conditions. Solid boundaries are implemented using the semi... 4. Compact optical processor for Hough and frequency domain features Science.gov (United States) Ott, Peter 1996-11-01 Shape recognition is necessary in a broad range of applications such as traffic sign or work piece recognition. It requires not only neighborhood processing of the input image pixels but also global interconnection among them.
The Hough transform (HT) performs such a global operation and it is well suited to the preprocessing stage of a shape recognition system. Translation invariant features can be easily calculated from the Hough domain. We have implemented on the computer a neural network shape recognition system which contains a HT, a feature extraction, and a classification layer. The advantage of this approach is that the total system can be optimized with well-known learning techniques and that it can exploit the parallelism of the algorithms. However, the HT is a time consuming operation. Parallel, optical processing is therefore advantageous. Several systems have been proposed, based on space multiplexing with arrays of holograms and CGHs, or time multiplexing with acousto-optic processors, or by image rotation with incoherent and coherent astigmatic optical processors. We took up the last mentioned approach because 2D array detectors are read out line by line, so a 2D detector can achieve the same speed and is easier to implement. Coherent processing can allow the implementation of filters in the frequency domain. Features based on wedge/ring, Gabor, or wavelet filters have been proven to show good discrimination capabilities for texture and shape recognition. The astigmatic lens system which is derived from the mathematical formulation of the HT is long and contains a non-standard, astigmatic element. By methods of lens transformations for coherent applications, we map the original design to a shorter lens with a smaller number of well separated standard elements and with the same coherent system response. The final lens design still contains the frequency plane for filtering, and ray-tracing shows diffraction limited performance. Image rotation can be done 5.
Multiresolution analysis of Bursa Malaysia KLCI time series Science.gov (United States) Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed 2017-05-01 In general, a time series is simply a sequence of numbers collected at regular intervals over a period of time. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity data, and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series by using the time as well as the frequency domain analysis. After that, prediction can be executed for the desired system for in-sample forecasting. In this study, multiresolution analysis with the assistance of discrete wavelet transforms (DWT) and the maximal overlap discrete wavelet transform (MODWT) will be used to pinpoint special characteristics of Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, further case study discussions include the modeling of Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results. 6. A study of Hough Transform-based fingerprint alignment algorithms CSIR Research Space (South Africa) Mlambo, CS 2014-10-01 Full Text Available the implementation of each algorithm. The comparison is performed by considering the alignment results computed using each group of algorithms when varying the number of minutiae points, rotation angle, and translation. In addition, the memory usage, computing time... 7. Comparison of effective Hough transform-based fingerprint alignment approaches CSIR Research Space (South Africa) Mlambo, CS 2014-08-01 Full Text Available points set with larger rotation and a small number of points. The DRBA approach was found to perform better with minutiae points with a large amount of translation, and the computational time was less than that of the LMBA approach.
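The Hough-transform-based fingerprint alignment approaches compared in the CSIR studies above share one core idea: vote over discretized (rotation, translation) hypotheses generated by all template/query minutiae pairs, then keep the transformation with the most support. A hedged sketch, with the discretization, cell size, and function names as illustrative assumptions rather than the exact algorithms evaluated:

```python
import math

def hough_align(template, query, angle_steps=36, cell=4):
    """template/query: lists of (x, y) minutiae.
    Returns the winning (theta, dx, dy) mapping query onto template."""
    votes = {}
    for ax, ay in template:
        for bx, by in query:
            for k in range(angle_steps):
                th = 2 * math.pi * k / angle_steps
                # translation hypothesis that would map (bx, by) onto (ax, ay)
                dx = ax - (bx * math.cos(th) - by * math.sin(th))
                dy = ay - (bx * math.sin(th) + by * math.cos(th))
                key = (k, round(dx / cell), round(dy / cell))
                votes[key] = votes.get(key, 0) + 1
    (k, qx, qy), _ = max(votes.items(), key=lambda kv: kv[1])
    return 2 * math.pi * k / angle_steps, qx * cell, qy * cell

tmpl = [(10, 10), (30, 14), (22, 40), (5, 28)]
qry = [(x - 12, y + 7) for x, y in tmpl]  # pure translation of the template
theta, dx, dy = hough_align(tmpl, qry)
```

All correct pairings reinforce one accumulator cell while mismatched pairings scatter, so the recovered translation is accurate to the cell quantization (here 4 px).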
However, the memory usage... 8. Multiresolution forecasting for futures trading using wavelet decompositions. Science.gov (United States) Zhang, B L; Coggins, R; Jabri, M A; Dersch, D; Flower, B 2001-01-01 We investigate the effectiveness of a financial time-series forecasting strategy which exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant, scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). We apply the Bayesian method of automatic relevance determination to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation, or by another perceptron which learns the weight of each scale in the prediction of the original time series. The forecast results are then passed to a money management system to generate trades. 9. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering KAUST Repository Sicat, Ronell Barrera; Kruger, Jens; Moller, Torsten; Hadwiger, Markus 2014-01-01 This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined 10. Multiresolution Analysis Adapted to Irregularly Spaced Data Directory of Open Access Journals (Sweden) Anissa Mokraoui 2009-01-01 Full Text Available This paper investigates the mathematical background of multiresolution analysis in the specific context where the signal is represented by irregularly sampled data at known locations.
The study is related to the construction of nested piecewise polynomial multiresolution spaces represented by their corresponding orthonormal bases. Using simple spline basis orthonormalization procedures involves the construction of a large family of orthonormal spline scaling bases defined on consecutive bounded intervals. However, if no additional conditions other than those coming from multiresolution are imposed on each bounded interval, the orthonormal basis is represented by a set of discontinuous scaling functions. The spline wavelet basis also has the same problem. Moreover, the dimension of the corresponding wavelet basis increases with the spline degree. An appropriate orthonormalization procedure of the basic spline space basis, whatever the degree of the spline, allows us to (i) provide continuous scaling and wavelet functions, (ii) reduce the number of wavelets to only one, and (iii) reduce the complexity of the filter bank. Examples of the multiresolution implementations illustrate that the main features of the traditional multiresolution are also satisfied. 11. Invariant Hough Random Ferns for Object Detection and Tracking Directory of Open Access Journals (Sweden) Yimin Lin 2014-01-01 Full Text Available This paper introduces an invariant Hough random ferns (IHRF) approach incorporating rotation and scale invariance into the local feature description, random ferns classifier training, and Hough voting stages. It is especially suited for object detection under changes in object appearance and scale, partial occlusions, and pose variations. The efficacy of this approach is validated through experiments on a large set of challenging benchmark datasets, and the results demonstrate that the proposed method outperforms state-of-the-art conventional methods such as bounding-box-based and part-based methods.
Additionally, we propose an efficient clustering scheme based on the local patches' appearance and their geometric relations that can provide pixel-accurate, top-down segmentations from IHRF back-projections. This refined segmentation can be used to improve the quality of online object tracking because it avoids the drifting problem. Thus, an online tracking framework based on IHRF, which is trained and updated in each frame to distinguish and segment the object from the background, is established. Finally, the experimental results on both object segmentation and long-term object tracking show that this method yields accurate and robust tracking performance in a variety of complex scenarios, especially in cases of severe occlusions and nonrigid deformations. 12. Interactive indirect illumination using adaptive multiresolution splatting. Science.gov (United States) Nichols, Greg; Wyman, Chris 2010-01-01 Global illumination provides a visual richness not achievable with the direct illumination models used by most interactive applications. To generate global effects, numerous approximations attempt to reduce global illumination costs to levels feasible in interactive contexts. One such approximation, reflective shadow maps, samples a shadow map to identify secondary light sources whose contributions are splatted into eye space. This splatting introduces significant overdraw that is usually reduced by artificially shrinking each splat's radius of influence. This paper introduces a new multiresolution approach for interactively splatting indirect illumination. Instead of reducing GPU fill rate by reducing splat size, we reduce fill rate by rendering splats into a multiresolution buffer. This takes advantage of the low-frequency nature of diffuse and glossy indirect lighting, allowing rendering of indirect contributions at low resolution where lighting changes slowly and at high resolution near discontinuities.
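The multiresolution buffer at the heart of the splatting entry can be pictured as an averaging pyramid: low-frequency indirect lighting is rendered into coarse levels at a fraction of the fill rate. A toy construction of such a pyramid (a sketch of the general idea only; the actual method selects resolution per splat and refines near discontinuities):

```python
import numpy as np

def downsample2x(buf):
    """Average 2x2 blocks: one coarser level of a multiresolution buffer."""
    h, w = buf.shape
    return buf.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_pyramid(buf, levels):
    """Fine-to-coarse pyramid; slowly varying lighting can be splatted
    into the coarse levels, touching far fewer pixels."""
    pyramid = [buf]
    for _ in range(levels):
        pyramid.append(downsample2x(pyramid[-1]))
    return pyramid

lighting = np.ones((8, 8))          # a slowly varying indirect term
pyr = build_pyramid(lighting, 3)
print([p.shape for p in pyr])       # -> [(8, 8), (4, 4), (2, 2), (1, 1)]
```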
Because this multiresolution rendering occurs on a per-splat basis, we can significantly reduce fill rate without arbitrarily clipping splat contributions below a given threshold; those regions are simply rendered at a coarse resolution. 13. A Quantitative Analysis of an EEG Epileptic Record Based on Multiresolution Wavelet Coefficients Directory of Open Access Journals (Sweden) Mariel Rosenblatt 2014-11-01 Full Text Available The characterization of the dynamics associated with electroencephalogram (EEG) signals, combining an orthogonal discrete wavelet transform analysis with quantifiers originating from information theory, is reviewed. In addition, an extension of this methodology based on multiresolution quantities, called wavelet leaders, is presented. In particular, the temporal evolution of Shannon entropy and the statistical complexity evaluated with different sets of multiresolution wavelet coefficients are considered. Both methodologies are applied to the quantitative EEG time series analysis of a tonic-clonic epileptic seizure, and comparative results are presented. In particular, even when both methods describe the dynamical changes of the EEG time series, the one based on wavelet leaders presents a better time resolution. 14. Omstrede kwessies en teksinterne korrektiewe in Skilpoppe van Barrie Hough OpenAIRE M.J. Fritz; E.S. Van der Westhuizen 2010-01-01 Controversial issues and text-internal correctives in Skilpoppe by Barrie Hough This article focuses on the binary relations between controversial issues and text-internal correctives by making use of examples from "Skilpoppe" (Babushka dolls) (2002) by Barrie Hough. The article starts with a discussion of controversial issues, including the four main categories, identified as violence, sexuality, politics and religion, and continues briefly to the censorship as enacted before the Films an... 15.
A new class of morphological pyramids for multiresolution image analysis NARCIS (Netherlands) Roerdink, Jos B.T.M.; Asano, T; Klette, R; Ronse, C 2003-01-01 We study nonlinear multiresolution signal decomposition based on morphological pyramids. Motivated by a problem arising in multiresolution volume visualization, we introduce a new class of morphological pyramids. In this class the pyramidal synthesis operator always has the same form, i.e. a 16. Target recognition by wavelet transform International Nuclear Information System (INIS) Li Zhengdong; He Wuliang; Zheng Xiaodong; Cheng Jiayuan; Peng Wen; Pei Chunlan; Song Chen 2002-01-01 The wavelet transform has the important property of multiresolution, which presents a pyramid structure, and this property coincides with the way people distinguish objects, from coarse to fine and from large to tiny. In addition, the wavelet transform helps to reduce image noise, simplify calculation, and capture characteristic points of the target image. A method of target recognition by wavelet transform is provided. 17. Multiresolution Computation of Conformal Structures of Surfaces Directory of Open Access Journals (Sweden) Xianfeng Gu 2003-10-01 Full Text Available An efficient multiresolution method to compute global conformal structures of nonzero genus triangle meshes is introduced. The homology and cohomology groups of meshes are computed explicitly, then a basis of harmonic one-forms and a basis of holomorphic one-forms are constructed. A progressive mesh is generated to represent the original surface at different resolutions. The conformal structure is computed for the coarse level first, then used as an estimate for that of the finer level; using the conjugate gradient method, it is refined to the conformal structure of the finer level. 18.
Network coding for multi-resolution multicast DEFF Research Database (Denmark) 2013-01-01 A method, apparatus and computer program product for utilizing network coding for multi-resolution multicast is presented. A network source partitions source content into a base layer and one or more refinement layers. The network source receives a respective one or more push-back messages from one...... or more network destination receivers, the push-back messages identifying the one or more refinement layers suited for each one of the one or more network destination receivers. The network source computes a network code involving the base layer and the one or more refinement layers for at least one...... of the one or more network destination receivers, and transmits the network code to the one or more network destination receivers in accordance with the push-back messages.... 19. On analysis of electroencephalogram by multiresolution-based energetic approach Science.gov (United States) Sevindir, Hulya Kodal; Yazici, Cuneyt; Siddiqi, A. H.; Aslan, Zafer 2013-10-01 Epilepsy is a common brain disorder in which normal neuronal activity is affected. Electroencephalography (EEG) is the recording of electrical activity along the scalp produced by the firing of neurons within the brain. The main application of EEG is in the case of epilepsy. On a standard EEG, some abnormalities indicate epileptic activity. EEG signals, like many biomedical signals, are highly non-stationary by nature. For the investigation of biomedical signals, in particular EEG signals, wavelet analysis has found a prominent position owing to its ability to analyze such signals. The wavelet transform is capable of separating the signal energy among different frequency scales, and a good compromise between temporal and frequency resolution is obtained.
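The energy-per-scale and entropy quantifiers used in the EEG entries above can be illustrated compactly: decompose with a DWT, normalize the band energies into a distribution, and take its Shannon entropy. A hedged NumPy sketch with a hand-rolled Haar transform (not the authors' pipeline; the expectation is that a narrow-band signal scores lower than broadband noise):

```python
import numpy as np

def haar_levels(x, levels):
    """Detail coefficients per scale (plus the final approximation)
    from a multilevel Haar DWT."""
    x = np.asarray(x, dtype=float)
    bands = []
    for _ in range(levels):
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        bands.append(d)
    bands.append(x)          # final approximation band
    return bands

def wavelet_entropy(x, levels):
    """Shannon entropy of the relative wavelet energy: low when energy
    is concentrated in one band, high when it spreads across scales."""
    energies = np.array([np.sum(b ** 2) for b in haar_levels(x, levels)])
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
tone = np.sin(np.linspace(0, 16 * np.pi, 1024))   # narrow-band signal
noise = rng.standard_normal(1024)                 # broadband signal
assert wavelet_entropy(tone, 5) < wavelet_entropy(noise, 5)
```

Tracking such an entropy over sliding windows is the kind of temporal-evolution quantifier these studies apply to seizure records.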
The present study is an attempt at a better understanding of the mechanism causing the epileptic disorder and at accurate prediction of the occurrence of seizures. In the present paper, following Magosso's work [12], we identify typical patterns of energy redistribution before and during the seizure using multiresolution wavelet analysis on data from Kocaeli University's Medical School. 20. A Misleading Review of Response Bias: Comment on McGrath, Mitchell, Kim, and Hough (2010) Science.gov (United States) Rohling, Martin L.; Larrabee, Glenn J.; Greiffenstein, Manfred F.; Ben-Porath, Yossef S.; Lees-Haley, Paul; Green, Paul; Greve, Kevin W. 2011-01-01 In the May 2010 issue of "Psychological Bulletin," R. E. McGrath, M. Mitchell, B. H. Kim, and L. Hough published an article entitled "Evidence for Response Bias as a Source of Error Variance in Applied Assessment" (pp. 450-470). They argued that response bias indicators used in a variety of settings typically have insufficient data to support such… 1. Deep learning for classification of islanding and grid disturbance based on multi-resolution singular spectrum entropy Science.gov (United States) Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng 2018-02-01 Because the detection of islanding is easily interfered with by grid disturbances, an island detection device may make misjudgments, with the consequence that the photovoltaic system is taken out of service. The detection device must therefore be able to differentiate islanding from grid disturbance. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture.
As a signal processing method applied after wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as output, from which we can extract the intrinsically different features of islanding and grid disturbance. With the features extracted, deep learning is utilized to classify islanding and grid disturbance. Simulation results indicate that the method can achieve its goal while being highly accurate, so that mistaken withdrawal of the photovoltaic system from the power grid can be avoided. 2. An efficient multi-resolution GA approach to dental image alignment Science.gov (United States) Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany 2006-02-01 Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using GA progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments. 3.
Morphological pyramids in multiresolution MIP rendering of large volume data: Survey and new results NARCIS (Netherlands) Roerdink, J.B.T.M. We survey and extend nonlinear signal decompositions based on morphological pyramids, and their application to multiresolution maximum intensity projection (MIP) volume rendering with progressive refinement and perfect reconstruction. The structure of the resulting multiresolution rendering 4. Application of multi-scale wavelet entropy and multi-resolution Volterra models for climatic downscaling Science.gov (United States) Sehgal, V.; Lakhanpal, A.; Maheswaran, R.; Khosa, R.; Sridhar, Venkataramana 2018-01-01 This study proposes a wavelet-based multi-resolution modeling approach for statistical downscaling of GCM variables to mean monthly precipitation for five locations at Krishna Basin, India. Climatic datasets from NCEP are used for training the proposed models (Jan. '69 to Dec. '94), which are then applied to corresponding CanCM4 GCM variables to simulate precipitation for the validation (Jan. '95-Dec. '05) and forecast (Jan. '06-Dec. '35) periods. The observed precipitation data are obtained from the India Meteorological Department (IMD) gridded precipitation product at 0.25 degree spatial resolution. This paper proposes a novel Multi-Scale Wavelet Entropy (MWE) based approach for clustering climatic variables into suitable clusters using a k-means methodology. Principal Component Analysis (PCA) is used to obtain the representative Principal Components (PC) explaining 90-95% of the variance for each cluster. A multi-resolution non-linear approach combining Discrete Wavelet Transform (DWT) and Second Order Volterra (SoV) is used to model the representative PCs to obtain the downscaled precipitation for each downscaling location (W-P-SoV model).
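The PCA step described above (keep the leading components until roughly 90-95% of the variance is explained) can be sketched via an SVD of the centered data matrix. The synthetic "climate variables" below are invented for illustration and have nothing to do with the NCEP data:

```python
import numpy as np

def pca_components(data, var_target=0.95):
    """Return the scores of the leading principal components that
    together explain at least `var_target` of the variance, via SVD of
    the centered data matrix (rows = observations, cols = variables)."""
    centered = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(explained), var_target) + 1)
    return centered @ vt[:k].T, explained[:k]

rng = np.random.default_rng(1)
# 200 observations of 6 correlated variables driven by 2 hidden factors.
factors = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 6))
data = factors @ mixing + 0.05 * rng.standard_normal((200, 6))
scores, explained = pca_components(data, 0.95)
print(scores.shape[1], round(float(explained.sum()), 3))
```

Because the data are driven by two factors plus small noise, one or two components suffice to pass the variance target, which is exactly the dimensionality reduction the clustering-plus-PCA step exploits.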
The results establish that wavelet-based multi-resolution SoV models perform significantly better than the traditional Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) based frameworks. It is observed that the proposed MWE-based clustering and subsequent PCA help reduce the dimensionality of the input climatic variables, while capturing more variability compared to stand-alone k-means (no MWE). The proposed models perform better in estimating the number of precipitation events during the non-monsoon periods, whereas the models with clustering but without MWE over-estimate the rainfall during the dry season. 5. An ROI multi-resolution compression method for 3D-HEVC Science.gov (United States) Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan 2017-09-01 3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with the improvement of the video resolution, which will bring challenges to the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and compression of multi-resolution preprocessed video as alternative data according to the network conditions. First, semantic contours are detected by modified structured forests to suppress the color textures inside objects. The ROI is then determined utilizing the contour neighborhood along with the face region and foreground area of the scene. Secondly, the RGB-D videos are divided into slices and compressed via 3D-HEVC under different resolutions for selection by the audiences and applications.
Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI preprocessed videos with 3D-HEVC. The temporal and spatial details of non-ROI areas are reduced in the low-resolution videos, so the ROI will be better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while the bit rate is reduced. 6. Telescopic multi-resolution augmented reality Science.gov (United States) Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold 2014-05-01 To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known principles of physics, biology, and chemistry. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information but also to conduct thought experiments through AR. As a result of the consistency, this framework allows us to explore a high-dimensional parameter space of measured and unmeasured regions.
In this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as the 'make believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, pedagogically visualized in zoom-in-and-out, consistent, multi-scale approximations. 7. Multiresolution molecular mechanics: Implementation and efficiency Energy Technology Data Exchange (ETDEWEB) Biyikli, Emre; To, Albert C., E-mail: albertto@pitt.edu 2017-01-01 Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the implemented software along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed.
Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of the parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction in the number of atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3–8.5 times in efficiency over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated. 8. Transformation DEFF Research Database (Denmark) Bock, Lars Nicolai 2011-01-01 The article discusses the word "transformation", taking as its point of departure both how the word is used in architectural terminology and the word's potential content and suitability within that same terminology.... 9. TRANSFORMATION Energy Technology Data Exchange (ETDEWEB) LACKS, S.A. 2003-10-09 Transformation, which alters the genetic makeup of an individual, is a concept that intrigues the human imagination. In Streptococcus pneumoniae such transformation was first demonstrated. Perhaps our fascination with genetics derived from our ancestors observing their own progeny, with its retention and assortment of parental traits, but such interest must have been accelerated after the dawn of agriculture.
It was in pea plants that Gregor Mendel in the late 1800s examined inherited traits and found them to be determined by physical elements, or genes, passed from parents to progeny. In our day, the material basis of these genetic determinants was revealed to be DNA by the lowly bacteria, in particular, the pneumococcus. For this species, transformation by free DNA is a sexual process that enables cells to sport new combinations of genes and traits. Genetic transformation of the type found in S. pneumoniae occurs naturally in many species of bacteria (70), but initially only a few other transformable species were found, namely, Haemophilus influenzae, Neisseria meningitidis, Neisseria gonorrhoeae, and Bacillus subtilis (96). Natural transformation, which requires a set of genes evolved for the purpose, contrasts with artificial transformation, which is accomplished by shocking cells either electrically, as in electroporation, or by ionic and temperature shifts. Although such artificial treatments can introduce very small amounts of DNA into virtually any type of cell, the amounts introduced by natural transformation are a million-fold greater, and S. pneumoniae can take up as much as 10% of its cellular DNA content (40). 10. Single-resolution and multiresolution extended-Kalman-filter-based reconstruction approaches to optical refraction tomography. Science.gov (United States) Naik, Naren; Vasu, R M; Ananthasayanam, M R 2010-02-20 The problem of reconstruction of a refractive-index distribution (RID) in optical refraction tomography (ORT) with optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data.
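The EKF machinery used in the tomography entry generalizes the linear Kalman filter's predict/update cycle to nonlinear models. As a minimal illustration of that cycle only (a scalar toy with fixed noise covariances, not the paper's adaptive multiresolution EKF):

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.04):
    """Scalar Kalman filter for a nearly constant state: the linear
    special case of the EKF predict/update cycle. q and r are the
    process- and measurement-noise covariances (fixed here; the paper's
    approach estimates the noise statistics adaptively)."""
    x, p = 0.0, 1.0            # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q              # predict: covariance grows by process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(2)
truth = 0.7
z = truth + 0.2 * rng.standard_normal(300)   # noisy measurements
est = kalman_1d(z)
# The filtered estimate ends far closer to the truth than a raw sample.
print(abs(est[-1] - truth))
```

An EKF replaces the identity state and measurement maps above with linearizations of nonlinear models at each step; the multiresolution variant applies the same cycle in a wavelet-transformed state space.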
The initialization of the biases and covariances corresponding to the state and measurement noise is discussed. The state and measurement noise biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, where comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average correction per projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of EKF for ORT in single-resolution and multiresolution formulations, and also in the use of adaptive estimation of the EKF's noise covariances. 11. Application of a Hough Search for Continuous Gravitational Waves on Data from the Fifth LIGO Science Run Science.gov (United States) 2014-01-01 We report on an all-sky search for periodic gravitational waves in the frequency range 50-1000 Hz with the first derivative of frequency in the range -8.9 × 10^-10 Hz/s to zero, in two years of data collected during LIGO's fifth science run. Our results employ a Hough transform technique, introducing a χ² test and analysis of coincidences between the signal levels in years 1 and 2 of observations, which offers a significant improvement in the product of strain sensitivity with compute cycles per data sample compared to previously published searches.
Since our search yields no surviving candidates, we present results in the form of frequency-dependent, 95% confidence upper limits on the strain amplitude h_0. The most stringent upper limit from year 1 is 1.0 × 10^-24 in the 158.00-158.25 Hz band. In year 2, the most stringent upper limit is 8.9 × 10^-25 in the 146.50-146.75 Hz band. This improved detection pipeline, which is at least two orders of magnitude more computationally efficient than our flagship Einstein@Home search, will be important for 'quicklook' searches in the Advanced LIGO and Virgo detector era. 12. Science.gov (United States) Baker, W.R. 1959-08-25 Transformers of a type adapted for use with extremely high-power vacuum tubes, where current requirements may be of the order of 2,000 to 200,000 amperes, are described. The transformer casing has the form of a re-entrant section extended through an opening in one end of the cylinder to form a coaxial terminal arrangement. A toroidal multi-turn primary winding is disposed within the casing in coaxial relationship therein. In a second embodiment, means are provided for forming the casing as a multi-turn secondary. The transformer is characterized by minimized resistance heating, minimized external magnetic flux, and an economical construction. 13. Decompositions of bubbly flow PIV velocity fields using discrete wavelets multi-resolution and multi-section image method International Nuclear Information System (INIS) Choi, Je-Eun; Takei, Masahiro; Doh, Deog-Hee; Jo, Hyo-Jae; Hassan, Yassin A.; Ortiz-Villafuerte, Javier 2008-01-01 Currently, wavelet transforms are widely used for the analyses of particle image velocimetry (PIV) velocity vector fields. This is because the wavelet provides not only spatial information about the velocity vectors but also information in the time and frequency domains. In this study, a discrete wavelet transform is applied to real PIV images of bubbly flows.
The vector fields obtained by a self-made cross-correlation PIV algorithm were used for the discrete wavelet transform. The performances of the discrete wavelet transforms were investigated by changing the level of discretization. The images decomposed by wavelet multi-resolution showed conspicuous characteristics of the bubbly flows for the different levels. Areas of high spatial bubble concentration could be evaluated by the constructed discrete wavelet transform algorithm, in which high-level wavelets play dominant roles in revealing the flow characteristics. 14. Adaptive multi-resolution Modularity for detecting communities in networks Science.gov (United States) Chen, Shi; Wang, Zhi-Zhong; Bao, Mei-Hua; Tang, Liang; Zhou, Ji; Xiang, Ju; Li, Jian-Ming; Yi, Chen-He 2018-02-01 Community structure is a common topological property of complex networks, which has attracted much attention from various fields. Optimizing quality functions for community structures is a popular strategy for community detection, Modularity optimization being one example. Here, we introduce a general definition of Modularity, from which several classical (multi-resolution) Modularity functions can be derived, and then propose a kind of adaptive (multi-resolution) Modularity that can combine the advantages of different Modularity functions. By applying the Modularity to various synthetic and real-world networks, we study the behaviors of the methods, showing the validity and advantages of the multi-resolution Modularity in community detection. The adaptive Modularity, as a kind of multi-resolution method, can naturally solve the first-type limit of Modularity and detect communities at different scales; it can quicken the disconnecting of communities and delay the breakup of communities in heterogeneous networks; and thus it is expected to generate stable community structures in networks more effectively and to have stronger tolerance against the second-type limit of Modularity. 15.
Multiresolution analysis applied to text-independent phone segmentation International Nuclear Information System (INIS) Cherniz, Analía S; Torres, María E; Rufiner, Hugo L; Esposito, Anna 2007-01-01 Automatic speech segmentation is of fundamental importance in different speech applications. The most common implementations are based on hidden Markov models. They use a statistical modelling of the phonetic units to align the data along a known transcription. This is an expensive and time-consuming process, because of the huge amount of data needed to train the system. Text-independent speech segmentation procedures have been developed to overcome some of these problems. These methods detect transitions in the evolution of the time-varying features that represent the speech signal. Speech representation plays a central role in the segmentation task. In this work, two new speech parameterizations, based on the continuous multiresolution entropy, using Shannon entropy, and the continuous multiresolution divergence, using the Kullback-Leibler distance, are proposed. These approaches have been compared with the classical Melbank parameterization. The proposed encodings significantly increase the segmentation performance. The parameterization based on the continuous multiresolution divergence shows the best results, increasing the number of correctly detected boundaries and decreasing the number of erroneously inserted points. This suggests that parameterizations based on multiresolution information measures provide information related to acoustic features that take into account phonemic transitions. 16. Automatic Lumbar Vertebrae Segmentation in Fluoroscopic Images Via Optimised Concurrent Hough Transform National Research Council Canada - National Science Library Zheng, Yalin 2001-01-01 .... Digital videofluoroscopy (DVF) was widely used to obtain images for motion studies.
This can provide motion sequences of the lumbar spine, but the images obtained often suffer due to noise, exacerbated by the very low radiation dosage... 17. The application of Hough transform-based fingerprint alignment on match-on-card CSIR Research Space (South Africa) Mlambo, S 2015-03-01 Full Text Available of these cards, has led to the need for further improvements on smart cards combined with fingerprint biometrics. Due to the insufficient memory space and few instruction sets in Java smart cards, developers and programmers are faced with implementing efficient... 18. Tracking within Hadronic Showers in the CALICE SDHCAL prototype using a Hough Transform Technique Czech Academy of Sciences Publication Activity Database Deng, Z.; Wang, Y.; Yue, Q.; Cvach, Jaroslav; Janata, Milan; Kovalčuk, Michal; Kvasnička, Jiří; Polák, Ivo; Smolík, Jan; Vrba, Václav; Zálešák, Jaroslav; Zuklín, Josef 2017-01-01 Roč. 12, May (2017), s. 1-15, č. článku P05009. ISSN 1748-0221 Institutional support: RVO:68378271 Keywords : calorimeter methods * calorimeters * gaseous detectors Subject RIV: BF - Elementary Particles and High Energy Physics OBOR OECD: Particles and field physics Impact factor: 1.220, year: 2016 19. Automatic 3D building reconstruction from airborne laser scanning and cadastral data using Hough transform DEFF Research Database (Denmark) Bodum, Lars; Overby, Jens; Kjems, Erik 2004-01-01 degree of details. However, it is possible to create virtual 3D models of buildings, by processing these data. Roof polygons are generated using airborne laser scanning on a 1x1 meter grid and ground plans (footprints) extracted from technical feature maps. An effective algorithm is used for fixing...... might lead to multiple slightly differing planes. Such planes are detected and merged. Intersecting planes are identified, and a polygon mesh of the roof is constructed.
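Several of the entries above (fingerprint alignment, roof-plane extraction, shower tracking) rely on the Hough transform's voting scheme. As a rough illustration of the idea, not of any specific paper's implementation, here is a minimal line-detecting Hough transform in the (theta, rho) parameterization; the point set, grid sizes and bin resolutions are made up for the example:

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, size=64):
    """Accumulate votes in (theta, rho) space for a set of edge points and
    return the strongest line: each point votes for every line through it."""
    diag = math.hypot(size, size)
    n_rho = int(2 * diag / rho_res) + 1
    acc = [[0] * n_rho for _ in range(n_theta)]
    for (x, y) in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + diag) / rho_res))
            acc[t][r] += 1
    # the accumulator cell with the most votes identifies the dominant line
    votes, t, r = max((acc[t][r], t, r)
                      for t in range(n_theta) for r in range(n_rho))
    return votes, math.pi * t / n_theta, r * rho_res - diag

# points on the horizontal line y = 10, plus one outlier
pts = [(x, 10) for x in range(0, 40)] + [(5, 33)]
votes, theta, rho = hough_lines(pts)
```

The 40 collinear points all fall into one (theta, rho) cell near theta = pi/2, rho = 10, while the outlier's votes stay scattered; plane detection in 3D works the same way with a three-parameter accumulator.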
Due to the low precision of the laser scanning, a rule-based postprocessing of the roof is applied before adding the walls.... 20. Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals. Science.gov (United States) Verma, Gyanendra K; Tiwary, Uma Shanker 2014-11-15 The purpose of this paper is twofold: (i) to investigate the emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions and (ii) to recognize and predict emotion from the measured physiological signals using a multiresolution approach. The multimodal physiological signals are: Electroencephalogram (EEG) (32 channels) and peripheral (8 channels: Galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We have discussed the theories of emotion modeling based on i) basic emotions, ii) cognitive appraisal and physiological response approach and iii) the dimensional approach and proposed a three continuous dimensional representation model for emotions. The clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. The experiments are performed to classify different emotions using four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for SVM, MLP, KNN and MMC classifiers respectively. The best accuracy is for 'Depressing' with 85.46% using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance.
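Entry 20 builds per-channel features with the Discrete Wavelet Transform. A minimal sketch of the idea, using a plain Haar DWT and detail-energy features (the synthetic channel data and the choice of energy as the feature are illustrative assumptions, not the paper's exact pipeline):

```python
def haar_dwt(signal):
    """One level of the orthonormal Haar discrete wavelet transform:
    returns (approximation, detail) coefficients for an even-length signal."""
    s = 2 ** -0.5
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def multiresolution_features(signal, levels=3):
    """Energy of the detail coefficients at each decomposition level,
    a common per-channel feature vector for classification."""
    feats, a = [], list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(sum(x * x for x in d))
    return feats

# a toy "channel" with one step change: only the coarsest scale carries energy
channel = [1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0]
feats = multiresolution_features(channel, levels=3)
```

Because the Haar transform is orthonormal, the detail energies plus the final approximation energy sum to the signal energy, which is why scale-wise energies make well-behaved features.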
Some of the channel data may be correlated, but they may contain supplementary information. In comparison with the results given by others, the high accuracy of 85% with 13 emotions and 32 subjects from our proposed method clearly proves the potential of our multimodal fusion approach. Copyright © 2013 Elsevier Inc. All rights reserved. 1. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization Science.gov (United States) Nielson, Gregory M. 1997-01-01 This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first. 2. 3D shape detection of the indoor space based on 3D-Hough method OpenAIRE 安齋, 達也; ANZAI, Tatsuya 2013-01-01 This paper describes methods for detecting the 3D shapes of the indoor space that is represented as a combination of planes such as a wall, desk, or whatnot. Detecting the planes makes it possible to perform calibration of multiple sensors and 3D mapping, and then produces various services such as the acquisition of life logs, AR interaction, and intruder detection. This paper proposes and verifies three algorithms. First, it mentions a way to use 2D-Hough. The proposed technique converts 3D dat... 3.
Multiresolution persistent homology for excessively large biomolecular datasets Energy Technology Data Exchange (ETDEWEB) Xia, Kelin; Zhao, Zhixiong [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824 (United States); Wei, Guo-Wei, E-mail: wei@math.msu.edu [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824 (United States); Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan 48824 (United States); Department of Biochemistry and Molecular Biology, Michigan State University, East Lansing, Michigan 48824 (United States) 2015-10-07 Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis.
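Entry 3 rests on persistent homology over a filtration. In the 0-dimensional case the machinery reduces to union-find with the elder rule, which can be sketched compactly; the toy 1D sublevel-set signal below is invented for illustration and is far simpler than the rigidity-density filtrations the paper uses:

```python
def sublevel_persistence(values):
    """0-dimensional persistence of the sublevel-set filtration of a 1D
    signal: components are born at local minima and die, by the elder
    rule, when they merge with an older component."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    birth, pairs, active = {}, [], set()
    for i in sorted(range(n), key=lambda k: values[k]):
        active.add(i)
        birth[i] = values[i]
        for j in (i - 1, i + 1):           # try to merge with active neighbours
            if j in active:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:
                    ri, rj = rj, ri        # rj is the younger root: it dies
                pairs.append((birth[rj], values[i]))
                parent[rj] = ri
    pairs = [(b, d) for (b, d) in pairs if b < d]   # drop zero-persistence pairs
    global_min = min(range(n), key=lambda k: values[k])
    pairs.append((values[global_min], float("inf")))  # essential component
    return sorted(pairs)

# two pits of different depth separated by ridges
sig = [0.0, 3.0, 1.0, 4.0, 2.0]
bars = sublevel_persistence(sig)
```

Each returned (birth, death) bar records how long a component survives as the filtration level rises; the multiresolution idea in entry 3 amounts to smoothing the input density before running such a sweep.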
The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs. 4. Multiresolution with Hierarchical Modulations for Long Term Evolution of UMTS Directory of Open Access Journals (Sweden) Soares Armando 2009-01-01 Full Text Available In the Long Term Evolution (LTE) of UMTS, the Interactive Mobile TV scenario is expected to be a popular service. By using multiresolution with hierarchical modulations this service is expected to be broadcast to larger groups, achieving significant reduction in power transmission or increasing the average throughput. Interactivity in the uplink direction will not be affected by multiresolution in the downlink channels, since it will be supported by dedicated uplink channels. The presence of interactivity will allow for a certain amount of link quality feedback for groups or individuals. As a result, an optimization of the achieved throughput will be possible. In this paper, system-level simulations of multi-cellular networks considering broadcast/multicast transmissions using the OFDM/OFDMA-based LTE technology are presented to evaluate the capacity, in terms of the number of TV channels with given bit rates, or the total spectral efficiency and coverage. Multiresolution with hierarchical modulations is presented to evaluate the achievable throughput gain compared to single-resolution systems of the Multimedia Broadcast/Multicast Service (MBMS) standardised in Release 6. 5. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering KAUST Repository Sicat, Ronell Barrera 2014-12-31 This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range.
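The sparse pdf volumes of entry 5 replace each downsampled value by a probability density of its neighborhood. A 1D toy version (block size, bin count and data are all illustrative, and the paper's sparse 4D Gaussian-mixture machinery is not reproduced) shows what averaging loses and a pdf keeps:

```python
from collections import Counter

def block_pdfs(volume, block=4, bins=4, vmax=1.0):
    """Downsample a 1D 'volume' by storing, per block, a normalized
    histogram (pdf) of its samples instead of a single averaged value."""
    pdfs = []
    for start in range(0, len(volume), block):
        c = Counter(min(int(v / vmax * bins), bins - 1)
                    for v in volume[start:start + block])
        n = sum(c.values())
        pdfs.append({b: cnt / n for b, cnt in sorted(c.items())})
    return pdfs

# first block alternates 0 and 1: plain averaging would report 0.5, a value
# that never occurs; the pdf keeps both modes intact
vol = [0.0, 1.0, 0.0, 1.0, 0.25, 0.25, 0.25, 0.25]
pdfs = block_pdfs(vol)
```

At render time, applying a transfer function to such a histogram (a weighted sum over bins) gives the same answer at every resolution level, which is the consistency property the paper is after.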
Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. 6. MR-CDF: Managing multi-resolution scientific data Science.gov (United States) Salem, Kenneth 1993-01-01 MR-CDF is a system for managing multi-resolution scientific data sets. It is an extension of the popular CDF (Common Data Format) system. MR-CDF provides a simple functional interface to client programs for storage and retrieval of data. Data is stored so that low-resolution versions of the data can be provided quickly. Higher resolutions are also available, but not as quickly. By managing data with MR-CDF, an application can be relieved of the low-level details of data management, and can easily trade data resolution for improved access time. 7. Multisensor multiresolution data fusion for improvement in classification Science.gov (United States) Rubeena, V.; Tiwari, K. C. 2016-04-01 The rapid advancements in technology have facilitated easy availability of multisensor and multiresolution remote sensing data. Multisensor, multiresolution data contain complementary information, and fusion of such data may yield significant, application-dependent information which may otherwise remain trapped within the individual datasets.
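The MR-CDF design described in entry 6, low-resolution versions retrievable quickly and higher resolutions more slowly, can be caricatured with a simple mean pyramid; `build_pyramid` and `fetch` are hypothetical names for illustration, not the CDF API:

```python
def build_pyramid(data):
    """Precompute successively coarser versions of a 1D array (assumed to
    have power-of-two length) by pairwise averaging; levels[0] is the
    coarsest version, levels[-1] the full-resolution data."""
    levels = [list(data)]
    while len(levels[0]) > 1:
        prev = levels[0]
        coarser = [(prev[i] + prev[i + 1]) / 2 for i in range(0, len(prev), 2)]
        levels.insert(0, coarser)
    return levels

def fetch(levels, level):
    """Return the stored array at the requested resolution level; low
    levels are tiny, hence fast to retrieve or transmit."""
    return levels[level]

levels = build_pyramid([1.0, 3.0, 5.0, 7.0])
```

A client that only needs a quick preview calls `fetch(levels, 0)` and pays for one value instead of the whole array, which is the resolution-for-access-time trade the abstract describes.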
The present work aims at improving classification by fusing features of coarse resolution hyperspectral (1 m) LWIR and fine resolution (20 cm) RGB data. The classification map comprises eight classes. The class names are Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil and Unclassified. The processing methodology for hyperspectral LWIR data comprises dimensionality reduction, resampling of data by an interpolation technique for registering the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. In the case of fine resolution RGB data, the vegetation index is computed for classifying the vegetation class and the morphological building index is calculated for buildings. In order to extract textural features, occurrence and co-occurrence statistics are considered and features are extracted from all three bands of the RGB data. After extracting the features, Support Vector Machines (SVMs) have been used for training and classification. To increase the classification accuracy, post-processing steps such as the removal of spurious noise (e.g., salt-and-pepper noise) are applied, followed by filtering through majority voting within the objects for better object classification. 8. Multiresolution Network Temporal and Spatial Scheduling Model of Scenic Spot Directory of Open Access Journals (Sweden) Peng Ge 2013-01-01 Full Text Available Tourism is one of the pillar industries of the world economy. Low-carbon tourism will be the mainstream direction of scenic spots' development, and the path of low-carbon tourism development is to develop the economy and protect the environment simultaneously. However, as the number of tourists increases, the loads on scenic spots get out of control. Instantaneous overload at some spots creates the impression of full capacity for the whole scenic spot.
Therefore, realizing real-time scheduling becomes the primary purpose of scenic spot management. This paper divides the tourism distribution system into several logically related subsystems and constructs a temporal and spatial multiresolution network scheduling model according to the regularity of scenic spots' overload phenomenon in time and space. It also defines a dynamic distribution probability and an equivalent dynamic demand to realize real-time prediction. We define a gravitational function between fields and take it as the utility of the schedule; after resolving the transportation model at each resolution, the system achieves a hierarchical balance between demand and capacity. The last part of the paper analyzes the time complexity of constructing a multiresolution distribution system. 9. Spatial Quality of Manually Geocoded Multispectral and Multiresolution Mosaics Directory of Open Access Journals (Sweden) Andrija Krtalić 2008-05-01 Full Text Available The digital airborne multisensor and multiresolution system for collection of information (images) about mine suspected areas was created within the European Commission project Airborne Minefield Area Reduction (ARC, EC IST-2000-25300, http://www.arc.vub.ac.be) to gain a better perspective on mine suspected areas (MSP) in the Republic of Croatia. The system consists of a matrix camera (visible and near-infrared range of the electromagnetic spectrum, 0.4-1.1 µm), a thermal camera (thermal range of the electromagnetic spectrum, 8-14 µm) and a hyperspectral linear scanner. Because of the specific purpose and the objects sought on the scene, the flights for collecting the images took place at heights from 130 m to 900 m above the ground. The result of a small relative flight height and large MSPs was a large number of images covering the MSPs. Therefore, the need arose to merge images into larger parts, for a better perspective on whole MSPs and on the interaction of detected object influences on the scene.
The system did not include a module for automatic mosaicking and geocoding, so mosaicking and then geocoding were done manually. This process made the classification of the scene (better distinguishing of objects on the scene) and the subsequent fusion of multispectral and multiresolution images possible. Classification and image fusion can even be done with manual mosaicking and geocoding; this article demonstrates that claim. 10. Traffic Multiresolution Modeling and Consistency Analysis of Urban Expressway Based on Asynchronous Integration Strategy Directory of Open Access Journals (Sweden) Liyan Zhang 2017-01-01 Full Text Available The paper studies a multiresolution traffic flow simulation model of an urban expressway. Firstly, after comparison with a two-level hybrid model, a three-level multiresolution hybrid model was chosen. Then, the multiresolution simulation framework and integration strategies are introduced. Thirdly, the paper proposes an urban expressway multiresolution traffic simulation model with an asynchronous integration strategy based on Set Theory, which includes three submodels: macromodel, mesomodel, and micromodel. After that, the applicable conditions and derivation process of the three submodels are discussed in detail. In addition, in order to simulate and evaluate the multiresolution model, a "simple simulation scenario" of the North-South Elevated Expressway in Shanghai has been established. The simulation results showed the following. (1) Volume-density relationships of the three submodels are consistent with detector data. (2) When traffic density is high, the macromodel has a high precision and smaller error and the dispersion of results is smaller. Compared with the macromodel, simulation accuracies of the micromodel and mesomodel are lower but errors are bigger. (3) The multiresolution model can simulate characteristics of traffic flow, capture traffic waves, and keep the consistency of traffic state transitions.
Finally, the results showed that the novel multiresolution model achieves higher simulation accuracy and is feasible and effective in real traffic simulation scenarios. 11. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation Directory of Open Access Journals (Sweden) Peng Shao 2014-08-01 Full Text Available The principle of multiresolution segmentation is presented in detail in this study, and the Canny algorithm was applied for edge detection of a remotely sensed image based on this principle. The target image was divided into regions based on object-oriented multiresolution segmentation and edge detection. Furthermore, an object hierarchy was created, and a series of features (water bodies, vegetation, roads, residential areas, bare land) and other information were extracted using spectral and geometrical features. The results indicate that edge detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% according to the confusion matrix. 12. Extended generalized Lagrangian multipliers for magnetohydrodynamics using adaptive multiresolution methods Directory of Open Access Journals (Sweden) Domingues M. O. 2013-12-01 Full Text Available We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control.
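Entry 12's adaptivity comes from Harten's cell-average multiresolution analysis: fine cell averages are predicted from coarse ones, and large prediction errors (details) flag where the mesh must stay refined. A one-level 1D sketch with a first-order predictor (the real scheme uses higher-order prediction and rigorous error control, so this is only a caricature):

```python
def harten_details(fine):
    """One level of cell-average multiresolution analysis in 1D: coarse
    cell averages are means of fine pairs, and details are the errors of
    predicting the fine cells from the coarse ones (here the crudest
    predictor: copy the coarse average)."""
    coarse = [(fine[2 * i] + fine[2 * i + 1]) / 2 for i in range(len(fine) // 2)]
    details = [fine[2 * i] - coarse[i] for i in range(len(coarse))]
    return coarse, details

def refine_flags(details, eps=1e-3):
    """Mark coarse cells whose detail exceeds eps: only these need the
    locally refined mesh."""
    return [abs(d) > eps for d in details]

# smooth (constant) region followed by a sharp jump
fine = [1.0, 1.0, 1.0, 1.0, 1.0, 9.0, 9.0, 9.0]
coarse, details = harten_details(fine)
flags = refine_flags(details)
```

Only the cell straddling the jump is flagged, so an adaptive solver keeps the fine grid there and evolves the rest on the cheap coarse grid.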
An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to enforce the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparing with the available exact solution. 13. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation Directory of Open Access Journals (Sweden) Hezerul Abdul Karim 2004-09-01 Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castango 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rate, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation. 14. Multiresolution 3-D reconstruction from side-scan sonar images. Science.gov (United States) Coiras, Enrique; Petillot, Yvan; Lane, David M 2007-02-01 In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region.
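The Lambertian diffuse model invoked in entry 14 makes the returned intensity proportional to seabed reflectivity, beam-pattern gain and the cosine of the incidence angle; a minimal forward-model sketch (the interesting part of the paper, inverting this model by an expectation-maximization-style optimization, is not shown, and all numbers are illustrative):

```python
import math

def lambertian_intensity(reflectivity, beam_gain, incidence_deg):
    """Forward Lambertian diffuse model, as used in simplified form for
    side-scan image formation: intensity is proportional to reflectivity,
    beam gain, and the cosine of the angle between the sonar ray and the
    seabed normal."""
    return reflectivity * beam_gain * math.cos(math.radians(incidence_deg))

# grazing geometry returns less energy than normal incidence
i_normal = lambertian_intensity(0.8, 1.0, 0.0)
i_grazing = lambertian_intensity(0.8, 1.0, 60.0)
```

Inverting the model means asking, per pixel, which combination of reflectivity and local slope (hence incidence angle) best explains the observed intensity, which is what yields the elevation map.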
On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed. 15. Multiresolution strategies for the numerical solution of optimal control problems Science.gov (United States) Jain, Sachin There exist many numerical techniques for solving optimal control problems, but less work has been done in the field of making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high resolution (dense) uniform grid. This requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using as few computational resources as possible, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed and is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm.
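The mesh-refinement idea of entry 15, spending grid points only near irregularities, can be sketched as recursive midpoint insertion driven by an interpolation-error "detail" threshold; the step function, thresholds and level limits below are illustrative choices, not the paper's scheme:

```python
def adaptive_grid(f, a, b, max_level=6, eps=1e-3):
    """Multiresolution grid compression on [a, b]: recursively insert a
    midpoint only where the linear-interpolation error (the 'detail'
    coefficient) exceeds eps, so grid points cluster near irregularities
    of f while smooth regions stay coarse."""
    pts = {a, b}

    def refine(x0, x1, level):
        xm = 0.5 * (x0 + x1)
        detail = abs(f(xm) - 0.5 * (f(x0) + f(x1)))
        if level < 2 or detail > eps:      # always resolve a few base levels
            if level >= max_level:
                return                     # depth cap stops the recursion
            pts.add(xm)
            refine(x0, xm, level + 1)
            refine(xm, x1, level + 1)

    refine(a, b, 0)
    return sorted(pts)

def step(x):
    return 0.0 if x < 0.3 else 1.0         # discontinuity at x = 0.3

grid = adaptive_grid(step, 0.0, 1.0)
```

The resulting grid piles points around the discontinuity at x = 0.3 and leaves the smooth halves almost empty, mimicking how a multiresolution transcription concentrates collocation points at control switchings.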
The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage have been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a 16. OpenCL-based vicinity computation for 3D multiresolution mesh compression Science.gov (United States) Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri 2017-03-01 3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time. Therefore, the performance is becoming constrained by hardware resource usage and an overall reduction in the computational time. In this paper, our contribution lies entirely in computing, in real time, triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation. 17. Multiresolution analysis over graphs for a motor imagery based online BCI game.
Science.gov (United States) Asensio-Cubero, Javier; Gan, John Q; Palaniappan, Ramaswamy 2016-01-01 Multiresolution analysis (MRA) over graph representation of EEG data has proved to be a promising method for offline brain-computer interfacing (BCI) data analysis. For the first time we aim to prove the feasibility of the graph lifting transform in an online BCI system. Instead of developing a pointer device or a wheel-chair controller as test bed for human-machine interaction, we have designed and developed an engaging game which can be controlled by means of imaginary limb movements. Some modifications to the existing MRA analysis over graphs for BCI have also been proposed, such as the use of common spatial patterns for feature extraction at the different levels of decomposition, and sequential floating forward search as a best basis selection technique. In the online game experiment we obtained for three classes an average classification rate of 63.0% for fourteen naive subjects. The application of a best basis selection method helps significantly decrease the computing resources needed. The present study allows us to further understand and assess the benefits of the use of tailored wavelet analysis for processing motor imagery data and contributes to the further development of BCI for gaming purposes. Copyright © 2015 Elsevier Ltd. All rights reserved. 18. Adaptive multiresolution method for MAP reconstruction in electron tomography Energy Technology Data Exchange (ETDEWEB) Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. 
Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland) 2016-11-15 3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. The maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without having additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than the weighted back projection (WBP), simultaneous iterative reconstruction technique (SIRT), and sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently. 19.
High Order Wavelet-Based Multiresolution Technology for Airframe Noise Prediction, Phase II Data.gov (United States) National Aeronautics and Space Administration — We propose to develop a novel, high-accuracy, high-fidelity, multiresolution (MRES), wavelet-based framework for efficient prediction of airframe noise sources and... 20. Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations KAUST Repository Sicat, Ronell Barrera 2015-01-01 approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i 1. Layout Optimization of Structures with Finite-size Features using Multiresolution Analysis DEFF Research Database (Denmark) Chellappa, S.; Diaz, A. R.; Bendsøe, Martin P. 2004-01-01 A scheme for layout optimization in structures with multiple finite-sized heterogeneities is presented. Multiresolution analysis is used to compute reduced operators (stiffness matrices) representing the elastic behavior of material distributions with heterogeneities of sizes that are comparable... 2. Homogeneous hierarchies: A discrete analogue to the wavelet-based multiresolution approximation Energy Technology Data Exchange (ETDEWEB) Mirkin, B. [Rutgers Univ., Piscataway, NJ (United States) 1996-12-31 A correspondence between discrete binary hierarchies and some orthonormal bases of the n-dimensional Euclidean space can be applied to such problems as clustering, ordering, identifying/testing in very large data bases, or multiresolution image/signal processing. The latter issue is considered in the paper. The binary hierarchy based multiresolution theory is expected to lead to effective methods for data processing because of relaxing the regularity restrictions of the classical theory. 3. 
Static multiresolution grids with inline hierarchy information for cosmic ray propagation Energy Technology Data Exchange (ETDEWEB) Müller, Gero, E-mail: gero.mueller@physik.rwth-aachen.de [III. Physikalisches Institut A, RWTH Aachen University, D-52056 Aachen (Germany) 2016-08-01 For numerical simulations of cosmic-ray propagation, fast access to static magnetic field data is required. We present a data structure for multiresolution vector grids which is optimized for fast access, low overhead and shared memory use. The hierarchy information is encoded into the grid itself, reducing the memory overhead. Benchmarks show that in certain scenarios the differences in deflections introduced by sampling the magnetic field model can be significantly reduced when using the multiresolution approach. 4. The radon transform. Theory and implementation International Nuclear Information System (INIS) Toft, P. 1996-01-01 The subject of this Ph.D. thesis is the mathematical Radon transform, which is well suited for curve detection in digital images, and for reconstruction of tomography images. The thesis is divided into two main parts. Part I describes the Radon- and the Hough-transform and especially their discrete approximations with respect to curve parameter detection in digital images. The sampling relationships of the Radon transform are reviewed from a digital signal processing point of view. The discrete Radon transform is investigated for detection of curves, and aspects regarding the performance of the Radon transform under various types of noise are covered. Furthermore, a new fast scheme for estimating curve parameters is presented. Part II of the thesis describes the inverse Radon transform in 2D and 3D with focus on reconstruction of tomography images. Some of the direct reconstruction schemes are analyzed, including their discrete implementation.
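A discrete Radon projection, as analyzed in entry 4, can be approximated by letting every image sample vote its value into the rho bin of the line through it with normal direction theta; this nearest-neighbour sketch ignores the interpolation and sampling analysis the thesis treats carefully, and the test image is invented:

```python
import math

def radon_projection(image, theta_deg, n_rho=None):
    """Discrete approximation of one Radon-transform projection: each
    image sample adds its value to the rho bin of the line through it
    with normal direction theta (nearest-neighbour binning)."""
    h, w = len(image), len(image[0])
    diag = math.hypot(h, w)
    if n_rho is None:
        n_rho = int(2 * diag) + 1
    proj = [0.0] * n_rho
    th = math.radians(theta_deg)
    for y in range(h):
        for x in range(w):
            rho = x * math.cos(th) + y * math.sin(th)
            proj[int(round(rho + diag))] += image[y][x]
    return proj

# 4x4 image with a single bright column at x = 2
img = [[1.0 if x == 2 else 0.0 for x in range(4)] for y in range(4)]
p0 = radon_projection(img, 0.0)    # projection with normal along x
p90 = radon_projection(img, 90.0)  # projection with normal along y
```

The bright column collapses into a single sharp peak in the theta = 0 projection and spreads evenly in the theta = 90 one, which is exactly the peak structure Radon/Hough curve detection looks for in parameter space.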
Furthermore, several iterative reconstruction schemes based on linear algebra are reviewed and applied for reconstruction of Positron Emission Tomography (PET) images. A new and very fast implementation of 2D iterative reconstruction methods is devised. In a more practically oriented chapter, the noise in PET images is modelled from a very large number of measurements. Several packages for Radon- and Hough-transform based curve detection and direct/iterative 2D and 3D reconstruction have been developed and provided for free. (au) 140 refs 5. A multiresolution image based approach for correction of partial volume effects in emission tomography International Nuclear Information System (INIS) Boussion, N; Hatt, M; Lamare, F; Bizais, Y; Turzo, A; Rest, C Cheze-Le; Visvikis, D 2006-01-01 Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVEs can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as an improved tumour-to-background ratio, resulting in an associated improvement in the analysis of response-to-therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recovery of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated into a low-resolution image L (PET or SPECT).
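The detail extraction in this multiresolution analysis rests on a stationary ("a trous") wavelet decomposition. As a hedged one-dimensional sketch only — the actual method operates on 2-D images with a B-spline kernel; the [1/4, 1/2, 1/4] kernel, clamped borders and function name here are assumptions:

```python
def a_trous(signal, levels):
    """1-D 'a trous' (stationary wavelet) decomposition, as a sketch.

    At level j the [1/4, 1/2, 1/4] smoothing kernel is dilated so its
    taps sit 2**j samples apart (the 'holes'); each detail plane is
    the difference between successive smoothings, so the original
    signal equals the coarsest plane plus the sum of all details.
    """
    n = len(signal)
    c = list(signal)
    details = []
    for j in range(levels):
        step = 2 ** j
        smooth = []
        for i in range(n):
            left = c[max(i - step, 0)]        # clamped borders (assumption)
            right = c[min(i + step, n - 1)]
            smooth.append(0.25 * left + 0.5 * c[i] + 0.25 * right)
        details.append([a - b for a, b in zip(c, smooth)])
        c = smooth
    return details, c
```

Because the planes telescope, adding every detail plane back onto the coarse residual reconstructs the input, which is exactly the property the PVE-correction method exploits when it swaps in high-frequency details taken from the anatomical image.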
A discrete wavelet transform of both H and L images is performed by using the 'a trous' algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the missing details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered a reference but limited to ROI analyses. Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using an FDG brain PET and corresponding T1-weighted MRI in 6. Multiresolution molecular mechanics: Surface effects in nanoscale materials Energy Technology Data Exchange (ETDEWEB) Yang, Qingcheng, E-mail: qiy9@pitt.edu; To, Albert C., E-mail: albertto@pitt.edu 2017-05-01 Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The newly proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang, To (2015), ) is applied to capture surface effects for nanosized structures by designing a surface summation rule SR{sup S} within the framework of MMM. Combined with the previously proposed bulk summation rule SR{sup B}, the MMM summation rule SR{sup MMM} is completed. SR{sup S} and SR{sup B} are consistently formed within SR{sup MMM} for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key to the good performance of SR{sup MMM} lies in the fact that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, position and weight of quadrature-type (sampling) atoms can be determined.
Mathematically, the derived energy distribution of the surface area is different from that of the bulk region. Physically, the difference is due to the fact that surface atoms lack neighboring bonding. As such, SR{sup S} and SR{sup B} are employed for surface and bulk domains, respectively. Two- and three-dimensional numerical examples using the respective 4-node bilinear quadrilateral, 8-node quadratic quadrilateral and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SR{sup MMM} accurately captures corner, edge and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SR{sup MMM} with respect to high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam-bending problem considering surface effects. In addition, the sampling error introduced by SR{sup MMM}, which is analogous to the numerical integration error of quadrature rules in FEM, is very small. - Highlights: 7. A Biologically Motivated Multiresolution Approach to Contour Detection Directory of Open Access Journals (Sweden) Alessandro Neri 2007-01-01 Full Text Available Standard edge detectors react to all local luminance changes, irrespective of whether they are due to the contours of the objects represented in a scene or due to natural textures like grass, foliage, water, and so forth. Moreover, edges due to texture are often stronger than edges due to object contours. This implies that further processing is needed to discriminate object contours from texture edges. In this paper, we propose a biologically motivated multiresolution contour detection method using Bayesian denoising and a surround inhibition technique. Specifically, the proposed approach computes the gradient at different resolutions, followed by Bayesian denoising of the edge image.
Then, a biologically motivated surround inhibition step is applied in order to suppress edges that are due to texture. We propose an improvement of the surround suppression used in previous works. Finally, a contour-oriented binarization algorithm is used, relying on the observation that object contours lead to long connected components rather than to the short rods obtained from textures. Experimental results show that our contour detection method outperforms standard edge detectors as well as other methods that deploy inhibition. 8. A fast multi-resolution approach to tomographic PIV Science.gov (United States) Discetti, Stefano; Astarita, Tommaso 2012-03-01 Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional, non-intrusive anemometric measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed in the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, well suited to handle the problem of a limited number of views, but computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more widespread. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimate of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a significant reduction in memory storage is also achieved. Furthermore, a slight accuracy improvement is observed.
A modified version, improved by a multiplicative line-of-sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same performance in terms of accuracy. 9. Circular Hough Transform and Local Circularity Measure for Weight Estimation of a Graph-Cut based Wood Stack Measurement DEFF Research Database (Denmark) Galsgaard, Bo; Lundtoft, Dennis Holm; Nikolov, Ivan Adriyanov 2015-01-01 are finally scaled and used to acquire the necessary wood stack measurements in real-world scale (in cm). The proposed system, which works automatically, has been tested on two different datasets, containing real outdoor images of logs which vary in shape and size. The experimental results show... One of the time-consuming tasks in the timber industry is the manual measurement of features of wood stacks. Such features include, but are not limited to, the number of logs in a stack, their diameter distribution, and their volumes. Computer vision techniques have recently been used... for solving this real-world industrial application. Such techniques face many challenges, as the task is usually performed in outdoor, uncontrolled environments. Furthermore, the logs can vary in texture and they can be occluded by different obstacles. These all make the segmentation of the wood logs... 10. A one-time truncate and encode multiresolution stochastic framework Energy Technology Data Exchange (ETDEWEB) Abgrall, R.; Congedo, P.M.; Geraci, G., E-mail: gianluca.geraci@inria.fr 2014-01-15 In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented.
The proposed algorithm allows stochastic numerical schemes to be built, in a very general manner, starting from any type of deterministic scheme, and handles a large class of problems, from unsteady to discontinuous solutions. Its formulation recovers the same results as the interpolation theory of the classical multiresolution approach, but extends it to uncertainty quantification problems. The present strategy yields numerical schemes with higher accuracy than other classical uncertainty quantification techniques, but with a strong reduction in numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows any kind of probability density function to be employed, even discontinuous and time-varying ones, without introducing further complications in the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distribution are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan–Orszag problem are reported in terms of accuracy and convergence. Finally, a two-degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model recovers some key physical aspects of the fluid/structure interaction thanks to the quasi-steady aerodynamic approximation employed. The injected uncertainty is chosen so as to obtain a complete parameterization of the mass matrix. All the numerical results are compared against a classical Monte Carlo solution and a non-intrusive Polynomial Chaos method. 11.
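The truncate-and-encode idea that the above framework inherits from Harten can be illustrated, as an editorial sketch only, in the simplest point-value setting: keep every other sample as the coarse grid, predict the skipped samples by linear interpolation, and discard details below a threshold eps. The function names, the interpolation stencil and the even-length assumption are illustrative; they are not the paper's stochastic scheme.

```python
def mr_encode(values, eps):
    """One Harten-style multiresolution step (point-value setting).

    Even-indexed samples form the coarse grid; skipped samples are
    predicted by linear interpolation, and only details larger than
    eps are kept (the 'truncate' in truncate-and-encode).
    Assumes an even number of samples.
    """
    coarse = values[::2]
    details = []
    for k in range(len(values) // 2):
        # Clamp the right neighbour at the boundary.
        right = values[2 * k + 2] if 2 * k + 2 < len(values) else values[2 * k]
        d = values[2 * k + 1] - 0.5 * (values[2 * k] + right)
        details.append(d if abs(d) > eps else 0.0)
    return coarse, details


def mr_decode(coarse, details):
    """Invert mr_encode: re-predict and add back the stored details."""
    out = []
    for k, d in enumerate(details):
        right = coarse[k + 1] if k + 1 < len(coarse) else coarse[k]
        out.append(coarse[k])
        out.append(0.5 * (coarse[k] + right) + d)
    return out
```

With eps = 0 the round trip is exact; with eps > 0 smooth regions compress (their details are zeroed) while the reconstruction error stays bounded by eps, which is the trade-off the abstract describes between cost and accuracy.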
On-the-Fly Decompression and Rendering of Multiresolution Terrain Energy Technology Data Exchange (ETDEWEB) Lindstrom, P; Cohen, J D 2009-04-02 We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance. 12. 
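The linear-prediction-plus-residual idea behind the terrain codec in entry 11 can be sketched with the classic Lorenzo predictor on a small integer height grid. This is a generic illustration under assumed names, not the paper's codec, which adds entropy coding, per-patch quantization and GPU decoding on top of the residuals.

```python
def encode_grid(h):
    """Residuals of the Lorenzo predictor over an integer height grid.

    Each sample is predicted from its west, north and north-west
    neighbours (missing neighbours treated as 0); smooth terrain
    yields many small residuals, which an entropy coder then packs.
    """
    rows, cols = len(h), len(h[0])
    res = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            w = h[i][j - 1] if j > 0 else 0
            n = h[i - 1][j] if i > 0 else 0
            nw = h[i - 1][j - 1] if i > 0 and j > 0 else 0
            res[i][j] = h[i][j] - (w + n - nw)
    return res


def decode_grid(res):
    """Lossless inverse: rebuild heights in scan order."""
    rows, cols = len(res), len(res[0])
    h = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            w = h[i][j - 1] if j > 0 else 0
            n = h[i - 1][j] if i > 0 else 0
            nw = h[i - 1][j - 1] if i > 0 and j > 0 else 0
            h[i][j] = res[i][j] + (w + n - nw)
    return h
```

On a locally planar grid every interior residual is exactly zero, which is why prediction-based residual coding compresses terrain so well while remaining lossless.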
Applying multi-resolution numerical methods to geodynamics Science.gov (United States) Davies, David Rhodri Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. 
In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled 13. Characterizing and understanding the climatic determinism of high- to low-frequency variations in precipitation in northwestern France using a coupled wavelet multiresolution/statistical downscaling approach Science.gov (United States) Massei, Nicolas; Dieppois, Bastien; Hannah, David; Lavers, David; Fossa, Manuel; Laignel, Benoit; Debret, Maxime 2017-04-01 Geophysical signals oscillate over several time-scales that explain different amounts of their overall variability and may be related to different physical processes. Characterizing and understanding such variabilities in hydrological variations and investigating their determinism is one important issue in the context of climate change, as these variabilities can occasionally be superimposed on a long-term trend possibly due to climate change. It is also important to refine our understanding of time-scale-dependent linkages between large-scale climatic variations and hydrological responses on the regional or local scale. Here we investigate such links by conducting a wavelet multiresolution statistical downscaling approach for precipitation in northwestern France (Seine river catchment) over 1950-2016, using sea level pressure (SLP) and sea surface temperature (SST) as indicators of atmospheric and oceanic circulations, respectively. Previous results demonstrated that including multiresolution decomposition in a statistical downscaling model (within a so-called multiresolution ESD model) using SLP as the large-scale predictor greatly improved the simulation of low-frequency, i.e. interannual to interdecadal, fluctuations observed in precipitation. Building on these results, a continuous wavelet transform of precipitation simulated using multiresolution ESD confirmed the good performance of the model in explaining variability at all time-scales.
A sensitivity analysis of the model to the choice of scale and wavelet function was also conducted. It appeared that whatever the wavelet used, the model performed similarly. The spatial patterns of SLP found as the best predictors for all time-scales, which resulted from the wavelet decomposition, revealed different structures according to time-scale, showing possibly different determinisms. More particularly, some low-frequency components (3.2-yr and 19.3-yr) showed a much more widespread spatial extension across the Atlantic 14. A new fractional wavelet transform Science.gov (United States) Dai, Hongzhe; Zheng, Zhibao; Wang, Wei 2017-03-01 The fractional Fourier transform (FRFT) is a potent tool to analyze time-varying signals. However, it fails in locating the fractional Fourier domain (FRFD)-frequency contents, which is required in some applications. A novel fractional wavelet transform (FRWT) is proposed to solve this problem. It displays the time and FRFD-frequency information jointly in the time-FRFD-frequency plane. The definition, basic properties, inverse transform and reproducing kernel of the proposed FRWT are considered. It has been shown that an FRWT with proper order corresponds to the classical wavelet transform (WT). The multiresolution analysis (MRA) associated with the developed FRWT, together with the construction of the orthogonal fractional wavelets, are also presented. Three applications are discussed: the analysis of signals with time-varying frequency content, the FRFD spectrum estimation of signals involving noise, and the construction of the fractional Haar wavelet. Simulations verify the validity of the proposed FRWT. 15. A multi-resolution approach to heat kernels on discrete surfaces KAUST Repository Vaxman, Amir; Ben-Chen, Mirela; Gotsman, Craig 2010-01-01 process - limits this type of analysis to 3D models of modest resolution.
We show how to use the unique properties of the heat kernel of a discrete two-dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel 16. Adaptive multiresolution Hermite-Binomial filters for image edge and texture analysis NARCIS (Netherlands) Gu, Y.H.; Katsaggelos, A.K. 1994-01-01 A new multiresolution image analysis approach using adaptive Hermite-Binomial filters is presented in this paper. According to the local image structural and textural properties, the analysis filter kernels are made adaptive both in their scales and orders. Applications of such an adaptive filtering 17. Image compression using the W-transform Energy Technology Data Exchange (ETDEWEB) Reynolds, W.D. Jr. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div. 1995-12-31 The authors present the W-transform for a multiresolution signal decomposition. One of the differences between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference between the two is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to be a power of two. Furthermore, it does not call for the extension of the signal; thus, the W-transform is a convenient tool for image compression. They present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities. 18. Multiresolution signal decomposition schemes. Part 2: Morphological wavelets NARCIS (Netherlands) H.J.A.M. Heijmans (Henk); J. Goutsias (John) 1999-01-01 In its original form, the wavelet transform is a linear tool. However, it has been increasingly recognized that nonlinear extensions are possible. A major impulse to the development of nonlinear wavelet transforms has been given by the introduction of the lifting scheme by Sweldens. The 19.
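The lifting scheme mentioned in the morphological-wavelets entry can be sketched, for the linear Haar case, in a few lines; morphological wavelets replace these linear predict/update steps with nonlinear (min/max) ones. This is an editorial illustration with assumed function names, not code from the cited paper.

```python
def haar_lift(x):
    """One level of the Haar wavelet via lifting: split/predict/update.

    Assumes an even-length sequence. The detail is the prediction
    error of each odd sample from its even neighbour; the update
    turns the even samples into pairwise averages.
    """
    evens, odds = x[::2], x[1::2]
    details = [o - e for e, o in zip(evens, odds)]        # predict
    approx = [e + d / 2 for e, d in zip(evens, details)]  # update
    return approx, details


def haar_unlift(approx, details):
    """Invert haar_lift by undoing update, then predict, then merge."""
    evens = [a - d / 2 for a, d in zip(approx, details)]
    odds = [e + d for e, d in zip(evens, details)]
    x = []
    for e, o in zip(evens, odds):
        x.extend([e, o])
    return x
```

Because each lifting step is inverted by simply reversing its sign and order, perfect reconstruction holds for any predict/update operators, linear or not, which is precisely what makes the scheme a natural route to nonlinear (morphological) wavelets.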
Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations KAUST Repository Sicat, Ronell B. 2015-11-25 The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has some shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings. Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution levels in standard representations. In particular, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels.
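The non-commutativity of linear pre-filtering and non-linear operations noted above can be demonstrated in one dimension; the hard threshold below stands in for an arbitrary non-linear color map and is purely an illustrative assumption.

```python
def downsample(row):
    """One mipmap level in 1-D: average adjacent pairs (a linear pre-filter)."""
    return [(a + b) / 2 for a, b in zip(row[::2], row[1::2])]


def colormap(v):
    """A stand-in non-linear transfer function (hard threshold)."""
    return 1.0 if v > 0.5 else 0.0


row = [0.0, 1.0, 0.0, 0.0]

# Order 1: build the coarse level first, then apply the non-linear map.
coarse_then_map = [colormap(v) for v in downsample(row)]
# Order 2: apply the non-linear map at full resolution, then downsample.
map_then_coarse = downsample([colormap(v) for v in row])
```

Here coarse_then_map is [0.0, 0.0] while map_then_coarse is [0.5, 0.0]: averaging first washes the bright pixel below the threshold, so applying the operation directly to a coarse level gives a different answer than filtering the correctly mapped full-resolution data, which is exactly the inaccuracy the dissertation targets.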
To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel 20. Multiresolution wavelet-ANN model for significant wave height forecasting. Digital Repository Service at National Institute of Oceanography (India) Deka, P.C.; Mandal, S.; Prahlada, R. A hybrid wavelet artificial neural network (WLNN) has been applied in the present study to forecast significant wave heights (Hs). Here the Discrete Wavelet Transformation is used to preprocess the time series data (Hs) prior to Artificial Neural Network... 1. Global multi-resolution terrain elevation data 2010 (GMTED2010) Science.gov (United States) Danielson, Jeffrey J.; Gesch, Dean B. 2011-01-01 -second DTED® level 0, the USGS and the National Geospatial-Intelligence Agency (NGA) have collaborated to produce an enhanced replacement for GTOPO30, the Global Land One-km Base Elevation (GLOBE) model, and other comparable 30-arc-second-resolution global models, using the best available data. The new model is called the Global Multi-resolution Terrain Elevation Data 2010, or GMTED2010 for short. This suite of products at three different resolutions (approximately 1,000, 500, and 250 meters) is designed to support many applications directly by providing users with generic products (for example, maximum, minimum, and median elevations) derived directly from the raw input data; such products would not otherwise be available to the general user, or would be very costly and time-consuming to produce for individual applications. The source of all the elevation data is captured in metadata for reference purposes. It is also hoped that as better data become available in the future, the GMTED2010 model will be updated. 2.
Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction Energy Technology Data Exchange (ETDEWEB) Tsantis, Stavros [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504 (Greece); Spiliopoulos, Stavros; Karnabatidis, Dimitrios [Department of Radiology, School of Medicine, University of Patras, Rion, GR 26504 (Greece); Skouroliakou, Aikaterini [Department of Energy Technology Engineering, Technological Education Institute of Athens, Athens 12210 (Greece); Hazle, John D. [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Kagadis, George C., E-mail: gkagad@gmail.com, E-mail: George.Kagadis@med.upatras.gr, E-mail: GKagadis@mdanderson.org [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504, Greece and Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States) 2014-07-15 Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform acquiring the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. 
Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A 3. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy International Nuclear Information System (INIS) Tang, Jing; Rahmim, Arman 2015-01-01 A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT), taking anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis.
The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE 4. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy Science.gov (United States) Tang, Jing; Rahmim, Arman 2015-01-01 A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. 
In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE 5. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction International Nuclear Information System (INIS) Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios; Skouroliakou, Aikaterini; Hazle, John D.; Kagadis, George C. 
2014-01-01 Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The speckle suppression performance in the selected set of US images was quantified via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those of SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A 6.
Multi-resolution analysis using integrated microscopic configuration with local patterns for benign-malignant mass classification Science.gov (United States) Rabidas, Rinku; Midya, Abhishek; Chakraborty, Jayasree; Sadhu, Anup; Arif, Wasim 2018-02-01 In this paper, Curvelet-based local attributes, the Curvelet-Local configuration pattern (C-LCP), are introduced for the characterization of mammographic masses as benign or malignant. Among different anomalies such as microcalcification, bilateral asymmetry, architectural distortion, and masses, the mass lesions are targeted because their variation in shape, size, and margin makes diagnosis a challenging task. The multi-resolution property of the Curvelet transform, efficient for classification, is exploited, and local information is extracted from the coefficients of each subband using the Local configuration pattern (LCP). The microscopic measures, in concatenation with the local textural information, provide more discriminating capability than either individually. The measures embody the magnitude information along with the pixel-wise relationships among the neighboring pixels. The performance analysis is conducted with 200 mammograms of the DDSM database, containing 100 benign and 100 malignant mass cases. The optimal set of features is acquired via a stepwise logistic regression method, and the classification is carried out with Fisher linear discriminant analysis. The best area under the receiver operating characteristic curve and accuracy of 0.95 and 87.55% are achieved with the proposed method, which is further compared with some of the state-of-the-art competing methods. 7.
Suitability of an MRMCE (multi-resolution minimum cross entropy) algorithm for online monitoring of a two-phase flow International Nuclear Information System (INIS) Wang, Qi; Wang, Huaxiang; Xin, Shan 2011-01-01 Flow regimes are important characteristics for describing two-phase flows, and measurement of two-phase flow parameters is becoming increasingly important in many industrial processes. Computerized tomography (CT) has been applied to two-phase/multi-phase flow measurement in recent years. Image reconstruction in CT often involves repeatedly solving large-dimensional matrix equations, which is computationally expensive, especially in the case of online flow regime identification. In this paper, minimum cross entropy reconstruction based on multi-resolution processing (MRMCE) is presented for oil–gas two-phase flow regime identification. A regularized MCE solution is obtained using the simultaneous multiplicative algebraic reconstruction technique (SMART) at a coarse resolution level, where the important information on the reconstructed image is contained. The solution at the finest resolution is then obtained by inverse fast wavelet transformation. Both computer simulations and static/dynamic experiments were carried out for typical flow regimes. The results indicate that the proposed method can dramatically reduce the computational time and improve the quality of the reconstructed image with suitable decomposition levels, compared with the single-resolution maximum likelihood expectation maximization (MLEM), alternating minimization (AM), Landweber, iterative least square technique (ILST), and minimum cross entropy (MCE) methods. The MRMCE method is therefore suitable for identification of dynamic two-phase flow regimes. 8.
A multi-resolution envelope-power based model for speech intelligibility DEFF Research Database (Denmark) Jørgensen, Søren; Ewert, Stephan D.; Dau, Torsten 2013-01-01 The speech-based envelope power spectrum model (sEPSM) presented by Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] estimates the envelope power signal-to-noise ratio (SNRenv) after modulation-frequency selective processing. Changes in this metric were shown to account well...... to conditions with stationary interferers, due to the long-term integration of the envelope power, and cannot account for the increased intelligibility typically obtained with fluctuating maskers. Here, a multi-resolution version of the sEPSM is presented, in which the SNRenv is estimated in temporal segments...... with a modulation-filter dependent duration. The multi-resolution sEPSM is demonstrated to account for intelligibility obtained in conditions with stationary and fluctuating interferers, and noisy speech distorted by reverberation or spectral subtraction. The results support the hypothesis that the SNRenv... 9. Efficient Human Action and Gait Analysis Using Multiresolution Motion Energy Histogram Directory of Open Access Journals (Sweden) Kuo-Chin Fan 2010-01-01 Full Text Available The Average Motion Energy (AME) image is a good way to describe human motions. However, it faces a computational efficiency problem as the number of database templates increases. In this paper, we propose a histogram-based approach to improve computational efficiency, converting the human action/gait recognition problem into a histogram matching problem. To speed up the recognition process, we adopt a multiresolution structure on the Motion Energy Histogram (MEH). To utilize the multiresolution structure more efficiently, we propose an automated uneven partitioning method based on the quadtree decomposition of the MEH.
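The coarse-to-fine histogram matching behind the MEH approach above can be sketched as follows. This is a hypothetical illustration using even 2x2 aggregation between pyramid levels; the paper's automated uneven quadtree partitioning and its specific distance measure are not reproduced.

```python
import numpy as np

def meh_pyramid(hist, levels=3):
    """Multiresolution pyramid of a 2-D motion-energy histogram:
    each coarser level sums non-overlapping 2x2 bins of the finer one."""
    pyramid = [np.asarray(hist, float)]
    for _ in range(levels - 1):
        h = pyramid[-1]
        pyramid.append(h.reshape(h.shape[0] // 2, 2,
                                 h.shape[1] // 2, 2).sum(axis=(1, 3)))
    return pyramid  # index 0 = finest, last = coarsest

def coarse_to_fine_match(query, templates, levels=3, keep=2):
    """Match the query histogram at the coarsest level first, keep the
    best candidates, then refine at finer levels (L1 distance here is
    an illustrative choice)."""
    qp = meh_pyramid(query, levels)
    tps = [meh_pyramid(t, levels) for t in templates]
    cands = list(range(len(templates)))
    for lvl in range(levels - 1, -1, -1):           # coarse -> fine
        d = [(np.abs(qp[lvl] - tps[i][lvl]).sum(), i) for i in cands]
        d.sort()
        cands = [i for _, i in d[:max(keep, 1)]]
        keep = 1                                     # narrow after first pass
    return cands[0]
```

Because most candidates are discarded at the coarsest (cheapest) level, the cost is dominated by the small number of coarse-bin comparisons rather than full-resolution template matching.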
The computation time then depends only on the number of partitioned histogram bins, which is much smaller than for the AME method. Two applications, action recognition and gait classification, are conducted in the experiments to demonstrate the feasibility and validity of the proposed approach. 10. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography. Science.gov (United States) De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc 2010-09-01 In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a large number of volume elements (several gigavoxels), this computational burden prevents their widespread adoption. Besides the large number of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction scheme, which greatly reduces the required amount of memory without significantly affecting reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms to the reconstruction of large volumes at an acceptable speed using only limited resources. 11. Accuracy assessment of tree crown detection using local maxima and multi-resolution segmentation International Nuclear Information System (INIS) Khalid, N; Hamid, J R A; Latif, Z A 2014-01-01 The diversity of trees forms an important component of forest ecosystems and requires proper inventories to assist forest personnel in their daily activities. However, tree parameter measurements are often constrained by physical inaccessibility of site locations, high costs, and time.
With advancements in remote sensing technology, such as the provision of higher spatial and spectral resolution imagery, a number of algorithms have been developed to fulfil the need for accurate tree inventory information in a cost-effective and timely manner over larger forest areas. This study intends to generate a tree distribution map in the Ampang Forest Reserve using the Local Maxima and Multi-Resolution image segmentation algorithms. The utilization of recent WorldView-2 imagery with Local Maxima and Multi-Resolution image segmentation proves capable of detecting and delineating tree crowns in their accurate standing positions. 12. Pathfinder: multiresolution region-based searching of pathology images using IRM. OpenAIRE Wang, J. Z. 2000-01-01 The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wave... 13. Classification and Compression of Multi-Resolution Vectors: A Tree Structured Vector Quantizer Approach Science.gov (United States) 2002-01-01 their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for... cancerous cells. We will use the same dataset and use tree structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous...cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested 14.
A MULTIRESOLUTION METHOD FOR THE SIMULATION OF SEDIMENTATION IN INCLINED CHANNELS OpenAIRE Buerger, Raimund; Ruiz-Baier, Ricardo; Schneider, Kai; Torres, Hector 2012-01-01 An adaptive multiresolution scheme is proposed for the numerical solution of a spatially two-dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. This model consists of a version of the Stokes equations for incompressible fluid flow coupled with a hyperbolic conservation law for the local solids concentration. We study the process in an inclined, rectangular closed vessel, a configuration that gives rise to a well-known increase of settling rat... 15. Adaptive Multiresolution Methods: Practical issues on Data Structures, Implementation and Parallelization* Directory of Open Access Journals (Sweden) Bachmann M. 2011-12-01 Full Text Available The concept of fully adaptive multiresolution finite volume schemes has been developed and investigated during the past decade. Here, grid adaptation is realized by performing a multiscale decomposition of the discrete data at hand. The resulting multiscale data are compressed by means of hard thresholding, and a locally refined grid is constructed from the remaining data. The aim of the present work is to give a self-contained overview of the construction of an appropriate multiresolution analysis using biorthogonal wavelets, its efficient realization by means of hash maps using global cell identifiers, and the parallelization of the multiresolution-based grid adaptation via MPI using space-filling curves.
16. Combining nonlinear multiresolution system and vector quantization for still image compression Energy Technology Data Exchange (ETDEWEB) Wong, Y. 1993-12-17 It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur the edges. Thus, the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding. 17.
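The median-filter Laplacian pyramid described in the entry above can be sketched as follows. The 3x3 kernel and the number of levels are illustrative assumptions, and the vector-quantization stage with error feedback is omitted; the sketch only shows the nonlinear analysis/synthesis structure, which reconstructs exactly by construction.

```python
import numpy as np

def median3(img):
    """3x3 median filter with edge padding (stand-in for the paper's
    edge-preserving filter; the kernel size is an assumption)."""
    p = np.pad(img, 1, mode='edge')
    shifts = [p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)

def median_laplacian_pyramid(img, levels=3):
    """Laplacian-style pyramid whose smoothing step is a median filter,
    so the low-resolution levels keep sharp edges and the detail images
    stay small and localized near edges."""
    details, cur = [], np.asarray(img, float)
    for _ in range(levels - 1):
        low = median3(cur)[::2, ::2]                       # filter, decimate
        up = np.repeat(np.repeat(low, 2, 0), 2, 1)[:cur.shape[0],
                                                   :cur.shape[1]]
        details.append(cur - up)                           # detail residual
        cur = low
    return details, cur                                    # details + coarsest
```

Reconstruction simply reverses the loop: upsample the coarsest level and add back each detail image in turn.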
Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV Directory of Open Access Journals (Sweden) Huang Shyh-Fang 2012-01-01 Full Text Available With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks have become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WiMAX) is a good candidate for delivering video signals, because through WiMAX the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user is really concerned with is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism in multiresolution video coding structures over WiMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can be simply mapped to the network requirements by a mapping table, and then the end-to-end QoS is achieved. We performed experiments with multiresolution MPEG coding over WiMAX networks. In addition to the QoP parameters, video characteristics such as the picture activity and the video mobility also affect the QoS significantly. 18. LOD map--A visual interface for navigating multiresolution volume visualization. Science.gov (United States) Wang, Chaoli; Shen, Han-Wei 2006-01-01 In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms.
While traditional methods rely on the final images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure of LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated through the mapping of key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets. 19. A morphologically preserved multi-resolution TIN surface modeling and visualization method for virtual globes Science.gov (United States) Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei 2017-07-01 Virtual globes play an important role in representing three-dimensional models of the Earth. To extend the functioning of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data in processing and representation should be of special concern for scientific analysis and evaluation. In this study, we propose a method for the processing of large-scale terrain data for virtual globe visualization and analysis.
The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formalize its level-of-detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in real-time out-of-core spherical terrain rendering. The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation. 20. A multi-resolution HEALPix data structure for spherically mapped point data Directory of Open Access Journals (Sweden) Robert W. Youngren 2017-06-01 Full Text Available Data describing entities with locations that are points on a sphere are described as spherically mapped. Several data structures designed for spherically mapped data have been developed. One of them, known as Hierarchical Equal Area iso-Latitude Pixelization (HEALPix), partitions the sphere into twelve diamond-shaped equal-area base cells and then recursively subdivides each cell into four diamond-shaped subcells, continuing to the desired level of resolution.
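The subdivision scheme just described fixes the cell counts: twelve base cells, each split four ways per level. A minimal sketch of that arithmetic, with a simple per-base-cell quadtree address (the `cell_id` encoding is a hypothetical illustration, not the MRH paper's actual addressing scheme):

```python
def healpix_npix(level):
    """Number of equal-area cells after `level` recursive 4-way
    subdivisions of HEALPix's 12 diamond-shaped base cells."""
    nside = 2 ** level           # subcells per base-cell side
    return 12 * nside * nside    # equivalently 12 * 4**level

def cell_id(base, quad_digits):
    """Linear id of a subcell: base cell index (0-11) followed by its
    quadtree path, one digit 0-3 per subdivision level (illustrative;
    one quadtree per base cell)."""
    i = base
    for d in quad_digits:
        i = i * 4 + d
    return i

counts = [healpix_npix(k) for k in range(4)]  # [12, 48, 192, 768]
```

Each refinement multiplies the cell count by four, which is why a fixed-resolution structure wastes memory on sparse data and motivates the multi-resolution variant described below in the abstract.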
Twelve quadtrees, one associated with each base cell, store the data records associated with that cell and its subcells. HEALPix has been used successfully for numerous applications, notably including cosmic microwave background data analysis. However, for applications involving sparse point data, HEALPix has possible drawbacks, including inefficient memory utilization, overwriting of proximate points, and return of spurious points for certain queries. A multi-resolution variant of HEALPix specifically optimized for sparse point data was developed. The new data structure allows different areas of the sphere to be subdivided at different levels of resolution. It combines HEALPix's positive features with the advantages of multi-resolution, including reduced memory requirements and improved query performance. An implementation of the new Multi-Resolution HEALPix (MRH) data structure was tested using spherically mapped data from four different scientific applications (warhead fragmentation trajectories, weather station locations, galaxy locations, and synthetic locations). Four types of range queries were applied to each data structure for each dataset. Compared to HEALPix, MRH used two to four orders of magnitude less memory for the same data, and on average its queries executed 72% faster. 1. Multi-resolution analysis for region of interest extraction in thermographic nondestructive evaluation Science.gov (United States) Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W. 2012-03-01 Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structural flaws in metallic and composite components. Those flaws are modeled as smooth contours centered at peaks of stored thermal energy, termed Regions of Interest (ROI).
Dedicated methodologies must detect the presence of those ROIs. In this paper, we present a methodology for ROI extraction in INDT tasks using multi-resolution analysis, which is robust to low ROI contrast and non-uniform heating. Non-uniform heating affects low spatial frequencies and hinders the detection of relevant points in the image. The proposed methodology includes local correlation, Gaussian scale analysis, and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs. We use a Gaussian window because thermal behavior is well modeled by Gaussian smooth contours. The Gaussian scale is also used to analyze details in the image through multi-resolution analysis, avoiding issues of low contrast, non-uniform heating, and selection of the Gaussian window size. Finally, local edge detection provides a good estimation of the boundaries of the ROI. Thus, we provide a methodology for ROI extraction based on multi-resolution analysis that performs as well as or better than other dedicated algorithms proposed in the state of the art. 2. Multiresolution approach to processing images for different applications interaction of lower processing with higher vision CERN Document Server Vujović, Igor 2015-01-01 This book presents theoretical and practical aspects of the interaction between low and high level image processing. Multiresolution analysis owes its popularity mostly to wavelets and is widely used in a variety of applications. Low level image processing is important for the performance of many high level applications. The book includes examples from different research fields, i.e.
video surveillance; biomedical applications (EMG and X-ray); improved communication, namely teleoperation, telemedicine, animation, augmented/virtual reality and robot vision; monitoring of the condition of ship systems and image quality control. 3. Multiresolution Wavelet Analysis of Heartbeat Intervals Discriminates Healthy Patients from Those with Cardiac Pathology Science.gov (United States) Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C. 1998-02-01 We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeat intervals, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as belonging either to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of the presence of heart failure from the R-R intervals alone. Comparison is made with previous approaches, which have provided only statistically significant measures. 4. Multiresolution wavelet analysis of heartbeat intervals discriminates healthy patients from those with cardiac pathology OpenAIRE Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C. 1997-01-01 We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeats, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as either belonging to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of... 5.
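The "widths of wavelet coefficients per scale" measure used in the two heartbeat studies above can be sketched with a plain Haar decomposition (Haar is a stand-in; the abstracts do not name the wavelet actually used, and the classification threshold is omitted):

```python
import numpy as np

def haar_detail_widths(x, levels=6):
    """Multiresolution (Haar) analysis of an interval series: return
    the standard deviation ('width') of the detail coefficients at
    each dyadic scale 2, 4, ..., 2**levels."""
    x = np.asarray(x, float)
    widths = {}
    for lvl in range(1, levels + 1):
        n = len(x) - len(x) % 2
        pairs = x[:n].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
        widths[2 ** lvl] = detail.std()             # width at this scale
        x = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)  # approximation
    return widths
```

Applied to an R-R interval series, this yields one width per scale; the studies report that the widths in the 16-32 interval window separate the normal and heart-failure groups.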
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion Science.gov (United States) Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em 2017-10-01 We develop a general CFD framework for multi-resolution simulations to target multiscale problems as well as resilience in exascale simulations, where faulty processors may lead to gappy (in space-time) simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution, and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution, assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations. 6.
Multi-Resolution Multimedia QoE Models for IPTV Applications Directory of Open Access Journals (Sweden) 2012-01-01 Full Text Available Internet television (IPTV) is rapidly gaining popularity and is being widely deployed in content delivery networks on the Internet. In order to proactively deliver optimum user quality of experience (QoE) for IPTV, service providers need to identify network bottlenecks in real time. In this paper, we develop psycho-acoustic-visual models that can predict user QoE of multimedia applications in real time based on online network status measurements. Our models are neural network based and cater to multi-resolution IPTV applications that include QCIF, QVGA, SD, and HD resolutions encoded using popular audio and video codec combinations. On the network side, our models account for jitter and loss levels, as well as router queuing disciplines: packet-ordered and time-ordered FIFO. We evaluate the performance of our multi-resolution multimedia QoE models in terms of prediction characteristics, accuracy, speed, and consistency. Our evaluation results demonstrate that the models are pertinent for real-time QoE monitoring and resource adaptation in IPTV content delivery networks. 7. A VIRTUAL GLOBE-BASED MULTI-RESOLUTION TIN SURFACE MODELING AND VISUALIZATION METHOD Directory of Open Access Journals (Sweden) X. Zheng 2016-06-01 Full Text Available The integration and visualization of geospatial data on a virtual globe play a significant role in the understanding and analysis of Earth surface processes. However, current virtual globes often sacrifice accuracy to ensure efficiency in global data processing and visualization, which devalues their functionality for scientific applications. In this article, we propose a high-accuracy multi-resolution TIN pyramid construction and visualization method for virtual globes.
Firstly, we introduce cartographic principles to formalize the level-of-detail (LOD) generation, so that the TIN model in each layer is controlled with a data quality standard. A maximum z-tolerance algorithm is then used to iteratively construct the multi-resolution TIN pyramid. Moreover, the extracted landscape features are incorporated into the TIN at each layer, thus preserving the topological structure of the terrain surface at different levels. In the proposed framework, a virtual node (VN)-based approach is developed to seamlessly partition and discretize each triangulation layer into tiles, which can be organized and stored with a global quad-tree index. Finally, real-time out-of-core spherical terrain rendering is realized on the virtual globe system VirtualWorld1.0. The experimental results showed that the proposed method achieves a high-fidelity terrain representation while producing high-quality underlying data that satisfy the demands of scientific analysis. 8. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods. Science.gov (United States) Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus 2015-09-03 Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations grow rapidly with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define for the given high-dimensional spectroscopic data a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution.
The factorization results are then used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach enables considerable convergence acceleration. The computational procedure is analyzed and is tested for experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. 9. Long-range force and moment calculations in multiresolution simulations of molecular systems International Nuclear Information System (INIS) 2012-01-01 Multiresolution simulations of molecular systems such as DNAs, RNAs, and proteins are implemented using models with different resolutions, ranging from a fully atomistic model to coarse-grained molecules, or even to continuum level system descriptions. For such simulations, pairwise force calculation is a serious bottleneck which can impose a prohibitive amount of computational load on the simulation if not performed wisely. Herein, we approximate the resultant force due to long-range particle-body and body-body interactions applicable to multiresolution simulations. Since the resultant force does not necessarily act through the center of mass of the body, it creates a moment about the mass center. Although this potentially important torque is neglected in many coarse-grained models which only use particle dynamics to formulate the dynamics of the system, it should be calculated and used when coarse-grained simulations are performed in a multibody scheme. Herein, the approximation for this moment due to far-field particle-body and body-body interactions is also provided. 10.
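The particle-body reduction described in the entry above amounts to summing the per-particle forces and accumulating the moment they create about the body's center of mass; a minimal sketch (the far-field approximation itself is not reproduced, only the exact reduction it approximates):

```python
import numpy as np

def resultant_force_and_moment(positions, forces, masses):
    """Reduce per-particle forces acting on a coarse-grained body to a
    single resultant force plus the moment about the body's center of
    mass: the torque term that particle-only coarse-grained models drop
    but multibody schemes need."""
    positions = np.asarray(positions, float)
    forces = np.asarray(forces, float)
    masses = np.asarray(masses, float)
    com = (masses[:, None] * positions).sum(0) / masses.sum()
    F = forces.sum(0)                              # resultant force
    M = np.cross(positions - com, forces).sum(0)   # moment about the COM
    return F, M
```

A force couple (equal and opposite forces off the center of mass) gives a zero resultant force but a nonzero moment, which is exactly the information lost when the moment term is neglected.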
Study on spillover effect of copper futures between LME and SHFE using wavelet multiresolution analysis Institute of Scientific and Technical Information of China (English) 2007-01-01 Research on information spillover effects between financial markets remains active in the economic community. A Granger-type model has recently been used to investigate the spillover between the London Metal Exchange (LME) and the Shanghai Futures Exchange (SHFE); however, possible correlations between futures prices and returns on different time scales have been ignored. In this paper, wavelet multiresolution decomposition is used to investigate the spillover effects of copper futures returns between the two markets. The daily return time series are decomposed into 2^n (n = 1, ..., 6) frequency bands through wavelet multiresolution analysis. The correlation between the two markets is studied with the decomposed data. It is shown that the high-frequency detail components carry much more energy than the low-frequency smooth components. The relation between copper futures daily returns in LME and those in SHFE differs across time scales. The fluctuations of the copper futures daily returns in LME have a large effect on those in SHFE at the 32-day scale, but a small effect at high-frequency scales. There is also evidence that strong effects exist between LME and SHFE for monthly responses of the copper futures, but not for daily responses. 11. Real-time Multiresolution Crosswalk Detection with Walk Light Recognition for the Blind Directory of Open Access Journals (Sweden) ROMIC, K. 2018-02-01 Full Text Available Real-time image processing and object detection techniques have great potential to be applied in digital assistive tools for blind and visually impaired persons. In this paper, an algorithm for crosswalk detection and walk light recognition is proposed, with the main aim of helping a blind person when crossing the road.
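Multiresolution processing of the camera frames starts from an image pyramid in which each level halves the resolution; candidates found at a coarse level are then refined at finer ones. A minimal sketch (2x2 block averaging is an assumption here, not necessarily the construction used in the paper):

```python
import numpy as np

def pyramid(img, levels):
    """Build a fine-to-coarse pyramid by 2x2 block averaging; detection can
    run on a coarse level first and refine only where candidates appear."""
    levs = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        a = levs[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2  # crop to even size
        levs.append(a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levs
```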
The proposed algorithm is optimized to work in real time on portable devices using standard cameras. Images captured by the camera are processed while the person is moving, and a decision about the detected crosswalk is provided as output, along with information about the walk light if one is present. The crosswalk detection method is based on multiresolution morphological image processing, while the walk light recognition is performed by a proposed 6-stage algorithm. The main contributions of this paper are accurate crosswalk detection with short processing time, due to the multiresolution processing, and the recognition of walk lights covering only a small number of pixels in the image. The experiment is conducted using images from video sequences captured in realistic situations at crossings. The results show 98.3% correct crosswalk detections and 89.5% correct walk light recognitions, with an average processing speed of about 16 frames per second. 12. Digital Correlation based on Wavelet Transform for Image Detection International Nuclear Information System (INIS) Barba, L; Vargas, L; Torres, C; Mattos, L 2011-01-01 In this work, a method is presented for the optimization of digital correlators to improve feature detection in images, using the wavelet transform as well as subband filtering. An approach to wavelet-based image contrast enhancement is proposed in order to increase the performance of digital correlators. The multiresolution representation is employed to improve the high-frequency content of images, taking into account the input contrast measured in the original image. The energy of correlation peaks and the discrimination level of several objects are improved with this technique. To demonstrate the potential of the wavelet transform for extracting features, small objects inside reference images are detected successfully. 13. Inferring species richness and turnover by statistical multiresolution texture analysis of satellite imagery.
Directory of Open Access Journals (Sweden) Matteo Convertino Full Text Available BACKGROUND: The quantification of species richness and species turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management, and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel-intensity-based Shannon entropy for estimating species richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. METHODOLOGY/PRINCIPAL FINDINGS: We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected and its parameters estimated, or a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdfs via the Kullback-Leibler divergence (KL).
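What makes this subband comparison cheap is that the KL divergence between two such generalized Gaussian densities has a closed form (Do and Vetterli, 2002); summing it over subbands under the independence assumption gives the texture dissimilarity. A sketch, with the density parameterized as p(x) ∝ exp(−(|x|/α)^β):

```python
import math

def ggd_kl(a1, b1, a2, b2):
    """Closed-form KL divergence KL(p1 || p2) between zero-mean generalized
    Gaussians with scales a1, a2 and shapes b1, b2 (b = 2 is Gaussian)."""
    lg = math.lgamma
    return (math.log((b1 * a2) / (b2 * a1)) + lg(1.0 / b2) - lg(1.0 / b1)
            + (a1 / a2) ** b2 * math.exp(lg((b2 + 1.0) / b1) - lg(1.0 / b1))
            - 1.0 / b1)
```

For β1 = β2 = 2 this reduces to the familiar KL divergence between two zero-mean Gaussians.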
Species turnover, or [Formula: see text] diversity, is estimated using both this KL divergence and the difference in Shannon entropy. Additionally, we predict species 14. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure Science.gov (United States) Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S. 2014-08-01 Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crusts, are all characterized by large sizes and complex topologies, in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates that parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous.
The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver 15. Sparse PDF maps for non-linear multi-resolution image operations KAUST Repository 2012-11-01 We introduce a new type of multi-resolution image pyramid for high-resolution images called sparse pdf maps (sPDF-maps). Each pyramid level consists of a sparse encoding of continuous probability density functions (pdfs) of pixel neighborhoods in the original image. The encoded pdfs enable the accurate computation of non-linear image operations directly in any pyramid level with proper pre-filtering for anti-aliasing, without accessing higher or lower resolutions. The sparsity of sPDF-maps makes them feasible for gigapixel images, while enabling direct evaluation of a variety of non-linear operators from the same representation. We illustrate this versatility for antialiased color mapping, O(n) local Laplacian filters, smoothed local histogram filters (e.g., median or mode filters), and bilateral filters. © 2012 ACM. 16. Investigations of homologous disaccharides by elastic incoherent neutron scattering and wavelet multiresolution analysis Energy Technology Data Exchange (ETDEWEB) Magazù, S.; Migliardo, F. [Dipartimento di Fisica e di Scienze della Terra dell’, Università degli Studi di Messina, Viale F. S. D’Alcontres 31, 98166 Messina (Italy); Vertessy, B.G. 
[Institute of Enzymology, Hungarian Academy of Science, Budapest (Hungary); Caccamo, M.T., E-mail: maccamo@unime.it [Dipartimento di Fisica e di Scienze della Terra dell’, Università degli Studi di Messina, Viale F. S. D’Alcontres 31, 98166 Messina (Italy) 2013-10-16 Highlights: • Innovative multiresolution wavelet analysis of elastic incoherent neutron scattering. • Elastic incoherent neutron scattering measurements on homologous disaccharides. • EINS wavevector analysis. • EINS temperature analysis. - Abstract: In the present paper, the results of a wavevector and thermal analysis of Elastic Incoherent Neutron Scattering (EINS) data collected on water mixtures of three homologous disaccharides through a wavelet approach are reported. The wavelet analysis allows comparison of the spatial properties of the three systems in the wavevector range Q = 0.27 Å⁻¹ to 4.27 Å⁻¹. It emerges that, in contrast to previous analyses, the scalograms for trehalose are consistently lower and sharper than those for maltose and sucrose, giving rise to a markedly less extended global spectral density along the wavevector range. As far as the thermal analysis is concerned, the global scattered intensity profiles suggest a higher thermal restraint of trehalose with respect to the other two homologous disaccharides. 17. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models. Science.gov (United States) Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng 2016-09-01 Segmenting lung fields in a chest radiograph is essential for automatically analyzing an image. We present an unsupervised method based on a multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour. The final contour is obtained by deformable models.
The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergo additional extensive tests before a treatment plan is finalized. 18. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data Science.gov (United States) Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei 2013-08-01 We present a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR points based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with a simultaneous increase of cell resolution and residual threshold from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using a thin plate spline (TPS) until no further ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with that of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods. 19.
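A one-dimensional toy version conveys the iterative surface-based classification, though everything concrete here is a simplification: linear interpolation between per-cell minima replaces the thin plate spline, and the (cell size, residual threshold) pairs are invented for illustration; only the coarse-to-fine loop mirrors the MHC scheme.

```python
import numpy as np

def hierarchical_ground_filter(x, z, levels=((8.0, 0.2), (4.0, 0.3), (2.0, 0.5))):
    """1-D sketch: at each level, interpolate a provisional ground surface
    from per-cell block minima, then accept points whose residual from that
    surface is below the level's threshold.  `levels` holds hypothetical
    (cell_size, residual_threshold) pairs, coarse to fine."""
    ground = np.zeros_like(z, dtype=bool)
    for cell, thresh in levels:
        edges = np.arange(x.min(), x.max() + cell, cell)
        idx = np.digitize(x, edges)
        # seed surface: lowest point (likely bare ground) in each cell
        seeds = [np.argmin(np.where(idx == k, z, np.inf)) for k in np.unique(idx)]
        surf = np.interp(x, x[seeds], z[seeds])
        ground |= np.abs(z - surf) < thresh
    return ground
```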
Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation Science.gov (United States) Lacaze, Alberto; Meystel, Michael; Meystel, Alex 1994-01-01 This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR), which presents the ASR as a 'baby' -- that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experience of acting within a particular environment (we will call it an Astro-baby). The learning techniques are rooted in a recursive algorithm for the inductive generation of nested schemata, modeled on processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. This system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experience, while still maintaining minimal computational complexity, thanks to the system's multiresolutional nature. 20. Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation. Directory of Open Access Journals (Sweden) Najah Alsubaie Full Text Available Stain colour estimation is a prominent factor in the analysis pipeline of most histology image processing algorithms. Providing a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. This approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered uncorrelated data.
We conducted an extensive set of experiments to compare the proposed method to recent state-of-the-art methods and demonstrate the robustness of this approach using three different datasets of scanned slides, prepared in different labs using different scanners. 1. A multi-resolution approach to heat kernels on discrete surfaces KAUST Repository Vaxman, Amir 2010-07-26 Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two-dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM. 2. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network Science.gov (United States) Zhu, Aichun; Wang, Tian; Snoussi, Hichem 2018-03-01 This paper addresses the problems of graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model.
Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation. 3. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media Science.gov (United States) Barajas-Solano, D. A.; Tartakovsky, A. M. 2017-12-01 We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both the spatial resolution and the transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species. 4.
Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network Directory of Open Access Journals (Sweden) Aichun Zhu 2018-03-01 Full Text Available This paper addresses the problems of graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation. 5. Using wavelet multi-resolution nature to accelerate the identification of fractional order system International Nuclear Information System (INIS) Li Yuan-Lu; Meng Xiao; Ding Ya-Qing 2017-01-01 Because of the fractional order derivatives, the identification of a fractional order system (FOS) is more complex than that of an integral order system (IOS). In order to avoid high time consumption in the system identification, the least-squares method is used to find the other parameters by fixing the fractional derivative order. Thereafter, the optimal parameters of the system are found by varying the derivative order over an interval. In addition, the operational matrix of fractional order integration, combined with the multi-resolution nature of a wavelet, is used to accelerate the FOS identification, which is achieved by discarding the wavelet coefficients of high-frequency components of the input and output signals.
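The separation being exploited here, fixing the troublesome nonlinear parameter so the remaining ones become linear, then scanning the nonlinear one over an interval, is a generic trick. A hypothetical single-parameter illustration with the toy model y = c·x^α (not the paper's operational-matrix formulation):

```python
import numpy as np

def identify(x, y, orders):
    """For each candidate (fractional) order alpha, the linear coefficient c
    has a closed-form least-squares solution; the order with the smallest
    residual wins.  Toy model: y = c * x**alpha."""
    best = None
    for a in orders:
        basis = x ** a
        c = (basis @ y) / (basis @ basis)          # 1-D least squares
        res = np.linalg.norm(y - c * basis)
        if best is None or res < best[0]:
            best = (res, a, c)
    return best[1], best[2]
```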
Finally, the identification of some known fractional order systems and an elastic torsion system is used to verify the proposed method. (paper) 6. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast KAUST Repository She, James 2009-11-01 Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to high computational complexity due to its nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network. 7. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast KAUST Repository She, James; Ho, Pin-Han; Shihada, Basem 2009-01-01 Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase system scalability while tackling the multi-user channel diversity problem.
The power allocation and modulation selection problem, however, is subject to high computational complexity due to its nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network. 8. Multiresolution analysis of the spatiotemporal variability in global radiation observed by a dense network of 99 pyranometers Science.gov (United States) Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Witthuhn, Jonas; Macke, Andreas 2017-03-01 The time series of global radiation observed by a dense network of 99 autonomous pyranometers during the HOPE campaign around Jülich, Germany, are investigated with a multiresolution analysis based on the maximum overlap discrete wavelet transform and the Haar wavelet. For different sky conditions, typical wavelet power spectra are calculated to quantify the timescale dependence of variability in global transmittance. Distinctly higher variability is observed at all frequencies in the power spectra of global transmittance under broken-cloud conditions compared to clear, cirrus, or overcast skies. The spatial autocorrelation function, including its frequency dependence, is determined to quantify the degree of similarity of two time series measurements as a function of their spatial separation. Distances ranging from 100 m to 10 km are considered, and a rapid decrease of the autocorrelation function is found with increasing frequency and distance. For frequencies above 1/3 min⁻¹ and points separated by more than 1 km, variations in transmittance become completely uncorrelated.
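The per-timescale variability computation can be made concrete with a Haar decomposition. This sketch is the decimated orthonormal Haar DWT; the study itself uses the maximum overlap (undecimated) transform, but the scale-energy bookkeeping is the same:

```python
import numpy as np

def haar_dwt(x, levels):
    """Orthonormal Haar DWT: detail coefficients per dyadic scale plus the
    final smooth; len(x) must be divisible by 2**levels."""
    s, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        details.append((s[0::2] - s[1::2]) / np.sqrt(2.0))
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    return details, s

def scale_energies(x, levels):
    """Signal energy attributable to each timescale; with an orthonormal
    transform these energies sum to the total energy (Parseval)."""
    details, smooth = haar_dwt(x, levels)
    return [float(np.sum(d ** 2)) for d in details] + [float(np.sum(smooth ** 2))]
```

Comparing these per-scale energies across sky conditions is the wavelet power spectrum described above, up to the decimated-vs-undecimated distinction.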
A method is introduced to estimate the deviation between a point measurement and a spatially averaged value for a surrounding domain, which takes into account domain size and averaging period, and is used to explore the representativeness of a single pyranometer observation for its surrounding region. Two distinct mechanisms are identified which limit the representativeness: on the one hand, spatial averaging reduces variability and thus modifies the shape of the power spectrum; on the other hand, the correlation of variations of the spatially averaged field and a point measurement decreases rapidly with increasing temporal frequency. For a grid box of 10 km × 10 km and averaging periods of 1.5-3 h, the deviation of global transmittance between a point measurement and an area-averaged value depends on the prevailing sky conditions: 2.8 (clear), 1.8 (cirrus), 1.5 (overcast), and 4.2 % (broken 9. The wavelet transform and the suppression theory of binocular vision for stereo image compression Energy Technology Data Exchange (ETDEWEB) Reynolds, W.D. Jr [Argonne National Lab., IL (United States); Kenyon, R.V. [Illinois Univ., Chicago, IL (United States) 1996-08-01 In this paper, a method for the compression of stereo images is presented. The proposed scheme is a frequency domain approach based on the suppression theory of binocular vision. By using the information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, by which the subbands convey the necessary frequency domain information. 10. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches Science.gov (United States) Duchaineau, Mark 2001-06-01 Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors.
But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results. 11. 
Exploring a Multiresolution Modeling Approach within the Shallow-Water Equations Energy Technology Data Exchange (ETDEWEB) Ringler, Todd D.; Jacobsen, Doug; Gunzburger, Max; Ju, Lili; Duda, Michael; Skamarock, William 2011-11-01 The ability to solve the global shallow-water equations with a conforming, variable-resolution mesh is evaluated using standard shallow-water test cases. While the long-term motivation for this study is the creation of a global climate modeling framework capable of resolving different spatial and temporal scales in different regions, the process begins with an analysis of the shallow-water system in order to better understand the strengths and weaknesses of the approach developed herein. The multiresolution meshes are spherical centroidal Voronoi tessellations where a single, user-supplied density function determines the region(s) of fine- and coarse-mesh resolution. The shallow-water system is explored with a suite of meshes ranging from quasi-uniform resolution meshes, where the grid spacing is globally uniform, to highly variable resolution meshes, where the grid spacing varies by a factor of 16 between the fine and coarse regions. The potential vorticity is found to be conserved to within machine precision and the total available energy is conserved to within a time-truncation error. This result holds for the full suite of meshes, ranging from quasi-uniform to highly variable resolution meshes. Based on shallow-water test cases 2 and 5, the primary conclusion of this study is that solution error is controlled primarily by the grid resolution in the coarsest part of the model domain. This conclusion is consistent with results obtained by others. When these variable-resolution meshes are used for the simulation of an unstable zonal jet, the core features of the growing instability are found to be largely unchanged as the variation in the mesh resolution increases.
The main differences between the simulations occur outside the region of mesh refinement, and these differences are attributed to the additional truncation error that accompanies increases in grid spacing. Overall, the results demonstrate support for this approach as a path toward 12. Knowledge Guided Disambiguation for Large-Scale Scene Classification With Multi-Resolution CNNs Science.gov (United States) Wang, Limin; Guo, Sheng; Huang, Weilin; Xiong, Yuanjun; Qiao, Yu 2017-04-01 Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to recent large-scale scene datasets such as the Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, thus leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse resolution CNNs and fine resolution CNNs, which are complementary to each other. Second, we design two knowledge guided disambiguation techniques to deal with the problem of label ambiguity. (i) We exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category. (ii) We utilize the knowledge of extra networks to produce a soft label for each image. Then the super categories or soft labels are employed to guide CNN training on the Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach.
Furthermore, our method took part in two major scene recognition challenges, achieving second place at the Places2 challenge in ILSVRC 2015 and first place at the LSUN challenge in CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks, and obtain the new state-of-the-art results on MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition. 13. Multi-resolution simulation of focused ultrasound propagation through ovine skull from a single-element transducer Science.gov (United States) Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik 2018-05-01 Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphics processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls at different sonication incident angles.
The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in focal position and dimension, less than 5% difference in peak pressure ratio) at a speed compatible with semi-real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance of tFUS applications.

14. Detection of pulmonary nodules on lung X-ray images. Studies on multi-resolutional filter and energy subtraction images
International Nuclear Information System (INIS)
Sawada, Akira; Sato, Yoshinobu; Kido, Shoji; Tamura, Shinichi
1999-01-01
The purpose of this work is to prove the effectiveness of an energy subtraction image for the detection of pulmonary nodules, and the effectiveness of a multi-resolutional filter on an energy subtraction image to detect pulmonary nodules. We also study factors influencing the accuracy of detection of pulmonary nodules from the viewpoints of types of images, types of digital filters and types of evaluation methods. As one type of image, we select the energy subtraction image, which removes bones such as ribs from the conventional X-ray image by utilizing the difference in X-ray absorption ratios between bones and soft tissue at different energies. Ribs and vessels are major causes of CAD errors in the detection of pulmonary nodules, and many studies have tried to solve this problem. We therefore select conventional X-ray images and energy subtraction X-ray images as the types of images, and at the same time select the ∇²G (Laplacian of Gaussian) filter, the Min-DD (minimum directional difference) filter and our multi-resolutional filter as the types of digital filters.
We also select two evaluation methods, and prove the effectiveness of an energy subtraction image, the effectiveness of the Min-DD filter on a conventional X-ray image, and the effectiveness of the multi-resolutional filter on an energy subtraction image. (author)

15. Crack Identification in CFRP Laminated Beams Using Multi-Resolution Modal Teager–Kaiser Energy under Noisy Environments
Science.gov (United States)
Xu, Wei; Cao, Maosen; Ding, Keqin; Radzieński, Maciej; Ostachowicz, Wiesław
2017-01-01
Carbon fiber reinforced polymer (CFRP) laminates are increasingly used in the aerospace and civil engineering fields. Identifying cracks in CFRP laminated beam components is of considerable significance for ensuring the integrity and safety of whole structures. With the development of high-resolution measurement technologies, mode-shape-based crack identification in such laminated beam components has become an active research focus. Despite its sensitivity to cracks, however, this method is susceptible to noise. To address this deficiency, this study proposes a new concept of multi-resolution modal Teager–Kaiser energy, i.e., the Teager–Kaiser energy of a mode shape represented at multiple resolutions, for identifying cracks in CFRP laminated beams. The efficacy of this concept is analytically demonstrated by identifying cracks in Timoshenko beams with general boundary conditions, and its applicability is validated by diagnosing cracks in a CFRP laminated beam whose mode shapes are precisely acquired via non-contact measurement using a scanning laser vibrometer. The analytical and experimental results show that multi-resolution modal Teager–Kaiser energy is capable of designating the presence and location of cracks in these beams under noisy environments. The proposed method holds promise for developing crack identification systems for CFRP laminates. PMID:28773016

16.
Coresident sensor fusion and compression using the wavelet transform
Energy Technology Data Exchange (ETDEWEB)
Yocky, D.A.
1996-03-11
Imagery from coresident sensor platforms, such as unmanned aerial vehicles, can be combined using multiresolution decomposition of the sensor images by means of the two-dimensional wavelet transform. The wavelet approach uses the combination of spatial/spectral information at multiple scales to create a fused image. This can be done in either an ad hoc or a model-based approach. We compare results from commercial fusion software and the ad hoc wavelet approach. Results show the wavelet approach outperforms the commercial algorithms and also supports efficient compression of the fused image.

17. Global Multi-Resolution Topography (GMRT) Synthesis - Recent Updates and Developments
Science.gov (United States)
Ferrini, V. L.; Morton, J. J.; Celnick, M.; McLain, K.; Nitsche, F. O.; Carbotte, S. M.; O'hara, S. H.
2017-12-01
The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of elevation data that is maintained in Mercator, South Polar, and North Polar projections. GMRT consists of four independently curated elevation components: (1) quality-controlled multibeam data (~100 m resolution), (2) contributed high-resolution gridded bathymetric data (0.5-200 m resolution), (3) ocean basemap data (~500 m resolution), and (4) variable-resolution land elevation data (to 10-30 m resolution in places). Each component is managed and updated as new content becomes available, with two scheduled releases each year. The ocean basemap content for GMRT includes the International Bathymetric Chart of the Arctic Ocean (IBCAO), the International Bathymetric Chart of the Southern Ocean (IBCSO), and GEBCO 2014. Most curatorial effort for GMRT is focused on the swath bathymetry component, with an emphasis on data from the US Academic Research Fleet.
As of July 2017, GMRT includes data processed and curated by the GMRT team from 974 research cruises, covering over 29 million square kilometers (~8%) of the seafloor at 100 m resolution. The curated swath bathymetry data from GMRT are routinely contributed to international data synthesis efforts, including GEBCO and IBCSO. Additional curatorial effort is associated with gridded data contributions from the international community and ensures that these data are well blended in the synthesis. Significant new additions to the gridded data component this year include the recently released data from the search for MH370 (Geoscience Australia) as well as a large high-resolution grid from the Gulf of Mexico derived from 3D seismic data (US Bureau of Ocean Energy Management). Recent developments in functionality include the deployment of a new Polar GMRT MapTool, which enables users to export custom grids and map images in polar projection for their selected area of interest at the resolution of their choosing. Available for both

18. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability
Science.gov (United States)
Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.
2018-04-01
This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions.
We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in the context of variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse 1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to 0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model ('hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree with PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistently with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too-weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent

19.
Accurate convolution/superposition for multi-resolution dose calculation using cumulative tabulated kernels
International Nuclear Information System (INIS)
Lu, Weiguo; Olivera, Gustavo H.; Chen, Mingli; Reckwerdt, Paul J.; Mackie, Thomas R.
2005-01-01
Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S can result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed-cone C/S: how to utilize tabulated kernels instead of analytical parametrizations, and how to deal with voxel size effects. Three methods that utilize tabulated kernels are presented. They differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For the simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transport only. Simulations with voxel sizes up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold-standard dose. Real dose calculations use a heterogeneous slab phantom and both the 'broad' (5 × 5 cm²) and the 'narrow' (1.2 × 1.2 cm²) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for the dose calculations. The results show that all three algorithms have negligible differences (0.1%) for dose calculation at fine resolution (0.5 mm voxels), but the differences become significant when the voxel size increases.
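The advantage of cumulative kernel tables over differential (point-sampled) ones can be seen in a toy 1-D deposition problem. The exponential kernel and the voxel grid below are assumptions for illustration; the paper's kernels and collapsed-cone geometry are far more involved:

```python
import numpy as np

# Toy kernel k(r) = exp(-r); its cumulative table is K(r) = 1 - exp(-r).
r_tab = np.linspace(0.0, 10.0, 10001)
K_tab = 1.0 - np.exp(-r_tab)

edges = np.arange(0.0, 4.1, 1.0)                        # coarse (1 cm-like) voxels
true = np.exp(-edges[:-1]) - np.exp(-edges[1:])         # exact per-voxel integral

# DK-style: sample the differential kernel at the voxel center, times width.
centers = 0.5 * (edges[:-1] + edges[1:])
h = np.diff(edges)
dose_dk = np.exp(-centers) * h

# CK-style: difference the tabulated cumulative kernel at the voxel edges,
# which integrates the kernel over the voxel regardless of its size.
dose_ck = (np.interp(edges[1:], r_tab, K_tab)
           - np.interp(edges[:-1], r_tab, K_tab))

err_dk = np.abs(dose_dk - true).max()
err_ck = np.abs(dose_ck - true).max()
```

With coarse voxels the center-sampled (DK) deposition misestimates the integral wherever the kernel is strongly curved, while the cumulative table stays accurate at any voxel size — the voxel-size effect the entry describes.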
For the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose; for the CCK algorithm, the corresponding differences are around 1% of the maximum dose. Among all three methods, the CCK algorithm

20. Transforming How Climate System Models are Used: A Global, Multi-Resolution Approach to Regional Ocean Modeling
Energy Technology Data Exchange (ETDEWEB)
Gunzburger, Max
2013-03-14
We review the results obtained under grant support. Details are given in the publications listed at the end of the review. We also provide lists of the personnel funded by the grant, of other collaborators on grant-related research, and of the talks delivered under grant-related research. We collaborated closely with geophysicists at the Los Alamos National Laboratory and the National Center for Atmospheric Research; especially noteworthy is our collaboration with Todd Ringler of LANL, who was an active partner in much of our work.

1. A Precise Lane Detection Algorithm Based on Top View Image Transformation and Least-Square Approaches
Directory of Open Access Journals (Sweden)
Byambaa Dorj
2016-01-01
Full Text Available: The next promising key issue of automobile development is self-driving technology. One of the challenges for intelligent self-driving is a lane-detecting and lane-keeping capability for advanced driver assistance systems. This paper introduces an efficient lane detection method based on top view image transformation, which converts an image from a front view to a top view space. After the top view image transformation, a Hough transformation technique is integrated, using a parabolic model of a curved lane, in order to estimate a parametric model of the lane in the top view space.
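A minimal sketch of the least-squares parabola fit used for such a lane model follows. The synthetic points and coefficients are assumptions for illustration; the paper estimates the parameters from Hough-selected points in the top-view image:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic top-view lane points: x is distance ahead, y is lateral offset.
a, b, c = 0.002, -0.1, 1.5                  # assumed lane parabola y = a*x^2 + b*x + c
x = np.linspace(0.0, 100.0, 60)
y = a * x**2 + b * x + c + rng.normal(scale=0.05, size=x.size)  # noisy detections

# Least squares on the design matrix [x^2, x, 1].
A = np.column_stack([x**2, x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

`coef` recovers `(a, b, c)` up to the noise level; curvature `a` then characterizes how sharply the lane bends, which is what the self-driving controller consumes.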
The parameters of the parabolic model are estimated by a least-squares approach. The experimental results show that the newly proposed lane detection method with the top view transformation is very effective in estimating sharp and curved lanes, leading to a precise self-driving capability.

2. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution
KAUST Repository
Tagle, Felipe
2017-12-06
Large, non-Gaussian spatial datasets pose a considerable modeling challenge, as the dependence structure implied by the model needs to be captured at different scales while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions and simultaneously achieve feasible inference for increasingly large datasets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow a multivariate skew-normal distribution. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing the skewness and heavy tails that characterize many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.

3. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data
Science.gov (United States)
Zhao, Yize; Kang, Jian; Long, Qi
2018-01-01
Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data.
For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of autism spectrum disorder (ASD) using high-resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Our model also incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting-state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers that are highly predictive of ASD and are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

4. Developing a real-time emulation of multiresolutional control architectures for complex, discrete-event systems
Energy Technology Data Exchange (ETDEWEB)
Davis, W.J.; Macro, J.G.; Brook, A.L. [Univ. of Illinois, Urbana, IL (United States)] [and others]
1996-12-31
This paper first discusses an object-oriented control architecture and then applies the architecture to produce a real-time software emulator for the Rapid Acquisition of Manufactured Parts (RAMP) flexible manufacturing system (FMS).
In specifying the control architecture, the coordinated object is first defined as the primary modeling element. These coordinated objects are then integrated into a Recursive, Object-Oriented Coordination Hierarchy. A new simulation methodology, the Hierarchical Object-Oriented Programmable Logic Simulator, is then employed to model the interactions among the coordinated objects. The final step in implementing the emulator is to distribute the models of the coordinated objects over a network of computers and to synchronize their operation to a real-time clock. The paper then introduces the Hierarchical Subsystem Controller as an intelligent controller for the coordinated object. The proposed approach to intelligent control is compared to the concept of multiresolutional semiosis developed by Dr. Alex Meystel. Finally, plans for implementing an intelligent controller for the RAMP FMS are discussed.

5. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Hoa T. [Univ. of Utah, Salt Lake City, UT (United States)]; Stone, Daithi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]; Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]
2016-01-01
An ongoing challenge in the visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of the data, or projections of the data, or both. These approaches still have limitations, such as requiring heavy computation or introducing errors. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in the data, namely variation.
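The contrast between an averaging-based reduction and a variation-preserving statistical reduction can be sketched on synthetic data (illustrative of the idea only; the paper's datasets and statistics differ):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=0.0, scale=1.0, size=1000)   # a noisy field, std ≈ 1

blocks = data.reshape(100, 10)      # reduce resolution by a factor of 10
block_mean = blocks.mean(axis=1)    # averaging-based reduction
block_std = blocks.std(axis=1)      # variation-preserving statistic

full_std = data.std()
```

Block averaging smooths the variability away (the standard deviation of the block means drops toward 1/sqrt(10) of the original), whereas carrying a per-block variability statistic retains the magnitude of local variation in the reduced representation.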
We use two different case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in the data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.

6. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry
Science.gov (United States)
Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek
2014-09-21
Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too coarse to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits to the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of radiation transport through the structures of the eye. Two alternative methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.

7.
On the use of adaptive multiresolution method with time-varying tolerance for compressible fluid flows
Science.gov (United States)
2017-12-01
In this paper, a fully adaptive multiresolution (MR) finite difference scheme with a time-varying tolerance is developed to study compressible fluid flows containing shock waves in interaction with solid obstacles. To ensure adequate resolution near rigid bodies, the MR algorithm is combined with an immersed boundary method based on a direct-forcing approach in which the solid object is represented by a continuous solid-volume fraction. The resulting algorithm forms an efficient tool capable of solving linear and nonlinear waves on arbitrary geometries. Through a one-dimensional scalar wave equation, the accuracy of the MR computation is, as expected, seen to decrease in time when using a constant MR tolerance, owing to the accumulation of error. To overcome this problem, a variable tolerance formulation is proposed and assessed through a new quality criterion, to ensure a time-convergent solution of suitable quality. The newly developed algorithm, coupled with high-resolution spatial and temporal approximations, is successfully applied to shock-bluff body and shock-diffraction problems, solving the Euler and Navier-Stokes equations. Results show excellent agreement with the available numerical and experimental data, demonstrating the efficiency and performance of the proposed method.

8. Rule-based land cover classification from very high-resolution satellite image with multiresolution segmentation
Science.gov (United States)
Haque, Md. Enamul; Al-Ramadan, Baqer; Johnson, Brian A.
2016-07-01
Multiresolution segmentation and rule-based classification techniques are used to classify objects from very high-resolution satellite images of urban areas.
Custom rules are developed using different spectral, geometric, and textural features with five scale parameters, which yield varying classification accuracy. Principal component analysis is used to select the most important features out of a total of 207 different features. In particular, seven different object types are considered for classification. The overall classification accuracy achieved for the rule-based method is 95.55% and 98.95% for seven and five classes, respectively. Other classifiers that do not use rules perform at 84.17% and 97.3% accuracy for seven and five classes, respectively. The results exploit coarse segmentation for higher scale parameters and fine segmentation for lower scale parameters. The major contribution of this research is the development of rule sets and the identification of major features for satellite image classification, where the rule sets are transferable and the parameters are tunable for different types of imagery. Additionally, the individual object-wise classification and principal component analysis help to identify the required object from an arbitrary number of objects within images, given ground truth data for training.

9. Automatic multiresolution age-related macular degeneration detection from fundus images
Science.gov (United States)
Garnier, Mickaël; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida
2014-03-01
Age-related macular degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly; therefore early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow early detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using multiresolution texture analysis.
We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described by the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality, captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost that are in agreement with medical experts, as well as robustness to both image quality and fundus camera model.

10. Paraxial diffractive elements for space-variant linear transforms
Science.gov (United States)
Teiwes, Stephan; Schwarzer, Heiko; Gu, Ben-Yuan
1998-06-01
Optical linear transform architectures bear good potential for future developments of very powerful hybrid vision systems and neural network classifiers. The optical modules of such systems could be used as pre-processors to solve complex linear operations at very high speed, in order to simplify electronic data post-processing. However, the applicability of linear optical architectures is strongly connected with the fundamental question of how to implement a specific linear transform by optical means within physical limitations. The large majority of publications on this topic focus on the optical implementation of space-invariant transforms by the well-known 4f-setup. Only a few papers deal with approaches to implement selected space-variant transforms.
In this paper, we propose a simple algebraic method to design diffractive elements for an optical architecture in order to realize arbitrary space-variant transforms. The design procedure is based on a digital model of scalar, paraxial wave theory and leads to optimal element transmission functions within the model. Its computational and physical limitations are discussed in terms of complexity measures. Finally, the design procedure is demonstrated by some examples. First, diffractive elements for the realization of different rotation operations are computed; second, a Hough transform element is presented. The correct optical functions of the elements are proved in computer simulation experiments.

11. Experimental Evaluation of Integral Transformations for Engineering Drawings Vectorization
Directory of Open Access Journals (Sweden)
2014-12-01
Full Text Available: The concept of digital manufacturing supposes the application of digital technologies across the whole product life cycle. Direct digital manufacturing includes information technology processes in which products are manufactured directly from a 3D CAD model. In digital manufacturing, the engineering drawing is replaced by the CAD product model. In contemporary practice, however, many engineering paper-based drawings are still archived. These can be digitalized by scanner, stored in a raster graphics format, and then vectorized for interactive editing in specific software systems for technical drawing, or archived in a standard vector graphics file format. The vector format is also suitable for generating 3D models. The article deals with the use of selected integral transformations (Fourier, Hough) in the vectorization phase of digitalized raster engineering drawings.

12.
Identifying Spatial Units of Human Occupation in the Brazilian Amazon Using Landsat and CBERS Multi-Resolution Imagery
OpenAIRE
Dal'Asta, Ana Paula; Brigatti, Newton; Amaral, Silvana; Escada, Maria Isabel Sobral; Monteiro, Antonio Miguel Vieira
2012-01-01
Every spatial unit of human occupation is part of a network structuring an extensive process of urbanization in the Amazon territory. Multi-resolution remote sensing data were used to identify and map human presence and activities in the Sustainable Forest District of the Cuiabá-Santarém highway (BR-163), west of Pará, Brazil. The limits of spatial units of human occupation were mapped based on digital classification of a Landsat-TM5 (Thematic Mapper 5) image (30 m spatial resolution). High-spatial-...

13. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems
Science.gov (United States)
Kim, Won Hwa; Chung, Moo K.; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that the analysis take a multi-resolution view of the shape's local and global topology, and that the solution be consistent across multiple scales. Unfortunately, Wavelets, the preferred mathematical construct offering this behavior in classical image/signal processing, are no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to a uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis to derive Non-Euclidean Wavelet-based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual-domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices.
With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with the state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

14. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection - a first study
Science.gov (United States)
Hu, D.; Sarder, P.; Ronhovde, P.; Orthaus, S.; Achilefu, S.; Nussinov, Z.
2014-01-01
Inspired by a multiresolution community-detection-based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network, defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to decrease consistently with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral-clustering-based method at FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output and was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors. Journal of Microscopy © 2013 Royal Microscopical Society.

15.
The multi-resolution capability of Tchebichef moments and its applications to the analysis of fluorescence excitation-emission spectra
Science.gov (United States)
Li, Bao Qiong; Wang, Xue; Li Xu, Min; Zhai, Hong Lin; Chen, Jing; Liu, Jin Jin
2018-01-01
Fluorescence spectroscopy with an excitation-emission matrix (EEM) is a fast and inexpensive technique that has been applied to the detection of a very wide range of analytes. However, serious scattering and overlapping signals hinder the applications of EEM spectra. In this contribution, the multi-resolution capability of Tchebichef moments was investigated in depth and applied for the first time to the analysis of two EEM datasets (dataset 1 consisted of valine-tyrosine-valine, tryptophan-glycine and phenylalanine; dataset 2 included vitamin B1, vitamin B2 and vitamin B6). By means of Tchebichef moments of different orders, different information in the EEM spectra can be represented. It is owing to this multi-resolution capability that the overlapping problem was solved and the information of chemicals and scatterings was separated. The obtained results demonstrate that the Tchebichef moment method is very effective, providing a promising tool for the analysis of EEM spectra. It is expected that applications of the Tchebichef moment method could be developed and extended to complex systems such as biological fluids, food and the environment, to deal with practical problems (overlapped peaks, unknown interferences, baseline drifts, and so on) in other spectra.

16.
Discrete Fourier and wavelet transforms an introduction through linear algebra with applications to signal processing CERN Document Server Goodman, Roe W 2016-01-01 This textbook for undergraduate mathematics, science, and engineering students introduces the theory and applications of discrete Fourier and wavelet transforms using elementary linear algebra, without assuming prior knowledge of signal processing or advanced analysis. It explains how to use the Fourier matrix to extract frequency information from a digital signal and how to use circulant matrices to emphasize selected frequency ranges. It introduces discrete wavelet transforms for digital signals through the lifting method and illustrates through examples and computer explorations how these transforms are used in signal and image processing. Then the general theory of discrete wavelet transforms is developed via the matrix algebra of two-channel filter banks. Finally, wavelet transforms for analog signals are constructed based on filter bank results already presented, and the mathematical framework of multiresolution analysis is examined. 17. A DTM MULTI-RESOLUTION COMPRESSED MODEL FOR EFFICIENT DATA STORAGE AND NETWORK TRANSFER Directory of Open Access Journals (Sweden) L. Biagi 2012-08-01 Full Text Available In recent years the technological evolution of terrestrial, aerial and satellite surveying has considerably increased the measurement accuracy and, consequently, the quality of the derived information. At the same time, the smaller and smaller limitations on data storage devices, in terms of capacity and cost, have allowed the storage and processing of an ever larger number of instrumental observations. A significant example is the terrain height surveyed by LIDAR (LIght Detection And Ranging) technology, where several height measurements for each square meter of land can be obtained.
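The lifting construction mentioned in the textbook entry above (item 16) can be made concrete with the Haar wavelet, the simplest lifting scheme: a predict step forms pairwise differences, and an update step forms pairwise means. The signal below is arbitrary:

```python
import numpy as np

def haar_lifting_forward(x):
    # One level of the Haar DWT via lifting: predict (difference), then update (mean).
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even            # predict step: detail coefficients
    s = even + d / 2          # update step: approximation (pairwise means)
    return s, d

def haar_lifting_inverse(s, d):
    # Undo the two lifting steps in reverse order.
    even = s - d / 2
    odd = d + even
    out = np.empty(2 * len(s))
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 5.0, 1.0, 3.0])
s, d = haar_lifting_forward(x)
print(s)  # [3. 7. 5. 2.]  pairwise means (coarse signal)
print(d)  # [2. 2. 0. 2.]  pairwise differences (detail)
print(np.allclose(haar_lifting_inverse(s, d), x))  # True
```

Applying the forward step recursively to `s` yields the multi-level decomposition; the inverse steps are exact by construction, which is the appeal of the lifting method.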
The availability of such a large quantity of observations is an essential requisite for an in-depth knowledge of the phenomena under study. But, at the same time, the most common Geographical Information Systems (GISs) show latency in visualizing and analyzing these kinds of data. This problem becomes more evident in the case of Internet GIS. These systems are based on the very frequent flow of geographical information over the internet and, for this reason, the bandwidth of the network and the size of the data to be transmitted are two fundamental factors to be considered in order to guarantee the actual usability of these technologies. In this paper we focus our attention on digital terrain models (DTMs) and we briefly analyse the problem of defining the minimal information necessary to store and transmit DTMs over a network, with a fixed tolerance, starting from a huge number of observations. Then we propose an innovative compression approach for sparse observations by means of multi-resolution spline function approximation. The method is able to provide metrical accuracy at least comparable to that provided by the most common deterministic interpolation algorithms (inverse distance weighting, local polynomial, radial basis functions). At the same time it dramatically reduces the amount of information required for storing, transmitting and rebuilding a 18. Weighted least squares phase unwrapping based on the wavelet transform Science.gov (United States) Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia 2007-01-01 The weighted least squares phase unwrapping algorithm is a robust and accurate method to solve the phase unwrapping problem. This method usually leads to a large sparse linear equation system. The Gauss-Seidel relaxation iterative method is usually used to solve this large linear system. However, this method is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm to improve the convergence rate.
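The slow convergence of Gauss-Seidel relaxation that motivates the multigrid and wavelet accelerations discussed in the phase unwrapping entry above can be reproduced on a toy 1-D Poisson problem (a stand-in for the 2-D unwrapping system; the grid size and right-hand side are chosen only for illustration):

```python
import numpy as np

def gauss_seidel_poisson(rhs, n_sweeps):
    # Gauss-Seidel sweeps for the 1-D Poisson equation -u'' = f with zero
    # Dirichlet boundaries: u[i] <- (u[i-1] + u[i+1] + h^2 f[i]) / 2, in place.
    u = np.zeros_like(rhs)
    for _ in range(n_sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + rhs[i])
    return u

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u_exact = np.sin(np.pi * x)                      # -u'' = pi^2 sin(pi x)
rhs = (np.pi ** 2) * np.sin(np.pi * x) * h ** 2  # pre-scaled by h^2 for the stencil
errs = [np.max(np.abs(gauss_seidel_poisson(rhs, sweeps) - u_exact))
        for sweeps in (10, 100, 1000)]
print(errs)  # decreasing, but still far from converged after 1000 sweeps
```

The smooth error mode decays at a rate of roughly cos^2(pi*h) per sweep, so even a thousand sweeps leave a large residual on a 65-point grid; coarse-grid (multigrid) or wavelet-level corrections remove exactly these slow smooth modes.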
However, this method needs an additional weight restriction operator, which is very complicated. For this reason, the multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels and an equivalent equation system with a better convergence condition can be obtained. Fast convergence in the separate coarse resolution levels speeds up the overall system convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method. 19. Towards discrete wavelet transform-based human activity recognition Science.gov (United States) Khare, Manish; Jeon, Moongu 2017-06-01 Providing accurate recognition of human activities is a challenging problem for visual surveillance applications. In this paper, we present a simple and efficient algorithm for human activity recognition based on a wavelet transform. We adopt discrete wavelet transform (DWT) coefficients as features of human objects to exploit the advantages of its multiresolution approach. The proposed method is tested on multiple levels of DWT. Experiments are carried out on different standard action datasets including KTH and i3D Post. The proposed method is compared with other state-of-the-art methods in terms of different quantitative performance measures. The proposed method is found to have better recognition accuracy in comparison to the state-of-the-art methods. 20. Automatic detection of NEOs in CCD images using the Hough transform (Detección automática de NEOs en imágenes CCD utilizando la transformada de Hough) Science.gov (United States) Ruétalo, M.; Tancredi, G. Interest in and dedication to objects that approach the Earth's orbit (NEOs) have grown considerably in recent years, so much so that several systematic search campaigns have been launched to enlarge the identified population of these objects.
The use of photographic plates and visual identification is progressively being replaced by CCD cameras and software packages for the automatic detection of objects in digital images. A very important part of the successful implementation of an automated detection program of this kind is the development of algorithms able to identify objects with a low signal-to-noise ratio at modest computational cost. In the present work we propose using the Hough transform (employed in some areas of computer vision) to automatically detect approximately rectilinear, low signal-to-noise trails in CCD images. We developed a first implementation of an algorithm based on this transform and tested it on a series of real images containing trails with signal peaks between ~1 σ and ~3 σ above the background noise level. The algorithm detects most of these cases without difficulty and in reasonably adequate times. 1. Biomolecular surface construction by PDE transform. Science.gov (United States) Zheng, Qiong; Yang, Siyang; Wei, Guo-Wei 2012-03-01 virus surface capsid. Virus surface morphologies of different resolutions are attained by adjusting the propagation time. Therefore, the present PDE transform provides a multiresolution analysis in the surface visualization. Extensive numerical experiments and comparisons with an established surface model indicate that the present PDE transform is a robust, stable, and efficient approach for biomolecular surface generation in Cartesian meshes. Copyright © 2012 John Wiley & Sons, Ltd. 2. Transformative Learning Science.gov (United States) Wang, Victor C. X.; Cranton, Patricia 2011-01-01 The theory of transformative learning has been explored by different theorists and scholars. However, few scholars have made an attempt to make a comparison between transformative learning and Confucianism or between transformative learning and andragogy.
The authors of this article address these comparisons to develop new and different insights… 3. Identifying Spatial Units of Human Occupation in the Brazilian Amazon Using Landsat and CBERS Multi-Resolution Imagery Directory of Open Access Journals (Sweden) 2012-01-01 Full Text Available Every spatial unit of human occupation is part of a network structuring an extensive process of urbanization in the Amazon territory. Multi-resolution remote sensing data were used to identify and map human presence and activities in the Sustainable Forest District of the Cuiabá-Santarém highway (BR-163), west of Pará, Brazil. The limits of spatial units of human occupation were mapped based on digital classification of Landsat-TM5 (Thematic Mapper 5) imagery (30 m spatial resolution). High-spatial-resolution CBERS-HRC (China-Brazil Earth Resources Satellite High-Resolution Camera) images (5 m) merged with CBERS-CCD (Charge Coupled Device) images (20 m) were used to map spatial arrangements inside each populated unit, describing intra-urban characteristics. Fieldwork data validated and refined the classification maps that supported the categorization of the units. A total of 133 spatial units were individualized, comprising population centers such as municipal seats, villages and communities, and units of human activities, such as sawmills, farmhouses, landing strips, etc. From the high-resolution analysis, 32 population centers were grouped in four categories, described according to their level of urbanization and spatial organization as: structured, recent, established and dependent on connectivity. This multi-resolution approach provided spatial information about the urbanization process and organization of the territory. It may be extended into other areas or be further used to devise a monitoring system, contributing to the discussion of public policy priorities for sustainable development in the Amazon. 4.
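The trail detection idea of the NEO entry above (item 20) can be sketched with a minimal numpy Hough transform: every "on" pixel votes for all (rho, theta) lines passing through it, and a straight trail becomes a peak in the accumulator. The synthetic image, bin counts, and noise level below are invented for illustration:

```python
import numpy as np

def hough_lines(binary, n_theta=180, n_rho=None):
    # Accumulate votes in (rho, theta) space for every "on" pixel, using
    # the normal parametrization rho = x cos(theta) + y sin(theta).
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))
    if n_rho is None:
        n_rho = 2 * diag + 1
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    ys, xs = np.nonzero(binary)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    for t_idx, th in enumerate(thetas):
        rho = xs * np.cos(th) + ys * np.sin(th)    # signed distance per pixel
        r_idx = np.round(rho).astype(int) + diag   # shift into [0, 2*diag]
        np.add.at(acc, (r_idx, np.full_like(r_idx, t_idx)), 1)
    return acc, thetas, diag

# A synthetic horizontal trail (y = 10) plus sparse noise pixels:
img = np.zeros((64, 64), dtype=bool)
img[10, :] = True
rng = np.random.default_rng(1)
img[rng.integers(0, 64, 30), rng.integers(0, 64, 30)] = True
acc, thetas, diag = hough_lines(img)
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(r_idx - diag, np.degrees(thetas[t_idx]))  # peak at rho = 10, theta = 90 deg
```

The 64 collinear trail pixels out-vote any alignment of the 30 noise pixels, which is why the method tolerates low signal-to-noise trails once the image is thresholded.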
A multiresolution spatial parametrization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions. Energy Technology Data Exchange (ETDEWEB) Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet [Carnegie Institution for Science, Stanford, CA; Michalak, Anna M. [Carnegie Institution for Science, Stanford, CA; van Bloemen Waanders, Bart Gustaaf [Sandia National Laboratories, Albuquerque, NM; McKenna, Sean Andrew [IBM Research, Mulhuddart, Dublin 15, Ireland 2013-04-01 The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic data inversion to test its suitability for use in atmospheric inverse problem. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions i.e., to limit ffCO2 emissions within an irregularly shaped region (the United States, in our case). 
We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a 10% computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization. 5. Backlund transformations as canonical transformations International Nuclear Information System (INIS) Villani, A.; Zimerman, A.H. 1977-01-01 Toda and Wadati as well as Kodama and Wadati have shown that the Backlund transformations for the exponential lattice equation, sine-Gordon equation, K-dV (Korteweg de Vries) equation and modified K-dV equation are canonical transformations. It is shown that the Backlund transformations for the Boussinesq equation, for a generalized K-dV equation, for a model equation for shallow water waves and for the nonlinear Schroedinger equation are also canonical transformations. 6. Steerable dyadic wavelet transform and interval wavelets for enhancement of digital mammography Science.gov (United States) Laine, Andrew F.; Koren, Iztok; Yang, Wuhai; Taylor, Fred J. 1995-04-01 This paper describes two approaches for accomplishing interactive feature analysis by overcomplete multiresolution representations. We show quantitatively that transform coefficients, modified by an adaptive non-linear operator, can make unseen or barely seen features of mammograms more obvious without requiring additional radiation. Our results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. We design a filter bank representing a steerable dyadic wavelet transform that can be used for multiresolution analysis along arbitrary orientations.
Digital mammograms are enhanced by orientation analysis performed by a steerable dyadic wavelet transform. Arbitrary regions of interest (ROI) are enhanced by Deslauriers-Dubuc interpolation representations on an interval. We demonstrate that our methods can provide radiologists with an interactive capability to support localized processing of selected (suspicion) areas (lesions). Features extracted from multiscale representations can provide an adaptive mechanism for accomplishing local contrast enhancement. Improving the visualization of breast pathology can improve the chances of early detection while requiring less time to evaluate mammograms for most patients. 7. The gridding method for image reconstruction by Fourier transformation International Nuclear Information System (INIS) Schomberg, H.; Timmer, J. 1995-01-01 This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f̂. The method involves a window function w and proceeds in three steps. First, the convolution ĝ = ŵ * f̂ is computed numerically on a Cartesian grid, using the available samples of f̂. Then, g = wf is computed via the inverse discrete Fourier transform of ĝ, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating ŵ * f̂ is much less error prone than merely interpolating f̂. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to the filtered backprojection.
The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform. 8. Hadamard Transforms CERN Document Server Agaian, Sos; Egiazarian, Karen; Astola, Jaakko 2011-01-01 The Hadamard matrix and Hadamard transform are fundamental problem-solving tools in a wide spectrum of scientific disciplines and technologies, such as communication systems, signal and image processing (signal representation, coding, filtering, recognition, and watermarking), digital logic (Boolean function analysis and synthesis), and fault-tolerant system design. Hadamard Transforms intends to bring together different topics concerning current developments in Hadamard matrices, transforms, and their applications. Each chapter begins with the basics of the theory, progresses to more advanced 9. A high order multi-resolution solver for the Poisson equation with application to vortex methods DEFF Research Database (Denmark) Hejlesen, Mads Mølholm; Spietz, Henrik Juul; Walther, Jens Honore A high order method is presented for solving the Poisson equation subject to mixed free-space and periodic boundary conditions by using fast Fourier transforms (FFT). The high order convergence is achieved by deriving mollified Green’s functions from a high order regularization function which...
While the addition of uncertainty, and hence stochasticity or randomness, has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that results in prohibitive computational demands even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when including all uncertain parameters. A paradigm shift is introduced: an adaptation of new methods of intelligent control that will relax the dependency on rigid, computer-intensive, stochastic PDE, and will shift the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions to real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems. 11. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition Directory of Open Access Journals (Sweden) Kun-Ching Wang 2015-01-01 Full Text Available The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI).
The purpose of this paper is to present a novel feature extraction scheme based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture of a multi-resolution spectrogram of emotional speech should be a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis gives a clearer discrimination between emotions than uniform-resolution analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied in the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and other state-of-the-art features, the MRTII features can also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII features, inspired by human visual perception of the spectrogram image, provide significant classification gains for real-life emotion recognition in speech. 12. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES OpenAIRE Brouard, Olivier; Delannay, Fabrice; Ricordel, Vincent; Barba, Dominique 2007-01-01 4 pages; International audience; Motion segmentation methods are effective for tracking video objects.
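A minimal version of the motion estimation underlying such segmentation methods is exhaustive block matching; the paper's multi-resolution refinement is omitted here, and the frame size, block size, and search range below are invented for illustration:

```python
import numpy as np

def block_match(prev, cur, block=8, search=4):
    # Exhaustive block matching: for each block of `cur`, find the displacement
    # within +/-search in `prev` minimizing the sum of absolute differences (SAD).
    h, w = cur.shape
    motion = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tgt = cur[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(prev[y:y + block, x:x + block] - tgt).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            motion[(by, bx)] = best
    return motion

rng = np.random.default_rng(6)
prev = rng.uniform(0, 255, (32, 32))
cur = np.roll(prev, shift=(2, 3), axis=(0, 1))   # global motion: 2 down, 3 right
mv = block_match(prev, cur)
print(mv[(8, 8)])   # (-2, -3): the block came from 2 up, 3 left in `prev`
```

A multi-resolution variant runs this search on a coarse pyramid level first and refines the vectors at finer levels, which is what makes the estimation fast enough for HD video.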
However, object segmentation methods based on motion need to know the global motion of the video in order to back-compensate it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion est... 13. Psychoacoustic Music Analysis Based on the Discrete Wavelet Packet Transform Directory of Open Access Journals (Sweden) Xing He 2008-01-01 Full Text Available Psychoacoustical computational models are necessary for the perceptual processing of acoustic signals and have contributed significantly to the development of highly efficient audio analysis and coding. In this paper, we present an approach for the psychoacoustic analysis of musical signals based on the discrete wavelet packet transform. The proposed method mimics the multiresolution properties of the human ear more closely than other techniques and includes simultaneous and temporal auditory masking. Experimental results show that this method provides better masking capabilities and reduces the signal-to-masking ratio substantially more than other approaches, without introducing audible distortion. This model can lead to greater audio compression by permitting further bit rate reduction and more secure watermarking by providing greater signal space for information hiding. 14. Applications of wavelet transforms for nuclear power plant signal analysis International Nuclear Information System (INIS) Seker, S.; Turkcan, E.; Upadhyaya, B.R.; Erbay, A.S. 1998-01-01 The safety of Nuclear Power Plants (NPPs) may be enhanced by the timely processing of information derived from multiple process signals from NPPs. The most widely used technique in signal analysis applications is the Fourier transform in the frequency domain to generate power spectral densities (PSD).
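The segment-averaged periodogram behind such PSD estimates can be sketched directly with numpy (a hand-rolled Welch-style estimator; the window choice, overlap, and test tone are illustrative, and the one-sided doubling of interior bins is omitted for brevity):

```python
import numpy as np

def welch_psd(sig, fs, nperseg=200):
    # Averaged periodogram (Welch): Hann-windowed segments with 50% overlap.
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [sig[i:i + nperseg] * win
            for i in range(0, len(sig) - nperseg + 1, step)]
    spec = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    spec /= fs * np.sum(win ** 2)          # density normalization
    freqs = np.fft.rfftfreq(nperseg, d=1 / fs)
    return freqs, spec

fs = 1000.0
t = np.arange(0, 8.0, 1 / fs)
rng = np.random.default_rng(4)
sig = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(t.size)
freqs, psd = welch_psd(sig, fs)
print(freqs[np.argmax(psd)])   # 50.0: the tone stands out of the noise floor
```

Averaging over overlapping segments trades frequency resolution for a much lower variance of the PSD estimate, which is why Welch's method is the standard workhorse before the sub-band refinements that wavelet analysis adds.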
However, the Fourier transform is global in nature and will obscure any non-stationary signal feature. Lately, a powerful technique called the Wavelet Transform has been developed. This transform uses certain basis functions for representing the data in an effective manner, with capability for sub-band analysis and providing time-frequency localization as needed. This paper presents a brief overview of wavelets applied to the nuclear industry for signal processing and plant monitoring. The basic theory of wavelets is also summarized. In order to illustrate the application of wavelet transforms, data were acquired from the operating nuclear power plant Borssele in the Netherlands. The experimental data consist of various signals in the power plant and are selected from a stationary power operation. Their frequency characteristics and mutual relations were investigated using the MATLAB Signal Processing and Wavelet Toolboxes for computing their PSDs and coherence functions by multi-resolution analysis. The results indicate that the sub-band PSD matches the original signal PSD and enhances the estimation of coherence functions. The wavelet analysis demonstrates the feasibility of application to stationary signals to provide better estimates in the frequency band of interest as compared to the classical FFT approach. (author) 15. Visualizing Transformation DEFF Research Database (Denmark) Pedersen, Pia 2012-01-01 Transformation, defined as the step of extracting, arranging and simplifying data into visual form (M. Neurath, 1974), was developed in connection with ISOTYPE (International System Of TYpographic Picture Education) and might well be the most important legacy of Isotype to the field of graphic design. Recently transformation has attracted renewed interest because of the book The Transformer written by Robin Kinross and Marie Neurath.
My on-going research project, summarized in this paper, identifies and depicts the essential principles of data visualization underlying the process of transformation with reference to Marie Neurath's sketches on the Bilston Project. The material has been collected at the Otto and Marie Neurath Collection housed at the University of Reading, UK. By using data visualization as a research method to look directly into the process of transformation, the project... 16. Security Transformation National Research Council Canada - National Science Library Metz, Steven 2003-01-01 ... adjustment. With American military forces engaged around the world in both combat and stabilization operations, the need for rigorous and critical analysis of security transformation has never been greater... 17. Landskabets transformation (The Transformation of the Landscape) DEFF Research Database (Denmark) Munck Petersen, Rikke 2005-01-01 Seminar presentation by researchers. Research seminars at KA, spring 2005. Examines the transformation of the Danish landscape, both physically and in terms of attitudes, and how the PhD project's own process handles it. 18. Covariant Transform OpenAIRE 2010-01-01 The paper develops the theory of the covariant transform, which is inspired by the wavelet construction. It was observed that many interesting types of wavelets (or coherent states) arise from group representations which are not square integrable or vacuum vectors which are not admissible. The covariant transform extends the applicability of the popular wavelet construction to classic examples like the Hardy space H_2, Banach spaces, covariant functional calculus and many others. Keywords: Wavelets, cohe... 19.
Transforming Anatomy OpenAIRE Hall, Anndee 2017-01-01 Abstract: Transforming Anatomy Studying historic books allows people to witness the transformation of the world right before their very eyes. The Bruxellensis Icones Anatomicae[1] by Andreas Vesalius is a vital piece of evidence in the movement from a more rudimentary understanding of the human body into the more complex and accurate development of modern anatomy. Vesalius’ research worked to both refute and confirm findings of his predecessor, the great historical Greek philosopher, Galen... 20. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory Energy Technology Data Exchange (ETDEWEB) Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Castillo, J. Alvarez; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Luz, R. J. Barreira; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D' Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; Mauro, G. De; Neto, J. R. T. de Mello; Mitri, I. De; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Giulio, C. Di; Matteo, A. 
Di; Castro, M. L. Díaz; Diogo, F.; Dobrigkeit, C.; D' Olivo, J. C.; Anjos, R. C. dos; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Berisso, M. Gómez; Vitale, P. F. Gómez; González, N.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Hasankiadeh, Q.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Mezek, G. Kukec; Kunka, N.; Awad, A. Kuotb; LaHurd, D.; Lauscher, M.; Legumina, R.; de Oliveira, M. A. Leigui; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; Casado, A. López; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Bravo, O. Martínez; Meza, J. J. Masías; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Messina, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. 
A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Selmi-Dei, D. Pakk; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; de Carvalho, W. Rodrigues; Fernandez, G. Rodriguez; Rojo, J. Rodriguez; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salazar, H.; Saleh, A.; Greus, F. Salesa; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Durán, M. Suarez; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Peixoto, C. J. Todero; Tomankova, L.; Tomé, B.; Elipe, G. Torralba; Torri, M.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Galicia, J. F. Valdés; Valiño, I.; Valore, L.; Aar, G. van; Bodegom, P. van; Berg, A. M. van den; Vliet, A. van; Varela, E.; Cárdenas, B. 
Vargas; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Quispe, I. D. Vergara; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F. 2017-06-01 We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to 80° and energies in excess of 4 EeV (4 × 10¹⁸ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary since the angular power spectrum achieves a better performance in identifying large-scale patterns while the needlet wavelet analysis, considering the parameters used in this work, presents a higher efficiency in detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication for a dipole moment is captured, while no other deviation from isotropy is observed for moments beyond the dipole one. The corresponding p-values, obtained after accounting for searches blindly performed at several angular scales, are 1.3 × 10⁻⁵ in the case of the angular power spectrum, and 2.5 × 10⁻³ in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they provide extensions of the previous works through the thorough scans of the angular scales. 1. 
The Multi-Resolution Land Characteristics (MRLC) Consortium: 20 years of development and integration of USA national land cover data Science.gov (United States) Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coulston, John 2014-01-01 The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans. 2. 
Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study Science.gov (United States) Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar 2014-01-01 Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410 3. A multi-resolution approach for an automated fusion of different low-cost 3D sensors. Science.gov (United States) Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner 2014-04-24 The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. 
The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. 4. Compressed modes for variational problems in mathematical physics and compactly supported multiresolution basis for the Laplace operator Science.gov (United States) Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley 2014-03-01 We will describe a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). 
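The mechanism by which an L1 term produces compact support can be sketched with a small proximal-gradient loop for the lowest mode of a discrete 1D Laplacian (a numpy-only toy, not the authors' actual splitting algorithm; the grid size, step size and regularization weight mu are illustrative choices):

```python
import numpy as np

def lowest_compressed_mode(n=100, mu=0.1, step=0.1, iters=2000):
    """Approximate the lowest 'compressed mode' of a discrete 1D
    Laplacian by proximal gradient descent on x^T H x + mu*||x||_1,
    renormalizing to ||x|| = 1 after every step."""
    # Discrete Laplacian with Dirichlet boundaries (tridiagonal 2, -1, -1).
    H = (2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    x = np.random.default_rng(0).normal(size=n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        g = x - step * (H @ x)  # gradient step on the quadratic energy
        # Proximal map of the L1 term: soft-thresholding, which sets
        # small-amplitude entries exactly to zero (hence compact support).
        x = np.sign(g) * np.maximum(np.abs(g) - step * mu, 0.0)
        nrm = np.linalg.norm(x)
        if nrm == 0.0:
            raise RuntimeError("mu too large: iterate collapsed to zero")
        x /= nrm  # keep unit norm
    return x

mode = lowest_compressed_mode()
```

Each iteration takes a gradient step on the quadratic energy, soft-thresholds, and renormalizes; the entries driven exactly to zero are what give the mode its compact support, in contrast to the globally supported half-sine eigenvector of the unregularized problem.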
Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO). 5. A scalable multi-resolution spatio-temporal model for brain activation and connectivity in fMRI data KAUST Repository Castruccio, Stefano 2018-01-23 Functional Magnetic Resonance Imaging (fMRI) is a primary modality for studying brain activity. Modeling spatial dependence of imaging data at different spatial scales is one of the main challenges of contemporary neuroimaging, and it could allow for accurate testing for significance in neural activity. The high dimensionality of this type of data (on the order of hundreds of thousands of voxels) poses serious modeling challenges and considerable computational constraints. For the sake of feasibility, standard models typically reduce dimensionality by modeling covariance among regions of interest (ROIs)—coarser or larger spatial units—rather than among voxels. However, ignoring spatial dependence at different scales could drastically reduce our ability to detect activation patterns in the brain and hence produce misleading results. We introduce a multi-resolution spatio-temporal model and a computationally efficient methodology to estimate cognitive control related activation and whole-brain connectivity. 
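The two spatial scales in such a model, voxel-level dependence nested within ROI-level dependence, can be mimicked with a toy simulation; the sizes, signal strengths and noise levels below are invented purely for illustration and are unrelated to the actual model:

```python
import numpy as np

rng = np.random.default_rng(42)
T, voxels_per_roi = 400, 50

# Coarse scale: two latent ROI time courses with built-in cross-correlation
# induced by a shared common component.
common = rng.normal(size=T)
roi1_sig = common + 0.5 * rng.normal(size=T)
roi2_sig = common + 0.5 * rng.normal(size=T)

# Fine scale: each voxel is its ROI's signal plus voxel-specific noise,
# which creates local (within-ROI) dependence among voxels.
roi1 = roi1_sig[:, None] + rng.normal(size=(T, voxels_per_roi))
roi2 = roi2_sig[:, None] + rng.normal(size=(T, voxels_per_roi))

# Between-ROI "connectivity": correlation of the ROI mean time courses.
m1, m2 = roi1.mean(axis=1), roi2.mean(axis=1)
between_roi_corr = np.corrcoef(m1, m2)[0, 1]

# Within-ROI dependence: average pairwise voxel-voxel correlation.
within = np.corrcoef(roi1.T)
within_roi_corr = within[np.triu_indices_from(within, k=1)].mean()
```

Averaging to ROI means recovers the coarse-scale dependence, but discarding the voxel level (as ROI-only models do) would hide exactly the fine-scale structure `within_roi_corr` measures.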
The proposed model allows for testing voxel-specific activation while accounting for non-stationary local spatial dependence within anatomically defined ROIs, as well as regional dependence (between-ROIs). The model is used in a motor-task fMRI study to investigate brain activation and connectivity patterns aimed at identifying associations between these patterns and regaining motor functionality following a stroke. 6. An improved cone-beam filtered backprojection reconstruction algorithm based on x-ray angular correction and multiresolution analysis International Nuclear Information System (INIS) Sun, Y.; Hou, Y.; Yan, Y. 2004-01-01 With the extensive application of industrial computed tomography in the field of non-destructive testing, how to improve the quality of the reconstructed image is receiving increasing attention. It is well known that in the existing cone-beam filtered backprojection reconstruction algorithms the cone angle is controlled within a narrow range. The reason for this limitation is the incompleteness of projection data when the cone angle increases. Thus the size of the tested workpiece is limited. Considering the characteristic of the X-ray cone angle, an improved cone-beam filtered backprojection reconstruction algorithm taking account of angular correction is proposed in this paper. The aim of our algorithm is to correct the cone-angle effect resulting from the incompleteness of projection data in the conventional algorithm. The basis of the correction is the angular relationship among the X-ray source, the tested workpiece and the detector. Thus the cone angle is not strictly limited and the algorithm may be used to inspect larger workpieces. Furthermore, an adaptive wavelet filter is used to perform multiresolution analysis, which can adaptively modify the number of wavelet decomposition levels according to the resolution demanded of the local reconstructed area. 
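The multiresolution machinery behind such adaptive wavelet filtering rests on the fact that each decomposition level splits a signal into coarse averages plus details and can be inverted exactly; a minimal one-level Haar transform (a generic numpy sketch, not the paper's filter) illustrates this:

```python
import numpy as np

def haar_decompose(signal):
    """One level of the Haar wavelet transform: normalized pairwise sums
    give the coarse approximation, pairwise differences the detail."""
    s = np.asarray(signal, dtype=float).reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2.0)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert one Haar level exactly (perfect reconstruction)."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

x = np.array([4.0, 2.0, 5.0, 7.0, 1.0, 1.0, 3.0, 0.0])
a, d = haar_decompose(x)
x_rec = haar_reconstruct(a, d)
```

An adaptive scheme then decides, per region, how many further levels to apply to `approx` (and how strongly to filter the details) before reconstructing.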
Therefore the computation and the time of reconstruction can be reduced, and the quality of the reconstructed image can also be improved. (author) 7. Computerized mappings of the cerebral cortex: a multiresolution flattening method and a surface-based coordinate system Science.gov (United States) Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W. 1996-01-01 We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization. 8. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory Energy Technology Data Exchange (ETDEWEB) Aab, A. [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud Universiteit, Nijmegen (Netherlands); Abreu, P.; Andringa, S. [Laboratório de Instrumentação e Física Experimental de Partículas—LIP and Instituto Superior Técnico—IST, Universidade de Lisboa—UL (Portugal); Aglietta, M. 
[Osservatorio Astrofisico di Torino (INAF), Torino (Italy); Samarai, I. Al [Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE), Universités Paris 6 et Paris 7, CNRS-IN2P3 (France); Albuquerque, I.F.M. [Universidade de São Paulo, Inst. de Física, São Paulo (Brazil); Allekotte, I. [Centro Atómico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET) (Argentina); Almela, A.; Andrada, B. [Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica (Argentina); Castillo, J. Alvarez [Universidad Nacional Autónoma de México, México (Mexico); Alvarez-Muñiz, J. [Universidad de Santiago de Compostela (Spain); Anastasi, G.A. [Gran Sasso Science Institute (INFN), L'Aquila (Italy); Anchordoqui, L., E-mail: auger_spokespersons@fnal.gov [Department of Physics and Astronomy, Lehman College, City University of New York (United States); and others 2017-06-01 We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to 80° and energies in excess of 4 EeV (4 × 10¹⁸ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary since the angular power spectrum achieves a better performance in identifying large-scale patterns while the needlet wavelet analysis, considering the parameters used in this work, presents a higher efficiency in detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication for a dipole moment is captured, while no other deviation from isotropy is observed for moments beyond the dipole one. 
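For an idealized detector with uniform full-sky exposure, the dipole vector can be estimated as three times the mean arrival-direction unit vector, since for a flux proportional to 1 + d·n̂ the estimator 3⟨n̂⟩ has expectation d. A self-contained toy (which ignores the Observatory's actual partial-sky exposure and the spectral analyses used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def isotropic_directions(n):
    """Unit vectors drawn uniformly on the sphere."""
    z = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def dipole_sample(n, amplitude, axis=np.array([0.0, 0.0, 1.0])):
    """Rejection-sample directions from a flux proportional to 1 + a*(axis.n)."""
    batches = []
    while sum(len(b) for b in batches) < n:
        u = isotropic_directions(2 * n)
        p = (1.0 + amplitude * (u @ axis)) / (1.0 + amplitude)
        batches.append(u[rng.uniform(0.0, 1.0, 2 * n) < p])
    return np.concatenate(batches)[:n]

# 3 * (mean unit vector) estimates the dipole vector.
d_iso = 3.0 * isotropic_directions(20000).mean(axis=0)   # near zero
d_dip = 3.0 * dipole_sample(20000, 0.5).mean(axis=0)     # near (0, 0, 0.5)
```

The isotropic sample yields an estimate consistent with zero (up to the ~3/√N statistical floor), while the dipole-modulated sample recovers the injected amplitude along the injected axis.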
The corresponding p-values, obtained after accounting for searches blindly performed at several angular scales, are 1.3 × 10⁻⁵ in the case of the angular power spectrum, and 2.5 × 10⁻³ in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they provide extensions of the previous works through the thorough scans of the angular scales. 9. Sustainable transformation DEFF Research Database (Denmark) Andersen, Nicolai Bo This paper is about sustainable transformation with a particular focus on listed buildings. It is based on the notion that sustainability is not just a question of energy conditions, but also about the building being robust. Robust architecture means that the building can be maintained and rebuilt......, that it can be adapted to changing functional needs, and that it has an architectural and cultural value. A specific proposal for a transformation that enhances the architectural qualities and building heritage values of an existing building forms the empirical material, which is discussed using different...... theoretical lenses. It is proposed that three parameters concerning the 'transformability' of the building can contribute to a more nuanced understanding of sustainable transformation: technical aspects, programmatic requirements and narrative value. It is proposed that the concept of 'sustainable... 10. Identity transformation DEFF Research Database (Denmark) Neergaard, Helle; Robinson, Sarah; Jones, Sally , as well as the resources they have when they come to the classroom. It also incorporates perspectives from (ii) transformational learning and explores the concept of (iii) nudging from a pedagogical viewpoint, proposing it as an important tool in entrepreneurship education. 
The study incorporates......This paper develops the concept of ‘pedagogical nudging’ and examines four interventions in an entrepreneurship classroom and the potential it has for student identity transformation. Pedagogical nudging is positioned as a tool, which in the hands of a reflective, professional......) assists students in straddling the divide between identities, the emotions and tensions this elicits, and (iv) transform student understanding. We extend nudging theory into a new territory. Pedagogical nudging techniques may be able to unlock doors and bring our students beyond the unacknowledged... 11. Sustainable transformation DEFF Research Database (Denmark) Andersen, Nicolai Bo This paper is about sustainable transformation with a particular focus on listed buildings. It is based on the notion that sustainability is not just a question of energy conditions, but also about the building being robust. Robust architecture means that the building can be maintained and rebuil... 12. Transformer core NARCIS (Netherlands) Mehendale, A.; Hagedoorn, Wouter; Lötters, Joost Conrad 2008-01-01 A transformer core includes a stack of a plurality of planar core plates of a magnetically permeable material, which plates each consist of a first and a second sub-part that together enclose at least one opening. The sub-parts can be fitted together via contact faces that are located on either side 13. Transformer core NARCIS (Netherlands) Mehendale, A.; Hagedoorn, Wouter; Lötters, Joost Conrad 2010-01-01 A transformer core includes a stack of a plurality of planar core plates of a magnetically permeable material, which plates each consist of a first and a second sub-part that together enclose at least one opening. The sub-parts can be fitted together via contact faces that are located on either side 14. Superconducting transformer International Nuclear Information System (INIS) Murphy, J.H. 
1982-01-01 A superconducting transformer having a winding arrangement that provides for current limitation when subjected to a current transient as well as more efficient utilization of radial spacing and winding insulation. Structural innovations disclosed include compressed conical shaped winding layers and a resistive matrix to promote rapid switching of current between parallel windings 15. Transformation & Metamorphosis Science.gov (United States) Lott, Debra 2009-01-01 The sculptures of Canadian artist Brian Jungen are a great inspiration for a lesson on creating new forms. Jungen transforms found objects into unique creations without fully concealing their original form or purpose. Frank Stella's sculpture series, including "K.132,2007" made of stainless steel and spray paint, is another great example of… 16. Transforming Society DEFF Research Database (Denmark) Enemark, Stig; Dahl Højgaard, Pia 2017-01-01 , was a result of transforming society from a feudal system to a capitalistic and market based economy. This story is interesting in itself - but it also provides a key to understanding the cadastral system of today. The system has evolved over time and now serves a whole range of functions in society. The paper... 17. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales Science.gov (United States) Sangireddy, H.; Passalacqua, P.; Stark, C. P. 2013-12-01 Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. 
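The basic effect such a multi-resolution analysis measures, namely that slope statistics shift systematically with grid resolution, can be demonstrated on a synthetic surface by block-averaging (all parameters below are arbitrary illustrative choices, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(7)

def slope_magnitude(z, spacing):
    """Finite-difference slope magnitude of an elevation grid."""
    gy, gx = np.gradient(z, spacing)
    return np.hypot(gx, gy)

def coarsen(z, factor):
    """Block-average the grid by an integer factor (coarser resolution)."""
    n = (z.shape[0] // factor) * factor
    z = z[:n, :n]
    return z.reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

# Rough synthetic terrain: a gentle regional ramp plus fine-scale noise.
n, dx = 256, 1.0
yy, xx = np.mgrid[0:n, 0:n]
terrain = 0.05 * xx + rng.normal(scale=1.0, size=(n, n))

fine_slopes = slope_magnitude(terrain, dx)
coarse_slopes = slope_magnitude(coarsen(terrain, 4), 4 * dx)

# Summary statistics of the slope pdf at each resolution.
iqr = lambda a: np.subtract(*np.percentile(a, [75, 25]))
```

Coarsening smooths out the fine-scale roughness, so both the modal slope and its interquartile range drop, which is why pdf statistics tracked across resolutions carry information about the scales at which different processes shape the terrain.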
Here, we explore how transitions in probability distributions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We show this correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady state landscapes for a range of hillslope diffusivity and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges, within the GeoNet framework developed by Passalacqua et al. [2010], and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic 18. 
Research on Methods of Infrared and Color Image Fusion Based on Wavelet Transform Directory of Open Access Journals (Sweden) Zhao Rentao 2014-06-01 Full Text Available There are significant differences in the imaging features of infrared and color images, but their fused images also contain highly complementary information. In this paper, based on the characteristics of infrared and color images, the wavelet transform is first applied to the luminance component of each image. At each resolution level, the regional variance is taken as the activity measure and the regional variance ratio as the matching measure, and the fused image is enhanced during integration, with the final image obtained through the synthesis module and the multi-resolution inverse transform. The experimental results show that the fused image obtained by the proposed method is better than those of the other methods at preserving the useful information of the original infrared image and the color information of the original color image. In addition, the fused image has stronger adaptability and better visual quality. 19. A 4.5 km resolution Arctic Ocean simulation with the global multi-resolution model FESOM 1.4 Science.gov (United States) Wang, Qiang; Wekerle, Claudia; Danilov, Sergey; Wang, Xuezhu; Jung, Thomas 2018-04-01 In the framework of developing a global modeling system which can facilitate modeling studies on the Arctic Ocean and high- to midlatitude linkage, we evaluate the Arctic Ocean simulated by the multi-resolution Finite Element Sea ice-Ocean Model (FESOM). To explore the value of using high horizontal resolution for Arctic Ocean modeling, we use two global meshes differing in the horizontal resolution only in the Arctic Ocean (24 km vs. 4.5 km). The high resolution significantly improves the model's representation of the Arctic Ocean. 
The most pronounced improvement is in the Arctic intermediate layer, in terms of both Atlantic Water (AW) mean state and variability. The deepening and thickening bias of the AW layer, a common issue found in coarse-resolution simulations, is significantly alleviated by using higher resolution. The topographic steering of the AW is stronger and the seasonal and interannual temperature variability along the ocean bottom topography is enhanced in the high-resolution simulation. The high resolution also improves the ocean surface circulation, mainly through a better representation of the narrow straits in the Canadian Arctic Archipelago (CAA). The representation of CAA throughflow not only influences the release of water masses through the other gateways but also the circulation pathways inside the Arctic Ocean. However, the mean state and variability of Arctic freshwater content and the variability of freshwater transport through the Arctic gateways appear not to be very sensitive to the increase in resolution employed here. By highlighting the issues that are independent of model resolution, we note that other efforts, including the improvement of parameterizations, are still required. 20. A multiresolution spatial parameterization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions Directory of Open Access Journals (Sweden) J. Ray 2014-09-01 Full Text Available The characterization of fossil-fuel CO2 (ffCO2) emissions is paramount to carbon cycle studies, but the use of atmospheric inverse modeling approaches for this purpose has been limited by the highly heterogeneous and non-Gaussian spatiotemporal variability of emissions. Here we explore the feasibility of capturing this variability using a low-dimensional parameterization that can be implemented within the context of atmospheric CO2 inverse problems aimed at constraining regional-scale emissions. 
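The sparse-reconstruction step in such inversions can be illustrated with a plain orthogonal matching pursuit loop (StOMP is a stagewise variant of it) recovering a sparse coefficient vector from a few noiseless linear measurements; all dimensions and values below are toy choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def omp(A, y, n_atoms, tol=1e-10):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then least-squares refit on the support."""
    residual, support = y.copy(), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

m, n = 40, 80                              # measurements, unknowns
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [2.0, -1.5, 1.0]     # sparse "emission" coefficients
y = A @ x_true                             # noiseless measurements
x_hat = omp(A, y, n_atoms=10)
```

Once the true columns enter the support, the least-squares refit drives the residual to (numerical) zero; real inversions add observation noise, proxy-based sub-selection of the wavelet dictionary, and far larger problem sizes.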
We construct a multiresolution (i.e., wavelet-based) spatial parameterization for ffCO2 emissions using the Vulcan inventory, and examine whether such a parameterization can capture a realistic representation of the expected spatial variability of actual emissions. We then explore whether sub-selecting wavelets using two easily available proxies of human activity (images of lights at night and maps of built-up areas) yields a low-dimensional alternative. We finally implement this low-dimensional parameterization within an idealized inversion, where a sparse reconstruction algorithm, an extension of stagewise orthogonal matching pursuit (StOMP), is used to identify the wavelet coefficients. We find (i) that the spatial variability of fossil-fuel emissions can indeed be represented using a low-dimensional wavelet-based parameterization, (ii) that images of lights at night can be used as a proxy for sub-selecting wavelets for such analysis, and (iii) that implementing this parameterization within the described inversion framework makes it possible to quantify fossil-fuel emissions at regional scales if fossil-fuel-only CO2 observations are available. 1. A Web-Based Interactive Tool for Multi-Resolution 3D Models of a Maya Archaeological Site Science.gov (United States) Agugiaro, G.; Remondino, F.; Girardi, G.; von Schwerin, J.; Richards-Rissetto, H.; De Amicis, R. 2011-09-01 Continuous technological advances in surveying, computing and digital-content delivery are strongly contributing to a change in the way Cultural Heritage is "perceived": new tools and methodologies for documentation, reconstruction and research are being created to assist not only scholars, but also to reach more potential users (e.g. students and tourists) willing to access more detailed information about art history and archaeology. 
3D computer-simulated models, sometimes set in virtual landscapes, offer for example the chance to explore possible hypothetical reconstructions, while on-line GIS resources can help interactive analyses of relationships and change over space and time. While for some research purposes a traditional 2D approach may suffice, this is not the case for more complex analyses concerning spatial and temporal features of architecture, like for example the relationship of architecture and landscape, visibility studies etc. The project therefore aims at creating a tool, called "QueryArch3D", which enables the web-based visualisation and queries of an interactive, multi-resolution 3D model in the framework of Cultural Heritage. More specifically, a complete Maya archaeological site, located in Copan (Honduras), has been chosen as a case study to test and demonstrate the platform's capabilities. Much of the site has been surveyed and modelled at different levels of detail (LoD) and the geometric model has been semantically segmented and integrated with attribute data gathered from several external data sources. The paper describes the characteristics of the research work, along with its implementation issues and the initial results of the developed prototype. 2. Statistical downscaling of rainfall: a non-stationary and multi-resolution approach Science.gov (United States) Rashid, Md. Mamunur; Beecham, Simon; Chowdhury, Rezaul Kabir 2016-05-01 A novel downscaling technique is proposed in this study whereby the original rainfall and reanalysis variables are first decomposed by wavelet transforms and rainfall is modelled using the semi-parametric additive model formulation of the Generalized Additive Model in Location, Scale and Shape (GAMLSS). The flexibility of the GAMLSS model makes it feasible as a framework for non-stationary modelling. 
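Skill in such downscaling studies is commonly scored with the Nash-Sutcliffe efficiency, which compares squared model error against the variance of the observations (1 is perfect; 0 means no better than predicting the observed mean). A minimal implementation with made-up numbers:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([10.0, 55.0, 32.0, 80.0, 21.0, 47.0])   # hypothetical monthly rainfall
nse_perfect = nash_sutcliffe(obs, obs)                 # exact match -> 1.0
nse_mean = nash_sutcliffe(obs, np.full(obs.size, obs.mean()))  # climatology -> 0.0
```

Negative values are possible and mean the simulation is worse than simply predicting the observed mean.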
Decomposition of a rainfall series into different components is useful to separate the scale-dependent properties of the rainfall as this varies both temporally and spatially. The study was conducted at the Onkaparinga river catchment in South Australia. The model was calibrated over the period 1960 to 1990 and validated over the period 1991 to 2010. The model reproduced the monthly variability and statistics of the observed rainfall well with Nash-Sutcliffe efficiency (NSE) values of 0.66 and 0.65 for the calibration and validation periods, respectively. It also reproduced the seasonal rainfall well over the calibration (NSE = 0.37) and validation (NSE = 0.69) periods for all seasons. The proposed model was better than the traditional modelling approach (application of GAMLSS to the original rainfall series without decomposition) at reproducing the time-frequency properties of the observed rainfall, and yet it still preserved the statistics produced by the traditional modelling approach. When downscaling models were developed with general circulation model (GCM) historical output datasets, the proposed wavelet-based downscaling model outperformed the traditional downscaling model in terms of reproducing monthly rainfall for both the calibration and validation periods. 3. Discrete transforms CERN Document Server Firth, Jean M 1992-01-01 The analysis of signals and systems using transform methods is a very important aspect of the examination of processes and problems in an increasingly wide range of applications. Whereas the initial impetus in the development of methods appropriate for handling discrete sets of data occurred mainly in an electrical engineering context (for example in the design of digital filters), the same techniques are in use in such disciplines as cardiology, optics, speech analysis and management, as well as in other branches of science and engineering. 
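The discrete transform methods the text describes can be made concrete with the discrete Fourier transform, X_k = Σ_n x_n e^(−2πikn/N), implemented directly from the definition and checked against numpy's FFT:

```python
import numpy as np

def dft(x):
    """Naive O(N^2) discrete Fourier transform, straight from the definition."""
    x = np.asarray(x, dtype=complex)
    n = np.arange(x.size)
    # Matrix of twiddle factors e^{-2*pi*i*k*n/N}.
    W = np.exp(-2j * np.pi * np.outer(n, n) / x.size)
    return W @ x

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 1.0])
X = dft(x)
assert np.allclose(X, np.fft.fft(x))  # matches the fast O(N log N) algorithm
```

The FFT computes exactly the same quantity in O(N log N), which is what makes discrete transform methods practical for the filtering applications mentioned above.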
This text is aimed at a readership whose mathematical background includes some acquaintance with complex numbers, linear differential equations, matrix algebra, and series. Specifically, a familiarity with Fourier series (in trigonometric and exponential forms) is assumed, and an exposure to the concept of a continuous integral transform is desirable. Such a background can be expected, for example, on completion of the first year of a science or engineering degree cour... 4. CREATION OF A MULTIRESOLUTION AND MULTIACCURACY DTM: PROBLEMS AND SOLUTIONS FOR HELI-DEM CASE STUDY Directory of Open Access Journals (Sweden) L. Biagi 2014-01-01 a horizontal resolution of 20 meters; in addition a LiDAR DTM with a horizontal resolution of 1 meter, which covers only the main hydrographic basins, is also available. The two DTMs have been transformed into the same reference frame. The cross-validation of the two datasets has been performed comparing the low resolution DTM with the local high resolution DTM. Then, where significant differences are present, GPS surveys have been used as external validation. The results are presented. Moreover, a possible strategy for the future fusion of the data is briefly summarized at the end of the paper. 5. XML Transformations Directory of Open Access Journals (Sweden) Felician ALECU 2012-04-01 Full Text Available XSLT style sheets are designed to transform XML documents into something else. The two most popular parsers of the moment are the Document Object Model (DOM) and the Simple API for XML (SAX). DOM is an official recommendation of the W3C (available at http://www.w3.org/TR/REC-DOM-Level-1), while SAX is a de facto standard. A good parser should be fast, space efficient, rich in functionality and easy to use. 6. RF transformer Science.gov (United States) Smith, James L.; Helenberg, Harold W.; Kilsdonk, Dennis J. 
1979-01-01 There is provided an improved RF transformer having a single-turn secondary of cylindrical shape and a coiled encapsulated primary contained within the secondary. The coil is tapered so that the narrowest separation between the primary and the secondary is at one end of the coil. The encapsulated primary is removable from the secondary so that a variety of different capacity primaries can be utilized with one secondary. 7. Transformative Agency DEFF Research Database (Denmark) Majgaard, Klaus The purpose of this paper is to enhance the conceptual understanding of the mediatory relationship between paradoxes on an organizational and an individual level. It presents a concept of agency that comprises and mediates between a structural and individual pole. The constitution of this agency ...... is achieved through narrative activity that oscillates between the poles and transforms paradoxes through the configuration of plots and metaphors. Empirical cases are introduced in order to illustrate the implications of this understanding.... 8. Electrical transformer handbook Energy Technology Data Exchange (ETDEWEB) Hurst, R.W.; Horne, D. (eds.) 2005-07-01 This handbook is a valuable user guide intended for electrical engineering and maintenance personnel, electrical contractors and electrical engineering students. It provides current information on techniques and technologies that can help extend the life of transformers. It discusses transformer testing, monitoring, design, commissioning, retrofitting and other elements involved in keeping electrical transformers in safe and efficient operation. It demonstrates how a power transformer can be put to use and common problems faced by owners. 
In addition to covering control techniques, testing and maintenance procedures, this handbook covers the power transformer; control electrical power transformer; electrical power transformer; electrical theory transformer; used electrical transformer; down electrical step transformer; electrical manufacturer transformer; electrical picture transformer; electrical transformer work; electrical surplus transformer; current transformer; step down transformer; voltage transformer; step up transformer; isolation transformer; low voltage transformer; toroidal transformer; high voltage transformer; and control power transformer. The handbook includes articles from leading experts on overcurrent protection of transformers; ventilated dry-type transformers; metered load factors for low-voltage, and dry-type transformers in buildings. The maintenance of both dry-type or oil-filled transformers was discussed with reference to sealing, gaskets, oils, moisture and testing. The adoption of dynamic load practices was also discussed along with the reclamation or recycling of used lube oil, transformer dielectric fluids and aged solid insulation. A buyer's guide and directory of transformer manufacturers and suppliers was also included. refs., tabs., figs. 9. Hamlet's Transformation. Science.gov (United States) Usher, P. D. 1997-12-01 William Shakespeare's Hamlet has much evidence to suggest that the Bard was aware of the cosmological models of his time, specifically the geocentric bounded Ptolemaic and Tychonic models, and the infinite Diggesian. Moreover, Shakespeare describes how the Ptolemaic model is to be transformed to the Diggesian. Hamlet's "transformation" is the reason that Claudius, who personifies the Ptolemaic model, summons Rosencrantz and Guildenstern, who personify the Tychonic. Pantometria, written by Leonard Digges and his son Thomas in 1571, contains the first technical use of the word "transformation." 
At age thirty, Thomas Digges went on to propose his Perfit Description, as alluded to in Act Five where Hamlet's age is given as thirty. In Act Five as well, the words "bore" and "arms" refer to Thomas' vocation as muster-master and his scientific interest in ballistics. England's leading astronomer was also the father of the poet whose encomium introduced the First Folio of 1623. His oldest child Dudley became a member of the Virginia Company and facilitated the writing of The Tempest. Taken as a whole, such manifold connections to Thomas Digges support Hotson's contention that Shakespeare knew the Digges family. Rosencrantz and Guildenstern in Hamlet bear Danish names because they personify the Danish model, while the king's name is latinized like that of Claudius Ptolemaeus. The reason Shakespeare anglicized "Amleth" to "Hamlet" was that he saw a parallel between Book Three of Saxo Grammaticus and the eventual triumph of the Diggesian model. But Shakespeare eschewed Book Four, creating this particular ending from an infinity of other possibilities because it "suited his purpose," viz. to celebrate the concept of a boundless universe of stars like the Sun. 10. TRANSFORMER APPARATUS Science.gov (United States) Wolfgang, F.; Nicol, J. 1962-11-01 Transformer apparatus is designed for measuring the amount of a paramagnetic substance dissolved or suspended in a diamagnetic liquid. The apparatus consists of a cluster of tubes, some of which are closed and have the diamagnetic substance sealed within them, without any of the paramagnetic material. The remaining tubes are open to flow of the mixture. Primary and secondary conductors are wrapped around the tubes in such a way as to cancel noise components and also to produce a differential signal on the secondaries based upon variations of the content of the paramagnetic material. (AEC) 11. Rotary Transformer Science.gov (United States) McLyman, Colonel Wm. T. 1996-01-01 None given.
From first Par: Many spacecraft (S/C) and surface rovers require the transfer of signals and power across rotating interfaces. Science instruments, antennas and solar arrays are elements needing rotary power transfer for certain (S/C) configurations. Delivery of signal and power has mainly been done by using the simplest means, the slip ring approach. This approach, although simple, leaves debris generating noise over a period of time...The rotary transformer is a good alternative to slip rings for signal and power transfer. 12. Forestry transformation International Nuclear Information System (INIS) Beer, G. 2003-01-01 State forestry company Lesy, s.p., Banska Bystrica has chosen the Austrian state forestry company to operate as its restructuring advisor. 20 million Sk (0.142 mn Euro) were assigned to the transformation of Lesy SR from a state enterprise to a state-owned joint-stock company. The whole process should take two years. The joint-stock company should be established at the beginning of next year. 'What we have to do first is to define the objectives and perspectives of this restructuring,' claims new director, Karol Vins. The new boss recalled all directors of the 26 branches. They were given a lot of freedom to trade with wood. The new management wants to establish a profit-making company. At the moment the company has total claims of 600 million Sk (14.59 million Eur) that it will have to provision for 13. Transforming vulnerability. Science.gov (United States) Jones, Patricia S; Zhang, Xinwei Esther; Meleis, Afaf I 2003-11-01 Asian American immigrant women engaged in filial caregiving are at special risk for health problems due to complex contextual factors related to immigration, cultural traditions, and role transition. This study examines the experience of two groups of immigrant Asian American women who are caring for older parents.
A total of 41 women (22 Chinese American and 19 Filipino American) were interviewed in a study based on Strauss and Corbin's grounded theory methodology. The women were determined to be loyal to their traditional culture, which included strong filial values, while adapting to a new culture. Through the struggle of meeting role expectations and coping with paradox, the women mobilized personal and family resources to transform vulnerability into strength and well-being. 14. Radioactive Transformations CERN Document Server Rutherford, Ernest 2012-01-01 Radioactive Transformations describes Ernest Rutherford's Nobel Prize-winning investigations into the mysteries of radioactive matter. In this historic work, Rutherford outlines the scientific investigations that led to and coincided with his own research--including the work of Wilhelm Röntgen, J. J. Thomson, and Marie Curie--and explains in detail the experiments that provided a glimpse at special relativity, quantum mechanics, and other concepts that would shape modern physics. This new edition features a comprehensive introduction by Nobel Laureate Frank Wilczek which engagingly explains how Rutherford's early research led to a better understanding of topics as diverse as the workings of the atom's nucleus, the age of our planet, and the fusion in stars. 15. Double Fault Detection of Cone-Shaped Redundant IMUs Using Wavelet Transformation and EPSA Directory of Open Access Journals (Sweden) Wonhee Lee 2014-02-01 Full Text Available A model-free hybrid fault diagnosis technique is proposed to improve the performance of single and double fault detection and isolation. This is a model-free hybrid method which combines the extended parity space approach (EPSA) with a multi-resolution signal decomposition by using a discrete wavelet transform (DWT). Conventional EPSA can detect and isolate single and double faults. The performance of fault detection and isolation is influenced by the relative size of noise and fault.
In this paper, the DWT helps to cancel the high frequency sensor noise. The proposed technique can improve the low fault detection and isolation probability by utilizing the EPSA with DWT. To verify the effectiveness of the proposed fault detection method, Monte Carlo numerical simulations are performed for a redundant inertial measurement unit (RIMU). 16. Double Fault Detection of Cone-Shaped Redundant IMUs Using Wavelet Transformation and EPSA Science.gov (United States) Lee, Wonhee; Park, Chan Gook 2014-01-01 A model-free hybrid fault diagnosis technique is proposed to improve the performance of single and double fault detection and isolation. This is a model-free hybrid method which combines the extended parity space approach (EPSA) with a multi-resolution signal decomposition by using a discrete wavelet transform (DWT). Conventional EPSA can detect and isolate single and double faults. The performance of fault detection and isolation is influenced by the relative size of noise and fault. In this paper, the DWT helps to cancel the high frequency sensor noise. The proposed technique can improve the low fault detection and isolation probability by utilizing the EPSA with DWT. To verify the effectiveness of the proposed fault detection method, Monte Carlo numerical simulations are performed for a redundant inertial measurement unit (RIMU). PMID:24556675 17. A novel application of the S-transform in removing powerline interference from biomedical signals International Nuclear Information System (INIS) Huang, Chien-Chun; Young, Ming-Shing; Liang, Sheng-Fu; Shaw, Fu-Zen 2009-01-01 Powerline interference always disturbs recordings of biomedical signals. Numerous methods have been developed to reduce powerline interference. However, most of these techniques not only reduce the interference but also attenuate the 60 Hz power of the biomedical signals themselves.
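The noise-cancellation role of the DWT in records 15-16 can be illustrated with a minimal sketch: decompose, shrink the detail coefficients (where high-frequency sensor noise concentrates), then reconstruct. The Haar filter and the soft-threshold rule are illustrative choices, not the papers' actual implementation:

```python
import math

def haar_step(x):
    """One-level Haar DWT: (approximation, detail) components."""
    s = math.sqrt(2.0)
    return ([(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)],
            [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)])

def inverse_step(approx, detail):
    """Perfect-reconstruction inverse of haar_step."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def soft_threshold(coeffs, thr):
    # Shrink coefficients toward zero: small (noise-like) detail
    # coefficients are cancelled, large (signal) ones survive.
    return [math.copysign(max(abs(c) - thr, 0.0), c) for c in coeffs]

def denoise(signal, thr):
    """DWT-based denoising: threshold the detail band, reconstruct."""
    approx, detail = haar_step(signal)
    return inverse_step(approx, soft_threshold(detail, thr))
```

For example, `denoise([5.1, 4.9, 5.1, 4.9], 0.2)` removes the alternating high-frequency component and returns the smooth level `[5.0, 5.0, 5.0, 5.0]`.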
In the present study, we applied the S-transform, which provides an absolute phase of each frequency in a multi-resolution time–frequency analysis, to reduce 60 Hz interference. According to results from an electrocardiogram (ECG) to which a simulated 60 Hz noise was added, the S-transform de-noising process restored a power spectrum identical to that of the original ECG coincident with a significant reduction in the 60 Hz interference. Moreover, the S-transform de-noised the signal in an intensity-independent manner when reducing the 60 Hz interference. In both a real ECG signal from the MIT database and natural brain activity contaminated with 60 Hz interference, the S-transform also displayed superior performance to a notch filter in reducing noise and preserving the signal. Based on these data, a novel application of the S-transform for removing powerline interference is established. 18. Transforming giants. Science.gov (United States) Kanter, Rosabeth Moss 2008-01-01 Large corporations have long been seen as lumbering, inflexible, bureaucratic--and clueless about global developments. But recently some multinationals seem to be transforming themselves: They're engaging employees, moving quickly, and introducing innovations that show true connection with the world. Harvard Business School's Kanter ventured with a research team inside a dozen global giants--including IBM, Procter & Gamble, Omron, CEMEX, Cisco, and Banco Real--to discover what has been driving the change. After conducting more than 350 interviews on five continents, she and her colleagues came away with a strong sense that we are witnessing the dawn of a new model of corporate power: The coordination of actions and decisions on the front lines now appears to stem from widely shared values and a sturdy platform of common processes and technology, not from top-down decrees.
In particular, the values that engage the passions of far-flung workforces stress openness, inclusion, and making the world a better place. Through this shift in what might be called their guidance systems, the companies have become as creative and nimble as much smaller ones, even while taking on social and environmental challenges of a scale that only large enterprises could attempt. IBM, for instance, has created a nonprofit partnership, World Community Grid, through which any organization or individual can donate unused computing power to research projects and see what is being done with the donation in real time. IBM has gained an inspiring showcase for its new technology, helped business partners connect with the company in a positive way, and offered individuals all over the globe the chance to contribute to something big. 19. Transformer Protection Using the Wavelet Transform OpenAIRE ÖZGÖNENEL, Okan; ÖNBİLGİN, Güven; KOCAMAN, Çağrı 2014-01-01 This paper introduces a novel approach for a power transformer protection algorithm. Power system signals such as current and voltage have traditionally been analysed by the Fast Fourier Transform. This paper aims to prove that the Wavelet Transform is a reliable and computationally efficient tool for distinguishing between the inrush currents and fault currents. The simulated results presented clearly show that the proposed technique for power transformer protection facilitates the a... 20. The Bargmann transform and canonical transformations International Nuclear Information System (INIS) Villegas-Blas, Carlos 2002-01-01 This paper concerns a relationship between the kernel of the Bargmann transform and the corresponding canonical transformation. We study this fact for a Bargmann transform introduced by Thomas and Wassell [J. Math. Phys. 36, 5480-5505 (1995)]--when the configuration space is the two-sphere S^2 and for a Bargmann transform that we introduce for the three-sphere S^3.
It is shown that the kernel of the Bargmann transform is a power series in a function which is a generating function of the corresponding canonical transformation (a classical analog of the Bargmann transform). We show in each case that our canonical transformation is a composition of two other canonical transformations involving the complex null quadric in C^3 or C^4. We also describe quantizations of those two other canonical transformations by dealing with spaces of holomorphic functions on the aforementioned null quadrics. Some of these quantizations have been studied by Bargmann and Todorov [J. Math. Phys. 18, 1141-1148 (1977)] and the other quantizations are related to the work of Guillemin [Integ. Eq. Operator Theory 7, 145-205 (1984)]. Since suitable infinite linear combinations of powers of the generating functions are coherent states for L^2(S^2) or L^2(S^3), we show finally that the studied Bargmann transforms are actually coherent state transforms. 1. Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting Science.gov (United States) 2013-12-01 Discrete wavelet transform was applied to decompose ANN and ANFIS inputs. A novel approach of WNF with subtractive clustering was applied for flow forecasting. Forecasting was performed 1-5 steps ahead, using multi-variate inputs. Forecasting accuracy of peak values and at longer lead-times was significantly improved. 2. Generalized Fourier transforms classes DEFF Research Database (Denmark) Berntsen, Svend; Møller, Steen 2002-01-01 The Fourier class of integral transforms with kernels $B(\omega r)$ has by definition inverse transforms with kernel $B(-\omega r)$. The space of such transforms is explicitly constructed. A slightly more general class of generalized Fourier transforms is introduced. From the general theory follows that integral transforms with kernels which are products of a Bessel and a Hankel function, or which are of a certain general hypergeometric type, have inverse transforms of the same structure.... 3. Army Maintenance System Transformation National Research Council Canada - National Science Library Gilbertson, Frank V 2006-01-01 .... Used in conjunction with pertinent historical data and developed with Army transformation goals in mind, General Systems thinking can provide the framework for guiding maintenance transformation... 4. Negotiated Grammar Transformation NARCIS (Netherlands) 2012-01-01 In this paper, we study controlled adaptability of metamodel transformations. We consider one of the most rigid metamodel transformation formalisms — automated grammar transformation with operator suites, where a transformation script is built in such a way that it is essentially meant 5. On Hurwitz transformations International Nuclear Information System (INIS) Kibler, M.; Hage Hassan, M. 1991-04-01 A bibliography on the Hurwitz transformations is given. We deal here, with some details, with two particular Hurwitz transformations, viz., the R^4 → R^3 Kustaanheimo-Stiefel transformation and its R^8 → R^5 compact extension. These transformations are derived in the context of Fock-Bargmann-Schwinger calculus with special emphasis on angular momentum theory. 6. Generalized Fourier transforms classes DEFF Research Database (Denmark) Berntsen, Svend; Møller, Steen 2002-01-01 The Fourier class of integral transforms with kernels $B(\omega r)$ has by definition inverse transforms with kernel $B(-\omega r)$. The space of such transforms is explicitly constructed. A slightly more general class of generalized Fourier transforms is introduced. From the general theory... 7. Transforming the Way We Teach Function Transformations Science.gov (United States) Faulkenberry, Eileen Durand; Faulkenberry, Thomas J.
2010-01-01 In this article, the authors discuss "function," a well-defined rule that relates inputs to outputs. They have found that by using the input-output definition of "function," they can examine transformations of functions simply by looking at changes to input or output and the respective changes to the graph. Applying transformations to the input… 8. Segmentation of Polarimetric SAR Images Using Wavelet Transformation and Texture Features Science.gov (United States) Rezaeian, A.; Homayouni, S.; Safari, A. 2015-12-01 Polarimetric Synthetic Aperture Radar (PolSAR) sensors can collect useful observations from earth's surfaces and phenomena for various remote sensing applications, such as land cover mapping, change and target detection. These data can be acquired without the limitations of weather conditions, sun illumination and dust particles. As a result, SAR images, and in particular Polarimetric SAR (PolSAR) images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes the segmentation of these data difficult. In this paper, we use the wavelet transformation for segmentation of PolSAR images. Our proposed method is based on multi-resolution analysis of texture features using the wavelet transformation. Here, we use both the gray level information and the texture information. First, we produce coherency or covariance matrices and then generate a span image from them. In the next step, texture features are extracted from the sub-bands generated by the discrete wavelet transform (DWT). Finally, PolSAR images are segmented using clustering methods such as fuzzy c-means (FCM) and k-means clustering. We have applied the proposed methodology to full polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system, during July, in 2012 over an agricultural area in Winnipeg, Canada. 9.
SEGMENTATION OF POLARIMETRIC SAR IMAGES USING WAVELET TRANSFORMATION AND TEXTURE FEATURES Directory of Open Access Journals (Sweden) A. Rezaeian 2015-12-01 Full Text Available Polarimetric Synthetic Aperture Radar (PolSAR) sensors can collect useful observations from earth’s surfaces and phenomena for various remote sensing applications, such as land cover mapping, change and target detection. These data can be acquired without the limitations of weather conditions, sun illumination and dust particles. As a result, SAR images, and in particular Polarimetric SAR (PolSAR) images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes the segmentation of these data difficult. In this paper, we use the wavelet transformation for segmentation of PolSAR images. Our proposed method is based on multi-resolution analysis of texture features using the wavelet transformation. Here, we use both the gray level information and the texture information. First, we produce coherency or covariance matrices and then generate a span image from them. In the next step, texture features are extracted from the sub-bands generated by the discrete wavelet transform (DWT). Finally, PolSAR images are segmented using clustering methods such as fuzzy c-means (FCM) and k-means clustering. We have applied the proposed methodology to full polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system, during July, in 2012 over an agricultural area in Winnipeg, Canada. 10. Content Preserving Watermarking for Medical Images Using Shearlet Transform and SVD Science.gov (United States) Favorskaya, M. N.; Savchina, E. I. 2017-05-01 Medical Image Watermarking (MIW) is a special field of watermarking due to the requirements of the Digital Imaging and COmmunications in Medicine (DICOM) standard since 1993.
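The final clustering stage of the PolSAR segmentation pipeline in records 8-9 (k-means over per-pixel texture feature vectors) can be sketched as follows; the deterministic initialization from the first k points is a simplification for illustration only:

```python
def kmeans(points, k, iters=50):
    """Plain k-means on feature vectors (e.g. per-pixel wavelet texture
    features). Returns the cluster centres and a label per point."""
    centers = [tuple(p) for p in points[:k]]  # simplistic deterministic init

    def nearest(p):
        # index of the centre closest to p (squared Euclidean distance)
        return min(range(k),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))

    for _ in range(iters):
        # assignment step: group points by their nearest centre
        groups = [[] for _ in range(k)]
        for p in points:
            groups[nearest(p)].append(p)
        # update step: move each centre to the mean of its group
        for j, g in enumerate(groups):
            if g:
                centers[j] = tuple(sum(c) / len(g) for c in zip(*g))
    return centers, [nearest(p) for p in points]
```

A fuzzy c-means variant (the other clusterer the papers mention) replaces the hard assignment with per-point membership weights, but the alternate assign/update structure is the same.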
All 20 parts of the DICOM standard are revised periodically. The main idea of the MIW is to embed various types of information, including the doctor's digital signature, fragile watermark, electronic patient record, and main watermark in a view of the region of interest for the doctor, into the host medical image. These four types of information are represented in different forms; some of them are encrypted according to the DICOM requirements. However, all types of information ought to be merged into the generalized binary stream for embedding. The generalized binary stream may have a huge volume. Therefore, not all watermarking methods can be applied successfully. Recently, the digital shearlet transform was introduced as a rigorous mathematical framework for the geometric representation of multi-dimensional data. Some modifications of the shearlet transform, particularly the non-subsampled shearlet transform, can be associated with a multi-resolution analysis that provides a fully shift-invariant, multi-scale, and multi-directional expansion. During experiments, the quality of the extracted watermarks under the JPEG compression and typical internet attacks was estimated using several metrics, including the peak signal to noise ratio, structural similarity index measure, and bit error rate. 11. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain Directory of Open Access Journals (Sweden) Qiuling Wu 2018-05-01 Full Text Available In order to improve the robustness and imperceptibility in practical application, a novel audio watermarking algorithm with strong robustness is proposed by exploring the multi-resolution characteristic of discrete wavelet transform (DWT) and the energy compaction capability of discrete cosine transform (DCT).
The human auditory system is insensitive to the minor changes in the frequency components of the audio signal, so the watermarks can be embedded by slightly modifying the frequency components of the audio signal. The audio fragments segmented from the cover audio signal are decomposed by DWT to obtain several groups of wavelet coefficients with different frequency bands, and then the fourth level detail coefficient is selected to be divided into the former packet and the latter packet, which each undergo DCT to get two sets of transform domain coefficients (TDC), respectively. Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to the special embedding rule. The watermark extraction is blind, requiring no carrier audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, large payload capacity and strong robustness when resisting various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition and noise corruption. 12. Implementing wavelet packet transform for valve failure detection using vibration and acoustic emission signals International Nuclear Information System (INIS) Sim, H Y; Ramli, R; Abdullah, M A K 2012-01-01 The efficiency of reciprocating compressors relies heavily on the health condition of their moving components, most importantly their valves. Previous studies showed good correlation between the dynamic response and the physical condition of the valves. This can be achieved by employing the vibration technique, which is capable of monitoring the response of the valve, and the acoustic emission technique, which is capable of detecting the valves' material deformation. However, the relationship/comparison between the two techniques is rarely investigated. In this paper, the two techniques were examined using time-frequency analysis.
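The "modify the average amplitude to embed a bit" idea of record 11 can be sketched with a simple quantization-style rule. Here `delta` is a hypothetical quantization step, and this rule stands in for (rather than reproduces) the paper's special embedding rule; extraction is blind, as in the paper:

```python
def embed_bit(block, bit, delta=2.0):
    """Embed one watermark bit by scaling the block of transform-domain
    coefficients so that its mean absolute amplitude lands on an even
    (bit 0) or odd (bit 1) multiple of delta."""
    mean = sum(abs(c) for c in block) / len(block)
    if mean == 0.0:
        return list(block)  # degenerate block, nothing to scale
    q = round(mean / delta)
    if q % 2 != bit:
        q += 1  # move to the nearest bin with the right parity
    return [c * (q * delta / mean) for c in block]

def extract_bit(block, delta=2.0):
    """Blind extraction: only the watermarked block and delta are needed."""
    mean = sum(abs(c) for c in block) / len(block)
    return round(mean / delta) % 2
```

Because the decision depends only on which quantization bin the mean amplitude falls in, moderate amplitude perturbations (of the kind the listed attacks introduce) leave the extracted bit unchanged, which is the robustness argument behind such schemes.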
Wavelet packet transform (WPT) was chosen as the multi-resolution analysis technique over continuous wavelet transform (CWT) and discrete wavelet transform (DWT). This is because WPT could overcome the high computational time and high redundancy problem in CWT and could provide detailed analysis of the high frequency components compared to DWT. The features of both signals can be extracted by evaluating the normalised WPT coefficients for different time windows under different valve conditions. By comparing the normalised coefficients over a certain time frame and frequency range, the feature vectors revealing the condition of valves can be constructed. One-way analysis of variance was employed on these feature vectors to test the significance of data under different valve conditions. It is believed that AE signals can give a better representation of the valve condition as they can detect both the fluid motion and material deformation of valves, as compared to the vibration signals. 13. 4D-CT Lung registration using anatomy-based multi-level multi-resolution optical flow analysis and thin-plate splines. Science.gov (United States) Min, Yugang; Neylon, John; Shah, Amish; Meeks, Sanford; Lee, Percy; Kupelian, Patrick; Santhanam, Anand P 2014-09-01 The accuracy of 4D-CT registration is limited by inconsistent Hounsfield unit (HU) values in the 4D-CT data from one respiratory phase to another and lower image contrast for lung substructures. This paper presents an optical flow and thin-plate spline (TPS)-based 4D-CT registration method to account for these limitations. The use of unified HU values on multiple anatomy levels (e.g., the lung contour, blood vessels, and parenchyma) accounts for registration errors caused by inconsistent landmark HU values. While 3D multi-resolution optical flow analysis registers each anatomical level, TPS is employed for propagating the results from one anatomical level to another, ultimately leading to the 4D-CT registration.
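The thin-plate spline propagation step in record 13 rests on standard 2-D TPS interpolation: solve a small linear system so that the spline passes exactly through the control points, then evaluate it anywhere. A generic sketch (not the authors' implementation; in registration one such interpolant would be fitted per displacement component):

```python
import math

def tps_fit(points, values):
    """Fit a 2-D thin-plate spline through scattered control points
    and return it as a callable f(x, y)."""
    n = len(points)

    def U(r2):
        # TPS radial basis U(r) = r^2 log r, written in terms of r^2
        return 0.5 * r2 * math.log(r2) if r2 > 0 else 0.0

    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    # Assemble the (n+3) x (n+3) system [[K, P], [P^T, 0]] [w; a] = [v; 0]
    m = n + 3
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for i in range(n):
        for j in range(n):
            A[i][j] = U(dist2(points[i], points[j]))
        A[i][n], A[i][n + 1], A[i][n + 2] = 1.0, points[i][0], points[i][1]
        A[n][i], A[n + 1][i], A[n + 2][i] = 1.0, points[i][0], points[i][1]
        b[i] = values[i]

    # Gaussian elimination with partial pivoting, then back substitution
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    w, (a0, a1, a2) = x[:n], x[n:]

    def f(px, py):
        # affine part plus weighted radial basis terms
        return a0 + a1 * px + a2 * py + sum(
            w[i] * U(dist2((px, py), points[i])) for i in range(n))
    return f
```

The affine block `P` guarantees exact reproduction of linear trends, while the `U(r)` terms minimize bending energy between the control points, which is why TPS is the usual choice for propagating sparse landmark displacements to a dense deformation.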
4D-CT registration was validated using target registration error (TRE), inverse consistency error (ICE) metrics, and a statistical image comparison using Gamma criteria of 1 % intensity difference in a 2 mm³ window range. Validation results showed that the proposed method was able to register CT lung datasets with TRE and ICE values <3 mm. In addition, the average number of voxels that failed the Gamma criteria was <3 %, which supports the clinical applicability of the proposed registration mechanism. The proposed 4D-CT registration computes the volumetric lung deformations within clinically viable accuracy. 14. Shoreline change after 12 years of tsunami in Banda Aceh, Indonesia: a multi-resolution, multi-temporal satellite data and GIS approach Science.gov (United States) Sugianto, S.; Heriansyah; Darusman; Rusdi, M.; Karim, A. 2018-04-01 The Indian Ocean Tsunami event on the 26 December 2004 caused severe damage to some shorelines in Banda Aceh City, Indonesia. The impact can be traced back using remote sensing data combined with GIS. The approach is incorporated with image processing to analyze the extent of shoreline changes with multi-temporal data after 12 years of tsunami. This study uses multi-resolution and multi-temporal satellite images from QuickBird and IKONOS to demarcate the shoreline of Banda Aceh before and after the tsunami. The research has demonstrated a significant change to the shoreline in the form of abrasion between 2004 and 2005, ranging from a few meters to hundreds of meters. Between 2004 and 2011 the shoreline did not return to its pre-tsunami state, considered a post-tsunami impact. The abrasion ranges between 18.3 and 194.93 meters. Further, the change in 2009-2011 shows slow change of the Banda Aceh shoreline, considered to be without tsunami impact, e.g.
abrasion caused by ocean waves eroding the coast, while in specific areas accretion occurs, caused by sediment carried by the river flow into the sea near the shoreline of the study area. 15. A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems Directory of Open Access Journals (Sweden) Daehyeok Kim 2017-06-01 Full Text Available In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed in 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) for the CIS that supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 mode of pixel resolution. We show that the scaled-resolution images enable CIS to reduce total power consumption while images hold steady without events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (with full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital) and 14 frame/s of frame rates. 16. New Resolution Strategy for Multi-scale Reaction Waves using Time Operator Splitting and Space Adaptive Multiresolution: Application to Human Ischemic Stroke* Directory of Open Access Journals (Sweden) Louvet Violaine 2011-12-01 Full Text Available We tackle the numerical simulation of reaction-diffusion equations modeling multi-scale reaction waves. This type of problem induces peculiar difficulties and potentially large stiffness which stem from the broad spectrum of temporal scales in the nonlinear chemical source term as well as from the presence of large spatial gradients in the reactive fronts, spatially very localized. A new resolution strategy was recently introduced
that combines time operator splitting, using dedicated high-order time integration methods, with space adaptive multiresolution. Based on recent theoretical studies of numerical analysis, such a strategy leads to a splitting time step which is restricted neither by the fastest scales in the source term nor by stability limits related to the diffusion problem, but only by the physics of the phenomenon. In this paper, the efficiency of the method is evaluated through 2D and 3D numerical simulations of a human ischemic stroke model, conducted on a simplified brain geometry, for which a simple parallelization strategy for shared-memory architectures was implemented in order to reduce the computing costs related to the "detailed chemistry" features of the model. 17. A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems. Science.gov (United States) Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn 2017-06-25 In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed in an 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) for the CIS that supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while the monitored scene holds steady without events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (with full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital) and a frame rate of 14 frames/s. 18. Three dimensional canonical transformations International Nuclear Information System (INIS) Tegmen, A.
2010-01-01 A generic construction of canonical transformations is given for three-dimensional phase spaces on which the Nambu bracket is imposed. First, the canonical transformations are defined based on canonoid transformations. Second, it is shown that determination of the generating functions, and of the transformation itself for a given generating function, is possible by solving the corresponding Pfaffian differential equations. The possible types of generating functions are introduced and all of them are listed. Infinitesimal canonical transformations are also discussed as a complementary subject. Finally, it is shown that decomposition of canonical transformations is possible in three-dimensional phase spaces just as in the usual two-dimensional ones. 19. Laplace Transforms without Integration Science.gov (United States) Robertson, Robert L. 2017-01-01 Calculating Laplace transforms from the definition often requires tedious integrations. This paper provides an integration-free technique for calculating Laplace transforms of many familiar functions. It also shows how the technique can be applied to probability theory. 20. On Poisson Nonlinear Transformations Directory of Open Access Journals (Sweden) Nasir Ganikhodjaev 2014-01-01 Full Text Available We construct the family of Poisson nonlinear transformations defined on the countable sample space of nonnegative integers and investigate their trajectory behavior. We have proved that these nonlinear transformations are regular. 1. Chemical Transformation Simulator Science.gov (United States) The Chemical Transformation Simulator (CTS) is a web-based, high-throughput screening tool that automates the calculation and collection of physicochemical properties for an organic chemical of interest and its predicted products resulting from transformations in environmental sy... 2. Diffusionless phase transformations International Nuclear Information System (INIS) Vejman, K.M.
1987-01-01 Diffusionless phase transformations in metals and alloys, in which atomic displacements occur over distances smaller than interatomic ones and the relative correspondence of neighbouring atoms is preserved, are considered. Special attention is paid to the mechanism of martensitic transformations. The phenomenological crystallographic theory of martensitic transformations is presented. Two types of martensitic transformations, distinct from the energy viewpoint, are pointed out - thermoelastic and non-thermoelastic ones - which are characterized by their transformation hysteresis and by the way the reverse martensite-to-initial-phase transformation is realized. Mechanical effects in martensitic transformations are analyzed. The problem of diffusionless formation of ω-phases and the effect of impurities and vacancies on the process are briefly discussed. The role of charge density waves in phase transformations of the second type (transition of the initial phase into a noncommensurate one) and of the first type (transition of a noncommensurate phase into a commensurate one) is considered. 3. Equations For Rotary Transformers Science.gov (United States) Salomon, Phil M.; Wiktor, Peter J.; Marchetto, Carl A. 1988-01-01 Equations derived for input impedance, input power, and ratio of secondary current to primary current of a rotary transformer. Used for quick analysis of transformer designs. The circuit model is one commonly used in textbooks on the theory of ac circuits. 4. Entropy of Baker's Transformation Institute of Scientific and Technical Information of China (English) 栾长福 2003-01-01 Four theorems about four different kinds of entropies for Baker's transformation are presented. The Kolmogorov entropy of Baker's transformation is sensitive to the initial flips over time. The topological entropy of Baker's transformation is found to be log k. The conditions for a state of Baker's transformation to be forbidden are also derived.
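The entropy claims in the Baker's-transformation entry above can be checked numerically. The sketch below (numpy only, not from the cited paper) estimates the Kolmogorov-Sinai entropy of the standard two-branch baker's map from the growth rate of block entropies of its symbolic itinerary; for the k-branch generalization the topological entropy is log k, here log 2.

```python
import numpy as np
from collections import Counter

def baker(x, y):
    """One step of the baker's map on the unit square."""
    return (2 * x) % 1.0, (y + np.floor(2 * x)) / 2.0

# For a Lebesgue-random point the itinerary s_n = floor(2 x_n) is an i.i.d.
# fair-coin sequence (the binary digits of x0), so we sample the symbolic
# dynamics directly rather than iterate in floating point, where repeated
# doubling exhausts the 53-bit mantissa after ~50 steps.
rng = np.random.default_rng(0)
symbols = rng.integers(0, 2, size=200_000)

def block_entropy(s, k):
    """Shannon entropy (in nats) of the empirical distribution of k-blocks."""
    counts = np.array(list(Counter(
        tuple(s[i:i + k]) for i in range(len(s) - k + 1)).values()), float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# The Kolmogorov-Sinai entropy is the growth rate of block entropies;
# for the two-branch baker's map it equals log 2 ≈ 0.693 nats.
h = block_entropy(symbols, 6) - block_entropy(symbols, 5)
print(round(h, 2))  # ≈ 0.69
```

The estimate converges to log 2 as the sequence length and block size grow; the finite-sample bias at these sizes is well below the printed precision.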
The relations among the Shannon, Kolmogorov, topological and Boltzmann entropies are discussed in detail. 5. Electrostatic shielding of transformers Energy Technology Data Exchange (ETDEWEB) De Leon, Francisco 2017-11-28 Toroidal transformers are currently used only in low-voltage applications. There is no published experience for toroidal transformer design at distribution-level voltages. Toroidal transformers are provided with electrostatic shielding to make high-voltage applications possible and to withstand the impulse test. 6. Defense Business Transformation Science.gov (United States) 2009-12-01 Defense Business Transformation, by Jacques S. Gansler and William Lucyshyn, The Center for Technology and National... Part One: DoD Business Transformation 7. Transforming the Force and Logistics Transformation National Research Council Canada - National Science Library Cook, Katherine M 2006-01-01 U.S. Army transformation strategy addresses the imperative to change the Army from a Cold War-oriented design to one that is more responsive, agile, and adaptable to present and emerging threats across... 8. Fractional finite Fourier transform. Science.gov (United States) Khare, Kedar; George, Nicholas 2004-07-01 We show that a fractional version of the finite Fourier transform may be defined by using prolate spheroidal wave functions of order zero. The transform is linear and additive in its index and asymptotically goes over to Namias's definition of the fractional Fourier transform. As a special case of this definition, it is shown that the finite Fourier transform may be inverted by using information over a finite range of frequencies in Fourier space, the inversion being sensitive to noise. Numerical illustrations for both forward (fractional) and inverse finite transforms are provided. 9.
Nonsynchronous Noncommensurate Impedance Transformers DEFF Research Database (Denmark) Zhurbenko, Vitaliy; Kim, K 2012-01-01 Nonsynchronous noncommensurate impedance transformers consist of a combination of two types of transmission lines: transmission lines with a characteristic impedance equal to the impedance of the source, and transmission lines with a characteristic impedance equal to the load. The practical advantage of such transformers is that they can be constructed using sections of transmission lines with a limited variety of characteristic impedances. These transformers also provide comparatively compact size in applications where a wide transformation ratio is required. This paper presents the data … matrix approach and is experimentally verified by synthesizing a 12-section nonsynchronous noncommensurate impedance transformer. The measured characteristics of the transformer are compared to the characteristics of a conventional tapered line transformer. 10. Clustering of France Monthly Precipitation, Temperature and Discharge Based on their Multiresolution Links with 500mb Geopotential Height from 1968 to 2008 Science.gov (United States) Massei, N.; Fossa, M.; Dieppois, B.; Vidal, J. P.; Fournier, M.; Laignel, B. 2017-12-01 In the context of climate change and the ever-growing use of water resources, identifying how the climate and watershed signatures in discharge variability change with geographic location is of prime importance. This study aims at establishing how the 1968-2008 multiresolution links between three local hydrometeorological variables (precipitation, temperature and discharge) and 500 mb geopotential height are structured over France. First, a methodology is introduced that encodes the 3D geopotential height data into its 1D conformal modulus time series. Then, for each local variable, the covariations with geopotential height are computed with cross wavelet analysis.
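The cross wavelet step used in the clustering study above can be sketched compactly. The following numpy-only toy (not the study's code; a standard Morlet wavelet with center frequency w0 = 6 is assumed) computes the cross wavelet spectrum of two noisy signals that share a 64-sample oscillation and recovers that common time scale from the peak of the time-averaged cross power:

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """CWT with an analytic Morlet wavelet, one FFT multiplication per scale."""
    n = len(x)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(n)                       # cycles per sample
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi = (np.pi ** -0.25) * np.exp(-0.5 * (s * 2 * np.pi * freqs - w0) ** 2)
        psi[freqs < 0] = 0.0                         # analytic: positive freqs only
        out[i] = np.fft.ifft(X * psi) * np.sqrt(s)
    return out

# Two noisy signals sharing a 64-sample oscillation (phase-shifted).
n, period = 1024, 64.0
t = np.arange(n)
x = np.sin(2 * np.pi * t / period) + 0.3 * np.random.default_rng(1).standard_normal(n)
y = np.sin(2 * np.pi * t / period + 0.8) + 0.3 * np.random.default_rng(2).standard_normal(n)

scales = np.arange(4.0, 128.0, 2.0)
Wx, Wy = morlet_cwt(x, scales), morlet_cwt(y, scales)
Wxy = Wx * np.conj(Wy)                               # cross wavelet spectrum
power = np.abs(Wxy).mean(axis=1)                     # time-averaged cross power

# Morlet scale -> Fourier period: lambda = 4*pi*s / (w0 + sqrt(2 + w0^2))
best = scales[np.argmax(power)]
period_est = best * 4 * np.pi / (6.0 + np.sqrt(2.0 + 6.0 ** 2))
print(round(period_est))  # within a grid step or two of the true period, 64
```

The phase of `Wxy` (not used here) carries the relative lag between the two series at each scale, which is what studies of this kind cluster on.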
Finally, a clustering analysis of each variable's cross spectra is done using bootstrap clustering. We compare the clustering results for each local variable in order to untangle the watershed drivers from the climate drivers in France's river discharge. Additionally, we identify the areas in the geopotential height field that are responsible for the spatial structure of each local variable. Main results from this study show that for precipitation and discharge, clear spatial zones emerge. Each cluster is characterized by different amplitudes and/or time scales of covariation with geopotential height. Precipitation and discharge clustering differ, with the latter being simpler, which indicates a strong low-frequency modulation by the watersheds all over France. Temperature, on the other hand, shows less clearly defined spatial zones. For precipitation and discharge, we show that the main action path starts at the northern tropical zone and then moves up to the central North Atlantic zone, which seems to indicate an interaction between convective cell variability and the reinforcement of the westerly jets as one of the main controls of precipitation and discharge over France. Temperature shows a main zone of action directly over France, hinting at local temperature/pressure interactions. 11. Accurate reconstruction in digital holographic microscopy using Fresnel dual-tree complex wavelet transform Science.gov (United States) Zhang, Xiaolei; Zhang, Xiangchao; Yuan, He; Zhang, Hao; Xu, Min 2018-02-01 Digital holography is a promising measurement method in the fields of bio-medicine and micro-electronics. But the captured images of digital holography are severely polluted by speckle noise because of optical scattering and diffraction. By analyzing the properties of Fresnel diffraction and the topographies of micro-structures, a novel reconstruction method based on the dual-tree complex wavelet transform (DT-CWT) is proposed.
This algorithm is shift-invariant and capable of obtaining sparse representations for the diffracted signals of salient features; thus it is well suited for multiresolution processing of the interferometric holograms of directional morphologies. An explicit representation of orthogonal Fresnel DT-CWT bases and a specific filtering method are developed. This method can effectively remove the speckle noise without destroying the salient features. Finally, the proposed reconstruction method is compared with the conventional Fresnel diffraction integration and Fresnel wavelet transform with compressive sensing methods to validate its superiority in topography reconstruction and speckle removal. 12. The use of wavelet transforms in the solution of two-phase flow problems International Nuclear Information System (INIS) Moridis, G.J.; Nikolaou, M.; You, Yong 1994-10-01 In this paper we present the use of wavelets to solve the nonlinear partial differential equation (PDE) of two-phase flow in one dimension. The wavelet transforms allow a drastically different approach in the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated.
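The locality property invoked in the two-phase flow entry above — a shock affects only the wavelet coefficients near it — is easy to demonstrate with a hand-rolled Haar transform (a simpler basis than the Daubechies or Chui-Wang wavelets the paper uses; numpy only):

```python
import numpy as np

def haar_dwt(x):
    """Full orthonormal Haar decomposition: returns the final approximation
    coefficient array and the list of detail coefficients per level."""
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) > 1:
        s = (x[0::2] + x[1::2]) / np.sqrt(2)   # scaling (smooth) part
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # wavelet (detail) part
        details.append(d)
        x = s
    return x, details

# An idealized saturation front (shock), as in the Buckley-Leverett problem.
n = 512
grid = np.arange(n) / n
u = np.where(grid < 0.4, 1.0, 0.0)

approx, details = haar_dwt(u)
all_d = np.concatenate(details)
significant = int(np.sum(np.abs(all_d) > 1e-3 * np.abs(all_d).max()))

# Locality of the MRA: of the 511 detail coefficients, only the single pair
# straddling the shock at each of the 9 levels is nonzero.
print(len(all_d), significant)  # → 511 9
```

A trigonometric basis would need many global modes to represent the same jump without ringing; here the front is captured by one coefficient per resolution level.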
We determine that the Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. Our results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts. 13. Transformative environmental governance Science.gov (United States) Chaffin, Brian C.; Garmestani, Ahjond S.; Gunderson, Lance H.; Harm Benson, Melinda; Angeler, David G.; Arnold, Craig Anthony (Tony); Cosens, Barbara; Kundis Craig, Robin; Ruhl, J.B.; Allen, Craig R. 2016-01-01 Transformative governance is an approach to environmental governance that has the capacity to respond to, manage, and trigger regime shifts in coupled social-ecological systems (SESs) at multiple scales. The goal of transformative governance is to actively shift degraded SESs to alternative, more desirable, or more functional regimes by altering the structures and processes that define the system. Transformative governance is rooted in ecological theories to explain cross-scale dynamics in complex systems, as well as social theories of change, innovation, and technological transformation. Similar to adaptive governance, transformative governance involves a broad set of governance components, but requires additional capacity to foster new social-ecological regimes including increased risk tolerance, significant systemic investment, and restructured economies and power relations. Transformative governance has the potential to actively respond to regime shifts triggered by climate change, and thus future research should focus on identifying system drivers and leading indicators associated with social-ecological thresholds. 14.
National-scale crop type mapping and area estimation using multi-resolution remote sensing and field survey Science.gov (United States) Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M. 2016-12-01 Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km2 with a standard error of 23,000 km2. This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. 
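The calibration step described in the crop-mapping entry above — adjusting the map so that pixel counting reproduces the sample-based area estimate — can be sketched with made-up numbers (all values below are illustrative, not the paper's; the paper's actual calibration procedure may differ):

```python
import numpy as np

# Stand-in per-pixel crop scores in [0, 1) for a toy 1000 x 1000 map.
rng = np.random.default_rng(0)
score = rng.random((1000, 1000))
pixel_area_km2 = 0.0009               # e.g. a 30 m x 30 m Landsat pixel

sample_area_km2 = 450.0               # hypothetical field-sample area estimate

# Pixel counting at a default threshold disagrees with the sample estimate...
mapped_km2 = (score > 0.6).sum() * pixel_area_km2

# ...so bisect the score threshold until pixel counting reproduces it.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    if (score > mid).sum() * pixel_area_km2 > sample_area_km2:
        lo = mid                      # too much area mapped -> raise threshold
    else:
        hi = mid
calibrated_km2 = (score > (lo + hi) / 2).sum() * pixel_area_km2
print(round(mapped_km2), round(calibrated_km2))  # ≈ 360 and ≈ 450
```

The design-based standard error of the sample estimate is untouched by this step; calibration only removes the map's systematic over- or under-counting.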
The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. The approach has also been applied for other crops in 15. Quantized Bogoliubov transformations International Nuclear Information System (INIS) Geyer, H.B. 1984-01-01 The boson mapping of single fermion operators in a situation dominated by the pairing force gives rise to a transformation that can be considered a quantized version of the Bogoliubov transformation. This transformation can also be obtained as an exact special case of operators constructed from an approximate treatment of particle number projection, suggesting a method of obtaining the boson mapping in cases more complicated than that of pairing force domination 16. Phase transformation and diffusion CERN Document Server Kale, G B; Dey, G K 2008-01-01 Given that the basic purpose of all research in materials science and technology is to tailor the properties of materials to suit specific applications, phase transformations are the natural key to the fine-tuning of the structural, mechanical and corrosion properties. A basic understanding of the kinetics and mechanisms of phase transformation is therefore of vital importance. Apart from a few cases involving crystallographic martensitic transformations, all phase transformations are mediated by diffusion. Thus, proper control and understanding of the process of diffusion during nucleation, g 17. The convolution transform CERN Document Server Hirschman, Isidore Isaac 2005-01-01 In studies of general operators of the same nature, general convolution transforms are immediately encountered as the objects of inversion. 
The relation between differential operators and integral transforms is the basic theme of this work, which is geared toward upper-level undergraduates and graduate students. It may be read easily by anyone with a working knowledge of real and complex variable theory. Topics include the finite and non-finite kernels, variation diminishing transforms, asymptotic behavior of kernels, real inversion theory, representation theory, the Weierstrass transform, and 18. Coaxial pulse matching transformer International Nuclear Information System (INIS) Ledenev, V.V.; Khimenko, L.T. 1986-01-01 This paper describes a coaxial pulse matching transformer with comparatively simple design, increased mechanical strength, and low stray inductance. The transformer design makes it easy to change the turns ratio. The circuit of the device and an expression for the current multiplication factor are presented; experiments confirm the efficiency of the transformer. Apparatus with a coaxial transformer for producing high-power pulsed magnetic fields is designed (current pulses of 1-10 MA into a load and a natural frequency of 100 kHz) 19. Distributed photovoltaic grid transformers CERN Document Server 2014-01-01 The demand for alternative energy sources fuels the need for electric power and controls engineers to possess a practical understanding of transformers suitable for solar energy. Meeting that need, Distributed Photovoltaic Grid Transformers begins by explaining the basic theory behind transformers in the solar power arena, and then progresses to describe the development, manufacture, and sale of distributed photovoltaic (PV) grid transformers, which help boost the electric DC voltage (generally at 30 volts) harnessed by a PV panel to a higher level (generally at 115 volts or higher) once it is 20. 
A relation connecting scale transformation, Galilean transformation and Baecklund transformation for the nonlinear Schroedinger equation International Nuclear Information System (INIS) Steudel, H. 1980-01-01 It is shown that the two-parameter manifold of Baecklund transformations known for the nonlinear Schroedinger equation can be generated from one Baecklund transformation with specified parameters by use of scale transformation and Galilean transformation. (orig.) 1. Discrete Gabor transform and discrete Zak transform NARCIS (Netherlands) Bastiaans, M.J.; Namazi, N.M.; Matthews, K. 1996-01-01 Gabor's expansion of a discrete-time signal into a set of shifted and modulated versions of an elementary signal or synthesis window is introduced, along with the inverse operation, i.e. the Gabor transform, which uses an analysis window that is related to the synthesis window and with the help of 2. Transformative environmental governance Science.gov (United States) Transformative governance is an approach to environmental governance that has the capacity to respond to, manage, and trigger regime shifts in coupled social-ecological systems (SESs) at multiple scales. The goal of transformative governance is to actively shift degraded SESs to ... 3. Genetic Transformation of Bacteria. Science.gov (United States) Moss, Robert. 1991-01-01 An activity in which students transform an ampicillin-sensitive strain of E. coli with a plasmid containing a gene for ampicillin resistance is described. The procedure for the preparation of competent cells and the transformation of competent E. coli is provided. (KR) Energy Technology Data Exchange (ETDEWEB) Szu, H.; Hsu, C. [Univ. of Southwestern Louisiana, Lafayette, LA (United States) 1996-12-31 Human sensor systems (HSS) may be approximately described as an adaptive or self-learning version of the wavelet transform (WT), capable of learning from several input-output associative pairs of suitable mother wavelets.
Such an Adaptive WT (AWT) is a redundant combination of mother wavelets to either represent or classify inputs. 5. On an integral transform Directory of Open Access Journals (Sweden) D. Naylor 1986-01-01 Full Text Available This paper establishes properties of a convolution type integral transform whose kernel is a Macdonald type Bessel function of zero order. An inversion formula is developed and the transform is applied to obtain the solution of some related integral equations. 6. A Transformation Called "Twist" Science.gov (United States) Hwang, Daniel 2010-01-01 The transformations found in secondary mathematics curriculum are typically limited to stretches and translations (e.g., ACARA, 2010). Advanced students may find the transformation, twist, to be of further interest. As most available resources are written for professional-level readers, this article is intended to be an introduction accessible to… 7. Transformation of technical infrastructure DEFF Research Database (Denmark) Nielsen, Susanne Balslev 1998-01-01 Article about the need for new planning forums in order to initiate transformations within the management of large technical systems for energy, waste and water supply. 8. Flames of Transformation DEFF Research Database (Denmark) Sørensen, Tim Flohr; Bille, Mikkel 2008-01-01 This paper explores the transformative power of fire, its fundamental ability to change material worlds and affect our experience of its materiality. The paper examines material transformations related to death as a means of illustrating the powerful property of fire as a materially destructive yet... 9. Disc piezoelectric ceramic transformers.
Science.gov (United States) Erhart, Jirií; Půlpán, Petr; Doleček, Roman; Psota, Pavel; Lédl, Vít 2013-08-01 In this contribution, we present our study on disc-shaped and homogeneously poled piezoelectric ceramic transformers working in planar-extensional vibration modes. Transformers are designed with electrodes divided into wedge, axisymmetrical ring-dot, moonie, smile, or yin-yang segments. Transformation ratio, efficiency, and input and output impedances were measured for low-power signals. Transformer efficiency and transformation ratio were measured as a function of frequency and impedance load in the secondary circuit. The optimum impedance for maximum efficiency has been found. Maximum efficiency and no-load transformation ratio can reach almost 100% and 52 for the fundamental resonance of ring-dot transformers and 98% and 67 for the second resonance of 2-segment wedge transformers. Maximum efficiency was reached at the optimum impedance, which is in the range from 500 Ω to 10 kΩ, depending on the electrode pattern and size. The fundamental vibration mode and its overtones were further studied using frequency-modulated digital holographic interferometry and by the finite element method. Complementary information has been obtained by infrared camera visualization of surface temperature profiles at higher driving power. 10. A Selective CPS Transformation DEFF Research Database (Denmark) Nielsen, Lasse Riechstein 2001-01-01 We characterize this involvement as a control effect and present a selective CPS transformation that makes functions and expressions continuation-passing if they have a control effect, and that leaves the rest of the program in direct style. We formalize this selective CPS transformation with an operational... 11. Integral transformational coaching NARCIS (Netherlands) Keizer, W.A.J.; Nandram, S.S. 2009-01-01 In Chap.
12, Keizer and Nandram present the concept of Integral Transformational Coaching based on the concept of Flow and its effects on work performance. Integral Transformational Coaching is a method that prevents and cures unhealthy stress and burnout. They draw on some tried and tested 12. A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms Directory of Open Access Journals (Sweden) Nasreddine Taleb 2010-09-01 Full Text Available Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques of fitness sharing and elitism. Two NSCT-based methods are proposed for registration. A comparative study is established between these methods and a wavelet-based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its search speed-up property. Simulation results clearly show that both proposed techniques are really promising methods for image registration compared to the wavelet approach, while the second technique has led to the best performance results of all.
Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise. 13. A rigid image registration based on the nonsubsampled contourlet transform and genetic algorithms. Science.gov (United States) Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine 2010-01-01 Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques of fitness sharing and elitism. Two NSCT-based methods are proposed for registration. A comparative study is established between these methods and a wavelet-based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its search speed-up property. Simulation results clearly show that both proposed techniques are really promising methods for image registration compared to the wavelet approach, while the second technique has led to the best performance results of all.
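The genetic-algorithm search at the heart of the two registration entries above can be sketched in a deliberately reduced setting: translation-only registration on a synthetic image, with a plain intensity MSE as fitness (the papers search rotation as well and evaluate fitness on NSCT coefficients; everything below is a numpy-only toy with elitism, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

def make_image(n=64):
    """Synthetic image with a few Gaussian blobs (stand-in for a scene)."""
    y, x = np.mgrid[0:n, 0:n]
    img = np.zeros((n, n))
    for cx, cy in [(20, 18), (40, 44), (50, 22)]:
        img += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 30.0)
    return img

def shift_image(img, tx, ty):
    """Integer-pixel translation (wrap-around, to keep the sketch short)."""
    return np.roll(np.roll(img, int(round(ty)), axis=0), int(round(tx)), axis=1)

ref = make_image()
moving = shift_image(ref, 5, -3)          # ground truth: tx = 5, ty = -3

def fitness(p):
    """Negative mean squared error after undoing the candidate shift."""
    tx, ty = p
    return -np.mean((shift_image(moving, -tx, -ty) - ref) ** 2)

# Minimal generational GA: rank selection, elitism, Gaussian mutation.
pop = rng.uniform(-8, 8, size=(40, 2))
for _ in range(30):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[::-1][:8]]             # keep the 8 best
    children = elite[rng.integers(0, 8, 32)] + rng.normal(0, 1.0, (32, 2))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print(np.round(best).astype(int))   # ≈ [ 5 -3]
```

In the papers this search runs on the coarse NSCT (or wavelet) bands first, which shrinks the search space the GA must cover before refinement at finer resolutions.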
Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise. 14. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain Science.gov (United States) Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin 2018-04-01 Multiresolution-based methods, such as wavelet and Contourlet, are usually used for image fusion. This work presents a new image fusion framework utilizing area-based standard deviation in the dual tree Contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual tree Contourlet transform and low-pass and high-pass coefficients are obtained. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation. 15. Extraction of Nucleolus Candidate Zone in White Blood Cells of Peripheral Blood Smear Images Using Curvelet Transform Directory of Open Access Journals (Sweden) 2012-01-01 Full Text Available The main part of each white blood cell (WBC) is its nucleus, which contains chromosomes.
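The two fusion rules named in the multi-focus entry above — low-pass bands averaged with weights from the local (area) standard deviation, high-pass bands merged by max-absolute — can be sketched with a crude two-band decomposition standing in for the dual tree Contourlet transform (numpy only; the real transform is directional and multi-scale, so this is an illustration of the rules, not of the transform):

```python
import numpy as np

def box_blur(im):
    """Crude low-pass filter standing in for the transform's low-pass band."""
    return (np.roll(im, 1, 0) + np.roll(im, -1, 0) +
            np.roll(im, 1, 1) + np.roll(im, -1, 1) + im) / 5.0

def decompose(im):
    """Two-band stand-in decomposition: low-pass plus residual detail."""
    low = box_blur(im)
    return low, im - low

def area_std(im, k=3):
    """Local k x k standard deviation, used as the activity measure."""
    pad = np.pad(im, k // 2, mode='reflect')
    win = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    return win.std(axis=(-1, -2))

def fuse(lowA, highA, lowB, highB):
    """Low-pass: average weighted by local std; high-pass: max-absolute."""
    sA, sB = area_std(lowA), area_std(lowB)
    wA = sA / (sA + sB + 1e-12)
    low = wA * lowA + (1.0 - wA) * lowB
    high = np.where(np.abs(highA) >= np.abs(highB), highA, highB)
    return low, high

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
A, B = scene.copy(), scene.copy()
A[:, 32:] = box_blur(box_blur(scene))[:, 32:]   # A: right half out of focus
B[:, :32] = box_blur(box_blur(scene))[:, :32]   # B: left half out of focus

low_f, high_f = fuse(*decompose(A), *decompose(B))
fused = low_f + high_f
# The max-absolute rule keeps the sharper detail everywhere, so the fused
# detail band dominates each input's detail band pointwise.
print(bool((np.abs(high_f) >= np.abs(decompose(A)[1])).all()))  # True
```

Swapping the box blur for a genuine shift-invariant directional decomposition is what gives the published method its edge over plain wavelet fusion.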
Although white blood cells (WBCs) with giant nuclei are the main symptom of leukemia, they are not sufficient to confirm the disease, and other symptoms must be investigated. For example, another important symptom of leukemia is the existence of a nucleolus in the nucleus. The nucleus contains chromatin and a structure called the nucleolus. Chromatin is DNA in its active form, while the nucleolus is composed of protein and RNA, which are usually inactive. In this paper, to diagnose this symptom and in order to discriminate between nucleoli and chromatins, we employ the curvelet transform, which is a multiresolution transform for detecting 2D singularities in images. For this reason, nuclei are first extracted by means of the K-means method, then the curvelet transform is applied to the extracted nuclei and the coefficients are modified, and finally the reconstructed image is used to extract the candidate locations of chromatins and nucleoli. This method is applied to 100 microscopic images and detects the nucleolus candidate zone with a specificity of 80.2% and a sensitivity of 84.3%. After nucleolus candidate zone detection, new features that can be used to classify atypical and blast cells, such as the gradient of the saturation channel, are extracted. 16. Geological disaster survey based on Curvelet transform with borehole Ground Penetrating Radar in Tonglushan old mine site. Science.gov (United States) Tang, Xinjian; Sun, Tao; Tang, Zhijie; Zhou, Zenghui; Wei, Baoming 2011-06-01 The Tonglushan old mine site, located in Huangshi City, China, is world-famous. In recent years, however, some of the ruins have suffered geological damage such as local deformation and surface cracking. Structural abnormalities of the rock mass deep underground were surveyed with borehole ground penetrating radar (GPR) to find out whether there were any mined galleries or mined-out areas below the ruins.
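The K-means nucleus-extraction step that entry 15 builds on amounts to clustering pixel intensities. A one-dimensional sketch on synthetic data (the curvelet stage is not shown; the extreme-value initialisation and the 0.2/0.8 intensity levels are hypothetical choices for the demo):

```python
import numpy as np

def kmeans_1d(values, iters=20):
    # minimal two-cluster K-means on pixel intensities, initialised at the
    # intensity extremes to keep the demo deterministic
    centers = np.array([values.min(), values.max()])
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(2):
            if np.any(labels == j):            # guard against an empty cluster
                centers[j] = values[labels == j].mean()
    return labels, centers

# synthetic grayscale smear: dark nuclei (~0.2) on a bright background (~0.8)
rng = np.random.default_rng(2)
base = np.where(rng.random((32, 32)) < 0.1, 0.2, 0.8)
img = base + rng.normal(0.0, 0.05, base.shape)
labels, centers = kmeans_1d(img.ravel())
nucleus_mask = (labels == np.argmin(centers)).reshape(img.shape)
```

The darker cluster becomes the nucleus mask; in the paper's pipeline the curvelet transform then operates on the extracted nuclei.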
Using both the multiresolution analysis and the sub-band directionality of the Curvelet transform, the feature information of the targets' GPR signals was studied in the Curvelet transform domain. The heterogeneity of geotechnical media and the clutter jamming of the complicated background of GPR signals could be overcome well, and the singularity characteristic information of typical rock-mass signals could be extracted. Random noise was removed by thresholding in the Curvelet domain based on the statistical characteristics of the wanted signals and the noise; direct-wave suppression and spatial-distribution feature extraction then obtained better results by exploiting the directionality of the Curvelet transform. GprMax numerical modeling and analysis of the sample data have verified the feasibility and effectiveness of our method. The method is applicable to the analysis of the geological structure and disaster development at the Tonglushan old mine site. Copyright © 2011 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved. 17. Lectures on integral transforms CERN Document Server Akhiezer, N I 1988-01-01 This book, which grew out of lectures given over the course of several years at Kharkov University for students in the Faculty of Mechanics and Mathematics, is devoted to classical integral transforms, principally the Fourier transform, and their applications. The author develops the general theory of the Fourier transform for the space L^1(E_n) of integrable functions of n variables. His proof of the inversion theorem is based on the general Bochner theorem on integral transforms, a theorem having other applications within the subject area of the book. The author also covers Fourier-Plancherel theory in L^2(E_n).
In addition to the general theory of integral transforms, connections are established with other areas of mathematical analysis--such as the theory of harmonic and analytic functions, the theory of orthogonal polynomials, and the moment problem--as well as to mathematical physics. 18. The transformativity approach DEFF Research Database (Denmark) Holm, Isak Winkel; Lauta, Kristian Cedervall 2017-01-01 During the last five to ten years, a considerable body of research has begun to explore how disasters, real and imagined, trigger social transformations. Even if the contributions to this research stem from a multitude of academic disciplines, we argue in the article, they constitute...... an identifiable and promising approach for future disaster research. We suggest naming it the transformativity approach. Whereas the vulnerability approach explores the social causation of disasters, the transformativity approach reverses the direction of the gaze and investigates the social transformation...... brought about by disasters. Put simply, the concept of vulnerability is about the upstream causes of disaster and the concept of transformativity about the downstream effects. By discussing three recent contributions (by the historian Greg Bankoff, the legal sociologist Michelle Dauber... 19. Transformation of Digital Ecosystems DEFF Research Database (Denmark) Henningsson, Stefan; Hedman, Jonas 2014-01-01 the Digital Ecosystem Technology Transformation (DETT) framework for explaining technology-based transformation of digital ecosystems by integrating theories of business and technology ecosystems.
The framework depicts ecosystem transformation as distributed and emergent from micro-, meso-, and macro-level......In digital ecosystems, the fusion relation between business and technology means that the decision of technical compatibility of the offering is also the decision of how to position the firm relative to the coopetitive relations that characterize business ecosystems. In this article we develop...... coopetition. The DETT framework constitutes an alternative to the existing explanations of digital ecosystem transformation as the rational management of one central actor balancing ecosystem tensions. We illustrate the use of the framework by a case study of transformation in the digital payment ecosystem... 20. Transformers and motors CERN Document Server Shultz, George 1991-01-01 Transformers and Motors is an in-depth technical reference which was originally written for the National Joint Apprenticeship Training Committee to train apprentice and journeymen electricians. This book provides detailed information for equipment installation and covers equipment maintenance and repair. The book also includes troubleshooting and replacement guidelines, and it contains a minimum of theory and math. In this easy-to-understand, practical sourcebook, you'll discover: * Explanations of the fundamental concepts of transformers and motors * Transformer connections and d 1. Transforming Consumers Into Brands DEFF Research Database (Denmark) Erz, Antonia; Christensen, Anna-Bertha Heeris 2018-01-01 The goal of this research is to explore the transformational power of a new consumption and production practice, the practice of blogging, to understand its impact on consumers' identity transformations beyond their self-concept as consumers and on the blogosphere as an organizational field....... Through an exploratory study of over 12,000 blog posts from five fashion bloggers, complemented by in-depth interviews, we trace the transformation of consumer bloggers.
We identify and describe three identity phases, the individual consumer, collective blogger, and blogger identity phases, and two... 2. On numerical Bessel transformation International Nuclear Information System (INIS) Sommer, B.; Zabolitzky, J.G. 1979-01-01 The authors present a computer program to calculate three-dimensional Fourier or Bessel transforms and definite integrals with Bessel functions. Numerical integration of systems containing Bessel functions occurs in many physical problems, e.g. the electromagnetic form factor of nuclei and all transitions involving multipole expansions at high momenta. Filon's integration rule is extended to spherical Bessel functions. The numerical error is of the order of the Simpson error term of the function which has to be transformed. Thus one gets a stable integral even at large arguments of the transformed function. (Auth.) 3. Energy Transformation of Croatia International Nuclear Information System (INIS) Potocnik, V. 2014-01-01 Due to obvious climate change, caused mainly by combustion of fossil fuels, as well as to their modest reserves, energy transformation is under way. It is the transition from fossil fuels to improved energy efficiency (ENEF) and renewable energy sources (RES). Germany has the leading role in the energy transformation with its 'Energiewende', which among other things includes closing existing nuclear power plants by 2022. Croatia has very limited proven fossil fuel reserves, which cover 3/4 of its primary energy consumption. Croatia also has large potential for improvements in ENEF and RES. Therefore, energy transformation of Croatia is justified. (author). 4. Biolistics Transformation of Wheat Science.gov (United States) Sparks, Caroline A.; Jones, Huw D. We present a complete, step-by-step guide to the production of transformed wheat plants using a particle bombardment device to deliver plasmid DNA into immature embryos and the regeneration of transgenic plants via somatic embryogenesis.
Currently, this is the most commonly used method for transforming wheat, and it offers some advantages. However, it will be interesting to see whether this position is challenged as facile methods are developed for delivering DNA by Agrobacterium tumefaciens or by the production of transformants via a germ-line process (see other chapters in this book). 5. Laplace transforms essentials CERN Document Server Shafii-Mousavi, Morteza 2012-01-01 REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Laplace Transforms includes the Laplace transform, the inverse Laplace transform, special functions and properties, applications to ordinary linear differential equations, Fourier tr 6. Fourier transform NMR International Nuclear Information System (INIS) Hallenga, K. 1991-01-01 This paper discusses the concept of Fourier transformation, one of the many precious legacies of the French mathematician Jean Baptiste Joseph Fourier, essential for understanding the link between continuous-wave (CW) and Fourier transform (FT) NMR. Although in modern FT NMR the methods used to obtain a frequency spectrum from the time-domain signal may vary greatly, from the efficient Cooley-Tukey algorithm to very elaborate iterative least-squares methods based on the maximum entropy method or on linear prediction, the principles of Fourier transformation are unchanged and give invaluable insight into the interconnection of many pairs of physical entities called Fourier pairs. 7.
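Entry 2 concerns numerical spherical Bessel transforms. A crude sketch using plain trapezoidal quadrature (not the Filon-type rule the paper extends, which remains stable for rapidly oscillating integrands at large q), checked against a known analytic pair:

```python
import numpy as np

def spherical_bessel_transform(f_vals, r, q):
    # F(q) = integral of f(r) * j0(q r) * r^2 dr, with j0(x) = sin(x)/x,
    # evaluated by the trapezoidal rule on a uniform grid
    j0 = np.sinc(np.outer(q, r) / np.pi)      # np.sinc(x) = sin(pi x)/(pi x)
    vals = j0 * f_vals * r**2
    dr = r[1] - r[0]
    return (vals.sum(axis=1) - 0.5 * (vals[:, 0] + vals[:, -1])) * dr

# known pair: integral_0^inf exp(-r^2) j0(qr) r^2 dr = (sqrt(pi)/4) exp(-q^2/4)
r = np.linspace(1e-8, 10.0, 4001)
q = np.array([0.0, 1.0, 2.0])
F = spherical_bessel_transform(np.exp(-r**2), r, q)
exact = np.sqrt(np.pi) / 4.0 * np.exp(-q**2 / 4.0)
```

For small q this naive quadrature is fine; the point of Filon-type rules is that the error stays bounded as q grows, where trapezoidal sampling of the oscillating j0 factor would otherwise need ever finer grids.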
Transformations in destination texture DEFF Research Database (Denmark) Gyimóthy, Szilvia 2018-01-01 This article takes heterogeographical approaches to understand Bollywood-induced destination transformations in Switzerland. Positioned within the theoretical field of mediatized mobility, the study contextualizes Bollywood-induced tourism in Europe through the concept of texture. Textural analysis (base... 8. Matrices and transformations CERN Document Server Pettofrezzo, Anthony J 1978-01-01 Elementary, concrete approach: fundamentals of matrix algebra, linear transformation of the plane, application of properties of eigenvalues and eigenvectors to the study of conics. Includes proofs of most theorems. Answers to odd-numbered exercises. 9. Fourier Transform Mass Spectrometry Science.gov (United States) Scigelova, Michaela; Hornshaw, Martin; Giannakopulos, Anastassios; Makarov, Alexander 2011-01-01 This article provides an introduction to Fourier transform-based mass spectrometry. The key performance characteristics of Fourier transform-based mass spectrometry, mass accuracy and resolution, are presented in view of how they impact the interpretation of measurements in proteomic applications. The theory and principles of operation of two types of mass analyzer, Fourier transform ion cyclotron resonance and Orbitrap, are described. Major benefits as well as limitations of Fourier transform-based mass spectrometry technology are discussed in the context of practical sample analysis, and illustrated with examples included as figures in this text and in the accompanying slide set. Comparisons highlighting the performance differences between the two mass analyzers are made where deemed useful in assisting the user with choosing the most appropriate technology for an application. Recent developments of these high-performing mass spectrometers are mentioned to provide a future outlook. PMID:21742802 10.
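To make concrete the use of eigenvalues and eigenvectors for conics mentioned in entry 8: the conic a*x^2 + 2b*x*y + c*y^2 = 1 is diagonalised by an orthogonal change of basis, and the signs of the eigenvalues classify it. A small sketch with illustrative values:

```python
import numpy as np

# matrix of the quadratic form 5x^2 + 4xy + 5y^2 = 1 (values illustrative)
A = np.array([[5.0, 2.0],
              [2.0, 5.0]])
evals, evecs = np.linalg.eigh(A)            # principal-axes transformation

# both eigenvalues positive -> ellipse; opposite signs -> hyperbola
if np.all(evals > 0):
    kind = "ellipse"
elif evals[0] * evals[1] < 0:
    kind = "hyperbola"
else:
    kind = "degenerate/parabolic"

semi_axes = 1.0 / np.sqrt(np.abs(evals))    # semi-axis lengths along the eigenvectors
```

Here the eigenvalues are 3 and 7, so the curve is an ellipse with semi-axes 1/sqrt(3) and 1/sqrt(7) along the rotated principal axes given by the eigenvectors.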
Transformation on Abandonment DEFF Research Database (Denmark) Krag, Mo Michelsen Stochholm 2016-01-01 This paper outlines a research project on the increasing quantity of abandoned houses in the depopulating rural villages, and it reports on how an attempt is made to establish a counter-practice of radical preservation based on a series of full-scale transformations of abandoned buildings. The aim...... of the transformations is to reveal and preserve material and immaterial values such as aspects of cultural heritage, local narratives, and building density. The responses of local people are used as a feedback mechanism and considered an important impact indicator. Eleven transformations of varying strategies have...... houses. Transformation prototypes are tested as present manifestations in rural villages as an alternative way to preserve buildings as well as memories.... 11. A DC Transformer Data.gov (United States) National Aeronautics and Space Administration — The goal of the project was to demonstrate a true direct current (DC) transformer, a new electro-mechanical component with potentially high power applications; in... 12. Commutation and Darboux transformation 1College of Military Engineering, Pune 411 031, India. 2Department of ... Liouville equation is a particular application of the commutation method. Darboux ... transformation of a differential operator L, then there exists a differential operator A of. Science.gov (United States) 1978-09-01 The objective of this program was to assess the effectiveness of retrofilling an askarel transformer supplied by the United States Department of Transportation with a 50 centistokes silicone fluid. The work tasks included an assessment of the electri... 14. Distributional Watson transforms NARCIS (Netherlands) Dijksma, A.; Snoo, H.S.V. de 1974-01-01 For all Watson transforms W in L2(R+) a triple of Hilbert space LG ⊂ L2(R+) ⊂ L'G is constructed such that W may be extended to L'G. 
These results allow the construction of a triple L ⊂ L2(R+) ⊂ L', where L is a Gelfand-Fréchet space. This leads to a theory of distributional Watson transforms. CERN Document Server Grover, Varun 2015-01-01 Featuring contributions from prominent thinkers and researchers, this volume in the "Advances in Management Information Systems" series provides a rich set of conceptual, empirical, and introspective studies that epitomize fundamental knowledge in the area of Business Process Transformation. Processes are interpreted broadly to include operational and managerial processes within and between organizations, as well as those involved in knowledge generation. Transformation includes radical and incremental change, its conduct, management, and outcome. The editors and contributing authors pay clo 16. Process for compound transformation KAUST Repository Basset, Jean-Marie 2016-12-29 Embodiments of the present disclosure provide for methods of using a catalytic system to chemically transform a compound (e.g., a hydrocarbon). In an embodiment, the method does not employ grafting the catalyst prior to catalysis. In particular, embodiments of the present disclosure provide for a process of hydrocarbon (e.g., C1 to C20 hydrocarbon) metathesis (e.g., alkane, olefin, or alkyne metathesis) transformation, where the process can be conducted without employing grafting prior to catalysis. 17. Supersymmetrically transformed periodic potentials OpenAIRE C, David J. Fernandez 2003-01-01 The higher order supersymmetric partners of a stationary periodic potential are studied. The transformation functions associated with the band edges do not change the spectral structure. However, when the transformation is implemented for factorization energies inside the forbidden bands, the final potential will again have the initial band structure, but it can have bound states encrusted into the gaps, giving rise to localized periodicity defects. 18.
Series Transmission Line Transformer Science.gov (United States) Buckles, Robert A.; Booth, Rex; Yen, Boris T. 2004-06-29 A series transmission line transformer is set forth which includes two or more impedance-matched sets of at least two transmission lines, such as shielded cables, connected in parallel at one end and in series at the other in a cascading fashion. The cables are wound about a magnetic core. The series transmission line transformer (STLT) can provide higher impedance ratios and bandwidths, is scalable, and is of simpler design and construction. 19. High resolution (transformers. Science.gov (United States) Garcia-Souto, Jose A; Lamela-Rivera, Horacio 2006-10-16 A novel fiber-optic interferometric sensor is presented for vibration measurements and analysis. In this approach, it is shown applied to the vibrations of electrical structures within power transformers. A main feature of the sensor is that an unambiguous optical phase measurement is performed using the direct detection of the interferometer output, without external modulation, for a more compact and stable implementation. High resolution of the interferometric measurement is obtained with this technique (transformers are also highlighted. DEFF Research Database (Denmark) Ramsey, Jase R.; Rutti, Raina M.; Lorenz, Melanie P. 2016-01-01 Despite significant increases in training and development of global managers, little is known about the precursors of transformational leadership in Multilatinas. While prior cross-cultural literature suggests that being an autocratic leader is ideal in Multilatinas, using transformational...... leadership theory, we argue that global leaders of Multilatinas embrace a more humanistic approach to leadership because of the importance of relationships between leaders and their followers. Additionally, we argue that global leaders with high levels of cultural intelligence will have high levels... 1.
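The impedance arithmetic behind entry 18's series transmission line transformer is simple in the ideal case: n matched lines of characteristic impedance z0, paralleled at the input and stacked in series at the output, give an n^2 impedance step-up. This is a textbook idealisation, not the cited design:

```python
# idealised series transmission line transformer (all values illustrative)
def stlt_impedances(z0, n):
    z_in = z0 / n           # n lines in parallel at the input
    z_out = z0 * n          # the same n lines in series at the output
    return z_in, z_out, z_out / z_in

# e.g. four 50-ohm cables: 12.5-ohm input, 200-ohm output, 16:1 impedance ratio
z_in, z_out, ratio = stlt_impedances(50.0, 4)
```

The n^2 ratio with a 4:1 voltage step-up per the same n is what makes cascaded, cable-wound designs attractive for wideband pulse applications.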
Power transformers quality assurance CERN Document Server Dasgupta, Indrajit 2009-01-01 About the Book: With the view to attain higher reliability in power system operation, quality assurance in the field of distribution and power transformers has claimed growing attention. Besides new developments in the material technology and manufacturing processes of transformers, regular diagnostic testing and maintenance of any engineering product may be ascertained by ensuring: right selection of materials and components and their quality checks; application of correct manufacturing processes and systems engineering; and the users' awareness of preventive maintenance. The 2. Kinetics of phase transformations International Nuclear Information System (INIS) Thompson, M.O.; Aziz, M.J.; Stephenson, G.B. 1992-01-01 This volume contains papers presented at the Materials Research Society symposium on Kinetics of Phase Transformations held in Boston, Massachusetts from November 26-29, 1990. The symposium provided a forum for research results in an exceptionally broad and interdisciplinary field. Presentations covered nearly every major class of transformations including solid-solid, liquid-solid, transport phenomena and kinetics modeling. Papers involving amorphous Si, a dominant topic at the symposium, are collected in the first section, followed by sections on four major areas of transformation kinetics. The symposium opened with joint sessions on ion and electron beam induced transformations in conjunction with the Surface Chemistry and Beam-Solid Interactions symposium. Subsequent sessions focused on the areas of ordering and nonlinear diffusion kinetics, solid state reactions and amorphization, kinetics and defects of amorphous silicon, and kinetics of melting and solidification.
Seven internationally recognized invited speakers reviewed many of the important problems and recent results in these areas, including defects in amorphous Si, crystal to glass transformations, ordering kinetics, solid-state amorphization, computer modeling, and liquid/solid transformations 3. Bar piezoelectric ceramic transformers. Science.gov (United States) Erhart, Jiří; Pulpan, Půlpán; Rusin, Luboš 2013-07-01 Bar-shaped piezoelectric ceramic transformers (PTs) working in the longitudinal vibration mode (k31 mode) were studied. Two types of the transformer were designed--one with the electrode divided into two segments of different length, and one with the electrodes divided into three symmetrical segments. Parameters of the studied transformers, such as efficiency, transformation ratio, and input and output impedances, were measured. An analytical model was developed for PT parameter calculation for both two- and three-segment PTs. Neither type of bar PT exhibited very high efficiency (maximum 72% for the three-segment PT design) at a relatively high transformation ratio (the ratio is 4 for the two-segment PT and 2 for the three-segment PT at the fundamental resonance mode). The optimum resistive loads were 20 and 10 kΩ for the two- and three-segment PT designs for the fundamental resonance, respectively, and about one order of magnitude smaller for the higher overtone (i.e., 2 kΩ and 500 Ω, respectively). The no-load transformation ratio was less than 27 (maximum for the two-segment electrode PT design). The optimum input electrode aspect ratios (0.48 for the three-segment PT and 0.63 for the two-segment PT) were calculated numerically under no-load conditions. 4. Transformers analysis, design, and measurement CERN Document Server Lopez-Fernandez, Xose M; Turowski, Janusz 2012-01-01 This book focuses on contemporary economic, design, diagnostics, and maintenance aspects of power, instrument, and high frequency transformers, which are critical to designers of transformer stations.
The text covers such topics as shell type and superconducting transformers as well as coreless PCB and planar transformers. It emphasizes challenges and strategies in transformer design and illustrates the importance of economics in transformer management by reviewing life cycle cost design and the use of decision methods to manage risk. 5. Transformers: analysis, design, and measurement National Research Council Canada - National Science Library López-Fernández, Xose M; Ertan, H. Bülent; Turowski, J 2013-01-01 "This book focuses on contemporary economic, design, diagnostics, and maintenance aspects of power, instrument, and high frequency transformers, which are critical to designers of transformer stations... Science.gov (United States) McCall, Martin; Pendry, John B.; Galdi, Vincenzo; Lai, Yun; Horsley, S. A. R.; Li, Jensen; Zhu, Jian; Mitchell-Thomas, Rhiannon C.; Quevedo-Teruel, Oscar; Tassin, Philippe; Ginis, Vincent; Martini, Enrica; Minatti, Gabriele; Maci, Stefano; Ebrahimpouri, Mahsa; Hao, Yang; Kinsler, Paul; Gratus, Jonathan; Lukens, Joseph M.; Weiner, Andrew M.; Leonhardt, Ulf; Smolyaninov, Igor I.; Smolyaninova, Vera N.; Thompson, Robert T.; Wegener, Martin; Kadic, Muamer; Cummer, Steven A. 2018-06-01 Transformation optics asks, using Maxwell’s equations, what kind of electromagnetic medium recreates some smooth deformation of space? The guiding principle is Einstein’s principle of covariance: that any physical theory must take the same form in any coordinate system. This requirement fixes very precisely the required electromagnetic medium. The impact of this insight cannot be overestimated. Many practitioners were used to thinking that only a few analytic solutions to Maxwell’s equations existed, such as the monochromatic plane wave in a homogeneous, isotropic medium.
At a stroke, transformation optics increases that landscape from ‘few’ to ‘infinity’, and to each of the infinitude of analytic solutions dreamt up by the researcher, there corresponds an electromagnetic medium capable of reproducing that solution precisely. The most striking example is the electromagnetic cloak, thought to be an unreachable dream of science fiction writers, but realised in the laboratory a few months after the papers proposing the possibility were published. But the practical challenges are considerable, requiring meta-media that are at once electrically and magnetically inhomogeneous and anisotropic. How far have we come since the first demonstrations over a decade ago? And what does the future hold? If the wizardry of perfect macroscopic optical invisibility still eludes us in practice, then what compromises still enable us to create interesting, useful devices? While three-dimensional (3D) cloaking remains a significant technical challenge, much progress has been made in two dimensions. Carpet cloaking, wherein an object is hidden under a surface that appears optically flat, relaxes the constraints of extreme electromagnetic parameters. Surface wave cloaking guides sub-wavelength surface waves, making uneven surfaces appear flat. Two dimensions is also the setting in which conformal and complex coordinate transformations are realisable, and the possibilities in 7. Transformer oil maintenance Energy Technology Data Exchange (ETDEWEB) White, J. [A.F. White Ltd., Brantford, ON (Canada) 2002-08-01 Proactive treatment is required in the case of transformer oil, since the oil degrades over time, which could result in failure of the transformer or costly repairs. A mineral-based oil is used for transformers because of its chemical properties and dielectric strength. Water and particulate are the main contaminants found in transformer oil, affecting the quality of the oil through reduced insulation.
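The covariance principle in the transformation-optics roadmap above has a compact algebraic form: under a coordinate map with Jacobian J, the required medium is eps' = J eps J^T / det(J), and likewise for mu. A minimal numerical sketch with an illustrative uniform stretch:

```python
import numpy as np

def transform_permittivity(eps, jac):
    # standard transformation-optics material rule for a coordinate map
    # with Jacobian jac: eps' = J @ eps @ J.T / det(J) (same form for mu)
    return jac @ eps @ jac.T / np.linalg.det(jac)

eps_vacuum = np.eye(3)                   # start from free space
stretch = np.diag([2.0, 1.0, 1.0])       # stretch space by a factor 2 along x
eps_medium = transform_permittivity(eps_vacuum, stretch)
```

Even this trivial stretch already yields an anisotropic medium, diag(2, 0.5, 0.5), which hints at why practical cloaks demand inhomogeneous, anisotropic meta-media.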
The reaction of the oil with oxygen, which forms acid, is called oxidation. It reduces the heat dissipation of the transformer, as the acid forms sludge which settles on the windings of the transformer. The first step in the preventive maintenance program associated with transformer oil is the testing of the oil. The baseline is established through initial testing, and subsequent annual testing identifies any changes. The minimal requirements are: (1) dielectric breakdown, a measure of the voltage conducted by the oil; (2) neutralization/acid number, which detects the level of acid present in the oil; (3) interfacial tension, which identifies the presence of polar compounds; (4) colour, which displays quality, aging and the presence of contaminants; and (5) water, which decreases the dielectric breakdown voltage. The analysis of the gases present in the oil is another useful tool in a maintenance program for the determination of a possible fault such as arcing, corona or overheated connections, and is accomplished through Dissolved Gas Analysis (DGA). Remediation treatment includes upgrading the oil. Ideally, reclamation should be performed in the early stages of the acid buildup before sludging occurs. Onsite reclamation includes Fuller's earth processing and degasification, a process briefly described by the author. 8. Multiresolution approximation for volatility processes NARCIS (Netherlands) E. Capobianco (Enrico) 2002-01-01 We present an application of wavelet techniques to non-stationary time series with the aim of detecting the dependence structure which is typically found to characterize intraday stock index financial returns. It is particularly important to identify what components truly belong to the 9.
EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques Science.gov (United States) Ivanov, Anton; Oberst, Jürgen; Yershov, Vladimir; Muller, Jan-Peter; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back to the mid-1970s and examine time-varying changes (such as the recent discovery of boulder movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters). Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004, the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25 m nadir images) with 87% coverage, of which more than 65% is useful for stereo mapping. NASA began imaging the surface of Mars initially from flybys in the 1960s and then from orbiters; image resolution below 100 m was first achieved from orbit by the Viking Orbiter in the late 1970s. The most recent orbiter, NASA MRO, has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20 cm) and ≈5% from CTX (≈6 m) in stereo. Within the iMars project (http://i-Mars.eu), a fully automated large-scale processing (“Big Data”) solution is being developed to generate the best possible multi-resolution DTM of Mars.
In addition, HRSC OrthoRectified Images (ORI) will be used as a georeference basis, so that all higher-resolution ORIs will be co-registered to the HRSC DTM products (50-100 m grid) generated at DLR, and to products from CTX (6-20 m grid) and HiRISE (1-3 m grid), on a large-scale Linux cluster based at MSSL. The HRSC products will be employed to provide a geographic reference for all current, future and historical NASA products using automated co-registration based on feature points, and initial results will be shown here. In 2015, much of the entire set of NASA and ESA orbital images will be co-registered and the updated georeferencing 10. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques Science.gov (United States) Ivanov, Anton; Muller, Jan-Peter; Tao, Yu; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis; Fanara, Lida; Waenlish, Marita; Walter, Sebastian; Steinkert, Ralf; Schreiner, Bjorn; Cantini, Federico; Wardlaw, Jessica; Sprinks, James; Giordano, Michele; Marsh, Stuart 2016-07-01 Understanding planetary atmosphere-surface and extra-terrestrial-surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back in time to the mid 1970s, to examine time-varying changes, such as the recent discovery of mass movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters.
Within the EU FP-7 iMars project, UCL have developed a fully automated multi-resolution DTM processing chain, called the Co-registration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP), which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX & HiRISE DTMs & ORIs, DLR have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed and is being applied to level-1 EDR images taken by the 4 NASA orbital cameras since 1976 using the HRSC map products (both mosaics and orbital strips) as a map-base. The project has also included Mars Radar profiles from Mars Express and Mars Reconnaissance Orbiter missions. A webGIS has been developed for displaying this time sequence of imagery and a demonstration will be shown applied to one of the map-sheets. Automated quality control techniques are applied to screen for suitable images and these are extended to detect temporal changes in features on the surface such as mass movements, streaks, spiders, impact craters, CO2 geysers and Swiss Cheese terrain. These data mining techniques are then being employed within a citizen science project within the Zooniverse family 11. Martensitic transformation in zirconia International Nuclear Information System (INIS) Deville, Sylvain; Guenin, Gerard; Chevalier, Jerome 2004-01-01 We investigate by atomic force microscopy (AFM) the surface relief resulting from martensitic tetragonal to monoclinic phase transformation induced by low temperature autoclave aging in ceria-stabilized zirconia. AFM appears as a very powerful tool to investigate martensite relief quantitatively and with a great precision. 
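The feature-point co-registration in entries 9 and 10 reduces, in its simplest form, to estimating a geometric transform from matched points. A translation-only least-squares sketch on synthetic points (the real pipeline solves a far richer photogrammetric bundle adjustment; all values here are illustrative):

```python
import numpy as np

def fit_translation(ref_pts, img_pts):
    # least-squares translation between matched feature points: for a pure
    # shift model the optimal estimate is simply the mean displacement
    return (ref_pts - img_pts).mean(axis=0)

rng = np.random.default_rng(1)
ref = rng.random((50, 2)) * 1000.0                         # feature locations (px)
true_offset = np.array([12.5, -7.0])                       # ground-truth misregistration
img = ref - true_offset + rng.normal(0.0, 0.3, ref.shape)  # shifted + matching noise
offset = fit_translation(ref, img)
```

Averaging over many matched points suppresses the per-match localisation noise, which is why dense, automatically detected feature points make sub-pixel co-registration feasible.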
The crystallographic phenomenological theory is used to predict the expected relief induced by the transformation, for the particular case of lattice correspondence ABC1, where the tetragonal c axis becomes the monoclinic c axis. A model for the spatial arrangement of variants for this lattice correspondence is proposed and validated by the experimental observations. Excellent agreement is found between the quantitative calculation outputs and the experimental measurements at nanometer scale yielded by AFM. All the observed features are explained fully quantitatively by the calculations, with discrepancies between calculations and measurements lying within their combined precision range. In particular, the crystallographic orientation of the transformed grains is determined from the local characteristics of the transformation-induced relief. It is finally demonstrated that strain energy is the controlling factor of the surface transformation induced by low-temperature autoclave treatments in this material 12. Classification of displacive transformations: what is a martensitic transformation? International Nuclear Information System (INIS) Christian, J.W.; Olson, G.B.; Cohen, M. 1995-01-01 The displacive transformation classification proposed at ICOMAT 79 is reviewed in light of recent progress in mechanistic understanding. Issues considered include distinctions between shuffle transformation vs. self-accommodating shear, dilatation vs. shear-dominant transformation, and nucleated vs. continuous transformation. (orig.) 13. Transformation on Abandonment DEFF Research Database (Denmark) Krag, Mo Michelsen Stochholm 2017-01-01 in an attempt to sway the residents' attitude towards a more nuanced view on ruins, thus influencing the public discourse on rural transformation. The residents' responses are considered a significant impact indicator, supplementary to the physical transformations themselves.
As such, the responses of the local... in the process of provoking an exchange of memories of buildings and places among the residents in rural villages. Today's state-authorized funds for demolition projects, if redirected, could easily contribute to the on-going rural transformation through integration into radical preservation strategies. Instead... practice indicates that this anxiety may not be legitimate. Time, when stretched in a ruination process or prolonged demolition, acts similarly to a mourning process, thus creating an exchange of memories of what is lost... 14. Integrated magnetic transformer assembly DEFF Research Database (Denmark) 2014-01-01 The present invention relates to an integrated magnetics transformer assembly comprising a first magnetically permeable core forming a first substantially closed magnetic flux path and a second magnetically permeable core forming a second substantially closed magnetic flux path. A first input inductor winding is wound around a first predetermined segment of the first magnetically permeable core and a second input inductor winding is wound around a first predetermined segment of the second magnetically permeable core. The integrated magnetics transformer assembly further comprises a first output inductor winding... The first half-winding of the first output inductor winding and the first half-winding of the second output inductor winding are configured to produce aligned (i.e. in the same direction) magnetic fluxes through the first substantially closed magnetic flux path. The integrated magnetics transformer assembly is well-suited for use... 15. Nonlocal transformation optics. Science.gov (United States) Castaldi, Giuseppe; Galdi, Vincenzo; Alù, Andrea; Engheta, Nader 2012-02-10 We show that the powerful framework of transformation optics may be exploited for engineering the nonlocal response of artificial electromagnetic materials.
Relying on the form-invariant properties of coordinate-transformed Maxwell's equations in the spectral domain, we derive the general constitutive "blueprints" of transformation media yielding prescribed nonlocal field-manipulation effects and provide a physically incisive and powerful geometrical interpretation in terms of deformation of the equifrequency contours. In order to illustrate the potentials of our approach, we present an example of application to a wave-splitting refraction scenario, which may be implemented via a simple class of artificial materials. Our results provide a systematic and versatile framework which may open intriguing venues in dispersion engineering of artificial materials. 16. Genetic transformation of switchgrass. Science.gov (United States) Xi, Yajun; Ge, Yaxin; Wang, Zeng-Yu 2009-01-01 Switchgrass (Panicum virgatum L.) is a highly productive warm-season C4 species that is being developed into a dedicated biofuel crop. This chapter describes a protocol that allows the generation of transgenic switchgrass plants by Agrobacterium tumefaciens-mediated transformation. Embryogenic calluses induced from caryopses or inflorescences were used as explants for inoculation with A. tumefaciens strain EHA105. Hygromycin phosphotransferase gene (hph) was used as the selectable marker and hygromycin was used as the selection agent. Calluses resistant to hygromycin were obtained after 5-6 weeks of selection. Soil-grown switchgrass plants were regenerated about 6 months after callus induction and Agrobacterium-mediated transformation. 17. Transformation of Follicular Lymphoma Science.gov (United States) Lossos, Izidore S.; Gascoyne, Randy D. 2011-01-01 Histological transformation of follicular lymphoma (FL) to a more aggressive non-Hodgkin's lymphomas is a pivotal event in the natural history of FL and is associated with poor outcome. 
While commonly observed in clinical practice, and despite multiple studies designed to address its pathogenesis, the biology of this process remains an enigma. In this chapter we present a state-of-the-art review summarizing the definition of histologic transformation, its incidence, pathogenesis, clinical manifestations, treatment and outcome. Furthermore, we specifically emphasize gaps in our knowledge that should be addressed in future studies. PMID:21658615 18. Transformation of technical infrastructure DEFF Research Database (Denmark) Nielsen, Susanne Balslev The scope of the project is to investigate the possibilities of - and the barriers for - a transformation of technical infrastructure concerning energy, water and waste. It focuses on urban ecology as a transformation strategy. The theoretical background of the project is theories about infrastructure, the evolution of large technological systems and theories about organisational and technological transformation processes. The empirical work consists of three analyses at three different levels: socio-technical descriptions of each sector, an investigation of one municipality and investigations of one workshop... 19. Fourier transforms principles and applications CERN Document Server Hansen, Eric W 2014-01-01 Fourier Transforms: Principles and Applications explains transform methods and their applications to electrical systems from circuits, antennas, and signal processors, ably guiding readers from vector space concepts through the Discrete Fourier Transform (DFT), Fourier series, and Fourier transform to other related transform methods. Featuring chapter-end summaries of key results, over two hundred examples and four hundred homework problems, and a Solutions Manual, this book is perfect for graduate students in signal processing and communications as well as practicing engineers.
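As a minimal, self-contained illustration of the Discrete Fourier Transform covered in the book above (a generic NumPy example, not taken from the book), locating the frequency of a sampled cosine:

```python
import numpy as np

# Sample one second of a 5 Hz cosine at 64 Hz and locate its spectral line.
fs = 64                             # sampling rate (Hz)
t = np.arange(fs) / fs              # 64 samples covering exactly 1 s
x = np.cos(2 * np.pi * 5 * t)

X = np.fft.fft(x)                   # Discrete Fourier Transform
k = np.argmax(np.abs(X[:fs // 2]))  # strongest positive-frequency bin
print(k)                            # -> 5, i.e. the 5 Hz component
```

Because the signal spans an integer number of periods, all the energy lands in a single DFT bin; otherwise spectral leakage spreads it across neighbouring bins.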
Science.gov (United States) Aguilar, Elena 2017-01-01 Leading a school can be a lonely, challenging job, Elena Aguilar has found in her years coaching principals. Aguilar describes how the coaching approach she's developed--transformational coaching--helps principals get three things most of them need: a neutral person they can talk with confidentially, job-embedded professional development, and a safe… 1. Graph Transforming Java Data NARCIS (Netherlands) de Mol, M.J.; Rensink, Arend; Hunt, James J. This paper introduces an approach for adding graph transformation-based functionality to existing JAVA programs. The approach relies on a set of annotations to identify the intended graph structure, as well as on user methods to manipulate that structure, within the user's own JAVA class 2. ATLAS Job Transforms CERN Document Server Stewart, G A; The ATLAS collaboration; Maddocks, H J; Harenberg, T; Sandhoff, M; Sarrazin, B 2013-01-01 The need to run complex workflows for a high energy physics experiment such as ATLAS has always been present. However, as computing resources have become even more constrained, compared to the wealth of data generated by the LHC, the need to use resources efficiently and manage complex workflows within a single grid job has increased. In ATLAS, a new Job Transform framework has been developed that we describe in this paper. This framework manages the multiple execution steps needed to `transform' one data type into another (e.g., RAW data to ESD to AOD to final ntuple) and also provides a consistent interface for the ATLAS production system. The new framework uses a data-driven workflow definition which is both easy to manage and powerful. After a transform is defined, jobs are expressed simply by specifying the input data and the desired output data. The transform infrastructure then executes only the necessary substeps to produce the final data products. The global execution cost of running the job is mini... 3.
Fourier Transform Mass Spectrometry. Science.gov (United States) Gross, Michael L.; Rempel, Don L. 1984-01-01 Discusses the nature of Fourier transform mass spectrometry and its unique combination of high mass resolution, high upper mass limit, and multichannel advantage. Examines its operation, capabilities and limitations, applications (ion storage, ion manipulation, ion chemistry), and future applications and developments. (JN) 5. The Power of Transformation DEFF Research Database (Denmark) Foged, Hans Isak Worre 2017-01-01 Transformation of the built environment in Denmark is estimated to become 51% of the total building activities in the future in order to accommodate new energy targets, a general population move to the city and to maintain buildings, which otherwise presents high architectural qualities.
This points to the need for new ideas, methods and models for architects to transform existing building envelopes beyond the current primary approach of simply adding an external insulation layer. The research studies and presents thermal simulation methods, models, elementary design studies and applied design approaches to envelope transformations based on modifying colours and local geometries of an envelope. The study finds that colour can be used instrumentally as a design variable to control external surface heat accumulation and envelope heat transfer, whereas local geometric variations only present... 6. On the Meijer transformation Directory of Open Access Journals (Sweden) J. Conlan 1978-01-01 Full Text Available Recently [8], an operational calculus for the operator B_μ = t^(−μ) D t^(1+μ) D with −1 < μ < ∞ was developed via the algebraic approach [4], [13], [15]. This paper gives the integral transform version. In particular, a differentiation theorem and a convolution theorem are proved. 7. Parallel Fast Legendre Transform NARCIS (Netherlands) Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K. 1998-01-01 We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were 8. Tourism transformations: an introduction NARCIS (Netherlands) Dietvorst, A.G.J.; Ashworth, G.J. 1995-01-01 In order to emphasize the dynamic character of the tourism-recreation product, an overarching concept is presented which integrates both supply and demand.
The model shows the continuing transformation of the original tourism-recreation resource (either a landscape, a monument, an urban public 9. Transformer Impedance Reflection Demonstration Science.gov (United States) Layton, William 2014-01-01 Questions often arise as to how a device attached to a transformer can draw power from the electrical power grid since it seems that the primary and secondary are not connected to one another. However, a closer look at how the primary and secondary are linked together magnetically and a consideration of the role of Lenz's law in this linkage… 10. Fixture for winding transformers Science.gov (United States) Mclyman, M. T. 1980-01-01 Bench-mounted fixture assists operator in winding toroid-shaped transformer cores. Toroid is rigidly held in place as wires are looped around. Arrangement frees both hands for rapid winding and untangling of wires that occurs when core is hand held. 11. Transformation of Abandonment DEFF Research Database (Denmark) Krag, Mo Michelsen Stochholm 2015-01-01 the controlled ruin will play the role of catalyst in the disclosure of hidden narratives and through decay in the end turn into nature (Figure 2: Transformation process, controlled ruin, 2014, Thisted Municipality, Denmark). The demolition process is simply slowed down. Similarly to the mechanisms in a mourning... 12. Transformative Mixed Methods Research Science.gov (United States) Mertens, Donna M. 2010-01-01 Paradigms serve as metaphysical frameworks that guide researchers in the identification and clarification of their beliefs with regard to ethics, reality, knowledge, and methodology. The transformative paradigm is explained and illustrated as a framework for researchers who place a priority on social justice and the furtherance of human rights… 13.
Welfare State Transformation DEFF Research Database (Denmark) Obinger, Herbert; Starke, Peter 2014-01-01 This paper describes welfare state transformation in OECD countries since the 1970s against the background of the post-war settlement. Relying on quantitative macro-data and qualitative information from the literature, we show that welfare states have converged, especially regarding various... 14. Education as Habitus Transformations Science.gov (United States) von Rosenberg, Florian 2016-01-01 Unlike a conventional reading of Bourdieu, this article focuses on his work with regard to the transformation of social structure. In the context of a rereading, from an educational theory perspective, the article proposes an approach that allows for the linking of empirically informed social theory, on the one hand, and biography research… 15. Conformal transformations in superspace International Nuclear Information System (INIS) Dao Vong Duc 1977-01-01 The spinor extension of the conformal algebra is investigated. The transformation law of superfields under the conformal coordinate inversion R defined in the superspace is derived. Using the R-technique, the superconformally covariant two-point and three-point correlation functions are found 16. Rainbow Fourier Transform Science.gov (United States) Alexandrov, Mikhail D.; Cairns, Brian; Mishchenko, Michael I. 2012-01-01 We present a novel technique for remote sensing of cloud droplet size distributions. Polarized reflectances in the scattering angle range between 135° and 165° exhibit a sharply defined rainbow structure, the shape of which is determined mostly by single-scattering properties of cloud particles, and therefore, can be modeled using the Mie theory. Fitting the observed rainbow with such a model (computed for a parameterized family of particle size distributions) has been used for cloud droplet size retrievals.
We discovered that the relationship between the rainbow structures and the corresponding particle size distributions is deeper than had been commonly understood. In fact, the Mie-theory-derived polarized reflectance, as a function of reduced scattering angle (in the rainbow angular range) and (monodisperse) particle radius, appears to be a proxy for the kernel of an integral transform (similar to the sine Fourier transform on the positive semi-axis). This approach, called the rainbow Fourier transform (RFT), allows us to accurately retrieve the shape of the droplet size distribution by applying the corresponding inverse transform to the observed polarized rainbow. Because the basis functions of the proxy-transform are not exactly orthogonal in the finite angular range, the procedure needs to be complemented by a simple regression technique, which removes the retrieval artifacts. This non-parametric approach does not require any a priori knowledge of the functional shape of the droplet size distribution and is computationally fast (no look-up tables, no fitting; the computations are the same as for the forward modeling). 17. Transformation language integration based on profiles and higher order transformations NARCIS (Netherlands) Van Gorp, P.M.E.; Keller, A.; Janssens, D.; Gaševic, D.; Lämmel, R.; Van Wyk, Eric 2009-01-01 For about two decades, researchers have been constructing tools for applying graph transformations to large model transformation case studies. Instead of incrementally extending a common core, these competitive tool builders have repeatedly reconstructed mechanisms that were already supported by 18. Understanding China's Transformations DEFF Research Database (Denmark) Li, Xing The objective of this paper is to offer a framework for understanding the dialectical nexus between China's internal evolutions and the external influences, with a focus on the century-long "challenge-response" dynamism.
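The RFT abstract above likens the retrieval to an inverse sine Fourier transform on the positive semi-axis. A toy analogue of that forward-transform/inverse-transform idea, with a discrete sine basis standing in for the actual Mie-derived kernel (everything here is illustrative, not the authors' method):

```python
import numpy as np

# Toy analogue of an integral-transform retrieval: expand a "size
# distribution" f(r) in a sine basis on [0, R], then invert the transform
# to recover it -- mirroring the forward-model / inverse-transform idea.
N, R = 200, 10.0
r = np.linspace(0, R, N, endpoint=False)
n = np.arange(1, N)                       # sine-mode indices
K = np.sin(np.outer(n, r) * np.pi / R)    # kernel K[n-1, j] = sin(n*pi*r_j/R)

f = np.exp(-((r - 4.0) ** 2))             # a smooth, bump-like "distribution"
coeffs = K @ f * (2 / N)                  # forward transform (Riemann sum)
f_rec = coeffs @ K                        # inverse transform

print(np.max(np.abs(f - f_rec)) < 1e-6)   # -> True (near-exact recovery)
```

The discrete sine modes here are mutually orthogonal, so the inverse is exact up to the component the basis cannot represent; in the RFT the basis is only approximately orthogonal over the finite angular range, which is why the authors add a regression step.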
That is to explore how external factors helped shape China's internal transformations, i.e. how generations of Chinese have been struggling to respond to external challenges and attempting to sinicize external political ideas in order to change China from within. Likewise, it is equally important to understand how China's inner transformation contributed to reshaping the world. Each time, be it China's dominance or decline, the capitalist world system has had to adjust and readjust itself to the opportunities and constraints brought about by the "China factors"... 19. Transforming Virtual Teams DEFF Research Database (Denmark) Bjørn, Pernille 2005-01-01 Investigating virtual team collaboration in industry using grounded theory, this paper presents an in-depth analysis of empirical work conducted in a global organization of 100,000 employees, where a global virtual team with participants from Sweden, United Kingdom, Canada, and North America was studied. The research question investigated is: how is collaboration negotiated within virtual teams? This paper presents findings concerning how collaboration is negotiated within a virtual team and elaborates the difficulties due to invisible articulation work and managing multiple communities in transforming the virtual team into a community. It is argued that translucence in communication structures within the virtual team and between team and management is essential for engaging in a positive transformation process of trustworthiness, supporting the team becoming a community, managing the immanent... 20. Katedralskolen i Transformation DEFF Research Database (Denmark) 2016-01-01 ...still possible building sites whose use can rearrange functions to accommodate new forms of teaching and today's social life. A new extension is on the school's wish list; the school is thus in continued transformation. In drawings and models, the exhibition presents competition proposals for...
new extensions to Katedralskolen that continue the tradition of transformation and further the densification of the school and thereby the city. The building programme has been discussed on the basis of user interviews with Katedralskolen's teachers and pupils. Architects from the architectural office Kjær & Richter in... 1. Cryogenic pulsed power transformers International Nuclear Information System (INIS) Rogers, J.D.; Eckels, P.W.; Hackworth, D.T.; Shestak, E.J.; Singh, S.K. 1988-01-01 Three liquid nitrogen cooled transformers, two with 14.4 MJ and one with 33.5 MJ storage capacity, are being built to provide respective currents of 0.31 and 0.95 MA to drive a distributed rail gun and are designed to withstand respective voltages of 70 and 200 kV. The transformers are contained in fiberglass reinforced polyester plastic dewars to avoid eddy current coupling and lateral forces that would exist with a metal dewar. To improve the coupling between windings the secondary winding is made relatively thin and is supported structurally for magnetic loading against the outer primary winding. The coils are pool bath cooled. Normal and fault mode analyses indicated safe operation with some precautions for venting nitrogen gas provided 2. A DC Transformer Science.gov (United States) Youngquist, Robert C.; Ihlefeld, Curtis M.; Starr, Stanley O. 2013-01-01 A component level dc transformer is described in which no alternating currents or voltages are present. It operates by combining features of a homopolar motor and a homopolar generator, both dc devices, such that the output voltage of a dc power supply can be stepped up (or down) with a corresponding step down (or up) in current. The basic theory for this device is developed, performance predictions are made, and the results from a small prototype are presented. Based on demonstrated technology in the literature, this dc transformer should be scalable to low megawatt levels, but it is more suited to high current than high voltage applications.
Significant development would be required before it could achieve the kilovolt levels needed for dc power transmission. 3. High voltage isolation transformer Science.gov (United States) Clatterbuck, C. H.; Ruitberg, A. P. (Inventor) 1985-01-01 A high voltage isolation transformer is provided with primary and secondary coils separated by discrete electrostatic shields from the surfaces of insulating spools on which the coils are wound. The electrostatic shields are formed by coatings of a compound with a low electrical conductivity which completely encase the coils and adhere to the surfaces of the insulating spools adjacent to the coils. Coatings of the compound also line axial bores of the spools, thereby forming electrostatic shields separating the spools from legs of a ferromagnetic core extending through the bores. The transformer is able to isolate a high constant potential applied to one of its coils, without the occurrence of sparking or corona, by coupling the coatings lining the axial bores to the ferromagnetic core and by coupling one terminal of each coil to the respective coating encasing the coil. 4. Transforming social contracts DEFF Research Database (Denmark) Mohr, Sebastian; Koch, Lene 2016-01-01 The introduction of IVF in Denmark was accompanied by social transformations: contestations of medical authority, negotiations of who might access reproductive biomedicine and changes in individual and social identity due to reproductive technologies. Looking at the making of Danish IVF, this article sketches its social and cultural history by revisiting the legal, medical, technological and social developments that characterized the introduction of IVF in Denmark, as well as by contextualizing the social research on the uses and impacts of IVF carried out in the 1980s and 1990s within these developments.
The making of Danish IVF is presented as a transformative event in so far as it changed Denmark from being a society concerned about the social consequences of reproductive technologies to a moral collective characterized by a joined sense of responsibility for Denmark's procreative future.... 5. Transformer ratio enhancement experiment International Nuclear Information System (INIS) Gai, W.; Power, J. G.; Kanareykin, A.; Neasheva, E.; Altmark, A. 2004-01-01 Recently, a multibunch scheme for efficient acceleration based on dielectric wakefield accelerator technology was outlined in J.G. Power, W. Gai, A. Kanareykin, X. Sun. PAC 2001 Proceedings, pp. 114-116, 2002. In this paper we present an experimental program for the design, development and demonstration of an Enhanced Transformer Ratio Dielectric Wakefield Accelerator (ETR-DWA). The principal goal is to increase the transformer ratio R, the parameter that characterizes the energy transfer efficiency from the accelerating structure to the accelerated electron beam. We present here an experimental design of a 13.625 GHz dielectric loaded accelerating structure, a laser multisplitter producing a ramped bunch train, and simulations of the bunch train parameters required. Experimental results of the accelerating structure bench testing and ramped pulsed train generation with the laser multisplitter are shown as well. Using beam dynamic simulations, we also obtain the focusing FODO lattice parameters 6. 
Transformational plane geometry CERN Document Server Umble, Ronald N 2014-01-01 Axioms of Euclidean Plane Geometry The Existence and Incidence Postulates The Distance and Ruler Postulates The Plane Separation Postulate The Protractor Postulate The Side-Angle-Side Postulate and the Euclidean Parallel Postulate Theorems of Euclidean Plane Geometry The Exterior Angle Theorem Triangle Congruence Theorems The Alternate Interior Angles Theorem and the Angle Sum Theorem Similar Triangles Introduction to Transformations, Isometries, and Similarities Transformations Isometries and SimilaritiesAppendix: Proof of Surjectivity Translations, Rotations, and Reflections Translations Rotations Reflections Appendix: Geometer's Sketchpad Commands Required by Exploratory Activities Compositions of Translations, Rotations, and Reflections The Three Points Theorem Rotations as Compositions of Two Reflections Translations as Compositions of Two Halfturns or Two Reflections The Angle Addition Theorem Glide Reflections Classification of Isometries The Fundamental Theorem and Congruence Classification of Isometr... 7. Matrices and linear transformations CERN Document Server Cullen, Charles G 1990-01-01 ""Comprehensive . . . an excellent introduction to the subject."" - Electronic Engineer's Design Magazine.This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first 8. 
Laminated piezoelectric transformer Science.gov (United States) Vazquez Carazo, Alfredo (Inventor) 2006-01-01 A laminated piezoelectric transformer is provided using the longitudinal vibration modes for step-up voltage conversion applications. The input portions are polarized to deform in a longitudinal plane and are bonded to an output portion. The deformation of the input portions is mechanically coupled to the output portion, which deforms in the same longitudinal direction relative to the input portion. The output portion is polarized in the thickness direction relative to its electrodes, and piezoelectrically generates a stepped-up output voltage. 9. Transformation of industrial territories Science.gov (United States) Plotnikova, N. I.; Kolocova, I. I. 2017-08-01 The problem of removing industrial enterprises from the historical center of the city and the subsequent effective use of the territories has long been relevant for Western countries. Nowadays, the problem is crucial for Russia, its megacities and regional centers. The paper analyzes successful projects of transforming industrial facilities into cultural, business and residential objects in the world and in Russia. The patterns of the project development have been determined and presented in the paper. 10. Transformational Leadership, Innovation & Creativity OpenAIRE Bista, Sashida; Bhattarai, Sandhya; Reza, Sakib; Ogot, Norine 2016-01-01 The aim of this research was to find the influence of transformational leadership on employees' creativity and organizational innovation. In most business houses, managers perceive that their leadership styles are best suited for the organization, but their subordinates may perceive those styles quite differently. So it is interesting to know and understand how management and the subordinates perceive the styles for generation of creativity and organization... 11.
Approximating the Analytic Fourier Transform with the Discrete Fourier Transform OpenAIRE Axelrod, Jeremy 2015-01-01 The Fourier transform is approximated over a finite domain using a Riemann sum. This Riemann sum is then expressed in terms of the discrete Fourier transform, which allows the sum to be computed with a fast Fourier transform algorithm more rapidly than via a direct matrix multiplication. Advantages and limitations of using this method to approximate the Fourier transform are discussed, and prototypical MATLAB codes implementing the method are presented. 12. GOOD GOVERNANCE AND TRANSFORMATION Directory of Open Access Journals (Sweden) Hans-Jürgen WAGENER 2005-12-01 Full Text Available Transformation of a totalitarian, basically administratively coordinated system into a democratic one that is coordinated predominantly by markets and competition has been triggered by, among other things, the perception of a serious deficit in welfare and happiness. Public policy has a special task in transforming the economic order through liberalisation, privatisation, stabilisation and the installation of institutions that support competition. Fifteen years after transformation began, there are sufficiently differentiated success stories to test the hypothesis that good governance was responsible for success and bad governance for failure. The empirical results support the "Lorenzetti hypothesis": where freedom, security and trust prevail, the economy flourishes; where they are lacking, the costs of long-term investment are too high. The initial conditions of transition countries seem to be quite similar; nevertheless, even there one can discern good and bad governance. The extent of socialist lawfulness, planning security, cronyism and corruption differed widely between East Berlin and Tashkent. And a good deal of such variation can be found in the pre-socialist history of these countries.
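The Axelrod abstract above describes approximating the analytic Fourier transform by a Riemann sum evaluated with the FFT. A minimal NumPy sketch of that recipe, using a Gaussian whose continuous transform is known in closed form (the grid sizes are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

# Approximate the continuous Fourier transform F(f) = ∫ g(t) e^{-2πi f t} dt
# by a Riemann sum evaluated with the FFT.  A Gaussian is used because its
# transform is known in closed form: g(t) = e^{-π t²}  =>  F(f) = e^{-π f²}.
N, dt = 1024, 0.05
t = (np.arange(N) - N // 2) * dt          # symmetric sample grid around t = 0
g = np.exp(-np.pi * t**2)

f = np.fft.fftshift(np.fft.fftfreq(N, d=dt))
# ifftshift aligns the t = 0 sample with index 0, as the DFT assumes;
# the factor dt turns the DFT sum into a Riemann sum of the integral.
F = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g))) * dt

exact = np.exp(-np.pi * f**2)
print(np.max(np.abs(F - exact)))          # tiny: near machine precision
```

The `ifftshift`/`fftshift` bookkeeping supplies the phase factors that account for the grid being centered on t = 0; for rapidly decaying functions like the Gaussian, the Riemann sum is spectrally accurate.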
However, the main conclusion is that, as the co-evolution hypothesis states, welfare and good governance go together. 13. Zirconia - the cinderella transformation International Nuclear Information System (INIS) Hannink, R.H.J. 1999-01-01 Zirconia and its alloys have formed a turning point in mechanical property developments of engineering ceramics. This can be stated primarily because zirconia alloys were one of the first ceramic systems in which it was demonstrated that the mechanical properties could be tailored using careful control of composition, powder processing and thermal treatment. For the improved mechanical properties to be captured in zirconia-based or zirconia-containing ceramics, control of the tetragonal to monoclinic transformation is required. Through microstructural control, zirconia-based ceramics can be tailored to form some of the strongest and toughest ceramics yet developed. By carefully controlling the use of various dopants (alloying additions), a variety of microstructures can be produced, all of which may exhibit transformation toughening. While success in capturing the benefits of transformation toughening relies on adequate powder processing techniques, this review is restricted to outlining the phase control and behaviour that make zirconia and its alloys such a scientifically fascinating and rewarding system for study and a commercially appealing ceramic material 14. Transformational Leadership, Integrity, and Power Science.gov (United States) Harrison, Laura M. 2011-01-01 Transformational leadership enjoys widespread appeal among student affairs professionals. National Association of Student Personnel Administrators (NASPA) and American College Personnel Association (ACPA) conferences frequently feature speakers who promote transformational leadership's two primary tenets: (1) change is the central purpose of… 15.
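The Riemann-sum construction described in the record "Approximating the Analytic Fourier Transform with the Discrete Fourier Transform" above can be sketched in a few lines. The cited work presents MATLAB prototypes; the NumPy sketch below is an illustrative reimplementation (not the paper's code) under the assumed convention F(w) = ∫ f(t) e^{-iwt} dt, and the function name is ours:

```python
import numpy as np

def approx_fourier_transform(f, t0, dt, n):
    """Approximate F(w) = integral of f(t)*exp(-i*w*t) dt by a Riemann sum,
    evaluated with the FFT at the DFT frequencies w_k = 2*pi*k/(n*dt)."""
    t = t0 + dt * np.arange(n)                 # sample grid over the finite domain
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)    # angular frequencies (rad per unit t)
    # Riemann sum: dt * sum_n f(t_n) e^{-i w_k t_n} = dt * e^{-i w_k t0} * DFT(f)_k
    F = dt * np.exp(-1j * w * t0) * np.fft.fft(f(t))
    return w, F

# Gaussian check: the transform of exp(-t^2/2) is sqrt(2*pi)*exp(-w^2/2)
w, F = approx_fourier_transform(lambda t: np.exp(-t**2 / 2), t0=-20.0, dt=0.05, n=800)
exact = np.sqrt(2 * np.pi) * np.exp(-w**2 / 2)
```

Because the Gaussian decays rapidly inside the truncated domain, the Riemann-sum result agrees with the analytic transform essentially to rounding error, illustrating the paper's point that accuracy is governed by the truncation and sampling choices.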
Spotlight on modern transformer design CERN Document Server Georgilakis, Pavlos S 2009-01-01 Increasing competition in the global transformer market has put tremendous responsibilities on the industry to increase reliability while reducing cost. This book introduces an approach to transformer design using artificial intelligence (AI) techniques in combination with the finite element method (FEM). 16. Canonical transformations and generating functionals NARCIS (Netherlands) Broer, L.J.F.; Kobussen, J.A. 1972-01-01 It is shown that canonical transformations for field variables in Hamiltonian partial differential equations can be obtained from generating functionals in the same way as classical canonical transformations from generating functions. A simple proof of the relation between infinitesimal invariant 17. The Transformation of Disabilities Organizations Science.gov (United States) Schalock, Robert L.; Verdugo, Miguel-Angel 2013-01-01 This article summarizes the five major characteristics of the transformation era and describes how intellectual and closely related developmental disabilities organizations can apply specific transformation strategies associated with each characteristic. Collectively, the characteristics and strategies provide a framework for transformation… 18. Higher derivatives in gauge transformations International Nuclear Information System (INIS) Gogilidze, S.A.; Sanadze, V.V.; Tkebuchava, F.G. 1992-01-01 The mechanism of appearance of higher derivatives of coordinates in the symmetry transformation law of the second Noether's theorem is established. It is shown that the corresponding transformations are canonical in the extended phase space. 15 refs 19. Life cycle of transformer oil OpenAIRE Đurđević Ksenija R.; Vojinović-Miloradov Mirjana; Sokolović Slobodan M. 2008-01-01 The consumption of electric power is constantly increasing due to industrialization and population growth.
This results in much more severe operating conditions of transformers, the most important electrical devices that form integral parts of power transmission and distribution systems. The designed operating life of the majority of worldwide transformers has already expired, which puts the increase of transformer reliability and operating life extension in the spotlight. Transformer oil pla... 20. Canonical transformations of Kepler trajectories International Nuclear Information System (INIS) Mostowski, Jan 2010-01-01 In this paper, canonical transformations generated by constants of motion in the case of the Kepler problem are discussed. It is shown that canonical transformations generated by angular momentum are rotations of the trajectory. Particular attention is paid to canonical transformations generated by the Runge-Lenz vector. It is shown that these transformations change the eccentricity of the orbit. A method of obtaining elliptic trajectories from the circular ones with the help of canonical transformations is discussed. 1. Thin-Film Power Transformers Science.gov (United States) Katti, Romney R. 1995-01-01 Transformer core made of thin layers of insulating material interspersed with thin layers of ferromagnetic material. Flux-linking conductors made of thinner nonferromagnetic-conductor/insulator multilayers wrapped around the core. Transformers have geometric features finer than those of transformers made in the customary way by machining and mechanical pressing. In addition, some thin-film materials exhibit magnetic-flux-carrying capabilities superior to those of customary bulk transformer materials. Suitable for low-cost, high-yield mass production. 2. Chemical effects of nuclear transformations Energy Technology Data Exchange (ETDEWEB) Bulbulian, S 1982-06-01 A brief survey of the present state of knowledge on the chemical effects of nuclear transformations is presented.
The recoil energy produced by these transformations in the nuclide is often sufficiently high to disrupt the chemical bonds between the particular atoms affected by the nuclear transformations and the rest of their molecules. It also contains a discussion of the different annealing processes that produce the cancellation of the chemical change produced by the nuclear transformation. 3. Level Design as Model Transformation NARCIS (Netherlands) Dormans, Joris 2011-01-01 This paper frames the process of designing a level in a game as a series of model transformations. The transformations correspond to the application of particular design principles, such as the use of locks and keys to transform a linear mission into a branching space. It shows that by using rewrite 4. Inverse problem in transformation optics DEFF Research Database (Denmark) Novitsky, Andrey 2011-01-01 The straightforward method of transformation optics implies that one starts from the coordinate transformation and determines the Jacobian matrix, the fields and material parameters of the cloak. However, the coordinate transformation appears as an optional function: it is not necessary to know it... 5. Enhancing Understanding of Transformation Matrices Science.gov (United States) Dick, Jonathan; Childrey, Maria 2012-01-01 With the Common Core State Standards' emphasis on transformations, teachers need a variety of approaches to increase student understanding. Teaching matrix transformations by focusing on row vectors gives students tools to create matrices to perform transformations. This empowerment opens many doors: Students are able to create the matrices for… 6. Genetic Transformation of Streptococcus mutans OpenAIRE Perry, Dennis; Kuramitsu, Howard K. 1981-01-01 Three strains of Streptococcus mutans belonging to serotypes a, c, and f were transformed to streptomycin resistance by deoxyribonucleic acids derived from homologous and heterologous streptomycin-resistant strains of S.
mutans and Streptococcus sanguis strain Challis. Homologous transformation of S. mutans was less efficient than heterologous transformation by deoxyribonucleic acids from other strains of S. mutans. 7. Quality as Transformation: Educational Metamorphosis Science.gov (United States) Cheng, Ming 2014-01-01 The notion of "quality as transformation" has been widely used in the higher education sector. However, both quality and transformation are elusive terms. There is little research exploring how quality could be equated to transformation in the learning process. This paper will provide an insight into the relationship between quality and… 8. Parallel plate transmission line transformer NARCIS (Netherlands) Voeten, S.J.; Brussaard, G.J.H.; Pemen, A.J.M. 2011-01-01 A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the 9. Fractional Laplace Transforms - A Perspective Directory of Open Access Journals (Sweden) Rudolf A. Treumann 2014-06-01 Full Text Available A new form of the Laplace transform is reviewed as a paradigm for an entire class of fractional functional transforms. Various of its properties are discussed. Such transformations should be useful in application to differential/integral equations or problems in non-extensive statistical mechanics. 10. On an integral transform Directory of Open Access Journals (Sweden) D. Naylor 1988-01-01 Full Text Available A formula of inversion is established for an integral transform whose kernel is the Bessel function J_u(kr), where r varies over the finite interval (0,a) and the order u is taken to be the eigenvalue parameter.
When this parameter is large the Bessel function behaves for varying r like the power function r^u, and by relating the Bessel functions to their corresponding power functions the proof of the inversion formula can be reduced to one depending on the Mellin inversion theorem. 11. Instruments of Transformative Governance DEFF Research Database (Denmark) Borrás, Susana production and distribution channels. PDPs aim at overcoming current market and government failures by pooling resources in the attempt to solve this global social challenge. Thus, PDPs are a case of instruments of transformative research and innovation, operating in a transnational governance context....... They exhibit three novelties: they address strategic long-term problems in a holistic manner, set substantive output-oriented goals, and are implemented through new organizational structures. After characterizing the different types of current PDPs and the context in which they emerged, the paper examines... 12. Inverse Satake transforms OpenAIRE Sakellaridis, Yiannis 2014-01-01 Let H be a split reductive group over a local non-archimedean field, and let H^ denote its Langlands dual group. We present an explicit formula for the generating function of an unramified L-function associated to a highest weight representation of the dual group, considered as a series of elements in the Hecke algebra of H. This offers an alternative approach to a solution of the same problem by Wen-Wei Li. Moreover, we generalize the notion of "Satake transform" and perform the analogous ca... 13. From Abstraction to Transformation DEFF Research Database (Denmark) Ribers, Bjørn the other subjects ‘being in existence’ a transformation of the personality takes place and a deeper understanding and compassion for ‘the other’ is integrated.
Ethical judgment is a demanding and inescapable aspect of social work; hence, the educational institutions are compelled to prioritise questions...... This paper presents perspectives on ethical dimensions and social dilemmas encountered in the professional work of social welfare practitioners in Denmark. The paper draws on examples from an ethnographic study of the subjective experiences of professionals with practice and the educational setting... 14. Fourier transforms in spectroscopy CERN Document Server Kauppinen, Jyrki 2000-01-01 This modern approach to the subject is clearly and logically structured, and gives readers an understanding of the essence of Fourier transforms and their applications. All important aspects are included with respect to their use with optical spectroscopic data. Based on popular lectures, the authors provide the mathematical fundamentals and numerical applications which are essential in practical use. The main part of the book is dedicated to applications of FT in signal processing and spectroscopy, with IR and NIR, NMR and mass spectrometry dealt with both from a theoretical and practical poi 15. Solid phase transformations CERN Document Server Čermák, J 2008-01-01 This special-topic book, devoted to "Solid Phase Transformations", covers a broad range of phenomena which are of importance in a number of technological processes. Most commercial alloys undergo thermal treatment after casting, with the aim of imparting desired compositions and/or optimal morphologies to the component phases. In spite of the fact that the topic has lain at the center of physical metallurgy for a long time, there are numerous aspects which are wide open to potential investigative breakthroughs. Materials with new structures also stimulate research in the field, as well as n 16.
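The passage "by relating the Bessel functions to their corresponding power functions" in the integral-transform record above rests on a standard large-order asymptotic (a background identity, not taken from the cited paper):

```latex
% Large-order behaviour of the Bessel function: for fixed z and order \nu \to \infty,
J_\nu(z) \sim \frac{1}{\Gamma(\nu + 1)} \left( \frac{z}{2} \right)^{\nu}.
```

With z = kr this behaves, for varying r, like a multiple of the power function r^ν, which is why the proof of the inversion formula can be reduced to one depending on the Mellin inversion theorem.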
Metasurface transformation optics International Nuclear Information System (INIS) Mencagli, M jr; Martini, E; González-Ovejero, D; Maci, S 2014-01-01 Transformation optics has been recently proposed as a powerful method to manipulate electromagnetic fields by using anisotropic inhomogeneous volumetric media. This method can be extended to design anisotropic modulated metasurfaces (MTSs) able to control the propagation path of surface waves. In this paper, this extension is formalized by defining a systematic procedure that can be applied to design a large number of planar devices, with a significant technological simplification with respect to the realization based on volumetric media. Practical MTS designs are also presented. (paper) 17. Transforming Public Space DEFF Research Database (Denmark) Navarro, Dora 2009-01-01 Among processes towards democratisation, it has been asserted that alternative radio has a central role in the citizen making of the poor. However, it is important to analyse in detail what possibilities an alternative or citizens' radio has to strengthen ideas of citizenship and transform...... the public space into a critical and deliberative public in urban sites. I focus on one local Catholic radio station in Huaycan, a shantytown in the outskirts of Lima, Peru. I describe the radios' journalistic work, showing examples of how they mobilise local leaders and monitor democratic processes... 18. High pressure phase transformations revisited. Science.gov (United States) Levitas, Valery I 2018-04-25 High pressure phase transformations play an important role in the search for new materials and material synthesis, as well as in geophysics. However, they are poorly characterized, and phase transformation pressure and pressure hysteresis vary drastically in experiments of different researchers, with different pressure transmitting media, and with different material suppliers. 
Here we review the current state, challenges in studying phase transformations under high pressure, and the possible ways in overcoming the challenges. This field is critically compared with fields of phase transformations under normal pressure in steels and shape memory alloys, as well as plastic deformation of materials. The main reason for the above mentioned discrepancy is the lack of understanding that there is a fundamental difference between pressure-induced transformations under hydrostatic conditions, stress-induced transformations under nonhydrostatic conditions below yield, and strain-induced transformations during plastic flow. Each of these types of transformations has different mechanisms and requires a completely different thermodynamic and kinetic description and experimental characterization. In comparison with other fields the following challenges are indicated for high pressure phase transformation: (a) initial and evolving microstructure is not included in characterization of transformations; (b) continuum theory is poorly developed; (c) heterogeneous stress and strain fields in experiments are not determined, which leads to confusing material transformational properties with a system behavior. Some ways to advance the field of high pressure phase transformations are suggested. The key points are: (a) to take into account plastic deformations and microstructure evolution during transformations; (b) to formulate phase transformation criteria and kinetic equations in terms of stress and plastic strain tensors (instead of pressure alone); (c) to develop multiscale continuum International Nuclear Information System (INIS) Fenimore, E.E.; Weston, G.S.
1981-01-01 In many fields (e.g., spectroscopy, imaging spectroscopy, photoacoustic imaging, coded aperture imaging) binary bit patterns known as m sequences are used to encode (by multiplexing) a series of measurements in order to obtain a larger throughput. The observed measurements must be decoded to obtain the desired spectrum (or image in the case of coded aperture imaging). Decoding in the past has used a technique called the fast Hadamard transform (FHT) whose chief advantage is that it can reduce the computational effort from N² multiplies to N log₂ N additions or subtractions. However, the FHT has the disadvantage that it does not readily allow one to sample more finely than the number of bits used in the m sequence. This can limit the obtainable resolution and cause confusion near the sample boundaries (phasing errors). Both 1-D and 2-D methods (called fast delta Hadamard transforms, FDHT) have been developed which overcome both of the above limitations. Applications of the FDHT are discussed in the context of Hadamard spectroscopy and coded aperture imaging with uniformly redundant arrays. Special emphasis has been placed on how the FDHT can unite techniques used by both of these fields into the same mathematical basis 1. Fast Fourier transform telescope International Nuclear Information System (INIS) Tegmark, Max; Zaldarriaga, Matias 2009-01-01 We propose an all-digital telescope for 21 cm tomography, which combines key advantages of both single dishes and interferometers. The electric field is digitized by antennas on a rectangular grid, after which a series of fast Fourier transforms recovers simultaneous multifrequency images of up to half the sky. Thanks to Moore's law, the bandwidth up to which this is feasible has now reached about 1 GHz, and will likely continue doubling every couple of years.
The main advantages over a single dish telescope are cost and orders of magnitude larger field-of-view, translating into dramatically better sensitivity for large-area surveys. The key advantages over traditional interferometers are cost (the correlator computational cost for an N-element array scales as N log₂ N rather than N²) and a compact synthesized beam. We argue that 21 cm tomography could be an ideal first application of a very large fast Fourier transform telescope, which would provide both massive sensitivity improvements per dollar and mitigate the off-beam point source foreground problem with its clean beam. Another potentially interesting application is cosmic microwave background polarization. 2. Transformational silicon electronics KAUST Repository Rojas, Jhonathan Prieto 2014-02-25 In today's traditional electronics such as in computers or in mobile phones, billions of high-performance, ultra-low-power devices are neatly integrated in extremely compact areas on rigid and brittle but low-cost bulk monocrystalline silicon (100) wafers. Ninety percent of global electronics are made up of silicon. Therefore, we have developed a generic low-cost regenerative batch fabrication process to transform such wafers full of devices into thin (5 μm), mechanically flexible, optically semitransparent silicon fabric with devices, then recycling the remaining wafer to generate multiple silicon fabric with chips and devices, ensuring low-cost and optimal utilization of the whole substrate.
We show monocrystalline, amorphous, and polycrystalline silicon and silicon dioxide fabric, all from low-cost bulk silicon (100) wafers with the semiconductor industry's most advanced high-κ/metal gate stack based high-performance, ultra-low-power capacitors, field effect transistors, energy harvesters, and storage to emphasize the effectiveness and versatility of this process to transform traditional electronics into flexible and semitransparent ones for multipurpose applications. © 2014 American Chemical Society. 3. Transforming Innovation for Sustainability Directory of Open Access Journals (Sweden) Melissa Leach 2012-06-01 Full Text Available The urgency of charting pathways to sustainability that keep human societies within a "safe operating space" has now been clarified. Crises in climate, food, biodiversity, and energy are already playing out across local and global scales and are set to increase as we approach critical thresholds. Drawing together recent work from the Stockholm Resilience Centre, the Tellus Institute, and the STEPS Centre, this commentary article argues that ambitious Sustainable Development Goals are now required along with major transformation, not only in policies and technologies, but in modes of innovation themselves, to meet them. As examples of dryland agriculture in East Africa and rural energy in Latin America illustrate, such "transformative innovation" needs to give far greater recognition and power to grassroots innovation actors and processes, involving them within an inclusive, multi-scale innovation politics. The three dimensions of direction, diversity, and distribution along with new forms of "sustainability brokering" can help guide the kinds of analysis and decision making now needed to safeguard our planet for current and future generations. 4.
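The correlator scaling claimed in the "Fast Fourier transform telescope" record above (N log₂ N for the FFT approach versus N² for pairwise correlation) can be made concrete with a back-of-the-envelope sketch; the function names and operation counts below are illustrative order-of-magnitude estimates, not figures from the paper:

```python
import math

def pairwise_correlator_ops(n):
    # a traditional interferometer correlates every antenna pair: ~n^2 products
    return n * n

def fft_correlator_ops(n):
    # an FFT over an n-element antenna grid: ~n * log2(n) butterfly operations
    return n * math.log2(n)

# the speedup therefore grows roughly as n / log2(n)
speedups = {n: pairwise_correlator_ops(n) / fft_correlator_ops(n)
            for n in (2**10, 2**16, 2**20)}
```

For a million-element (2^20) array the ratio is about 5 × 10⁴, which is the arithmetic behind the record's claim that the FFT approach makes very large arrays computationally affordable.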
Fundamentals of algebraic graph transformation CERN Document Server Ehrig, Hartmut; Prange, Ulrike; Taentzer, Gabriele 2006-01-01 Graphs are widely used to represent structural information in the form of objects and connections between them. Graph transformation is the rule-based manipulation of graphs, an increasingly important concept in computer science and related fields. This is the first textbook treatment of the algebraic approach to graph transformation, based on algebraic structures and category theory. Part I is an introduction to the classical case of graph and typed graph transformation. In Part II basic and advanced results are first shown for an abstract form of replacement systems, so-called adhesive high-level replacement systems based on category theory, and are then instantiated to several forms of graph and Petri net transformation systems. Part III develops typed attributed graph transformation, a technique of key relevance in the modeling of visual languages and in model transformation. Part IV contains a practical case study on model transformation and a presentation of the AGG (attributed graph grammar) tool envir... 5. Transformations of emotional experience. Science.gov (United States) de Cortiñas, Lia Pistiner 2013-06-01 In this paper the author approaches mental pain and the problems in a psychoanalytic treatment of patients with difficulties in the psychic transformation of their emotional experiences. The author is interested in the symbolic failure related to the obstruction of development of phantasies, dreams, dream-thoughts, etc. She differentiates symbolization disturbances related to hypertrophic projective identification from a detention of these primitive communications and emotional isolation. 
She puts forward the conjecture that one factor in the arrest of this development is the detention of projective identifications and that, when this primitive means of communication is re-established in a container-contained relationship of mutual benefit, this initiates the development of a symbolization process that can replace the pathological 'protection'. Another hypothesis she develops is that of inaccessible caesuras that, associated with the detention of projective identification, obstruct any integrative or interactive movement. This caesura and the detention of projective identifications affect mental functions needed for dealing with mental pain. The personality is left with precarious mental equipment for transforming emotional experiences. How can a psychoanalytical process stimulate the development of creative symbolization, transforming the emotional experiences and leading towards mental growth? The author approaches the clinical problem with the metaphor of the psychic birth of emotional experience. The modulation of mental pain in a container-contained relationship is a central problem for the development of the human mind. For discovering and giving a meaning to emotional experience, the infant depends on reverie, a function necessary in order to develop an evolved consciousness capable of being aware, which is different from the rudimentary consciousness that perceives but does not understand. The development of mature mental equipment is associated with the 6. Distinctive transforming genes in x-ray-transformed mammalian cells International Nuclear Information System (INIS) Borek, C.; Ong, A.; Mason, H. 1987-01-01 DNAs from hamster embryo cells and mouse C3H/10T1/2 cells transformed in vitro by x-irradiation into malignant cells transmit the radiation transformation phenotype by producing transformed colonies (transfectants) in two mouse recipient lines, the NIH 3T3 and C3H/10T1/2 cells, and in a rat cell line, the Rat-2 cells.
DNAs from unirradiated cells or irradiated and visibly untransformed cells do not produce transformed colonies. The transfectants grow in agar and form tumors in nude mice. Treatment of the DNAs with restriction endonucleases prior to transfection indicates that the same transforming gene (oncogene) is present in each of the transformed mouse cells and is the same in each of the transformed hamster cells. Southern blot analysis of 3T3 or Rat-2 transfectants carrying oncogenes from radiation-transformed C3H/10T1/2 or hamster cells indicates that the oncogenes responsible for the transformation of 3T3 cells are not the Ki-ras, Ha-ras, N-ras genes, nor are they neu, trk, raf, abl, or fms. The work demonstrates that DNAs from mammalian cells transformed into malignancy by direct exposure in vitro to radiation contain genetic sequences with detectable transforming activity in three recipient cell lines. The results provide evidence that DNA is the target of radiation carcinogenesis induced at a cellular level in vitro. The experiments indicate that malignant radiogenic transformation in vitro of hamster embryo and mouse C3H/10T1/2 cells involves the activation of unique non-ras transforming genes, which heretofore have not been described 7. Adaption of optical Fresnel transform to optical Wigner transform International Nuclear Information System (INIS) Lv Cuihong; Fan Hongyi 2010-01-01 Enlightened by the algorithmic isomorphism between the rotation of the Wigner distribution function (WDF) and the αth fractional Fourier transform, we show that the optical Fresnel transform performed on the input through an ABCD system makes the output naturally adapt to the associated Wigner transform, i.e. there exists algorithmic isomorphism between ABCD transformation of the WDF and the optical Fresnel transform. We prove this adaption in the context of operator language.
Both the single-mode and the two-mode Fresnel operators as the image of classical Fresnel transform are introduced in our discussions, while the two-mode Wigner operator in the entangled state representation is introduced for fitting the two-mode Fresnel operator. 8. INFORMATION MODEL OF SOCIAL TRANSFORMATIONS Directory of Open Access Journals (Sweden) Мария Васильевна Комова 2013-09-01 Full Text Available The social transformation is considered as a process of qualitative changes of the society, creating a new level of organization in all areas of life, in different social formations, societies of different types of development. The purpose of the study is to create a universal model for studying social transformations based on their understanding as the consequence of the information exchange processes in the society. After defining the conceptual model of the study, the author uses the following methods: the descriptive method, analysis, synthesis, comparison. Information, objectively existing in all elements and systems of the material world, is an integral attribute of the society transformation as well. The information model of social transformations is based on the definition of the society transformation as the change in the information that functions in the society’s information space. The study of social transformations is the study of information flows circulating in the society and being characterized by different spatial, temporal, and structural states. Social transformations are a highly integrated system of social processes and phenomena, the nature, course and consequences of which are affected by the factors representing the whole complex of material objects. The integrated information model of social transformations foresees the interaction of the following components: social memory, information space, and the social ideal.
To determine the dynamics and intensity of social transformations the author uses the notions of "information threshold of social transformations" and "information pressure". Thus, the universal nature of information leads to considering social transformations as a system of information exchange processes. Social transformations can be extended to any episteme actualized by social needs. The establishment of an information threshold makes it possible to simulate the course of social development, to predict the 9. A piezoelectric transformer Science.gov (United States) Won, C. C. 1993-01-01 This work describes a modeling and design method whereby a piezoelectric system is formulated by two sets of second-order equations, one for the mechanical system, and the other for the electrical system, coupled through the piezoelectric effect. The solution to this electromechanical coupled system gives a physical interpretation of the piezoelectric effect as a piezoelectric transformer that is a part of the piezoelectric system, which transfers the applied mechanical force into a force-controlled current source, and short circuit mechanical compliance into capacitance. It also transfers the voltage source into a voltage-controlled relative velocity input, and free motional capacitance into mechanical compliance. The formulation and interpretation simplify the modeling of smart structures and lead to physical insight that aids the designer. Due to its physical realization, the smart structural system can be unconditionally stable and effectively control responses. This new concept has been demonstrated in three numerical examples for a simple piezoelectric system. 10. Recurrent Spatial Transformer Networks DEFF Research Database (Denmark) Sønderby, Søren Kaae; Sønderby, Casper Kaae; Maaløe, Lars 2015-01-01 We integrate the recently proposed spatial transformer network (SPN) [Jaderberg et al. 2015] into a recurrent neural network (RNN) to form an RNN-SPN model.
We use the RNN-SPN to classify digits in cluttered MNIST sequences. The proposed model achieves a single digit error of 1.5% compared to 2.......9% for a convolutional networks and 2.0% for convolutional networks with SPN layers. The SPN outputs a zoomed, rotated and skewed version of the input image. We investigate different down-sampling factors (ratio of pixel in input and output) for the SPN and show that the RNN-SPN model is able to down-sample the input... 11. Professionsidentitet under transformation DEFF Research Database (Denmark) Højbjerg, Karin 2016-01-01 Point of departure is the increase of differentiation we see within knowledge and labor. Consequently, more educations are established and pressumably the new professionals will consolidate as professions with own jurisdiction. This article aims to focus on the transfor- mation, a profession...... identity undergoes in the process of consolidation. Empirical data orig- inating from ethnographic field studies of the clinical teacher’s teaching practices within the practical part of nurse education are analyzed as a case, and here seen as a kind of wel- fare organization. With inspiration from Stuart...... Hall and his cultural study tradition the analysis shows how the clinical teacher experiences setbacks when striving for a better po- sition. However, these ‘diasporic experiences’ determine that the transformation of profes- sion identity contains both elements of emancipation and change but also... 12. Lost in transformation? DEFF Research Database (Denmark) Norlyk, Annelise; Haahr, Anita; Dreyer, Pia 2017-01-01 and values from evidence-based medicine are being lost in the transformation into the current evidence-based hospital culture which potentially leads to a McDonaldization of nursing practice reflected as ‘one best way’. We argue for reviving ethics of care perspectives in today’s evidence practice...... 
as the fundamental values of nursing may potentially bridge conflicts between evidence-based practice and the ideals of patient participation thus preventing a practice of ‘McNursing’. Key words: nursing practice, evidence-based practice, nursing theory, nursing theorists, ethics of care, hospital culture, patient......Drawing on our previous empirical research, we provide an exemplary narrative to illustrate how patients have experienced hospital care organized according to evidence-based fast-track programmes. The aim of this paper is to analyse and discuss if and how it is possible to include patients... 13. Fourier Transform Spectrometer System Science.gov (United States) Campbell, Joel F. (Inventor) 2014-01-01 A Fourier transform spectrometer (FTS) data acquisition system includes an FTS spectrometer that receives a spectral signal and a laser signal. The system further includes a wideband detector, which is in communication with the FTS spectrometer and receives the spectral signal and laser signal from the FTS spectrometer. The wideband detector produces a composite signal comprising the laser signal and the spectral signal. The system further comprises a converter in communication with the wideband detector to receive and digitize the composite signal. The system further includes a signal processing unit that receives the composite signal from the converter. The signal processing unit further filters the laser signal and the spectral signal from the composite signal and demodulates the laser signal, to produce velocity corrected spectral data. 14. Built-Up Area Detection from High-Resolution Satellite Images Using Multi-Scale Wavelet Transform and Local Spatial Statistics Science.gov (United States) Chen, Y.; Zhang, Y.; Gao, J.; Yuan, Y.; Lv, Z. 2018-04-01 Recently, built-up area detection from high-resolution satellite images (HRSI) has attracted increasing attention because HRSI can provide more detailed object information. 
In this paper, the multi-resolution wavelet transform and a local spatial autocorrelation statistic are introduced to model the spatial patterns of built-up areas. First, the input image is decomposed into high- and low-frequency subbands by a wavelet transform at three levels. Then the high-frequency detail information in three directions (horizontal, vertical and diagonal) is extracted, followed by a maximization operation that integrates the information from all directions. Afterward, a cross-scale operation is implemented to fuse the different levels of information. Finally, the local spatial autocorrelation statistic is introduced to enhance the saliency of built-up features, and an adaptive threshold algorithm is used to achieve the detection of built-up areas. Experiments are conducted on ZY-3 and QuickBird panchromatic satellite images, and the results show that the proposed method is very effective for built-up area detection. 15. The linear canonical transformation: definition and properties NARCIS (Netherlands) Bastiaans, Martin J.; Alieva, Tatiana; Healy, J.J.; Kutay, M.A.; Ozaktas, H.M.; Sheridan, J.T. 2016-01-01 In this chapter we introduce the class of linear canonical transformations, which includes as particular cases the Fourier transformation (and its generalization: the fractional Fourier transformation), the Fresnel transformation, and magnifier, rotation and shearing operations. The basic properties 16. Performance of nonsynchronous noncommensurate impedance transformers in comparison to tapered line transformers DEFF Research Database (Denmark) Kim, Kseniya; Zhurbenko, Vitaliy; Johansen, Tom Keinicke 2012-01-01 to a traditional tapered line impedance transformer. The increase in bandwidth of nonsynchronous noncommensurate impedance transformers typically leads to shortening the transformer length, which makes the transformer attractive for applications where a wide operating band and high transformation ratios… 17.
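The multi-scale detection pipeline described in entry 14 above (per-level wavelet details, a maximum over the three directions, cross-scale fusion, then thresholding) can be sketched numerically. The sketch below is an illustrative reconstruction, not the authors' code: it uses a hand-rolled one-level Haar split in place of a full wavelet library, a synthetic "textured patch on a smooth background" image standing in for a satellite scene, and a mean-plus-one-standard-deviation cut as a stand-in for the paper's local spatial autocorrelation enhancement and adaptive threshold.

```python
import numpy as np

def haar_level(x):
    """One level of a 2-D Haar transform via 2x2 block sums/differences."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # low-frequency approximation
    lh = (a + b - c - d) / 4.0  # horizontal detail
    hl = (a - b + c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

def builtup_saliency(img, levels=3):
    """Max-of-directions detail energy per level, fused across scales."""
    fused = np.zeros_like(img, dtype=float)
    ll = img.astype(float)
    for k in range(1, levels + 1):
        ll, lh, hl, hh = haar_level(ll)
        detail = np.maximum(np.abs(lh), np.maximum(np.abs(hl), np.abs(hh)))
        # upsample the level-k detail map back to full resolution and accumulate
        up = np.kron(detail, np.ones((2 ** k, 2 ** k)))
        fused += up[: img.shape[0], : img.shape[1]]
    return fused / levels

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[8:24, 8:24] = rng.uniform(0, 1, (16, 16))  # textured "built-up" patch
sal = builtup_saliency(img)
mask = sal > sal.mean() + sal.std()  # simple threshold stand-in
```

The textured patch carries high-frequency energy at every scale, so its fused saliency dominates the smooth background; a proper implementation would add the local spatial autocorrelation (e.g. Getis-Ord) enhancement before thresholding.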
Piezoelectric Transformers: An Historical Review OpenAIRE Alfredo Vazquez Carazo 2016-01-01 Piezoelectric transformers (PTs) are solid-state devices that transform electrical energy into electrical energy by means of a mechanical vibration. These devices are manufactured using piezoelectric materials that are driven at resonance. With appropriate design and circuitry, it is possible to step up and step down the voltages between the input and output sections of the piezoelectric transformer, without making use of magnetic materials and obtaining excellent conversion efficiencies. The... 18. The Foldy-Wouthuysen transformation International Nuclear Information System (INIS) Costella, J.P.; McKellar, B.H.J. 1994-01-01 The Foldy-Wouthuysen transformation of Dirac Hamiltonian is generally taught as a mathematical trick that allows one to obtain a two-component theory in the low-energy limit. It is not often emphasised that the transformed representation is the only one in which one can take meaningful classical limit, in terms of particles and antiparticles. The history and physics of this transformation are briefly revised. 12 refs 19. The Foldy--Wouthuysen transformation International Nuclear Information System (INIS) Costella, J.P.; McKellar, B.H.J. 1995-01-01 The Foldy--Wouthuysen transformation of the Dirac Hamiltonian is generally taught as simply a mathematical trick that allows one to obtain a two-component theory in the low-energy limit. It is not often emphasized that the transformed representation is the only one in which one can take a meaningful classical limit, in terms of particles and antiparticles. We briefly review the history and physics of this transformation. copyright 1995 American Association of Physics Teachers 20. Twistor Transform for Spinning Particle International Nuclear Information System (INIS) Fedoruk, S. 2005-01-01 Twistorial formulation of a particle of arbitrary spin has been constructed. 
The twistor formulation is deduced from a space-time formulation of the spinning particle by introducing pure-gauge Lorentz harmonics in this system. Canonical transformations and gauge-fixing conditions, excluding space-time variables, produce the fundamental conditions of the twistor transform relating the space-time formulation and the twistor one. Integral transformations relating massive twistor fields with usual space-time fields have been constructed 1. Transform analysis of generalized functions CERN Document Server Misra, O P 1986-01-01 Transform Analysis of Generalized Functions concentrates on finite parts of integrals, generalized functions and distributions. It gives a unified treatment of the distributional setting with transform analysis, i.e. Fourier, Laplace, Stieltjes, Mellin, Hankel and Bessel series. Included are accounts of applications of the theory of integral transforms in a distributional setting to the solution of problems arising in mathematical physics. Information on distributional solutions of differential equations, partial differential equations and integral equations is conveniently collected here. The volume will 2. Scalar perturbations and conformal transformation International Nuclear Information System (INIS) Fabris, J.C.; Tossa, J. 1995-11-01 The non-minimal coupling of gravity to a scalar field can be transformed into a minimal coupling through a conformal transformation. We show how to connect the results of a perturbation calculation, performed around a Friedmann-Robertson-Walker background solution, before and after the conformal transformation. We work in the synchronous gauge, but we discuss the implications of employing other frames. (author). 16 refs 3. Ethical aspects of transformational leadership.
Science.gov (United States) Cassidy, V R; Koroll, C J 1994-10-01 The requirements of leadership in the current environment of health care reform necessitate a clear distinction between leadership and management, an alteration in traditional leadership roles, and an evaluation of the knowledge and skills needed to address the ethical issues that arise from such reform. Transformational leadership is well suited to the current climate in health care because of the manner in which it actively embraces and encourages innovation and change. The article explores the elements of transformational leadership, describes the need for transformational leaders to be cognizant of the ethical aspects of their roles, and outlines the responsibilities of transformational leaders as moral agents. 4. Generalized field-transforming metamaterials International Nuclear Information System (INIS) Tretyakov, Sergei A; Nefedov, Igor S; Alitalo, Pekka 2008-01-01 In this paper, we introduce a generalized concept of field-transforming metamaterials, which perform field transformations defined as linear relations between the original and transformed fields. These artificial media change the fields in a prescribed fashion in the volume occupied by the medium. We show what electromagnetic properties of transforming medium are required. The coefficients of these linear functions can be arbitrary scalar functions of position and frequency, which makes the approach quite general and opens a possibility to realize various unusual devices. 5. Transforming Norwegian Special Operation Forces National Research Council Canada - National Science Library Robertsen, Tom A 2006-01-01 This paper explores the transformation of Norwegian Special Operation Forces (NORSOF), raising the hypothesis that its current organizational structure is inconsistent with its future roles and missions... 6. Microfabricated Bulk Piezoelectric Transformers Science.gov (United States) Barham, Oliver M. 
Piezoelectric voltage transformers (PTs) can be used to transform an input voltage into a different, required output voltage needed in electronic and electromechanical systems, among other varied uses. On the macro scale, they have been commercialized in electronics powering consumer laptop liquid crystal displays, and compete with an older, more prevalent technology, inductive electromagnetic voltage transformers (EMTs). The present work investigates PTs on smaller size scales that are currently in the academic research sphere, with an eye towards applications including micro-robotics and other small-scale electronic and electromechanical systems. PTs and EMTs are compared on the basis of power and energy density, with PTs trending towards comparatively higher values of power and energy density, indicating their suitability for small-scale systems. Among PT topologies, bulk disc-type PTs, operating in their fundamental radial extension mode, and free-free beam PTs, operating in their fundamental length extensional mode, are good candidates for microfabrication and are considered here. Analytical modeling based on the Extended Hamilton Method is used to predict device performance and integrate mechanical tethering as a boundary condition. This model differs from previous PT models in that the electric enthalpy is used to derive the constituent equations of motion with Hamilton's Method, and therefore this approach is also more generally applicable to other piezoelectric systems outside of the present work. Prototype devices are microfabricated using a two-mask process consisting of traditional photolithography combined with micropowder blasting, and are tested with various output electrical loads. 4 mm diameter tethered disc PTs with volumes on the order of 0.002 cm³, two orders of magnitude smaller than in the bulk PT literature, had the following performance: a prototype with electrode area ratio (input area / output area) = 1 had peak gain of 2.3 (+/- 0.1), efficiency of 33 (+/- 0 7. Power transformers - Part 11: Dry-type transformers CERN Document Server International Electrotechnical Commission, Geneva 2004-01-01 Applies to dry-type power transformers (including auto-transformers) having values of highest voltage for equipment up to and including 36 kV and at least one winding operating at greater than 1.1 kV. Applies to all construction technologies. 8. Logarithmic Transformations in Regression: Do You Transform Back Correctly? Science.gov (United States) Dambolena, Ismael G.; Eriksen, Steven E.; Kopcso, David P. 2009-01-01 The logarithmic transformation is often used in regression analysis for a variety of purposes, such as the linearization of a nonlinear relationship between two or more variables. We have noticed that when this transformation is applied to the response variable, the computation of the point estimate of the conditional mean of the original response… 9. Life cycle of transformer oil Directory of Open Access Journals (Sweden) Đurđević Ksenija R. 2008-01-01 Full Text Available The consumption of electric power is constantly increasing due to industrialization and population growth. This results in much more severe operating conditions of transformers, the most important electrical devices that form integral parts of power transmission and distribution systems. The designed operating life of the majority of transformers worldwide has already expired, which puts the increase of transformer reliability and operating-life extension in the spotlight. Transformer oil plays a very important role in transformer operation, since it provides insulation and cooling, helps extinguish sparks and dissolves gases formed during oil degradation.
In addition to this, it also dissolves moisture and gases from the cellulose insulation and the atmosphere it is exposed to. Further, and by no means less important, functions of transformer oil are diagnostic. It has been determined that examination and inspection of insulation oil provide 70% of the information on transformer condition, which can be divided into three main groups: dielectric condition, aged transformer condition and oil degradation condition. By inspecting and examining the oil in service it is possible to determine the condition of the insulation, the oil and the solid insulation (paper), as well as irregularities in transformer operation. All of the above-mentioned reasons and facts create ground for the subject of this research, covering two stages of the transformer oil life cycle: (1) proactive maintenance and monitoring of transformer oils in the course of utilization, with reference to the influence of transformer oil condition on paper insulation condition, as well as the condition of the transformer itself; (2) regeneration of transformer oils for the purpose of extending the utilization period, and the potential for paper insulation revitalization by means of oil purification. The study highlights advantages of oil-paper insulation revitalization over oil replacement. Besides economic, there are 10. Bioenergy and African transformation. Science.gov (United States) Lynd, Lee R; Sow, Mariam; Chimphango, Annie Fa; Cortez, Luis Ab; Brito Cruz, Carlos H; Elmissiry, Mosad; Laser, Mark; Mayaki, Ibrahim A; Moraes, Marcia Afd; Nogueira, Luiz Ah; Wolfaardt, Gideon M; Woods, Jeremy; van Zyl, Willem H 2015-01-01 Among the world's continents, Africa has the highest incidence of food insecurity and poverty and the highest rates of population growth. Yet Africa also has the most arable land, the lowest crop yields, and by far the most plentiful land resources relative to energy demand. It is thus of interest to examine the potential of expanded modern bioenergy production in Africa.
Here we consider bioenergy as an enabler for development, and provide an overview of modern bioenergy technologies with a comment on application in an Africa context. Experience with bioenergy in Africa offers evidence of social benefits and also some important lessons. In Brazil, social development, agricultural development and food security, and bioenergy development have been synergistic rather than antagonistic. Realizing similar success in African countries will require clear vision, good governance, and adaptation of technologies, knowledge, and business models to myriad local circumstances. Strategies for integrated production of food crops, livestock, and bioenergy are potentially attractive and offer an alternative to an agricultural model featuring specialized land use. If done thoughtfully, there is considerable evidence that food security and economic development in Africa can be addressed more effectively with modern bioenergy than without it. Modern bioenergy can be an agent of African transformation, with potential social benefits accruing to multiple sectors and extending well beyond energy supply per se. Potential negative impacts also cut across sectors. Thus, institutionally inclusive multi-sector legislative structures will be more effective at maximizing the social benefits of bioenergy compared to institutionally exclusive, single-sector structures. 11. Network Transformations in Economy Directory of Open Access Journals (Sweden) Bolychev O. 2014-09-01 Full Text Available In the context of ever-increasing market competition, networked interactions play a special role in the economy. The network form of entrepreneurship is increasingly viewed as an effective organizational structure to create a market value embedded in innovative business solutions. 
The authors study the characteristics of a network as an economic category and emphasize certain similarities between Russian and international approaches to identifying interactions of economic systems based on the network principle. The paper focuses on the types of networks widely used in the economy. The authors analyze the transformation of business networks along two lines: from an intra- to an inter-firm network and from an inter-firm to an inter-organizational network. The possible forms of network formation are described depending on the strength of connections and the type of integration. The drivers and reasons behind the process of transition from a hierarchical model of the organizational structure to a network type are identified. The authors analyze the advantages of creating inter-firm networks and discuss the features of inter-organizational networks as compared to inter-firm ones. The article summarizes the reasons for and advantages of participation in inter-organizational networks and identifies the main barriers to the formation of inter-organizational networks. 12. Memory of Power Transformed Directory of Open Access Journals (Sweden) Kalina Maleska 2014-11-01 Full Text Available This essay is focused on the phenomenon of power. Special attention is paid to the past understanding, research and explanation of what power is, and how it has been understood throughout history. Traditionally, power has referred to authority, influence, control. The research of literary works, however, has led me to the realization that the notion of power is understood in different terms in literature in comparison to how it is explained in philosophy and the social sciences. In order to contribute to the broader understanding of power from a literary point of view, this essay examines many questions concerning this phenomenon, such as: how does the past understanding of power determine how it is accepted and interpreted in the present?
How is the success of present efforts and initiatives affected by the memory of power? The essay attempts to show that the memory of the notion of power is not and cannot be fixed and given once and for all. Therefore, the literary examples provided demonstrate how the definitions of power given in the past are transformed and transfigured by present literary works, which show how we may "forget" what we know about this phenomenon, and define it from a new perspective. 13. Towards Transformative Leadership in Education Science.gov (United States) van Oord, Lodewijk 2013-01-01 This article argues that an educational organization's type of leadership will to a very large extent determine the quality of personal transformation it instigates among its stakeholders. Focusing on the importance of transformative leadership, such leadership will be viewed as a critical and collaborative process in which school-based… 14. Cayley transform on Stiefel manifolds Science.gov (United States) Macías-Virgós, Enrique; Pereira-Sáez, María José; Tanré, Daniel 2018-01-01 The Cayley transform for orthogonal groups is a well-known construction with applications in real and complex analysis, linear algebra and computer science. In this work, we construct Cayley transforms on Stiefel manifolds. Applications to the Lusternik-Schnirelmann category and optimization problems are presented. 15. Organizational Learning through Transformational Leadership Science.gov (United States) 2016-01-01 Purpose: The transformation of firms from a resource-based view to a knowledge-based view has extended the importance of organizational learning. Thus, this study aims to develop an organizational learning model through transformational leadership, with the indirect effect of knowledge management process capability and the interactive role of… 16.
Dependency Parsing with Transformed Feature Directory of Open Access Journals (Sweden) Fuxiang Wu 2017-01-01 Full Text Available Dependency parsing is an important subtask of natural language processing. In this paper, we propose an embedding feature transforming method for graph-based parsing, transform-based parsing, which directly utilizes the inner similarity of the features to extract information from all feature strings, including the un-indexed strings, and alleviates the feature-sparsity problem. The model transforms the extracted features into transformed features by applying a feature weight matrix, which consists of similarities between the feature strings. Since the matrix is usually rank-deficient because of similar feature strings, it would influence the strength of constraints. However, it is proven that the duplicate transformed features do not degrade the optimization algorithm: the margin-infused relaxed algorithm. Moreover, this problem can be alleviated by reducing the number of the nearest transformed features of a feature. In addition, to further improve the parsing accuracy, a fusion parser is introduced to integrate transformed and original features. Our experiments verify that both the transform-based and fusion parsers improve the parsing accuracy compared to the corresponding feature-based parser. 17. Anharmonic oscillator and Bogoliubov transformation International Nuclear Information System (INIS) Pattnayak, G.C.; Torasia, S.; Rath, B. 1990-01-01 The anharmonic oscillator is a cornerstone of many problems in physics. It was observed that none of the authors have tested the Bogoliubov transformation to study the anharmonic oscillator. The ground-state energy of the anharmonic oscillator is studied using the Bogoliubov transformation and the results are presented. (author) 18. Efficient Plastid Transformation in Arabidopsis.
Science.gov (United States) Yu, Qiguo; Lutz, Kerry Ann; Maliga, Pal 2017-09-01 Plastid transformation is routine in tobacco ( Nicotiana tabacum ) but 100-fold less frequent in Arabidopsis ( Arabidopsis thaliana ), preventing its use in plastid biology. A recent study revealed that null mutations in ACC2 , encoding a plastid-targeted acetyl-coenzyme A carboxylase, cause hypersensitivity to spectinomycin. We hypothesized that plastid transformation efficiency should increase in the acc2 background, because when ACC2 is absent, fatty acid biosynthesis becomes dependent on translation of the plastid-encoded ACC β-carboxylase subunit. We bombarded ACC2 -defective Arabidopsis leaves with a vector carrying a selectable spectinomycin resistance ( aadA ) gene and gfp , encoding the green fluorescence protein GFP. Spectinomycin-resistant clones were identified as green cell clusters on a spectinomycin medium. Plastid transformation was confirmed by GFP accumulation from the second open reading frame of a polycistronic messenger RNA, which would not be translated in the cytoplasm. We obtained one to two plastid transformation events per bombarded sample in spectinomycin-hypersensitive Slavice and Columbia acc2 knockout backgrounds, an approximately 100-fold enhanced plastid transformation frequency. Slavice and Columbia are accessions in which plant regeneration is uncharacterized or difficult to obtain. A practical system for Arabidopsis plastid transformation will be obtained by creating an ACC2 null background in a regenerable Arabidopsis accession. The recognition that the duplicated ACCase in Arabidopsis is an impediment to plastid transformation provides a rational template to implement plastid transformation in related recalcitrant crops. © 2017 American Society of Plant Biologists. All Rights Reserved. 19. tt: Treelet transform with Stata DEFF Research Database (Denmark) Gorst-Rasmussen, Anders 2012-01-01 for much of the variation in the original data. 
However, in contrast to principal component analysis, the treelet transform produces sparse components. This can greatly simplify interpretation. I describe the tt Stata add-on for performing the treelet transform. The add-on includes a Mata implementation... 20. Sarcomatous transformation of nasopharyngeal angiofibroma International Nuclear Information System (INIS) Chen, K.T.; Bauer, F.W. 1982-01-01 A case of fibrosarcoma arising in a recurrent nasopharyngeal angiofibroma 18 years after radiation therapy is described. A review of the medical literature revealed two other documented cases of sarcomatous transformation of angiofibroma, and in both, the angiofibromas had also been irradiated before the sarcomatous transformation. These occurrences should caution against the indiscriminate application of radiation therapy in nasopharyngeal angiofibromas
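Entry 8 above ("Do You Transform Back Correctly?") concerns a common pitfall: when a regression is fitted to log(y), exponentiating the fitted value estimates the conditional median, not the conditional mean. A minimal sketch of the bias and the standard lognormal correction exp(σ̂²/2) — the model, coefficients and sample size here are illustrative, and the correction assumes normally distributed errors on the log scale:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.uniform(0, 1, n)
sigma = 0.8
# true model: log(y) = 1 + 2x + eps, with eps ~ N(0, sigma^2)
y = np.exp(1 + 2 * x + rng.normal(0, sigma, n))

# ordinary least squares on the log scale
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
resid = np.log(y) - X @ beta
s2 = resid.var(ddof=2)          # estimate of sigma^2

x0 = 0.5
naive = np.exp(beta[0] + beta[1] * x0)       # estimates the conditional median
corrected = naive * np.exp(s2 / 2)           # lognormal mean correction
true_mean = np.exp(1 + 2 * x0 + sigma**2 / 2)
```

Here the naive back-transform undershoots the true conditional mean by the factor exp(-σ²/2) ≈ 0.73, while the corrected estimate recovers it; Duan's nonparametric "smearing" estimator is the usual alternative when the log-scale errors are not normal.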
https://balbhartisolutions.com/maharashtra-board-class-11-physics-solutions-chapter-13/
Balbharti Maharashtra State Board 11th Physics Textbook Solutions Chapter 13 Electromagnetic Waves and Communication System Textbook Exercise Questions and Answers. 1. Choose the correct option. Question 1. The EM wave emitted by the Sun and responsible for heating the Earth’s atmosphere due to the greenhouse effect is (A) Infrared (B) X-ray (C) Microwave (D) Visible light (A) Infrared Question 2. Earth’s atmosphere is richest in (A) UV (B) IR (C) X-ray (D) Microwaves (B) IR Question 3. How does the frequency of a beam of ultraviolet light change when it travels from air into glass? (A) depends on the values of μ and ε (B) increases (C) decreases (D) remains same (D) remains same Question 4. The direction of EM wave is given by (A) $$\bar{E}$$ × $$\bar{B}$$ (B) $$\bar{E}$$.$$\bar{B}$$ (C) along $$\bar{E}$$ (D) along $$\bar{B}$$ (A) $$\bar{E}$$ × $$\bar{B}$$ Question 5. The maximum distance up to which TV transmission from a TV tower of height h can be received is proportional to (A) h½ (B) h (C) h3/2 (D) h² (A) h½ Question 6. The waves used by artificial satellites for communication purposes are (A) Microwave (D) X-rays (A) Microwave Question 7. If a TV telecast is to cover a radius of 640 km, what should be the height of the transmitting antenna? (A) 32000 m (B) 53000 m (C) 42000 m (D) 55000 m (A) 32000 m Question 1. State two characteristics of an EM wave. i. The electric and magnetic fields, $$\vec{E}$$ and $$\vec{B}$$, are always perpendicular to each other and also to the direction of propagation of the EM wave. Thus, EM waves are transverse waves. ii. The cross product ($$\vec{E}$$ × $$\vec{B}$$) gives the direction in which the EM wave travels and in which it carries energy. Question 2. Why are microwaves used in radar? Microwaves have short wavelengths, so they can be sent as narrow, directed beams and are reflected well by distant objects; hence they are used in radar systems for identifying the location of distant objects like ships, aeroplanes etc. Question 3.
What are EM waves? Waves that are caused by the acceleration of charged particles and consist of electric and magnetic fields vibrating sinusoidally at right angles to each other and to the direction of propagation are called EM waves or EM radiation. Question 4. How are EM waves produced? 1. According to quantum theory, an electron, while orbiting around the nucleus in a stable orbit, does not emit EM radiation even though it undergoes acceleration. 2. It will emit EM radiation only when it falls from an orbit of higher energy to one of lower energy. 3. EM waves (such as X-rays) are produced when fast-moving electrons hit a target of high atomic number (such as molybdenum, copper, etc.). 4. An electric charge at rest has an electric field in the region around it but has no magnetic field. 5. When the charge moves, it produces both electric and magnetic fields. 6. If the charge moves with a constant velocity, the magnetic field will not change with time and hence it cannot produce an EM wave. 7. But if the charge is accelerated, both the magnetic and electric fields change with space and time and an EM wave is produced. 8. Thus, an oscillating charge emits an EM wave which has the same frequency as that of the oscillation of the charge. Question 5. Can we produce a pure electric or magnetic wave in space? Why? No. A time-varying electric field necessarily induces a magnetic field, and a time-varying magnetic field necessarily induces an electric field. Hence a wave consisting of a “pure” electric field cannot exist in space, and the same can be said for a “pure” magnetic wave. Question 6. Does an ordinary electric lamp emit EM waves? Yes, an ordinary electric lamp emits EM waves. Question 7. Why do light waves travel in vacuum whereas sound waves cannot? Light waves are electromagnetic waves, which can travel in vacuum, whereas sound waves travel due to the vibration of the particles of a medium. Without any particles present (as in a vacuum), no vibrations can be produced. Hence, sound waves cannot travel through a vacuum. Question 8. What are ultraviolet rays?
Give two uses.
Answer:
Production:
1. Ultraviolet rays can be produced by mercury vapour lamps, electric sparks and carbon arc lamps.
2. They can also be obtained by striking an electrical discharge in hydrogen and xenon gas tubes.
3. The Sun is the most important natural source of ultraviolet rays, most of which are absorbed by the ozone layer in the Earth's atmosphere.
Uses:
1. Ultraviolet rays destroy germs and bacteria and hence are used for sterilizing surgical instruments and for purification of water.
2. They are used in burglar alarms and security systems.
3. They are used to distinguish real and fake gems.

Question 9.
What are radio waves? Give two of their uses.
Answer:
Production:
1. Radio waves are produced by the accelerated motion of charges in a conducting wire. The frequency of the waves produced by the circuit depends upon the magnitudes of the inductance and the capacitance.
2. Thus, by choosing suitable values of the inductance and the capacitance, radio waves of the desired frequency can be produced.
Uses:
1. Radio waves are used for wireless communication.
2. They are used for radio broadcasting and transmission of TV signals.
3. Cellular phones use radio waves to transmit voice communication in the ultra high frequency (UHF) band.

Question 10.
Name the most harmful radiation entering the Earth's atmosphere from outer space.

Question 11.
Give reasons for the following:
i. Long distance radio broadcasts use short wave bands.
ii. Satellites are used for long distance TV transmission.
Answer:
i. Long distance radio broadcasts use short wave bands because only electromagnetic waves in the frequency range of the short wave bands are reflected by the ionosphere.
ii. a. It is necessary to use satellites for long distance TV transmission because television signals are of high frequencies and high energies, and thus are not reflected by the ionosphere.
b. Hence, satellites are helpful in long distance TV transmission.

Question 12.
Name the three basic units of any communication system.
Answer:
The three basic (essential) elements of every communication system are the transmitter, the communication channel and the receiver.

Question 13.
What is a carrier wave?
Answer:
The high frequency waves on which the signals to be transmitted are superimposed are called carrier waves.

Question 14.
Why are high frequency carrier waves used for transmission of audio signals?
Answer:
An audio signal has low frequency (< 20 kHz), and low frequency signals cannot be transmitted over large distances. Because of this, high frequency carrier waves are used for transmission.

Question 15.
What is modulation?
Answer:
The signals in a communication system (e.g. music, speech etc.) are low frequency signals and cannot be transmitted over large distances. In order to transmit a signal over large distances, it is superimposed on a high frequency wave (called a carrier wave). This process is called modulation.

Question 16.
What is meant by amplitude modulation?
Answer:
When the amplitude of the carrier wave is varied in accordance with the modulating signal, the process is called amplitude modulation.

Question 17.
What is meant by noise?
Answer:
1. A random unwanted signal is called noise.
2. The source generating the noise may be located inside or outside the system.
3. Efforts should be made to minimize the noise level in a communication system.

Question 18.
What is meant by bandwidth?
Answer:
The bandwidth of an electronic circuit is the range of frequencies over which it operates efficiently.

Question 19.
What is demodulation?
Answer:
The process of regaining the signal from a modulated wave is called demodulation. This is the reverse process of modulation.

Question 20.
What type of modulation is required for television broadcast?
Answer:
Amplitude modulation is required for television broadcast.

Question 21.
How does the effective power radiated by an antenna vary with wavelength?
Answer:
1. To transmit a signal, an antenna or an aerial is needed.
2. The power radiated from a linear antenna of length l is P ∝ ($$\frac {l}{λ}$$)², where λ is the wavelength of the signal.
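Question 16 above defines amplitude modulation in words; a minimal numerical sketch of the idea follows. This is a generic illustration: the carrier frequency, modulating frequency and modulation index are made-up example values, not from the textbook.

```python
import math

def am_signal(t, fc=100e3, fm=1e3, ac=1.0, m=0.5):
    """Amplitude modulation: the carrier amplitude ac is varied in step
    with the modulating signal cos(2*pi*fm*t); m is the modulation index."""
    envelope = ac * (1.0 + m * math.cos(2 * math.pi * fm * t))
    return envelope * math.cos(2 * math.pi * fc * t)

# Sample one period of the modulating signal (1 ms at fm = 1 kHz).
samples = [am_signal(i * 1e-6) for i in range(1000)]
peak = max(abs(s) for s in samples)
print(peak)   # 1.5 = ac * (1 + m), the maximum of the envelope
```

At modulation index m the envelope swings between ac(1 − m) and ac(1 + m), which is why any disturbance of the amplitude directly corrupts the message (see Question 24 below on noisy reception).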
Question 22.
Why should broadcasting programs use different frequencies?
Answer:
If broadcasting programs ran on the same frequency, the information carried by these waves would get mixed up with each other. Hence, different broadcasting programs should run on different frequencies.

Question 23.
Explain the necessity of a carrier wave in communication.
Answer:
1. Without a carrier wave, the input signals could be carried by very low frequency electromagnetic waves, but these would need considerable amplification in order to be transmitted.
2. The input signals themselves do not have much power and would need a fairly large antenna in order to transmit the information.
3. Hence, it is necessary to superimpose the input signal on a carrier wave, as this requires less power to transmit the information.

Question 24.
Why does amplitude modulation give noisy reception?
Answer:
i. In amplitude modulation, the amplitude of the carrier is varied in accordance with the message signal.
ii. The higher the amplitude, the greater is the magnitude of the signal. So if, for any reason, the magnitude of the signal changes, it leads to a variation in the amplitude of the signal. It is therefore easy for noise to disturb an amplitude modulated signal.

Question 25.
Explain why modulation is needed.
Answer:
Modulation helps in avoiding the mixing up of signals from different transmitters, as different carrier wave frequencies can be allotted to different transmitters. Without the use of these waves, the audio signals, if transmitted directly by different transmitters, would get mixed up.

3. Solve the numerical problems.

Question 1.
Calculate the frequency in MHz of a radio wave of wavelength 250 m. Remember that the speed of all EM waves in vacuum is 3.0 × 10^8 m/s.
Solution:
Given: λ = 250 m, c = 3 × 10^8 m/s
To find: Frequency (ν)
Formula: c = νλ
Calculation: From formula, ν = $$\frac {c}{λ}$$ = $$\frac {3×10^8}{250}$$ = 1.2 × 10^6 Hz = 1.2 MHz

Question 2.
Calculate the wavelength in nm of an X-ray wave of frequency 2.0 × 10^18 Hz.
Solution:
Given: c = 3 × 10^8 m/s, ν = 2 × 10^18 Hz
To find: Wavelength (λ)
Formula: c = νλ
Calculation: From formula, λ = $$\frac {c}{ν}$$ = $$\frac {3×10^8}{2×10^{18}}$$ = 1.5 × 10^-10 m = 0.15 nm

Question 3.
The speed of light is 3 × 10^8 m/s. Calculate the frequency of red light of wavelength 6.5 × 10^-7 m.
Solution:
Given: c = 3 × 10^8 m/s, λ = 6.5 × 10^-7 m
To find: Frequency (ν)
Formula: c = νλ
Calculation: From formula, ν = $$\frac {c}{λ}$$ = $$\frac {3×10^8}{6.5×10^{-7}}$$ = 4.6 × 10^14 Hz

Question 4.
Calculate the wavelength of a microwave of frequency 8.0 GHz.
Solution:
Given: ν = 8 GHz = 8 × 10^9 Hz, c = 3 × 10^8 m/s
To find: Wavelength (λ)
Formula: c = νλ
Calculation: From formula, λ = $$\frac {c}{ν}$$ = $$\frac {3×10^8}{8×10^9}$$ = 3.75 × 10^-2 m = 3.75 cm

Question 5.
In an EM wave the electric field oscillates sinusoidally at a frequency of 2 × 10^10 Hz. What is the wavelength of the wave?
Solution:
Given: ν = 2 × 10^10 Hz, c = 3 × 10^8 m/s
To find: Wavelength (λ)
Formula: c = νλ
Calculation: From formula, λ = $$\frac {c}{ν}$$ = $$\frac {3×10^8}{2×10^{10}}$$ = 1.5 × 10^-2 m

Question 6.
The amplitude of the magnetic field part of a harmonic EM wave in vacuum is B₀ = 5 × 10^-7 T. What is the amplitude of the electric field part of the wave?
Solution:
Given: B₀ = 5 × 10^-7 T, c = 3 × 10^8 m/s
To find: Amplitude of electric field (E₀)
Formula: c = $$\frac {E_0}{B_0}$$
Calculation: From formula, E₀ = c × B₀ = 3 × 10^8 × 5 × 10^-7 = 150 V/m

Question 7.
A TV tower has a height of 200 m. How much population is covered by the TV transmission if the average population density around the tower is 1000/km²? (Radius of the Earth = 6.4 × 10^6 m)
Solution:
Given: h = 200 m, population density (n) = 1000/km² = 1000 × 10^-6/m² = 10^-3/m², R = 6.4 × 10^6 m
To find: Population covered
Formulae: i. A = πd² = π($$\sqrt{2Rh}$$)² = 2πRh
ii. Population covered = nA
Calculation: From formula (i), A = 2πRh = 2 × 3.142 × 6.4 × 10^6 × 200 ≈ 8 × 10^9 m²
From formula (ii), Population covered = nA = 10^-3 × 8 × 10^9 = 8 × 10^6

Question 8.
Height of a TV tower is 600 m at a given place. Calculate its coverage range if the radius of the Earth is 6400 km. What should be the height to get double the coverage area?
Solution:
Given: h = 600 m, R = 6.4 × 10^6 m
To find: Range (d); height to get double the coverage area (h')
Formula: d = $$\sqrt{2hR}$$
Calculation: From formula, d = $$\sqrt{2×600×6.4×10^6}$$ = 87.6 × 10³ m = 87.6 km
Now, for A' = 2A:
π(d')² = 2(πd²)
∴ (d')² = 2d²
From formula, h' = $$\frac{(d')^2}{2R}$$ = $$\frac{2d^2}{2R}$$ = 2 × h ……….. (∵ h = $$\frac{d^2}{2R}$$)
= 2 × 600 = 1200 m

Question 9.
A transmitting antenna at the top of a tower has a height of 32 m and that of the receiving antenna is 50 m. What is the maximum distance between them for satisfactory communication in line of sight mode? Given radius of Earth is 6.4 × 10^6 m.
Solution:
Given: h_t = 32 m, h_r = 50 m, R = 6.4 × 10^6 m
To find: Maximum distance or range (d)
Formula: d = $$\sqrt{2Rh}$$
Calculation: From formula,
d_t = $$\sqrt{2Rh_t}$$ = $$\sqrt{2×6.4×10^6×32}$$ = 20.238 × 10³ m = 20.238 km
d_r = $$\sqrt{2Rh_r}$$ = $$\sqrt{2×6.4×10^6×50}$$ = 25.298 × 10³ m = 25.298 km
Now, d = d_t + d_r = 20.238 + 25.298 = 45.536 km

11th Physics Digest Chapter 13 Electromagnetic Waves and Communication System Intext Questions and Answers

Can you recall? (Textbook page no. 229)

Question 1.
i. What is a wave?
Answer: A wave is an oscillatory disturbance which travels through a medium without change in its form.
ii. What is the difference between longitudinal and transverse waves?
Answer:
a. Transverse wave: A wave in which particles of the medium vibrate in a direction perpendicular to the direction of propagation of the wave is called a transverse wave.
b. Longitudinal wave: A wave in which particles of the medium vibrate in a direction parallel to the direction of propagation of the wave is called a longitudinal wave.
iii. What are electric and magnetic fields and what are their sources?
Answer:
a.
The electric field at a point is the force, per unit charge, that a test charge experiences when placed at that point in the presence of the given charge. Its source is an electric charge.
b. A magnetic field is produced around a magnet or around a current carrying conductor.
iv. By which mechanism is heat lost by hot bodies?
Answer: Hot bodies lose heat in the form of radiation.

Question 2.
What are Lenz's law, Ampere's law and Faraday's law?
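The relations used throughout the numerical problems above (c = νλ, E₀ = cB₀, and the line-of-sight range d = √(2Rh)) can be bundled into a few lines of Python. This sketch is not part of the textbook; the input values are taken from the problems.

```python
import math

C = 3.0e8   # speed of EM waves in vacuum, m/s
R = 6.4e6   # radius of the Earth, m

def frequency(wavelength):
    """c = nu * lambda, solved for the frequency."""
    return C / wavelength

def e_amplitude(b0):
    """Field amplitudes of an EM wave in vacuum: E0 = c * B0."""
    return C * b0

def los_range(h):
    """Line-of-sight range of an antenna of height h: d = sqrt(2*R*h)."""
    return math.sqrt(2 * R * h)

print(round(frequency(250) / 1e6, 1))            # Numerical Q1: 1.2 (MHz)
print(round(e_amplitude(5e-7)))                  # Numerical Q6: 150 (V/m)
print(round(los_range(600) / 1e3, 1))            # Numerical Q8: 87.6 (km)
print(round((los_range(32) + los_range(50)) / 1e3, 1))   # Numerical Q9: 45.5 (km)
```

The printed values reproduce the textbook answers to within rounding.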
https://math.stackexchange.com/questions/1252051/uniformly-distributed-random-varibles
# Uniformly Distributed Random Variables

Question: Suppose $X$ is a uniformly distributed random variable with possible values $1,2, \ldots, 10$. Compute the expected value and variance of $X$.

I have started with making a column ($x$ on the left and $y:=P(X=x)$ on the right);

EXPECTED VALUE:
$$\begin{matrix} X & Y &\ \\ 1 & \frac{1}{10} & (1\times.10) +\\ 2 & \frac{1}{10} & (2\times.10) +\\ \vdots &\vdots&\vdots\\ 10 & \frac{1}{10}& (10\times.10)=\\ \ & \ & =5.5\end{matrix}$$

VARIANCE:
$$((.10)-5.5)^2 + ((.20)-5.5)^2 +\cdots+ ((1)-5.5)^2 = 24.59$$

Is this the correct way of handling uniformly distributed random variables?

• What would "Y" be? – user228113 Apr 26 '15 at 1:28
• Also, you might check here to learn the proper way to typeset your questions. – user228113 Apr 26 '15 at 1:30
• Y is the probability; therefore 1/10 since uniformly distributed – user234475 Apr 26 '15 at 1:32

If $m$ is the expected value of $X$, the variance of $X$ is defined to be the expected value of $(X-m)^2$. In your case $$(1-5.5)^2\cdot0.10+(2-5.5)^2\cdot 0.10 + \cdots +(10-5.5)^2\cdot0.10=\cdots$$ Another way to calculate the variance of $X$ is calculating the expected value of $X^2$, say $k:=1^2\cdot0.10+2^2\cdot0.10+\cdots+10^2\cdot0.10$, and considering $k-m^2$.
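The computation described in the answer is easy to check numerically; a quick Python sketch (not part of the original thread):

```python
xs = list(range(1, 11))   # possible values of X, each with probability 1/10
n = len(xs)

mean = sum(xs) / n                                # E[X]
var = sum((x - mean) ** 2 for x in xs) / n        # E[(X - m)^2]
var_alt = sum(x * x for x in xs) / n - mean ** 2  # E[X^2] - m^2

print(mean, var, var_alt)   # 5.5 8.25 8.25
```

Both routes give 8.25, which also matches the closed form (n² − 1)/12 for a discrete uniform variable on 1, …, n.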
http://link.springer.com/article/10.2165%2F00019053-200017010-00003
Volume 17, Issue 1, pp 37–52

# Economic Evaluation of Specific Immunotherapy Versus Symptomatic Treatment of Allergic Rhinitis in Germany

## Abstract

Objective: To use published data to compare the economic consequences of specific immunotherapy (SIT) lasting 3 years with those of continuous symptomatic treatment in patients with either pollen or mite allergy.

Design and setting: The evaluation was conducted from the following 3 perspectives in Germany: (i) society; (ii) healthcare system; and (iii) statutory health insurance (SHI) provider. A modelling approach was used which was based on secondary analysis of existing data. The follow-up period was 10 years. The break-even point of cumulated costs, their difference per patient and the additional cost per additional patient free from asthma symptoms [incremental cost-effectiveness ratio (ICER)] were used as target variables, each from the viewpoint of SIT. The types of costs were direct and indirect (society), direct (healthcare system) and those incurred by SHI (i.e. expenses). In the base-case analysis, the average values of the clinical parameters and average case-related costs/expenses were applied.

Main outcome measures and results: The break-even point was reached between year 6 and year 8 after the start of therapy, resulting in net savings of between 650 and 1190 deutschmarks (DM) per patient after 10 years. The ICERs of SIT were between -DM3640 and -DM7410, depending on study perspective and nature of the allergy (1990 values for symptomatic treatment and treatment of asthma, 1995 values for SIT; DM1 ≈ $US0.58). The sensitivity analysis demonstrated the robustness of the model and its results. First, all the independent variables of the model were varied. Secondly, the influence of the model variables was quantified using a deterministic model.
SIT was more likely to result in net savings than in additional costs. An economic parameter (cost for symptomatic treatment) had the highest influence on the results. Conclusions: This evaluation showed that SIT for 3 years is economically advantageous in patients who are allergic to pollen or mites and whose symptoms are inadequately controlled by continuous symptomatic treatment. After 10 years, the administration of SIT leads to net savings from the perspectives of society, the healthcare system and SHI (third-party payer) in Germany.
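For readers unfamiliar with the ICER used above: it is simply the cost difference divided by the effect difference between the two strategies. A generic sketch with made-up numbers, not the study's data:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of effect when switching from the old to the new therapy."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Illustrative only: if the new therapy costs 20000 units less overall and
# leaves 5 more patients free from symptoms, the ICER is negative, i.e. the
# new therapy both saves money and improves outcomes ("dominates").
print(icer(180000, 200000, 25, 20))   # -4000.0
```

A negative ICER, as reported for SIT above, means net savings per additional symptom-free patient.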
https://pos.sissa.it/299/094/
PoS(CENet2017)094

Interval T-S fuzzy modeling based on minimizing 1-norm on approximation error

X. Liu, S. Zhou, Z. Xiong

Abstract

As the data obtained in many practical applications tend to be uncertain or inaccurate, conventional modeling methods, which characterize this type of data by a deterministic model, have become undesirable. Taking linear programming, the T-S fuzzy model and some ideas from 1-norm minimization into consideration, a novel method for identifying an interval fuzzy model (INFUMO), consisting of upper and lower T-S fuzzy models (referred to as f^U and f^L), is studied in this paper. In order to solve the INFUMO, optimization problems based on minimizing the 1-norm of the approximation error corresponding to f^U and f^L are constructed. These optimization problems are then solved by linear programming, and the INFUMO is thus constructed. To demonstrate its effectiveness, the proposed method is applied to identify the interval T-S models of static and dynamic nonlinear models with noise. The proposed method can not only deal with uncertain data that would usually be modeled as a deterministic model, but also has better robustness.
http://math.stackexchange.com/questions/131392/why-one-would-want-to-normalize-a-matrix-by-dividing-it-by-its-frobenius-norm?answertab=active
# Why one would want to normalize a matrix by dividing it by its Frobenius norm?

I am currently reading a scientific paper about clustering of brain signals, which consist of long time series across many channels (each signal is a matrix of C channels by T time samples). In the preprocessing of their data, the authors normalize each signal matrix by dividing it with its Frobenius norm. My problem is that they don't even say why they do so... is this so obvious that I can't see it? Any thought? Thanks!

- This is difficult to answer without knowing more about what they do with the matrix once they normalize it. – Rahul Apr 13 '12 at 18:08
Clustering! :) Precisely, ascendant hierarchical clustering using Ward's method. – CTZStef Apr 13 '12 at 18:15

My educated guess is that they want to compare relative values, not absolute ones, for purposes of clustering. Perhaps they chose the Frobenius norm because it is easy to calculate.

edit: So the Frobenius norm was chosen out of computational considerations. It is easy to compute because it does not necessitate an SVD, as required by the spectral norm. Moreover, it is easy to update when expanding or reducing the matrix. Once you normalize something by its norm, it is obviously going to have a unit norm (i.e., lie on the unit sphere).

- I quote the paper: "Finally, each signal matrix was normalized through division by the Frobenius norm of the matrix. This operation transform each matrix to a point on the surface of the unit sphere in vector space R^CxT" Frobenius is only for easiness of computation? Why is it easier? – CTZStef Apr 13 '12 at 18:20
So it makes sense to compute its norm as a vector in $\mathbb R^{C\times T}$, which happens to be the Frobenius norm of the corresponding matrix. – Rahul Apr 13 '12 at 22:55
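Concretely, the normalization discussed in this thread treats the C×T matrix as one long vector and divides by its Euclidean length, which is why no SVD is needed. A minimal sketch with toy data:

```python
import math

def frobenius_norm(matrix):
    """Square root of the sum of squared entries: the 2-norm of the
    matrix flattened into a vector, computable without an SVD."""
    return math.sqrt(sum(x * x for row in matrix for x in row))

def normalize(matrix):
    """Scale the matrix so it lies on the unit sphere in R^(C*T)."""
    nrm = frobenius_norm(matrix)
    return [[x / nrm for x in row] for row in matrix]

signal = [[3.0, 4.0], [0.0, 0.0]]   # toy 2-channel, 2-sample "signal"
unit = normalize(signal)
print(frobenius_norm(unit))          # ≈ 1.0 (up to rounding)
```

After this step, distances between signals reflect only their relative shapes, not their overall magnitudes, which is the point made in the accepted answer.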
https://link.springer.com/chapter/10.1007/978-3-319-52153-4_9
CT-RSA 2017: Topics in Cryptology – CT-RSA 2017 pp 149-164 # Surnaming Schemes, Fast Verification, and Applications to SGX Technology • Dan Boneh • Shay Gueron Conference paper Part of the Lecture Notes in Computer Science book series (LNCS, volume 10159) ## Abstract We introduce a new cryptographic primitive that we call surnaming, which is closely related to digital signatures, but has different syntax and security requirements. While surnaming can be constructed from a digital signature, we show that a direct construction can be somewhat simpler. We explain how surnaming plays a central role in Intel’s new Software Guard Extensions (SGX) technology, and present its specific surnaming implementation as a special case. These results explain why SGX does not require a PKI or pinned keys for authorizing enclaves. SGX motivates an interesting question in digital signature design: for reasons explained in the paper, it requires a digital signature scheme where verification must be as fast as possible, the public key must be short, but signature size is less important. We review the RSA-based method currently used in SGX and evaluate its performance. Finally, we propose a new hash-based signature scheme where verification time is much faster than the RSA scheme used in SGX. Our scheme can be scaled to provide post-quantum security, thus offering a viable alternative to the current SGX surnaming system, for a time when post-quantum security becomes necessary. ## Keywords Digital signatures Fast verification Software Guard Extensions (SGX) technology Post-quantum secure signatures ## Notes ### Acknowledgments The first author is supported by NSF, DARPA, the Simons foundation, and a grant from ONR. Opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA. 
The second author is supported by the PQCRYPTO project, which is partially funded by the European Commission Horizon 2020 research Programme, grant #645622, by the Blavatnik Interdisciplinary Cyber Research Center (ICRC) at the Tel Aviv University, and by the ISRAEL SCIENCE FOUNDATION (grant No. 1018/16).
http://www.nag.com/numeric/cl/nagdoc_cl23/html/E02/e02bdc.html
e02 Chapter Contents
e02 Chapter Introduction
NAG C Library Manual

# NAG Library Function Document nag_1d_spline_intg (e02bdc)

## 1  Purpose

nag_1d_spline_intg (e02bdc) computes the definite integral of a cubic spline from its B-spline representation.

## 2  Specification

#include
#include

void nag_1d_spline_intg (Nag_Spline *spline, double *integral, NagError *fail)

## 3  Description

nag_1d_spline_intg (e02bdc) computes the definite integral of the cubic spline $s(x)$ between the limits $x=a$ and $x=b$, where $a$ and $b$ are respectively the lower and upper limits of the range over which $s(x)$ is defined. It is assumed that $s(x)$ is represented in terms of its B-spline coefficients $c_i$, for $i=1,2,\dots,\bar{n}+3$, and (augmented) ordered knot set $\lambda_i$, for $i=1,2,\dots,\bar{n}+7$, with $\lambda_i=a$, for $i=1,2,3,4$, and $\lambda_i=b$, for $i=\bar{n}+4,\dots,\bar{n}+7$ (see nag_1d_spline_fit_knots (e02bac)), i.e.,

$$s(x)=\sum_{i=1}^{q} c_i N_i(x).$$

Here $q=\bar{n}+3$, $\bar{n}$ is the number of intervals of the spline and $N_i(x)$ denotes the normalized B-spline of degree $3$ (order $4$) defined upon the knots $\lambda_i,\lambda_{i+1},\dots,\lambda_{i+4}$. The method employed uses the formula given in Section 3 of Cox (1975).

nag_1d_spline_intg (e02bdc) can be used to determine the definite integrals of cubic spline fits and interpolants produced by nag_1d_spline_interpolant (e01bac), nag_1d_spline_fit_knots (e02bac) and nag_1d_spline_fit (e02bec).

## 4  References

Cox M G (1975) An algorithm for spline interpolation J. Inst. Math. Appl.
15 95–108

## 5  Arguments

1:     spline – Nag_Spline *

Pointer to structure of type Nag_Spline with the following members:

n – Integer (Input)

On entry: $\stackrel{-}{n}+7$, where $\stackrel{-}{n}$ is the number of intervals of the spline (which is one greater than the number of interior knots, i.e., the knots strictly within the range $a$ to $b$) over which the spline is defined.

Constraint: $\mathbf{spline}\mathbf{\to }\mathbf{n}\ge 8$.

lamda – double * (Input)

On entry: a pointer to which memory of size $\mathbf{spline}\mathbf{\to }\mathbf{n}$ must be allocated. $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[j-1\right]$ must be set to the value of the $j$th member of the complete set of knots, ${\lambda }_{j}$ for $j=1,2,\dots ,\stackrel{-}{n}+7$.

Constraint: the ${\lambda }_{j}$ must be in non-decreasing order with $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[\mathbf{spline}\mathbf{\to }\mathbf{n}-4\right]>\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[3\right]$ and satisfy $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[0\right]=\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[1\right]=\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[2\right]=\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[3\right]$ and $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[\mathbf{spline}\mathbf{\to }\mathbf{n}-4\right]=\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[\mathbf{spline}\mathbf{\to }\mathbf{n}-3\right]=\text{}\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[\mathbf{spline}\mathbf{\to }\mathbf{n}-2\right]=\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[\mathbf{spline}\mathbf{\to }\mathbf{n}-1\right]$

c – double * (Input)

On entry: a pointer to which memory of size $\mathbf{spline}\mathbf{\to }\mathbf{n}-4$ must be allocated. $\mathbf{spline}\mathbf{\to }\mathbf{c}$ holds the coefficient ${c}_{i}$ of the B-spline ${N}_{i}\left(x\right)$, for $i=1,2,\dots ,\stackrel{-}{n}+3$.
2:     integral – double * (Output)

On exit: the value of the definite integral of $s\left(x\right)$ between the limits $x=a$ and $x=b$, where $a={\lambda }_{4}$ and $b={\lambda }_{\stackrel{-}{n}+4}$.

3:     fail – NagError * (Input/Output)

The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_INT_ARG_LT

On entry, $\mathbf{spline}\mathbf{\to }\mathbf{n}=〈\mathit{\text{value}}〉$. Constraint: $\mathbf{spline}\mathbf{\to }\mathbf{n}\ge 8$.

NE_KNOTS_CONS

On entry, the knots must satisfy the following constraints: $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[\mathbf{spline}\mathbf{\to }\mathbf{n}-4\right]>\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[3\right]$, $\mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[j\right]\ge \mathbf{spline}\mathbf{\to }\mathbf{lamda}\left[j-1\right]$, for $j=1,2,\dots ,\mathbf{spline}\mathbf{\to }\mathbf{n}-1$, with equality in the cases $j=1,2,3$, $\mathbf{spline}\mathbf{\to }\mathbf{n}-3$, $\mathbf{spline}\mathbf{\to }\mathbf{n}-2$ and $\mathbf{spline}\mathbf{\to }\mathbf{n}-1$.

## 7  Accuracy

The rounding errors are such that the computed value of the integral is exact for a slightly perturbed set of B-spline coefficients ${c}_{i}$ differing in a relative sense from those supplied by no more than .

## 8  Further Comments

Under normal usage, the call to nag_1d_spline_intg (e02bdc) will follow a call to nag_1d_spline_interpolant (e01bac), nag_1d_spline_fit_knots (e02bac) or nag_1d_spline_fit (e02bec). In that case, the structure spline will have been set up correctly for input to nag_1d_spline_intg (e02bdc). The time taken is approximately proportional to $\stackrel{-}{n}+7$.
## 9  Example

This example determines the definite integral over the interval $0\le x\le 6$ of a cubic spline having $6$ interior knots at the positions $\lambda =1$, $3$, $3$, $3$, $4$, $4$, the $8$ additional knots $0$, $0$, $0$, $0$, $6$, $6$, $6$, $6$, and the $10$ B-spline coefficients $10$, $12$, $13$, $15$, $22$, $26$, $24$, $18$, $14$, $12$.

The input data items (using the notation of Section 5) comprise the following values in the order indicated:

$\stackrel{-}{n}+7$
${\mathbf{spline}}\mathbf{.}\mathbf{lamda}\left[j-1\right]$, for $j=1,2,\dots ,{\mathbf{spline}}\mathbf{.}\mathbf{n}$
${\mathbf{spline}}\mathbf{.}\mathbf{c}\left[j-1\right]$, for $j=1,2,\dots ,{\mathbf{spline}}\mathbf{.}\mathbf{n}-4$

The example program is written in a general form that will enable the definite integral of a cubic spline having an arbitrary number of knots to be computed. Any number of datasets may be supplied. The only changes required to the program relate to the size of ${\mathbf{spline}}\mathbf{.}\mathbf{lamda}$ and the storage allocated to ${\mathbf{spline}}\mathbf{.}\mathbf{c}$ within the structure spline.

### 9.1  Program Text

Program Text (e02bdce.c)

### 9.2  Program Data

Program Data (e02bdce.d)

### 9.3  Program Results

Program Results (e02bdce.r)
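As a cross-check of the Section 9 example, the full-range integral can be sketched in a few lines of Python. This is not the NAG implementation (which uses the method of Cox (1975)); it relies on the standard identity that the integral of the normalized cubic B-spline $N_i$ over the whole range is $(\lambda_{i+4}-\lambda_i)/4$, so the spline integral reduces to a weighted sum of the coefficients. The function name `cubic_spline_integral` is illustrative only.

```python
def cubic_spline_integral(lamda, c):
    """Integral of a cubic spline over its full range [lamda[3], lamda[-4]],
    given the augmented knot set and the B-spline coefficients.

    Uses the identity: integral of the normalized cubic B-spline N_i over
    the whole range equals (lamda[i+4] - lamda[i]) / 4.
    """
    order = 4  # cubic spline: degree 3, order 4
    return sum(ci * (lamda[i + order] - lamda[i]) / order
               for i, ci in enumerate(c))

# Data from the Section 9 example: nbar = 7 intervals, nbar + 7 = 14 knots,
# nbar + 3 = 10 coefficients.
knots = [0, 0, 0, 0, 1, 3, 3, 3, 4, 4, 6, 6, 6, 6]
coeffs = [10, 12, 13, 15, 22, 26, 24, 18, 14, 12]
print(cubic_spline_integral(knots, coeffs))  # 100.0
```

A quick sanity check of the identity: with all coefficients equal to 1 the spline is the constant function 1, and the routine returns the length of the range, $b-a=6$.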
https://math.stackexchange.com/questions/434964/relate-to-gp-1-3-9-differentiating-x-i-1-dots-x-i-k-result-spane-i
# Relate to GP 1.3.9 - Differentiating $x_{i_1}, \dots, x_{i_k}$ results in span($e_{i_1}, \dots, e_{i_k}$)?

I started thinking about this question when attempting exercise 1.3.9 in Guillemin and Pollack's Differential Topology. Consider the projection $\varphi: \mathbb{R}^N \rightarrow \mathbb{R}^k$: $$(x_1, \dots, x_N) \mapsto (x_{i_1}, \dots, x_{i_k})$$ Differentiating: $$d \varphi: T_x(X) \mapsto \text{ span}(e_{i_1}, \dots, e_{i_k})$$ So just to confirm - does differentiating $x_{i_1}, \dots, x_{i_k}$ result in span($e_{i_1}, \dots, e_{i_k}$)?

A projection $\pi: \mathbb{R}^n \rightarrow \mathbb{R}^k$ defined by $$\pi (x) = \sum_{j=1}^k [x \cdot e_{i_j}]e_j$$ will push $\frac{\partial}{\partial x^{i_j}}$ to $\frac{\partial}{\partial y^{i_j}}$ under the differential $d\pi$. Let $y$ be the coordinate system $(y^{i_j})$ for $j=1, \dots, k$. Consider: $$d\pi \left(\frac{\partial}{\partial x^{i_j}}\right) = \sum_{l=1}^k \frac{\partial y^{i_l}}{\partial x^{i_j}}\frac{\partial}{\partial y^{i_l}}$$ where $y = \pi (x)$, hence $y^{i_l} = x \cdot e_{i_l}= x^{i_l}$ and $\frac{\partial y^{i_l}}{\partial x^{i_j}}=\frac{\partial x^{i_l}}{\partial x^{i_j}} = \delta_{i_j,i_l} = \delta_{jl}$; thus, $$d\pi \left(\frac{\partial}{\partial x^{i_j}}\right) = \sum_{l=1}^k\delta_{jl}\frac{\partial}{\partial y^{i_l}} = \frac{\partial}{\partial y^{i_j}}$$ I think you want to identify $\frac{\partial}{\partial y^{i_j}}$ with $e_j$. Under that assumption I suppose your claim is true. On the other hand, if you wish to view that span as the copy of $\mathbb{R}^k$ embedded in $\mathbb{R}^n$ in the natural manner, by setting the complement of the $x^{i_j}$ coordinates to zero, then the span is literally accurate. However, I'm not sure what you intend, so I wrote this post. I suppose the formula for the embedded case is just $$\pi (x) = (0,...0,x^{i_1},0,...,0,x^{i_k},0,...0) \in \mathbb{R}^n.$$

• Your notation is very physical - I like it but have some trouble understanding it. At the beginning, what do you mean by $e_{i_j}$?
Thank you very very much. – WishingFish Jul 3 '13 at 2:40

• @WishingFish let me explain: $x \cdot e_{i_j}$ (the dot-product) selects the $i_j$-th cartesian component of $x$; then multiplying by the standard basis element $e_j \in \mathbb{R}^k$ places it in the $j$-th component of $\mathbb{R}^k$. This for me is the main issue: if you say the projection goes to $\mathbb{R}^k$, then this forces a non-standard coordinate system on $\mathbb{R}^k$. Not a big deal, it's just a relabeling of the standard numbering $1,2,...,k$ with $i_1,i_2,...,i_k$. – James S. Cook Jul 3 '13 at 11:19
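Since the projection $\pi$ is linear, its differential at any point is $\pi$ itself, and the claim $d\pi(\partial/\partial x^{i_j}) = \partial/\partial y^{j}$ amounts to saying the Jacobian is the selection matrix whose $j$-th row is $e_{i_j}^{T}$. The sketch below checks this numerically by finite differences; the helper names `proj` and `jacobian` are ours, purely for illustration.

```python
def proj(x, idx):
    """pi(x) = sum_j (x . e_{i_j}) e_j: select the coordinates listed in idx."""
    return [x[i] for i in idx]

def jacobian(f, x, eps=1e-7):
    """Numerical Jacobian of f at x by forward differences."""
    f0 = f(x)
    J = []
    for i in range(len(f0)):
        row = []
        for j in range(len(x)):
            xp = list(x)
            xp[j] += eps
            row.append((f(xp)[i] - f0[i]) / eps)
        J.append(row)
    return J

idx = [1, 3]                 # i_1 = 2, i_2 = 4 in 1-based notation
x = [0.2, 0.5, 0.7, 0.9]     # an arbitrary point in R^4
J = jacobian(lambda v: proj(v, idx), x)
# J is the selection matrix: row j is the standard basis vector e_{i_j} of R^4,
# i.e. d(pi) sends d/dx^{i_j} to d/dy^j, as derived in the answer.
```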
http://ikariam.wikia.com/wiki/Happiness
# Happiness

Every town has a "Happiness" level which indicates the satisfaction of citizens. A new town's happiness count will start out at a medium level, 196, yielding a population growth rate of 3.92 per hour. As the town's number of citizens increases, the happiness is diluted and the population growth rate gradually decreases. Without happiness, your population may remain stagnant and can even decrease. Population growth rate is governed by happiness, with the rate being a function of your happiness score.

## Buildings that affect happiness

The Tavern and the Museum play a major role in the growth of your happiness.

• The first building that you get that affects your happiness is the Tavern. The Tavern gives +12 happiness per level. Using the Tavern, you can provide wine to your citizens. The more wine you give, the more happiness you provide, with +60 happiness per extra load of wine provided.
• The second building you can build to increase happiness is the Museum. The Museum gives +20 happiness per level. Using the museum, you can make a Cultural Asset Treaty with another player to exchange exhibits, with each exchange giving +50 happiness.
• You can have a Museum in each town.
• You cannot have more treaties than the total number of levels of every Museum combined.
• The Governor's Residence reduces the corruption in colonies. Corruption affects happiness by reducing the effect of wine and Cultural Asset Treaties on happiness, so your first aim when you make a new colony is to get rid of any corruption that it will gain when built.

## Researches that affect happiness

• Researches that directly improve happiness are:

### Notes:

1. 1.0 1.1 In the Town hall of your Capital, this bonus shows up in the "Satisfaction" section with the crown symbol, not the microscope symbol.
2. 2.0 2.1 In your Town hall, this bonus shows up in the "Satisfaction" section with the microscope symbol.
## Governments that affect happiness

### Governments that can be good for your happiness

• Aristocracy - Reduces happiness in your colonies by only 3% because of corruption. This corruption can be removed if you raise your Governor's Residence up to a level that is higher than what is required for the number of colonies that you own.
• Dictatorship - Reduces happiness in all cities by 75.
• Oligarchy - Reduces happiness in your cities by 3% because of corruption. This corruption can be removed if you raise your Palace and/or Governor's Residence up to a level that is higher than what is required for the number of colonies that you own.
• Theocracy - Reduces happiness in every city that does not have a Temple by 20.

## Happiness Levels

There are five levels of satisfaction in your town. The higher the satisfaction, the more population you gain per hour. The levels are:

Happiness Level | Growth Rate | Visual effect
--- | --- | ---
Euphoric | 6 or higher | Citizens playing on the beach
Happy | 1.00 to 5.99 | Citizens playing on the beach
Neutral | 0 to 0.99 | Citizens are inside sulking
Unhappy | -0.01 to -1.00 | Citizens are inside sulking
Angry | -1.01 or lower | Citizens striking in front of your Town hall

## Formulas

### Happiness (Satisfaction)

As you play the game you will encounter a number of things that change this value. These events, good or bad, affect the growth rate of the town.
• The formula for happiness is:

Happiness = Basic Bonuses ( 196 ) +/- ( Government Bonuses/Penalties ) + ( Research Bonuses ) + ( Wine [1] = Tavern Base + Bonus ) + ( Culture [2] = Museum Base + Bonus ) - ( Population ) - ( Corruption = Corruption Rate * Bonuses )

• Or, in short:

Happiness = Bonuses - ( Population + Corruption Rate * Bonuses ) or Happiness = ( 1 - Corruption Rate ) * Bonuses - Population

The formula showing happiness over time, if bonuses and corruption remain constant during this time and the population keeps growing, is the following:

$h(t) = h_0 \times e^{-\frac{t}{50}}$, where

• $h_0$ is the happiness at an arbitrary starting point of time
• $h(t)$ is the happiness after $t$ hours
• $e$ is Euler's number.

#### Notes:

1. The Tavern Base is +12 happiness per level of expansion, while the Bonus is +60 happiness per extra load of wine distributed.
2. The Museum Base is +20 happiness per level of expansion, while the Bonus is +50 happiness per extra Cultural Good gained.

### Population growth

The formula that shows the connection between happiness and population growth generally is:

• Growth Rate = Happiness * 0.02

Additionally, the formula showing growth rate over time, if bonuses and corruption remain constant during this time and the population keeps growing, is the following:

$g(t) = g_0 \times e^{-\frac{t}{50}}$, where

• $g_0$ is the growth rate at an arbitrary starting point of time
• $g(t)$ is the growth rate after $t$ hours
• $e$ is Euler's number.
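The two decay formulas above can be sketched in a few lines of Python. The base happiness of 196 and the 0.02 factor come from this article; the function names are illustrative.

```python
import math

def happiness(t, h0):
    """Happiness after t hours, assuming constant bonuses and corruption."""
    return h0 * math.exp(-t / 50)

def growth_rate(t, h0):
    """Population growth rate (citizens per hour) after t hours,
    using Growth Rate = Happiness * 0.02."""
    return 0.02 * happiness(t, h0)

h0 = 196                   # base happiness of a fresh town
print(growth_rate(0, h0))  # 3.92 citizens per hour initially
print(happiness(24, h0))   # happiness after one day of uninterrupted growth
```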
### Population after certain time

The formula that shows what the population will be after that amount of time, if the Happiness doesn't change (i.e., the Museum or Tavern are not improved, the population doesn't reach the maximum, etc.), is:

$p(t) = p_0 + h_0 ( 1 - e^{-\frac{t}{50}} )$, where

• $p_0$ is the population at an arbitrary starting point of time
• $p(t)$ is the population after $t$ hours
• $h_0$ is the initial happiness at the same starting point of time as $p_0$
• $e$ is Euler's number.

### Time before the Town hall gets filled

You can calculate the number of hours before your Town hall gets full as:

$t = 50 ( \ln ( h_i ) - \ln ( h_f ) )$ or $t = 50 \ln \left( \frac{h_i}{h_f} \right)$, where

• $\ln$ is the natural logarithm function
• $h_i$ is the initial happiness
• $h_f$ is the final happiness - which can additionally be calculated as (current happiness + current population - town capacity)

If the town capacity is greater than or equal to $B_c = \text{Bonuses} \times ( 1 - \text{Corruption Rate} )$ [1], the Town hall will never be full, and the formula will result in an error. This is because the town's population can only approach (but never exactly reach) the marginal number $B_c$. Thus it will never fill its town capacity (TC) if the latter is greater than (or equal to) this marginal number $B_c$, i.e.:

$TC \ge B_c \Rightarrow \lim_{t \to \infty} p(t) = B_c$ and $p(t) < B_c \le TC$ for all $t \ge 0$

#### Note:

1. If both bonuses and corruption remain constant for a given time interval, then current happiness + current population is constant and always equal to $B_c = \text{Bonuses} \times ( 1 - \text{Corruption Rate} )$ for every time instant in this time interval. Abbreviating the above we get this formula:

$B_c = h(t) + p(t)$, where

• $B_c$ = Bonuses * ( 1 - Corruption Rate ),
• $h(t)$ = current happiness at time $t$ and
• $p(t)$ = current population at time $t$.
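Plugging the fill-time formula back into the population formula should land exactly on the town capacity, since $p(t) = p_0 + h_0(1 - h_f/h_0) = p_0 + h_0 - h_f$. A sketch of that check in Python; the numbers for $p_0$, $h_0$ and the capacity are made up for illustration, not game data.

```python
import math

def population(t, p0, h0):
    """Population after t hours, assuming constant bonuses and corruption."""
    return p0 + h0 * (1 - math.exp(-t / 50))

def hours_until_full(p0, h0, capacity):
    """t = 50 * ln(h_i / h_f), with h_f = h_i + p_0 - capacity."""
    hf = h0 + p0 - capacity
    if hf <= 0:
        # capacity >= h0 + p0 (= B_c when bonuses/corruption are constant):
        # the Town hall never fills, matching the note above.
        raise ValueError("Town hall will never be full")
    return 50 * math.log(h0 / hf)

p0, h0, capacity = 100, 196, 200   # illustrative values
t_full = hours_until_full(p0, h0, capacity)
# At t_full the population has reached the capacity exactly.
```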
### Time until happiness halves

If there are no changes to happiness, it halves every 34 hours, 39 minutes and 26 seconds (roughly 1 day and 11 hours), while the population grows by the same amount by which happiness diminishes. This is a very good guide to know which of your towns needs your attention first.
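The quoted half-life follows directly from $h(t) = h_0 e^{-t/50}$: solving $h_0/2 = h_0 e^{-t/50}$ gives $t = 50 \ln 2$ hours. A quick check that this matches the stated 34 h 39 min 26 s:

```python
import math

half_life_hours = 50 * math.log(2)           # 34.657... hours
total_seconds = round(half_life_hours * 3600)
hours, rem = divmod(total_seconds, 3600)
minutes, seconds = divmod(rem, 60)
print(hours, minutes, seconds)  # 34 39 26
```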
https://gmatclub.com/forum/is-a-0-1-a-3-a-0-2-1-a-86749.html
# Is a>0? (1) a^3-a<0 (2) 1-a^2<0

KocharRohit (Manager), 12 Nov 2009:

Is a>0?

(1) a^3-a<0
(2) 1-a^2<0

Bunuel (Math Expert), 12 Nov 2009:

Is a>0?

(1) $$a^3-a<0$$ --> $$a*(a^2-1)<0$$ Two cases:

A. The first factor is negative and the second positive: $$a<0$$ and $$a^2-1>0$$ (which means $$a<-1$$ or $$a>1$$).
Since $$a<0$$, the range would be $$a<-1$$.

B. The first factor is positive and the second negative: $$a>0$$ and $$a^2-1<0$$ (which means $$-1<a<1$$). Since $$a>0$$, the range would be $$0<a<1$$.

So from this statement we have that $$a<-1$$ OR $$0<a<1$$. Not sufficient.

(2) $$1-a^2<0$$, which is the same as $$a^2>1$$ --> $$|a|>1$$ --> $$a<-1$$ or $$a>1$$. Again two possible answers to the question whether $$a$$ is positive. Not sufficient.

(1)+(2) Intersection of the ranges from (1) and (2) gives the unique range $$a<-1$$ (from (1): $$a<-1$$ OR $$0<a<1$$, and from (2): $$a<-1$$ or $$a>1$$). So the answer to the question "is a positive" is NO. Sufficient.

##### General Discussion

lagomez (VP), 12 Nov 2009:

> Bunuel wrote: (solution above)

Question: on statement 1, why couldn't you say:

a^3-a < 0
a^3 < a
a^2 < 1
-1 < a < 1

Bunuel (Math Expert), 12 Nov 2009:

This is not correct: when you get $$a^2<1$$ from $$a^3 < a$$, you are reducing (dividing) by a.
We cannot divide an inequality by a variable (or by any expression) whose sign is unknown. This is one of the most frequently used catches on the GMAT. NEVER EVER divide an inequality by a variable (or an expression with a variable) of unknown sign.

lagomez (VP), 12 Nov 2009:

Yes, good answer - that gets me every time. What if we are told a>0 in the question, then we can divide?

Bunuel (Math Expert), 12 Nov 2009:

Not only if we were told that a is positive, but even if we were told that a is negative we could divide. But in this case the question wouldn't make any sense, as the question exactly asks whether a is positive.

Intern, 29 May 2012:

I wanted to offer my thoughts in case my line of reasoning resonates with someone and helps them better understand.

Ok, so we need to know if a>0. Given:

1. a^3-a<0

I could factor out a from both terms but that may make it complicated; instead, I am told that the left hand side is negative, so whatever the value of a, whether positive or negative, I ought to be able to say that a is bigger than a^3, otherwise a^3-a would not be less than 0. So, a^3<a.
If something is cubed and it is still less than the original something, then something is funky - it's not your regular positive whole numbers. It could be a fraction, positive or negative, I don't know yet; it could be a negative whole number too, I suppose, but I would need to test that. Let me do that real quick.

Clearly, one of the possibilities has to be a positive fraction between 0 and 1, because then it would be, say, (1/2)^3 < 1/2. That statement holds, so a can be positive. I just need to check if it can be negative too. What if a is a negative fraction, say a = -1/2? Then -1/8 < -1/2 - that's not right, so a is not a negative fraction. Could it be a negative whole number? Instantly, I can see -2 cubed would be -8, which is less than -2. So a could be a negative whole number or a positive fraction, and that doesn't conclusively answer the question. So, crossing out statement 1, noting that a could be a positive fraction or a negative whole number. Actually, I am gonna draw it on the number line: circle the 0 to 1 portion and the less-than -1 portion. Moving on.

2. 1-a^2>0

Ok, instantly I can see that 1 has to be greater than a^2 for their difference to be positive. So without worrying about sign mumbo jumbo, I can say a^2<1. Well, I have learned from the MGMAT advanced quant book that anytime I see a^2<1, I can simply write it as |a|<1, which means -1<a<1. So a is between -1 and 1, i.e. a could be positive or negative, and that doesn't conclusively answer the question. So insufficient.

BUT drawing this statement 2 relation on a number line and comparing it to the earlier number line I drew for statement 1, I can see that there is a common region from 0 to 1. So taken together, a lies between 0 and 1, which is all positive, and it sufficiently answers the original question: Yes, a>0.

Hope my line of reasoning is not a wrong way that still got the right answer, and I hope it at least helps someone follow and understand the solution to this problem.
hfbamafan (Manager), 30 May 2012:

For statement 1, can't you just factor out an a and make it a(a^2-1), where a has to equal zero, making it not sufficient?

Shouvik (Manager), 30 May 2012:

Hi,

My 2 cents: Bunuel's explanation is great. Hats off. To put all confusion to rest, note that we cannot divide both sides of statement (1) by a, as we don't know whether a is positive or negative. Hence, Bunuel's approach is the logical one.

Regards,
Shouvik.

joshhowatt (Intern), 7 Jun 2012:

I'm confused... Maybe someone can show me where I'm going wrong. This is how I approached it (I actually came up with A).
a^3-a<0
a(a^2-1)<0, which factors to a(a-1)(a+1)<0

So it seems here we have factors that are consecutive: (a-1)(a)(a+1)<0. The only product that would be less than 0 would require all 3 to be negative, so I determined a as negative, which would answer the question "no." Where am I going wrong?

Bunuel (Math Expert), 7 Jun 2012:

For the product of three factors to be negative, either all three must be negative, or one must be negative and the other two positive. For more on how to solve such inequalities check:

x2-4x-94661.html#p731476
inequalities-trick-91482.html
everything-is-less-than-zero-108884.html?hilit=extreme#p868863

Hope it helps.

joshhowatt (Intern), 7 Jun 2012:

Exactly... So if they are consecutive numbers we have a couple of options: say the numbers are -1, 0, 1. This won't work because one is zero, therefore the product would not be less than 0. So all three must be negative, no?

Bunuel (Math Expert), 7 Jun 2012:
We are not told that $$a$$ is an integer, so $$a-1$$, $$a$$ and $$a+1$$ are not necessarily consecutive integers. Please read the solutions above and plug in some numbers from the correct ranges to check whether the inequality holds true for them. For example, if $$a=\frac{1}{2}$$ then $$a^3-a=\frac{1}{8}-\frac{1}{2}=-\frac{3}{8}<0$$.

Intern, 7 Aug 2014:

Statement 1: a^3-a<0 => a(a^2-1)<0, which is possible in two cases:

Case 1: a<0, a^2 - 1 > 0
Case 2: a>0, a^2 - 1 < 0

Hence, we can't say whether a > 0.

Statement 2: 1 - a^2 < 0 => a^2 > 1, which is again possible in two cases:

Case 1: a>1
Case 2: a<-1

Hence, we can't say whether a > 0.

Taking Statements 1 and 2 together: from statement 2 we know a^2 > 1, from which, using Statement 1 [case 1], we can confirm a<0. Hence both statements are needed to conclude this.

Director, 30 Sep 2015:
Is a>0? Y/N

St1. a^3-a<0 means a(a^2-1)<0, so two options: a>0 & a^2<1 OR a<0 & a^2>1. INSUFF

St2. 1-a^2<0 means a^2>1, so a can be on either side of 0. INSUFF

St1+St2 means that a<0 & a^2>1. SUFF

C

Intern, 5 Jan 2017:

Is a positive?

Statement 1: a^3-a<0, so a^3<a. So either a<-1 or 0<a<1. NOT SUFFICIENT

Statement 2: 1-a^2>0, so 1>a^2. So -1<a<1. NOT SUFFICIENT

Statements 1 & 2 together: 0<a<1. SUFFICIENT

CrackVerbal (Director), 6 Jan 2017:

Hi kanusha,

Questions like these can be easily solved if you know the process of solving a quadratic inequality. You can go through an article on quadratic inequalities here: http://gmatclub.com/forum/inequalities-quadratic-inequalities-231326.html#p1781849

Is a > 0?

Statement 1: a^3 - a < 0. Simplifying, we get a(a^2 - 1) < 0, i.e. a(a - 1)(a + 1) < 0. The critical points here are -1, 0 and +1. Plot them on the number line and take the regions which are negative.

(number line figure)

So 0 < a < 1 and a < -1. Now a here can either be greater than 0 or less than 0. Insufficient.

Statement 2: 1 - a^2 > 0. Multiplying throughout by -1 (we always need to keep the variable positive) we get a^2 - 1 < 0, i.e. (a - 1)(a + 1) < 0. The critical points here are -1 and 1. Plot them on the number line and take the negative regions.

(number line figure)

So -1 < a < 1. Here again a can be greater than 0 or less than 0. Insufficient.
Combining 1 and 2:
From Statement 1 we have a < -1 or 0 < a < 1.
From Statement 2 we have -1 < a < 1.
The only range of a that satisfies both statements combined is 0 < a < 1. Sufficient.
(Attachment: Number line 3.png)
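The number-line method in the posts above is easy to sanity-check numerically: pick one sample point inside each interval cut by the critical points and evaluate the sign of each statement there. A quick sketch (my own, using the 1-a^2>0 version of statement 2 from the last two solutions):

```python
# Numeric check of the number-line (critical-point) method for
# statement 1: a^3 - a < 0 and statement 2: 1 - a^2 > 0.

def stmt1(a):
    return a**3 - a < 0

def stmt2(a):
    return 1 - a**2 > 0

# One sample point inside each interval cut by the critical points -1, 0, 1.
samples = {"a < -1": -2, "-1 < a < 0": -0.5, "0 < a < 1": 0.5, "a > 1": 2}

for label, a in samples.items():
    both = stmt1(a) and stmt2(a)
    print(f"{label:12s} stmt1={stmt1(a)!s:5s} stmt2={stmt2(a)!s:5s} both={both}")

# Only the interval 0 < a < 1 satisfies both statements, so combined they
# answer "is a > 0?" with a definite yes.
```

This mirrors the number-line plots: each statement alone is true on two intervals of mixed sign, while the intersection is the single interval 0 < a < 1.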
The Cryosphere - an interactive open-access journal of the European Geosciences Union

The Cryosphere, 12, 2249–2266, 2018
https://doi.org/10.5194/tc-12-2249-2018
Research article | 12 Jul 2018

# Simulated retreat of Jakobshavn Isbræ since the Little Ice Age controlled by geometry

Nadine Steiger1, Kerim H. Nisancioglu2,3, Henning Åkesson2,a, Basile de Fleurian2, and Faezeh M. Nick4

• 1Geophysical Institute, University of Bergen and the Bjerknes Centre for Climate Research, Bergen, Norway
• 2Department of Earth Science, University of Bergen and the Bjerknes Centre for Climate Research, Bergen, Norway
• 3Centre for Earth Evolution and Dynamics, University of Oslo, Oslo, Norway
• 4Department of Arctic Geology, University Centre in Svalbard, Longyearbyen, Norway
• anow at: Department of Geological Sciences, Stockholm University, Bolin Centre for Climate Research, Stockholm, Sweden

Abstract

Rapid retreat of Greenland's marine-terminating glaciers coincides with regional warming trends, which have broadly been used to explain these rapid changes. However, outlet glaciers within similar climate regimes experience widely contrasting retreat patterns, suggesting that the local fjord geometry could be an important additional factor. To assess the relative roles of climate and fjord geometry, we use the retreat history of Jakobshavn Isbræ, West Greenland, since the Little Ice Age (LIA) maximum in 1850 as a baseline for the parameterization of a depth- and width-integrated ice flow model. The impact of fjord geometry is isolated by using a linearly increasing climate forcing since the LIA and testing a range of simplified geometries.
We find that the total length of retreat is determined by external factors – such as hydrofracturing, submarine melt and buttressing by sea ice – whereas the retreat pattern is governed by the fjord geometry. Narrow and shallow areas provide pinning points and cause delayed but rapid retreat without additional climate warming, after decades of grounding line stability. We suggest that these geometric pinning points may be used to locate potential sites for moraine formation and to predict the long-term response of the glacier. As a consequence, to assess the impact of climate on the retreat history of a glacier, each system has to be analyzed with knowledge of its historic retreat and the local fjord geometry.

1 Introduction

Marine-terminating glaciers export ice from the interior of the Greenland Ice Sheet (GrIS) through deep valleys terminating in fjords. Mass loss from the GrIS has increased significantly during the last two decades, contributing increasingly to sea-level rise. The observed increase in mass loss has broadly been associated with large-scale atmospheric and oceanic warming. About half of the current mass loss from the GrIS is due to dynamic ice discharge, which is impacted by several processes partly linked to air and ocean temperatures. A warmer atmosphere enhances surface runoff, which may cause crevasses to penetrate deeper through hydrofracturing, which in turn can promote iceberg calving. A warmer ocean strengthens submarine melt below ice shelves and floating tongues, which can potentially destabilize the glacier via longitudinal dynamic coupling and upstream propagation of thinning. Increased air and fjord temperatures can additionally weaken sea ice and ice mélange in fjords, affecting calving through altering the stress balance at the glacier front. Most of these processes are still poorly understood, as well as heavily spatially and temporally undersampled (Straneo and Cenedese, 2015; Straneo et al., 2013).
Despite widespread acceleration and retreat around the GrIS, individual glaciers correlate poorly with regional trends. For example, four glaciers alone have accounted for 50 % of the total dynamic mass loss since 2000; Jakobshavn Isbræ in West Greenland is the largest contributor. Even if exposed to the same climate, individual glaciers can respond differently, because inland mass loss can be regulated by individual glacier geometry. It is well known that grounding line stability and ice discharge are highly dependent on trough geometry, with retrograde glacier beds potentially causing unstable, irreversible retreat (Gudmundsson et al., 2012; Jamieson et al., 2012; Schoof, 2007). The impact of glacier width, however, is less studied. Lateral buttressing and topographic bottlenecks have been suggested to stabilize grounding lines on reverse bedrock slopes. Despite these studies showing the importance of geometry, limited knowledge is available of the interplay between bedrock geometry, channel-width variations and external controls on glacier retreat. A poor understanding of the heterogeneous response of individual glaciers inhibits robust projections of sea-level rise due to mass loss from ice sheets. So far, there has been a strong emphasis on the role of ice–ocean interactions as a key control on the retreat of marine-terminating glaciers, disregarding the influence of trough geometry (Cook et al., 2016; Fürst et al., 2015; Holland et al., 2008; Joughin et al., 2012; Straneo and Heimbach, 2013). Also, studies that focus on the control of geometry have so far only modeled synthetic glaciers (Enderlin et al., 2013; Schoof, 2007), prohibiting validation and justification of model parameters. In this paper, we therefore use a real-world glacier geometry to study the geometric controls on glacier retreat. Several attempts to model Jakobshavn Isbræ have been made to understand the dynamics behind the observed acceleration and retreat.
These studies focus on the time period after 1985 and partly into the future. However, given the current exceptionally rapid changes, our understanding and model capacity should span long (centennial) timescales if we are to predict changes into the future. Jakobshavn Isbræ has a history of stepwise and nonlinear retreat. We aim to understand this history by comparing our model results with observations starting with the Little Ice Age maximum (LIA; ca. 1850) and into the present. Since the deglaciation of Disko Bugt between 10 500 and 10 000 years before present, Jakobshavn Isbræ has experienced alternating periods of fast and slow retreat with the formation of large moraine systems (Weidick and Bennike, 2007). Most observations exist after the LIA (Fig. 1), when the glacier reached a temporal maximum extent followed by a retreat. From 2001 until May 2003 it accelerated significantly after the disintegration of its 15 km long floating tongue. Today, it is the fastest flowing glacier in Greenland, with a maximum velocity of 18 km yr−1 (Joughin et al., 2014) and ice discharge rates of about 27–50 km3 yr−1. With a contribution of 4 % to global sea-level rise in the 20th century (IPCC, 2001), Jakobshavn Isbræ is the largest contributor in Greenland. It is also one of the most vulnerable glaciers in Greenland, with recent thinning potentially propagating as far inland as one third of the distance across the entire ice sheet. Combining these centennial observations with dynamic ice flow modeling is crucial for putting the recent dramatic changes into a long-term perspective, as well as for interpreting records of the past and projections for the future.

Figure 1. Glacier front positions of Jakobshavn Isbræ from 1850–1985 and CCI products derived from ERS, Sentinel-1 and Landsat data (1990–2016). The background map is a Landsat-8 image from 16 August 2016 (from the U.S. Geological Survey). Location names that occur in the text are marked.
The inset shows the location of Jakobshavn Isbræ in Greenland.

The aim of this study is to investigate the external, glaciological and geometric controls on Jakobshavn Isbræ in response to a linear forcing on a centennial timescale. We use a simple numerical ice flow model (Nick et al., 2010; Vieli and Payne, 2005) with a fully dynamic treatment of the calving front to assess the relative impact of fjord geometry and climate forcing on the retreat of Jakobshavn Isbræ from the LIA maximum to the present day. Geometric controls are isolated by (a) using a linear forcing to avoid complex responses and (b) artificially straightening the trough width and depth. The model experiments are run over several centuries to account for internal glacier adjustment. The application of the model to a real glacier enables a comparison of model results with long-term observed velocities and front positions, but also ensures the use of realistic values for the width–depth ratio and the model parameters. Section 2 documents the numerical ice flow model, followed by an outline of the specific model setup used for the simulations in Sect. 3. Section 4 describes the results of the experiments with varying climate forcing and fjord geometry. The importance of trough width versus depth and forcing is discussed in Sect. 5, followed by the limitations of the model and the implications of our results for understanding the past.

2 Modeling approach

We use a dynamic depth- and width-integrated numerical ice flow model constructed for marine-terminating glaciers. Despite the many assumptions required, this model is well suited to study the long-term (centennial) retreat pattern of an outlet glacier with high basal motion (such as Jakobshavn Isbræ). It is based on mass continuity and a balance between the driving stress, the longitudinal stress gradient, and basal and lateral drag. The model benefits from a robust treatment of the grounding line and a fully dynamic marine boundary.
It is also more efficient than complex models, which enables multiple model runs covering several centuries. The physical calving law applied in the model has been successfully tested on several outlet glaciers where observational data are available. The calving law also has the advantage of allowing for a dynamic and free migration of the glacier terminus, given changes in climate forcing. The climate forcing is implemented as a slow linear change in surface mass balance (SMB), crevasse water depth, submarine melt and buttressing by sea ice – model parameters that represent the impact of changes in temperature. In this section, the physical approach, parameterizations and the implementation of climate forcing are described.

## 2.1 Numerical ice flow model

The numerical ice flow model calculates the time-varying ice thickness H from the along-flow ice flux and mass balance, using a depth- and width-integrated continuity equation:

$$\frac{\partial H}{\partial t} = -\frac{1}{W}\frac{\partial (HUW)}{\partial x} + \dot{B}. \qquad (1)$$

U is the width- and depth-averaged velocity, t the time and x the along-flow coordinate. The width W is assumed to be symmetric around the central flow line. The mass balance $\dot{B}$ includes the surface mass balance and submarine melt below the floating tongue (described in Sect. 2.3). The ice flux is controlled by a balance of lateral and basal resistance, the along-flow longitudinal stress gradient and the driving stress. Lateral resistance is parameterized using a width-integrated horizontal shear stress, and we use a Weertman-type basal sliding law based on effective pressure (Fowler, 2010). The longitudinal stress gradient depends on the effective viscosity ν, which is nonlinearly dependent on the longitudinal strain rate $\dot{\epsilon}_{xx}$ and the rate factor A.
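As an illustration of Eq. (1), a single explicit time step can be written out with a simple upwind flux difference. This is a sketch of my own, not the paper's discretization (which uses a stretched moving grid with a node kept at the calving front):

```python
def step_thickness(H, U, W, Bdot, dx, dt):
    """One explicit time step of the width-integrated continuity equation
    dH/dt = -(1/W) * d(HUW)/dx + Bdot  (Eq. 1), using an upwind difference
    for the flux divergence (ice flows in the +x direction).
    H, U, W, Bdot are lists on a uniform grid of spacing dx (SI units)."""
    Q = [h * u * w for h, u, w in zip(H, U, W)]  # along-flow volume flux HUW
    H_new = []
    for i in range(len(H)):
        dQdx = (Q[i] - Q[i - 1]) / dx if i > 0 else 0.0  # crude inflow boundary
        H_new.append(H[i] + dt * (-dQdx / W[i] + Bdot[i]))
    return H_new

# Uniform flow in a channel of constant width with zero mass balance keeps
# the thickness unchanged, since the flux divergence vanishes:
n = 10
H = [1000.0] * n            # ice thickness (m)
U = [5000.0 / 3.15e7] * n   # 5 km/yr expressed in m/s
W = [5400.0] * n            # channel width (m)
Bdot = [0.0] * n            # mass balance (m of ice per s)
H1 = step_thickness(H, U, W, Bdot, dx=300.0, dt=3600.0)
```

The grid spacing and channel width above are chosen to match the magnitudes quoted in the paper (Δx ≈ 300 m, W ≈ 5.4 km); the time step is a placeholder.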
The stress balance is calculated as

$$2\frac{\partial}{\partial x}\left(H\nu\frac{\partial U}{\partial x}\right) - A_\mathrm{s}\left[\left(H-\frac{\rho_\mathrm{s}}{\rho_\mathrm{i}}D\right)U\right]^{1/m} - \frac{2H}{W}\left(\frac{5U}{EAW}\right)^{1/n} = \rho_\mathrm{i} g H \frac{\partial s}{\partial x}, \qquad (2)$$

where s is the surface elevation; g is the gravitational acceleration; D is the depth of the glacier below sea level; and ρi and ρs are the densities of ice and ocean water, respectively. n and m are the exponents of Glen's flow law and the sliding relation, respectively. The lateral enhancement factor E, controlling the lateral resistance, and the basal sliding parameter As are model parameters that are adjusted to roughly match the observed ice flow and thickness for the present fjord geometry. Both parameters are constant along the flow line and in time. The dependency of the basal resistance on effective pressure is accounted for through the term $H - \frac{\rho_\mathrm{s}}{\rho_\mathrm{i}} D$. The grounding line position is calculated with a flotation criterion based on hydrostatic balance. Its treatment relies on a moving grid: at each time step the grid adjusts freely to the new glacier length, continuously keeping a node at the calving front. This allows for a precise simulation of the glacier front and grounding line position using high grid resolution. The grid size is Δx = 302 m initially and reduces to Δx = 292 m at the present-day position due to the use of a stretched grid. At the marine terminus, a dynamic crevasse-depth calving criterion is used as described in Sect. 2.2.

Table 1. List of variables, physical parameters and constants used in the model. The forcing parameters with their initial (LIA) values are given in the lower part.
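The hydrostatic flotation criterion mentioned above reduces to a simple weight comparison: ice is grounded wherever its weight exceeds that of the ocean water it displaces. A minimal sketch (my own, with standard density values rather than the paper's Table 1 values):

```python
RHO_ICE = 917.0   # ice density (kg m^-3), an assumed standard value
RHO_SEA = 1028.0  # ocean water density (kg m^-3), an assumed standard value

def is_grounded(H, D):
    """Hydrostatic flotation criterion: ice of thickness H over water of
    depth D stays grounded while rho_i * H > rho_s * D."""
    return RHO_ICE * H > RHO_SEA * D

def flotation_thickness(D):
    """Minimum ice thickness that keeps the ice grounded at water depth D."""
    return RHO_SEA / RHO_ICE * D

# A 1000 m thick tongue goes afloat once the bed drops below ~892 m:
print(is_grounded(H=1000.0, D=800.0))   # grounded
print(is_grounded(H=1000.0, D=1200.0))  # floating
```

In the model this test is evaluated on the moving grid at every time step, which is what lets the grounding line migrate freely with the evolving thickness profile.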
Parameter values used for the glacier retreat experiments are listed in Table 2.

## 2.2 Calving law

The fully dynamic crevasse-depth criterion calculates calving where the sum of surface and basal crevasse depths (ds and db, respectively) penetrates the whole glacier thickness. The depth of the surface crevasses is calculated from the tensile deviatoric stress Rxx and the pressure of meltwater filling the crevasses:

$$d_\mathrm{s} = \frac{R_{xx}}{\rho_\mathrm{i} g} + \frac{\rho_\mathrm{w}}{\rho_\mathrm{i}} d_\mathrm{w}. \qquad (3)$$

Note that the water depth in crevasses dw is not a physical quantity, but a forcing parameter within the calving model that links calving rates to climate. ρw is the density of freshwater. The tensile deviatoric stress is the difference between the tensile stresses that pull a fracture open and the ice overburden pressure. It is calculated via Glen's flow law from the longitudinal stretching rate $\dot{\epsilon}_{xx}$, which is responsible for the opening of crevasses:

$$\dot{\epsilon}_{xx} = \frac{\partial U}{\partial x} = f_\mathrm{i} A \left[\frac{\rho_\mathrm{i} g}{4}\left(H - \frac{\rho_\mathrm{s}}{\rho_\mathrm{i}}\frac{D^2}{H}\right)\right]^n, \qquad (4)$$

which depends on a sea ice factor fi, accounting for reduced buttressing due to weakening of ice mélange. The depth of the basal crevasses is calculated from the tensile deviatoric stress and the height above buoyancy:

$$d_\mathrm{b} = \frac{\rho_\mathrm{i}}{\rho_\mathrm{s}-\rho_\mathrm{i}}\left(\frac{R_{xx}}{\rho_\mathrm{i} g} - \left(H - \frac{\rho_\mathrm{s}}{\rho_\mathrm{i}} D\right)\right). \qquad (5)$$

Water in crevasses and sea ice buttressing are both model parameters that impact the glacier response by changing the calving rate.
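The criterion can be sketched in code as follows (my own illustration, not the paper's implementation; Rxx is passed in directly rather than computed from Eq. 4, and all sample values are placeholders):

```python
RHO_ICE, RHO_SEA, RHO_FW = 917.0, 1028.0, 1000.0  # densities (kg m^-3), assumed
G = 9.81                                          # gravity (m s^-2)

def surface_crevasse_depth(R_xx, d_w):
    """Eq. (3): tensile stress opens surface crevasses; water of depth d_w
    filling them pushes the fracture deeper."""
    return R_xx / (RHO_ICE * G) + RHO_FW / RHO_ICE * d_w

def basal_crevasse_depth(R_xx, H, D):
    """Eq. (5): basal crevasses open where the tensile stress term exceeds
    the height above buoyancy (clamped at zero when it does not)."""
    hab = H - RHO_SEA / RHO_ICE * D  # height above buoyancy
    return max(0.0, RHO_ICE / (RHO_SEA - RHO_ICE) * (R_xx / (RHO_ICE * G) - hab))

def calves(R_xx, d_w, H, D):
    """Crevasse-depth criterion: calving occurs where surface plus basal
    crevasses penetrate the full ice thickness."""
    return surface_crevasse_depth(R_xx, d_w) + basal_crevasse_depth(R_xx, H, D) >= H

# Near flotation (H = 800 m, D = 713 m) with R_xx = 400 kPa, a crevasse
# water depth of 395 m trips the criterion; far from flotation (D = 600 m)
# with modest water depth it does not.
print(calves(400e3, 395.0, 800.0, 713.0))  # True
print(calves(300e3, 100.0, 800.0, 600.0))  # False
```

Raising the crevasse water depth d_w (the climate-linked forcing parameter) is what pushes the crevasse sum past the full thickness, which is how the model ties warming to increased calving.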
Because the parameters are linked to different processes, they are kept separate in the model to enable a distinct forcing.

## 2.3 Atmosphere and ocean forcing

The model SMB, a, is derived from observed monthly mean SMB data at Jakobshavn Isbræ (Box, 2013). The SMB data are based on a combination of meteorological station records, ice cores, regional climate model output and a positive degree-day model. Its implementation in our model consists of a piecewise linear function of surface elevation separated by a transition height s0: in the steep lower part of the glacier, the SMB increases with elevation; and, in the flat upper part of the glacier, where the precipitation is low, the SMB decreases with elevation (Eq. 6).

Figure 2. SMB profiles along Jakobshavn Isbræ's main flow line at the LIA (1840–1850 average) and present day (2002–2012 average) from observations by Box (2013) and the linear fit used in the model. Thin dotted lines show the position of the equilibrium line altitude (ELA) for the present-day and LIA fit.

Figure 2 shows the observed and estimated linear SMB profiles for the LIA (1840–1850 average) and for the present day. The corresponding values for the vertical gradients Gl and Gu as well as the SMB a0 at the height s0 are given in Tables 1 and 2. Submarine melt is implemented in the model as a vertical melt rate that decreases the glacier thickness seaward of the grounding line and is assumed to be spatially uniform. The induced artificial step decrease in ice thickness at the grounding line is smoothed out in the model by a sufficiently small time step. The submarine melt rates are one order of magnitude smaller than the grounding line flux. Sensitivity analyses with along-flow variations in submarine melt show similar results, as long as the constant submarine melt rate is comparable to the along-flow averaged submarine melt rate.
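Equation (6) itself is not reproduced in this excerpt, but the piecewise-linear profile it describes can be sketched directly. In the sketch below (my own), the transition height s0 is a placeholder, while the gradients and a0 are the 2015 values quoted with Table 2:

```python
def smb(s, s0, a0, G_l, G_u):
    """Piecewise-linear SMB profile (Eq. 6): below the transition height s0
    the SMB increases with elevation at gradient G_l; above it, where
    precipitation is low, it decreases at gradient G_u (G_u < 0)."""
    if s <= s0:
        return a0 + G_l * (s - s0)
    return a0 + G_u * (s - s0)

# Gradients and a0 are the 2015 values from Table 2; s0 is a placeholder.
S0, A0 = 1500.0, 0.64        # m, m w.e. yr^-1
G_L, G_U = 0.0019, -0.00013  # yr^-1

for elev in (0.0, 1500.0, 3000.0):
    print(f"s = {elev:6.0f} m  ->  SMB = {smb(elev, S0, A0, G_L, G_U):+.3f} m w.e. yr^-1")
```

With these numbers the profile is strongly negative near the terminus, peaks at the transition height, and tapers off only slightly toward the flat interior, matching the shape described for Fig. 2.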
## 2.4 Lateral ice flow

The model domain covers the full drainage basin towards the ice divide at about 520 km upstream of the present-day position. For the lowermost 77 km, we restrict the model width to the pronounced narrow channel seen in bed topography data to realistically account for lateral and basal stresses. Lateral ice flow into this narrow channel from the surrounding ice sheet and tributary glaciers is implemented as an additional SMB similar to previous studies, giving a realistic mass flux into the lower channel. This lateral influx QL,0 is initially calculated as the sum of the northern and southern lateral fluxes. These are given by the observed ice velocity UL,0 and thickness HL,0 at each grid point along the lateral boundary of the narrow main channel, divided by the width of the main trough WJI (Eq. 7). The strength of the initial influx is indicated by the arrows in Fig. 3 and locally accounts for about 100 times the SMB, with a maximum of 120 m yr−1. We assume that the relative contribution of the lateral flux to the overall flux is constant in time; therefore, we scale it with the change in the overall flux with time (Eq. 8).

$$Q_{\mathrm{L},0}(x) = \frac{U_{\mathrm{L},0}(x)\, H_{\mathrm{L},0}(x)}{W_{\mathrm{JI}}(x)} \qquad (7)$$

$$Q_{\mathrm{L},t}(x) = Q_{\mathrm{L},0}(x)\cdot\frac{Q_{\mathrm{JI},t}(x)}{Q_{\mathrm{JI},0}(x)} \qquad (8)$$

QJI,0 and QJI,t are the initial overall flux through the main trunk and the flux after time t, respectively. Note that the constant relative contribution of side fluxes is a rough approximation. A thinning of the main trunk could initiate a speed-up in the tributary glaciers due to increased surface slope.
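The two-step parameterization of Eqs. (7) and (8) amounts to a one-line initialization plus a one-line rescaling (a sketch of my own; the sample values are placeholders in the magnitudes quoted above):

```python
def lateral_influx_init(U_L, H_L, W_JI):
    """Eq. (7): initial lateral influx at a grid point, from the observed
    lateral velocity U_L and thickness H_L, distributed over the local
    width W_JI of the main trough. Acts like an extra SMB term (m yr^-1
    if U_L is in m yr^-1 and H_L, W_JI are in m)."""
    return U_L * H_L / W_JI

def lateral_influx(Q_L0, Q_JI_t, Q_JI_0):
    """Eq. (8): rescale the initial influx by the relative change of the
    flux through the main trunk, so the lateral contribution keeps a
    fixed share of the overall flux."""
    return Q_L0 * Q_JI_t / Q_JI_0

# If the trunk flux halves, the lateral influx halves with it:
q0 = lateral_influx_init(U_L=100.0, H_L=1000.0, W_JI=5400.0)  # ~18.5 m yr^-1
qt = lateral_influx(q0, Q_JI_t=0.5, Q_JI_0=1.0)               # half of q0
```

The fixed-share assumption is exactly the "rough approximation" the text flags: a thinning trunk would in reality change the tributaries' surface slopes and hence their influx independently.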
3 Model setup

Despite the general focus of this study on the external versus geometric controls on glacier retreat, we apply the model to Jakobshavn Isbræ – a well-studied glacier in West Greenland. The intention is to use a realistic along-flow glacier geometry to compare modeled ice thickness, length and velocity with observations. Observations of ice velocities, calving front positions, ice thickness and ice discharge are used to tune model parameters. In the following, we distinguish between constant parameters (basal sliding, rate factor and lateral enhancement factor) and climate-related perturbation parameters (SMB, submarine melt rate, crevasse water depth and sea ice buttressing). For the model experiments, the perturbation parameters are changed linearly from their LIA values to simulate increasing temperatures. Importantly, the calving front and grounding line evolve freely during retreat. Only combinations of forcing parameters that simulate a total retreat matching the observed retreat of about 43 km from the LIA to 2015 are considered. In the following, the choice of tuning parameters and the perturbations are described, together with relevant observations.

## 3.1 Model glacier geometry

Jakobshavn Isbræ extends 520 km inland towards the ice divide and can be distinguished from the surrounding ice sheet by its high velocities along the deep trough. The geometry of the model glacier consists of a narrow (on average about 5.4 km wide) and deep (1.3 km at the deepest) trough; further upstream, it widens gradually with a relatively flat and shallow bottom. The fjord width in today's ice-free area is obtained from satellite images (Fig. 1). The channel width in the fast-flowing part (77 km upstream of the 2015 position) is defined as the trough width at the present-day sea level from bed topography data. Further upstream, where the catchment widens gradually, the width is defined following earlier work.
For the one-dimensional glacier depth in the deep trough and fjord, we use the published along-flow bed topography profile. The fjord bathymetry is obtained from Operation IceBridge gravity data and the subglacial trough profile from high-sensitivity radar data. For the bed in the wider catchment area, 150 m resolution data are averaged over the glacier width.

## 3.2 Constant parameters

Most observations only exist for the present day. Parameters that are constant in time (basal resistance, lateral enhancement factor and rate factor) are tuned with observations to obtain a steady-state glacier corresponding to the observed present-day glacier geometry. After tuning the constant parameters, the climate-related perturbation parameters are reduced to colder temperatures to achieve an initial steady state corresponding to the observed LIA front position. For the LIA steady state, the only constraints are given by the LIA front position and the height of the LIA trimline found at the GPS station KAGA (Jeffries, 2014). Basal sliding – as implemented in the model – influences ice flow and hence the surface slope and thickness. The basal sliding parameter $A_\mathrm{s} = 120\ \mathrm{Pa\,m^{-2/3}\,s^{-1/m}}$ is chosen to achieve the observed present-day thickness of 3065 m at the ice divide; the present-day thickness in the interior is also valid for the LIA initialization, as the ice sheet is assumed to be in a steady state above 2000 m of elevation within this time period (Krabill, 2000). We keep the basal sliding parameter constant in time, because the impact of increased melt on basal sliding on interannual timescales is still unclear. Also, the model takes into account the dependency of basal sliding on the effective pressure, which is calculated explicitly.
The actual degree of basal resistance at the bed of Jakobshavn Isbræ is highly debated, with some studies explaining the high surface velocities as reflecting a slippery bed, whereas other studies ascribe the high velocities to weakened shear margins (van der Veen et al., 2011), or to an interplay of both processes. The surface profile and ice velocity are determined by the lateral resistance and the rate factor. A uniform lateral enhancement factor is applied along the entire glacier, controlling the strength of the transmission of lateral drag to the sides. A value of E = 10 gives a simulated present-day surface with the best fit to observations. The rate factor for Glen's flow law is, to a first approximation, a function of ice temperature. Here it is set to values corresponding to temperatures of −20 °C at the ice divide, linearly increasing to −5 °C at the terminus. This gives a good fit of the simulated present-day glacier surface and ice velocities to observations. The rate factor is kept constant in time.

## 3.3 Forcing experiments and perturbation parameters

The climate-related perturbation parameters are tuned for the LIA steady state to simulate the observed glacier length and velocities, or the ice discharge. Starting from the initial LIA glacier configuration, a retreat is triggered by simultaneous linear changes in SMB, crevasse water depth, submarine melt rate and sea ice buttressing. The parameter perturbations are combined in order to obtain a total retreat of 43 km from 1850 to 2015, corresponding to the observed retreat. Nine different parameter combinations that satisfy the observational data, and cover a wide range of perturbations, are presented here. Table 2 shows the parameter values reached by the year 2015 in the nine different model runs. SMB is the only purely physical and well-known variable both for the LIA and today (Box, 2013). The piecewise linear function presented in Sect. 2.3 is a good approximation to the observed profiles (Fig.
2) and is therefore used here. All model experiments use the same gradual changes of the SMB gradients and maximal SMB from the LIA values to present-day values (Table 2). Submarine melt is influenced by ocean temperatures. In Disko Bugt, ocean temperatures have increased from about 1.5 °C in 1980 to 3 °C in 2010, including a 1 °C warming in 1997. One modeling study estimates a doubling of melt rates underneath the floating tongue of Jakobshavn Isbræ (depending on initial conditions and the way in which melting is applied) when considering a 1 °C warming and a steepening of the glacier front. Submarine melt rates may be additionally enhanced by increased subglacial discharge, although this may be a local effect and negligible when width averaged. Observations of submarine melt rates beneath Jakobshavn Isbræ's floating tongue suggest an annual melt rate of 228 ± 49 m yr−1 between 1984 and 1985 and 2.98 m d−1 (1087 m yr−1) averaged over the melt seasons in 2002 and 2003. Since the submarine melt rate is otherwise poorly constrained, especially further back in time, we employ a large range of linear forcings, from no increase up to roughly a doubling of the LIA value, i.e. from 175 to 340 m yr−1 in 2015. Note that the model neglects submarine melt at the vertical calving front. The crevasse water depth has not been measured and is applied as a nonphysical model parameter regulating discharge fluxes. It is likely to be exaggerated in the model, accounting for the lack of submarine melt at the vertical glacier front. For the LIA steady state, the crevasse water depth is set to 160 m, which produces a calving rate of 34 km3 yr−1 in 1985 after the applied linear forcing. This is the same order of magnitude as the observed calving rate of 26.5 km3 yr−1 in 1985, as well as the more recent values between 24 and 50 km3 yr−1. The increase in crevasse water depth with time is unknown, but may be related to runoff, which has increased by 63 % since the LIA (Box, 2013).
To account for such a large range, we increase the crevasse water depth from its LIA value to values between 185 and 395 m in 2015. The crevasse water depth is tuned depending on the combination of sea ice buttressing and submarine melt rate to reach the observed retreat (Table 2). Ice mélange in the fjord can apply a buttressing stress to the calving front of about 30–60 kPa, or one-tenth of the driving stress. With increasing air and ocean temperatures, ice mélange can weaken or break up, thereby influencing iceberg calving (Reeh et al., 2001; Sohn et al., 1998). However, the correlation between ice mélange and iceberg calving is poorly known. Breakup of ice mélange is thought to impact frontal migration on a daily to seasonal timescale, leaving annual fluxes unaffected. We conduct experiments with unchanged buttressing by sea ice (fs = 1; also used for the LIA steady state), as well as with buttressing decreased by a factor of 2 and 3 compared to the LIA value in 2015.

Table 2. Nine combinations of the perturbation parameters used in this study. Values shown here are those reached in 2015 after a linear perturbation from their LIA values shown in Table 1. The SMB is perturbed to the same 2015 values for all model runs: Gl = 0.0019 yr−1, Gu = −0.00013 yr−1, a0 = 0.64 m w.e. yr−1. Run 5 (in bold) is presented in more detail in the paper.

The observed retreat position in 2015 is reached with all the parameter combinations presented in Table 2. The 2015 values for each parameter depend on the values of the other parameters. This means, for example, that in the case of reduced sea ice buttressing and a small crevasse water depth, a low submarine melt rate is needed.
Similarly, if sea ice buttressing is high and submarine melt is low, the crevasse water depth must be large. In addition to the experiments with linearly increased parameters, we also conduct one experiment with a step increase in the four parameters starting from the LIA maximum. The step increases in sea ice buttressing, submarine melt rate, crevasse water depth and SMB applied starting at 1850 are comparable to those reached in the model year 2015 in run 5, with slightly different values to reach the right front position in 2015. All experiments shown in Table 2 are run until 2100 in order to test the temporal and spatial response to the underlying geometry. Despite a relatively high number of frontal observations since the LIA (Fig. 1), only the observed calving front positions in 1850 and 2015 are used to tune the model parameters; in between these two time slices, the forcing parameters increase linearly and the glacier length evolves freely. We present the time evolution of the simulated front positions together with observations. To obtain one-dimensional observed front positions, we assume the trough to be approximately east–west oriented. We calculate the mean latitudinal coordinate of each observed calving front (Fig. 1) with the corresponding longitudinal position at that latitude. The positions of the resulting one-dimensional front positions lie approximately in the center of the trough. The uncertainty of the front positions is calculated as the maximal spread of each front in the cross-trough direction.

## 3.4 Geometric experiments

In addition to the effect of climate forcing, we investigate the effect of fjord geometry and the relative importance of bed topography versus channel width. The experiments are designed with a smoothed width and depth in the deep and narrow trough. Four different geometry combinations are constructed and shown in Fig. 3.

Figure 3. Different model geometries used to investigate the impact of topography on ice dynamics.
(a) Original geometry, (b) straight width, (c) straight bed, and (d) straight width and bed. Arrows indicate the tributary ice flux, with their length representative of the influx volume.

1. Original geometry: observed width and depth of the trough as described in Sect. 3.1.

2. Straight width: the width until 80 km inland of today's front is set to a constant value of 5.4 km. Only at the LIA front position is a wide section kept, in order to reach a steady state with the same parameters. The depth is kept as in (a).

3. Straight bed: the bed of the deep trough to 120 km inland of today's front is smoothed to an almost straight bed, rising linearly inland. The width is kept as in (a).

4. Straight width and bed: both the width and the bed are straight.

The runs with simplified geometry start from a steady state at the LIA front position with the same parameters and forcing as for the original model setup (Table 2). Due to the changed topographies, the glacier surfaces and velocities differ from the original geometry and the LIA front position is slightly changed.

4 Results

In this section, we present the steady-state glacier at the LIA maximum extent and the glacier retreat simulated with run 5 (Table 2) as an example. In addition, the responses to different forcing parameter combinations, more simplified geometries and a step forcing are presented.

## 4.1 Jakobshavn Isbræ at the LIA maximum

The initial steady-state glacier as shown in Figs. 3a and 5a is reached with the parameters in Table 2. The glacier has an uneven surface that reflects the trough geometry, which is common for fast-flowing ice streams. At the position of KAGA, the surface elevation reaches about 400 m compared to the 300 m LIA trimline height (Csatho et al., 2008); however, the side margins are expected to be lower than the centerline and the model glacier has a – probably overestimated – surface bump at this position.
The LIA glacier terminates in a 9 km long floating tongue, where it has a velocity of 5 km yr−1 and a grounding line flux of 35 km3 yr−1. The modeled width-averaged basal shear stress for the LIA is about 128 kPa at 40 km inland of the present-day front position, and the driving stress is 290 kPa at the same location, when applying a 3 km moving average to smooth the surface bumps. In comparison, other modeling studies obtain lower basal resistance, and data assimilation methods imply basal shear stresses in the deep trough of about 65 kPa at 50 km upstream of the calving front, equivalent to only 20 % of the driving stress. However, these estimates are for the present day and it is unknown how much the relative contribution of the stresses has changed over the intervening period. During the speed-up, the basal shear stress might have been reduced in the lowermost 7 km and unchanged further upstream. Note also that the stresses provided by the model are width averaged.

## 4.2 Nonlinear glacier response to linear forcing

Figure 4. Modeled retreat of Jakobshavn Isbræ in response to a gradual change of the forcing parameters (run 5 in Table 2). Yearly profiles are shown for (a) the along-flow glacier profile and the elevation of the KAGA LIA trimline in green, (b) the front positions in a top view and (c) the along-glacier annual velocities including the yearly grounding line (GL) flux (gray circles from dark to light with time). Observed yearly velocities are plotted at the calving front from 1985 to 2003 and at seven different points upstream of the glacier front from 2009 to 2013.

Figure 4a and b show that the modeled front position retreats nonlinearly in response to the linear external forcing (shown here is run 5 in Table 2). It retreats 21 km during the first 163 years, after which a 16 km long floating tongue forms. During the break-off of the tongue in 2013 to 2014, the front retreats a further 23 km.
Throughout the retreat, the glacier terminus alternates between a floating tongue and a grounded front. The front velocities (Fig. 4c) increase by only 3 km yr−1 during the first 163 years and more than double from 8 to 19 km yr−1 when the floating tongue breaks off. This acceleration is overestimated, as the simulated tongue breaks off faster than observed. However, the observed velocities shown in Fig. 4c are smaller than those simulated in the early 1990s but lie between the simulated velocities before and after the break-off. The model simulations show that the acceleration continues until the retreat of the front slows down. The grounding line flux, calculated as the grounding line velocity times the grounding line gate area, increases from 35 to 65 km3 yr−1 between the LIA and 2015, compared to observed values of about 32–50 km3 yr−1 between 2005 and 2012. Beyond 2015 it increases to 100 km3 yr−1 and finally stagnates at a flux of 77 km3 yr−1. The various parameter combinations presented in Table 2 – and many more in between those presented here – reproduce the observed total retreat since the LIA. Figure 5 shows the retreat of the glacier front and grounding line with time for the nine parameter combinations applied. The simulated temporal retreat pattern of the glacier front is similar for all experiments and shows the strong nonlinearity of the frontal retreat – despite the linear forcing (Fig. 5a). The response to the different forcing experiments differs mainly in the timing of the phases of rapid retreat, especially the final retreat just after 2050. All model runs show a very abrupt retreat of at least 23 km within a few years, which corresponds to the observed retreat of 19 km after the year 2000. The simulated frontal positions differ from those observed, which is expected given the strong simplification of the forcing.
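The grounding line flux quoted above is the grounding line velocity times the gate area. A minimal sketch with illustrative gate dimensions (the 5.4 km width matches the straightened-trough width of Sect. 3.4; the velocity and thickness are assumptions, not model output):

```python
def grounding_line_flux(u_gl, width, thickness):
    """Ice flux through the grounding line gate in km^3 yr^-1:
    width-averaged velocity (km yr^-1) times gate area (km^2).
    Illustrative only; the paper's fluxes come from the flowline model."""
    return u_gl * width * thickness

# ~13 km/yr through a gate 5.4 km wide and ~0.93 km thick:
flux = grounding_line_flux(13.0, 5.4, 0.93)  # about 65 km^3 yr^-1
```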
The aim here is to study the geometric controls on rapid retreat, rather than tuning the model until the simulated retreat fits the observations. The reasons for the deviation of the simulations from the observations are discussed in Sect. 5. The grounding line retreats in a more stepwise manner than the glacier front (Fig. 5b). Before 2015, it stabilizes at distances of 32, 25 and 20 km from the 2015 frontal position in all experiments. It retreats more gradually beyond 2015, with short still-stands at 8, 12 and 18 km upstream of the present-day position. The forcing parameter combination thereby determines the timing of the grounding line displacement.

Figure 5. Simulated position of (a) the front and (b) the grounding line (GL) for the nine different gradual forcing combinations presented in Table 2. The colors of the different model runs are arbitrary. Black dots show the observed front positions at the centerline with a spread (gray shading) corresponding to the across-fjord variation of each front position (Fig. 1).

## 4.3 Control of fjord geometry on front and grounding line retreat

The residence time of the grounding line is analyzed for the different geometries introduced in Fig. 3. Residence time is quantified as the amount of time that the grounding line rests within a distance of 1 km. Figure 6a shows the original geometry with the most pronounced pinning points at distances of 32, 25, −10 and −13 km from the 2015 position. Only the duration of the grounding line still-stands varies among the nine model runs (Table 2), whereas the pinning point locations coincide (also seen in Fig. 5b). Artificially straightening the width removes the pinning points at 25 km and those beyond the 2015 position (Fig. 6b). Instead, the glacier rests at the present-day position. The geometry with the straightened bed causes a similar response to the linear forcing as the original geometry, only with a wider spread of pinning points (Fig. 6c).
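The residence-time diagnostic defined above (time spent within a 1 km distance, discarding still-stands shorter than 2 years) can be sketched as follows; this is an illustrative re-implementation, not the analysis code behind Fig. 6:

```python
def residence_times(gl_pos, dt=1.0, min_years=2.0):
    """Time (yr) the grounding line spends near each stable location.
    gl_pos: sequence of annual grounding-line positions (km along flow);
    dt: time step (yr). Positions are binned into 1 km wide bins as an
    approximation of the 'within 1 km' criterion."""
    times = {}
    for p in gl_pos:
        key = round(p)  # bin positions into 1 km wide bins
        times[key] = times.get(key, 0.0) + dt
    # keep only still-stands longer than min_years, as in Fig. 6
    return {k: t for k, t in times.items() if t > min_years}
```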
Straightening both the bed and the width removes all pinning points (Fig. 6d) and leads to a linear retreat. Note that all geometries have an initial pinning point at the LIA position to allow a steady state there. Generally, a reduction in the complexity of the fjord geometry, for example by straightening the bed and/or width, reduces the number of pinning points.

Figure 6. Residence time of the grounding line (GL) for the different geometries presented in Sect. 3.4: (a) the original geometry, (b) straightened width, (c) straightened bed, and (d) straightened width and bed. The bars represent the time that the grounding line rests within 1 km, and the colors correspond to the model runs in Table 2. Only residence times of more than 2 years are included.

## 4.4 Delayed abrupt glacier response

In addition to the linear increase in climate forcing, the response to a step forcing (Table 2) is presented in Fig. 7. With the step forcing, the glacier front remains at a distance of 22 km for 60 years before it retreats rapidly to its new pinning point. This unprovoked rapid retreat – after centuries of constant forcing – demonstrates the long response time of the glacier. The long response time is caused by the slow adjustment of the glacier volume to external changes. The corresponding accumulated volume loss, also shown in Fig. 7, adjusts steadily to the initial changes in forcing, despite the constant grounding line position. During the rapid frontal retreat, the volume decreases by 300 Gt, and the loss continues even after the grounding line reaches a still-stand. This emphasizes that a constant grounding line position does not imply a steady state. Similarly, an observed rapid retreat of a marine-terminating glacier might be the delayed response to historic temperature changes.

5 Discussion

For the example of Jakobshavn Isbræ, our results show the importance of lateral and basal topography and their implications for the evolution of glacier retreat in fjords.
This knowledge can be used for a better understanding of the recent observed retreat history; however, it is hard to isolate the relative impacts of changes in ocean forcing, SMB and internal factors including the fjord geometry. Here, we discuss the impact of fjord geometry on glacier front retreat and compare the simulated glacier response to the recorded long-term glacier retreat history. In addition, we explore the implications of our results for the future response of Jakobshavn Isbræ to changes in climate. We argue that fjord geometry, and in particular fjord width, to a large degree dictates the retreat history of marine-terminating glaciers. Nevertheless, changes to the external forcing of the glacier are important, because their magnitude controls the onset and overall rate of the retreat (Fig. 5).

## 5.1 Geometric control on glacier stability

Our simulations show that once glacier retreat is triggered, through changes at the marine boundary or at the glacier surface, a nonlinear response unfolds due to variations in the fjord geometry, with a complexity given by the bed topography and the trough width. For a retrograde bed, where water depth increases as the glacier retreats, the ice discharge increases, leading to further unstable glacial retreat in the case of constant lateral stresses. Previous studies show that changes in the width of a glaciated fjord affect the lateral resistance as well as the ice flow, thereby stabilizing the glacier where narrow sections occur. These findings are corroborated by our model results. However, most of these earlier studies use synthetic glaciers that do not allow for a validation of the model against observations. Further, the shorter time periods considered neglect the long-term adjustment of the glaciers. Figure 7 shows that the timescale of glacier adjustment can be several decades. However, in reality temperature changes are likely smaller and less abrupt than we have imposed.
Nevertheless, our study demonstrates that the observed recent retreat could have been triggered and sustained by an earlier warm event. This finding is consistent with earlier work on Antarctic ice stream retreat over millennial timescales. Depending on the local geometry of the underlying bed, individual glaciers exhibit different response times and spatial extents of dynamic thinning. The geometry experiments in Fig. 6 assess the relative roles of glacier width and bed topography for Jakobshavn Isbræ. The width seems to be the leading factor for grounding line still-stands, as artificially straightening the fjord removes the pinning points that cause a slowdown of the grounding line retreat. Flattening the bed topography is less effective at linearizing the grounding line retreat than straightening the fjord. It has to be considered that the glacier trough is an order of magnitude wider than it is deep, with larger variations in width than in bed elevation, which increases the importance of the glacier width.

## 5.2 Relative role of forcing parameters

Only certain parameter combinations simulate the observed retreat pattern of Jakobshavn Isbræ since the LIA (Table 2). If the submarine melt rate is increased, the crevasse water depth has to be reduced and/or the sea ice buttressing increased. Similarly, if the sea ice buttressing is reduced, the crevasse water depth and submarine melt rate have to be smaller (Table 2). Importantly, none of the forcing parameters can trigger the retreat alone, unless the change to the parameter is unreasonably large relative to its LIA value. Changed individually, the submarine melt rate would have to reach 650 m yr−1 in 2015 (an increase of 370 % from the LIA), the crevasse water depth would have to increase to 400 m (250 % larger than the LIA value), and the sea ice buttressing factor would have to be more than quadrupled (a value of 4.2 relative to the LIA factor of 1) in 2015 to force a strong enough retreat.
Absolute values of the parameters have to be taken with caution, as they do not necessarily correspond to physical variables. For example, to reach the observed grounding line flux, the value for the crevasse water depth is likely too high in our study. This is because it is a calving parameter that also has to compensate for the submarine melt along the calving front that is neglected in the model. The change in parameters required to trigger the retreat also depends on the initial parameter choices and on the forcing needed to unpin the grounding line from the initial pinning point. As shown in earlier work, non-unique parameter combinations can exist for the same front positions. This implies that real-world observations are vital to reduce uncertainty in transient model simulations. Note that the SMB contribution to the frontal retreat is insignificant, even if the lower SMB gradient Gl is doubled and the SMB curve is lowered by 50 %. Taken together, this gives an SMB at the terminus of −6 m w.e. yr−1, compared to −1.1 m w.e. yr−1 during the LIA (cf. Fig. 2). In our model of Jakobshavn Isbræ, variations in air temperature contribute mainly through runoff and the filling of crevasses with water, rather than directly through surface ablation. For the specific geometry of Jakobshavn Isbræ, the influx of ice at the lateral boundaries is a factor of 100 larger than the local SMB and could be important for the sensitivity of the glacier to changes in climate forcing. However, the lateral influx is an order of magnitude smaller than the flux through the main trough, and a sensitivity study shows that the lateral flux has a minor impact on the retreat pattern (not shown here). If all other parameters are kept fixed, the lateral influx has to decrease by nearly 70 % from its LIA value in order to simulate the observed retreat.
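The SMB parameterization with a lower gradient Gl, an upper gradient Gu and a maximum a0 (cf. Fig. 2 and Table 2) can be sketched as a piecewise-linear function of surface elevation. The break elevation z_b and the exact functional form below are assumptions for illustration; the model's actual SMB profile is based on the reconstruction of Box (2013):

```python
def smb(z, z_b=1800.0, G_l=0.0019, G_u=-0.00013, a_0=0.64):
    """Surface mass balance (m w.e. yr^-1) vs. surface elevation z (m):
    piecewise linear with a steep lower gradient G_l and a gentle upper
    gradient G_u, reaching a_0 at the break elevation z_b (assumed).
    Gradient and a_0 values are the perturbed 2015 values of Table 2."""
    if z <= z_b:
        return a_0 + G_l * (z - z_b)  # ablation-dominated lower branch
    return a_0 + G_u * (z - z_b)      # gentle branch above the break elevation
```

With these values, the SMB is strongly negative at the low-lying terminus and crosses zero a few hundred meters below z_b, consistent with the ablation values quoted above.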
## 5.3 Model limitations and comparison to observations

In order to isolate the effect of geometry on glacier retreat, a relatively simple – but physically based – model is forced with a linearly changing external forcing. Notwithstanding a number of assumptions, the model is well suited to this task, as it is computationally inexpensive and allows for a large set of ensemble simulations starting from the LIA in 1850. Studying long time periods is vital in order to capture internal glacier adjustments to changes in external forcing beyond the last few decades. Unfortunately, few observations exist to validate the model over such a long time period, which supports our chosen idealized model setup. The model parameters are calibrated with the few observations that exist, and the modeled retreat of Jakobshavn Isbræ is compared to the observed retreat. Both modeled and observed calving front positions show a highly nonlinear retreat and the rapid disintegration of a several-kilometer-long floating tongue (Fig. 5). The model results show a robust dependency of this nonlinear retreat on the trough geometry, especially the trough width. However, the modeled glacier front retreats more slowly (deviating up to 13 km from the observations) and exaggerates the break-off of the floating tongue. For the dynamic interpretation of the nonlinear retreat, a perfect agreement is not essential, especially given the one-dimensionality of the model and the uncertainties in the width-averaged observed front positions. For the interpretation of the model results, the assumptions made in the model have to be considered. The most obvious assumption is the one-dimensionality, which does not account for across-flow and vertical variations in geometry. The residence time of the grounding line at pinning points may be partly overestimated due to this width and depth integration.
Local bedrock highs that partly pin the floating tongue are not properly represented in a width-averaged setting, and the width is regarded as symmetric around the central flow line. In reality, one lateral margin might narrow and pin the grounding line while the other widens, causing an asymmetric calving front retreat (Fig. 1). Here, we focus only on the large-scale dynamics; lateral and vertical variations in the ice flow are seen as second-order processes, considering the high basal motion and high velocities in the deep and narrow channel along the lowermost 100 km of the model domain. As the glacier retreats further upstream “into” the ice sheet, the lateral ice flux becomes more significant and the whole drainage area should be modeled explicitly, favoring the use of a three-dimensional model for future projections.

Figure 7. Simulated front and grounding line (GL) positions with accumulated volume loss for the step forcing (Table 2).

The depth and width integration also applies to internal glacier properties; ice temperatures are in reality high at the bottom, so most deformation happens there, whereas the model assumes vertically constant shearing and a constant rate factor. Along the margins of a real glacier, ice viscosity drops significantly in response to acceleration and calving front migration, and marginal crevasses can form, which are not considered here. However, lateral drag and weakened margins mostly affect the timing and not the details of the retreat, as has been tested in an idealized setting with the same model. Ice viscosity is a response to dynamic changes rather than a cause, and it is therefore not expected to change the retreat pattern significantly. However, it may slightly alter the timing and residence time of the grounding line. Several parameterizations of physical processes are used in the model, such as submarine melt and buttressing by ice mélange.
This complicates direct model validation with observed values. However, these processes are generally crudely implemented, if represented at all, in glacier models. For example, many models prescribe the position of the calving front (Bondzio et al., 2017), or focus only on grounding line migration, whereas our model uses a physical calving law. Also, few observations exist of submarine melt, calving rates and basal sliding, especially over the long time period studied here. The impact of plume dynamics on submarine melt (Jenkins, 2011), or an along-flow variation in submarine melt rate, could be implemented in our model. However, observations of ocean temperature are sparse, and the model results are similar when using an along-flow variation in submarine melt compared to a constant value along the floating part (not shown here). Interannual variability of calving rates due to submarine melt, runoff and ice mélange is also neglected, as it is not considered important on centennial timescales. Although many of the model parameters are only indirectly linked to observations, existing observations such as velocities, ice discharge and thickness are used to tune the parameters and to reproduce the glacier behavior as closely as possible. Note that the change in forcing parameters required to dislodge the grounding line from its stable LIA position might be overestimated, due to the large variations in bed topography and width. Also, many parameter combinations can simulate the same stable position but lead to different glacier retreat. Therefore, we include a large range of parameter perturbations, leading to different residence times for the grounding line, but with no reduction in the importance of the geometry in defining locations of intermittent slowdown in the overall grounding line retreat.
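The physical calving law referred to above is a crevasse-depth criterion of the kind introduced by Benn et al. (2007) and used in Nick-type flowline models (cf. Cook et al., 2012): surface crevasses deepen under tensile stress and under the pressure of water filling them, and calving occurs when they penetrate sufficiently deep. A minimal sketch of the crevasse-depth part, assuming the standard form; this is not the authors' implementation:

```python
RHO_I, RHO_FW, G = 917.0, 1000.0, 9.81  # ice and fresh water density (kg m^-3), gravity (m s^-2)

def crevasse_depth(R_xx, d_w):
    """Surface crevasse penetration depth (m) for a longitudinal tensile
    stress R_xx (Pa) and a water depth d_w (m) in the crevasse.
    Sketch of a Benn-type criterion; water pressure deepens crevasses."""
    return R_xx / (RHO_I * G) + (RHO_FW / RHO_I) * d_w

# deeper water in crevasses drives them much deeper, promoting calving:
d_dry = crevasse_depth(2.0e5, 0.0)    # dry crevasse, ~22 m
d_wet = crevasse_depth(2.0e5, 200.0)  # water-filled, ~240 m
```

This illustrates why the crevasse water depth is such an effective tuning parameter: its contribution enters with a factor of roughly the density ratio, so tens of meters of water translate into tens of meters of extra crevasse penetration.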
The choice of the model depends on the questions asked; if the objective is to accurately predict or reconstruct the time evolution of glacier retreat (Muresan et al., 2016; Nick et al., 2013), a more sophisticated model has to be used. Note that the observations also contain uncertainties. The front position can vary by several kilometers seasonally (Amundson et al., 2010), and it varies by several kilometers across the trough (Fig. 1). For the calculation of the one-dimensional front position, we assume a west–east orientation of the trough, which gives an offset at the most recent calving fronts; however, the deviation is only a few kilometers and within the spread of the across-trough variation of the calving front. Most importantly, the bed topography – especially in the densely ice-covered fjord with a sediment-rich subglacial bed – is challenging to obtain. Due to the strong control of the fjord geometry on the glacier retreat, small uncertainties in the trough geometry can cause a very different retreat pattern. This highlights the importance of detailed knowledge of the underlying bed topography (Durand et al., 2011).

## 5.4 Glacier front reconstructions based on trough width

Figure 6 illustrates the potential of using the model simulations in a geomorphological context. Marine-terminating glaciers continuously erode their beds and deposit sediments, forming submarine landforms such as moraines. The rate of sediment deposition and the resulting proglacial landforms are functions of climatic, geological and glaciological variables, though these functions remain poorly quantified due to sparse observational constraints. Proglacial transverse ridges tend to form during gradual grounded calving front retreat, whereas more pronounced grounding zone wedges are associated with episodic grounding line retreat. The abundance of ice mélange in front of Jakobshavn Isbræ renders studies of submarine geomorphology difficult.
Studies of this kind are lacking in the fjord, though evidence of the style of deglacial ice sheet retreat in Disko Bugt does exist. Our study raises generic questions about the links between trough geometry and moraine positions. We suggest that likely locations for moraine formation can be predicted from the glacier width, which largely determines the positions of grounding line slowdown. The very robust influence of width on the retreat patterns (Fig. 6) means that investigating the detailed fjord geometry allows expected slowdowns or step changes in retreat to be located. This is extremely useful for reconstructions and for interpreting paleo-records, for example from adjacent land records, moraines and proglacial lake sediments. To this end, our study clearly highlights the potential of combining long-term modeling studies with geomorphological and sedimentary evidence to understand the nonlinear response of marine ice sheet margins. This needs to be considered when inferring climate information from glacier retreat reconstructions.

6 Conclusions

The rapid retreat of many of Greenland's outlet glaciers during recent decades has been related to increased oceanic and atmospheric temperatures, though individual glaciers display diverse behavior. As an example of a rapidly retreating glacier, we study the centennial-scale retreat of Jakobshavn Isbræ from its Little Ice Age maximum to its present-day position. The numerical model is forced with a linear increase in surface mass balance, submarine melt rate and crevasse water depth and a reduction in sea ice buttressing to isolate the importance of geometry for temporary grounding line stability. The following conclusions are drawn.

• The response of Jakobshavn Isbræ to a linear climate forcing is highly nonlinear due to the characteristic trough geometry.
The importance of the trough geometry is a robust feature in our study, and the modeled nonlinear frontal retreat is consistent with long-term (century-scale) observations.

• External changes at the glacier terminus determine the degree and the timing of the glacier retreat: calving and submarine melt act together to trigger the observed retreat of Jakobshavn Isbræ, while surface mass balance plays a negligible role in forcing the glacier retreat.

• The fjord geometry, and in particular the trough width, determines where the grounding line slows down during retreat. Artificially straightening the trough geometry in the model reduces the nonlinearity of the glacier retreat.

• Stabilization of the grounding line at pinning points in the fjord can delay rapid retreat and mask the slow response of dynamic adjustments to past changes in external forcing. We show this for the case of Jakobshavn Isbræ, which might be transferable to similar marine-terminating glaciers in Greenland and other regions with glaciated fjord landscapes.

Our findings suggest that the retreat history of Jakobshavn Isbræ following the Little Ice Age has largely been controlled by variations in trough width and bedrock geometry, and that future retreat will be governed by similar factors. Since grounding line stability is fundamentally controlled by the geometry, we also postulate that geometry – notably trough width – is a vital source of information when interpreting paleo-records of marine-terminating glaciers.

Code and data availability. The model code is available through Faezeh M. Nick (faezeh.nick@gmail.com). The model output is available online at https://doi.org/10.11582/2018.00018 (Steiger et al., 2018).

Author contributions. NS, KHN and HÅ designed the research; NS performed the model runs and created the figures with significant input from KHN, HÅ and BdF. FMN provided the model and technical support.
NS wrote the paper, with substantial contributions from all authors.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. This research was funded by the Fast Track Initiative from the Bjerknes Centre for Climate Research and the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement 610055 as part of the ice2ice project. Henning Åkesson was supported by the Research Council of Norway (project no. 229788/E10), as part of the research project Eurasian Ice Sheet and Climate Interactions (EISCLIM). Front positions of Jakobshavn Isbræ since 1990 are obtained from ENVO at http://products.esa-icesheets-cci.org/ (last access: 31 May 2017). We want to thank Mahé Perrette (https://github.com/perrette/webglacier1d, last access: 26 October 2016) for providing a python–javascript project to produce a 1-D profile of bed topography and glacier surface. Thanks to Jason Box for providing SMB data for the GrIS. We also thank Martin Lüthi, Johannes Bondzio, Andreas Vieli and Ellyn Enderlin for constructive reviews that improved the manuscript greatly. Thanks to Anna Hughes for proofreading the manuscript.

Edited by: Olaf Eisen
Reviewed by: Martin Lüthi, Johannes H. Bondzio, Ellyn Enderlin, and Andreas Vieli

References

Åkesson, H., Nisancioglu, K. H., and Nick, F. M.: Impact of fjord geometry on grounding line stability, Front. Earth Sci., 6, 1–17, https://doi.org/10.3389/feart.2018.00071, 2018.

Amundson, J. M., Fahnestock, M., Truffer, M., Brown, J., Lüthi, M. P., and Motyka, R. J.: Ice mélange dynamics and implications for terminus stability, Jakobshavn Isbræ, Greenland, J. Geophys. Res.-Earth, 115, 1–12, https://doi.org/10.1029/2009JF001405, 2010.

Bamber, J. L., Alley, R. B., and Joughin, I.: Rapid response of modern day ice sheets to external forcing, Earth Planet Sc.
Lett., 257, 1–13, https://doi.org/10.1016/j.epsl.2007.03.005, 2007.

Benn, D. I., Warren, C. R., and Mottram, R. H.: Calving processes and the dynamics of calving glaciers, Earth-Sci. Rev., 82, 143–179, https://doi.org/10.1016/j.earscirev.2007.02.002, 2007.

Boghosian, A., Tinto, K., Cochran, J. R., Porter, D., Elieff, S., Burton, B. L., and Bell, R. E.: Resolving bathymetry from airborne gravity along Greenland fjords, J. Geophys. Res.-Sol. Ea., 119, 8516–8533, https://doi.org/10.1002/2014JB011381, 2015.

Bondzio, J. H., Seroussi, H., Morlighem, M., Kleiner, T., Rückamp, M., Humbert, A., and Larour, E. Y.: Modelling calving front dynamics using a level-set method: application to Jakobshavn Isbræ, West Greenland, The Cryosphere, 10, 497–510, https://doi.org/10.5194/tc-10-497-2016, 2016.

Bondzio, J. H., Morlighem, M., Seroussi, H., Kleiner, T., Rückamp, M., Mouginot, J., Moon, T., Larour, E. Y., and Humbert, A.: The mechanisms behind Jakobshavn Isbræ's acceleration and mass loss: a 3D thermomechanical model study, Geophys. Res. Lett., 44, 6252–6260, 2017.

Box, J. E.: Greenland ice sheet mass balance reconstruction. Part II: Surface mass balance (1840–2010), J. Climate, 26, 6974–6989, https://doi.org/10.1175/JCLI-D-12-00518.1, 2013.

Carr, J. R., Stokes, C. R., and Vieli, A.: Recent progress in understanding marine-terminating Arctic outlet glacier response to climatic and oceanic forcing: Twenty years of rapid change, Prog. Phys. Geog., 37, 436–467, https://doi.org/10.1177/0309133313483163, 2013.

Cassotto, R., Fahnestock, M., Amundson, J. M., Truffer, M., and Joughin, I.: Seasonal and interannual variations in ice melange and its impact on terminus stability, Jakobshavn Isbræ, Greenland, J. Glaciol., 61, 76–88, https://doi.org/10.3189/2015JoG13J235, 2015.
Cook, A., Holland, P., Meredith, M., Murray, T., Luckman, A., and Vaughan, D.: Ocean forcing of glacier retreat in the western Antarctic Peninsula, Science, 353, 283–286, 2016.

Cook, S., Zwinger, T., Rutt, I. C., O'Neel, S., and Murray, T.: Testing the effect of water in crevasses on a physically based calving model, Ann. Glaciol., 53, 90–96, https://doi.org/10.3189/2012AoG60A107, 2012.

Cook, S., Rutt, I. C., Murray, T., Luckman, A., Zwinger, T., Selmes, N., Goldsack, A., and James, T. D.: Modelling environmental influences on calving at Helheim Glacier in eastern Greenland, The Cryosphere, 8, 827–841, https://doi.org/10.5194/tc-8-827-2014, 2014.

Cowton, T., Slater, D., Sole, A., Goldberg, D., and Nienow, P.: Modeling the impact of glacial runoff on fjord circulation and submarine melt rate using a new subgrid-scale parameterization for glacial plumes, J. Geophys. Res.-Oceans, 120, 796–812, https://doi.org/10.1002/2014JC010324, 2015.

Csatho, B., Schenk, T., Van Der Veen, C. J., and Krabill, W. B.: Intermittent thinning of Jakobshavn Isbrae, West Greenland, since the Little Ice Age, J. Glaciol., 54, 131–144, https://doi.org/10.3189/002214308784409035, 2008.

Csatho, B. M., Schenk, A. F., van der Veen, C. J., Babonis, G., Duncan, K., Rezvanbehbahani, S., Van Den Broeke, M. R., Simonsen, S. B., Nagarajan, S., and van Angelen, J. H.: Laser altimetry reveals complex pattern of Greenland Ice Sheet dynamics, P. Natl. Acad. Sci. USA, 111, 18478–18483, 2014.

Cuffey, K. and Paterson, W.: The Physics of Glaciers, Butterworth-Heinemann/Elsevier, Burlington, MA, 2010.

Dowdeswell, J., Canals, M., Jakobsson, M., Todd, B. J., Dowdeswell, E., and Hogan, K.: Atlas of submarine glacial landforms: Modern, Quaternary and Ancient, Geological Society of London, 2016.

Durand, G., Gagliardini, O., Favier, L., Zwinger, T., and Le Meur, E.: Impact of bedrock description on modeling ice sheet dynamics, Geophys. Res.
Lett., 38, 6–11, https://doi.org/10.1029/2011GL048892, 2011. a Enderlin, E. M. and Howat, I. M.: Submarine melt rate estimates for floating termini of Greenland outlet glaciers (2000–2010), J. Glaciol., 59, 67–75, 2013. a Enderlin, E. M., Howat, I. M., and Vieli, A.: The sensitivity of flowline models of tidewater glaciers to parameter uncertainty, The Cryosphere, 7, 1579–1590, https://doi.org/10.5194/tc-7-1579-2013, 2013a. a, b Enderlin, E. M., Howat, I. M., and Vieli, A.: High sensitivity of tidewater outlet glacier dynamics to shape, The Cryosphere, 7, 1007–1015, https://doi.org/10.5194/tc-7-1007-2013, 2013b. a, b, c, d Enderlin, E. M., Howat, I. M., Jeong, S., Noh, M. J., Van Angelen, J. H., and Van Den Broeke, M. R.: An improved mass budget for the Greenland ice sheet, Geophys. Res. Lett., 41, 866–872, https://doi.org/10.1002/2013GL059010, 2014. a, b Felikson, D., Bartholomaus, T. C., Catania, G. A., Korsgaard, N. J., Kjær, K. H., Morlighem, M., Noël, B., van den Broeke, M., Stearns, L. A., Shroyer, E. L., Sutherland, D. A., and Nash, J. D.: Inland thinning on the Greenland ice sheet controlled by outlet glacier geometry, Nat. Geosci., 10, 366–369, https://doi.org/10.1038/ngeo2934, 2017. a, b, c, d Fowler, A. C.: Weertman, Lliboutry and the development of sliding theory, J. Glaciol., 56, 965–972, https://doi.org/10.3189/002214311796406112, 2010. a Fürst, J. J., Goelzer, H., and Huybrechts, P.: Ice-dynamic projections of the Greenland ice sheet in response to atmospheric and oceanic warming, The Cryosphere, 9, 1039–1062, https://doi.org/10.5194/tc-9-1039-2015, 2015. a Gogineni, S., Yan, J. B., Paden, J., Leuschen, C., Li, J., Rodriguez-Morales, F., Braaten, D., Purdon, K., Wang, Z., Liu, W., and Gauch, J.: Bed topography of Jakobshavn Isbræ, Greenland, and Byrd Glacier, Antarctica, J. Glaciol., 60, 813–833, https://doi.org/10.3189/2014JoG14J129, 2014. a Gudmundsson, G. H.: Transmission of basal variability to a glacier surface, J. Geopys. 
Res., 108, 1–19, https://doi.org/10.1029/2002JB002107, 2003. a Gudmundsson, G. H., Krug, J., Durand, G., Favier, L., and Gagliardini, O.: The stability of grounding lines on retrograde slopes, The Cryosphere, 6, 1497–1505, https://doi.org/10.5194/tc-6-1497-2012, 2012. a, b, c Habermann, M., Truffer, M., and Maxwell, D.: Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland, The Cryosphere, 7, 1679–1692, https://doi.org/10.5194/tc-7-1679-2013, 2013. a, b Hansen, M. O., Gissel Nielsen, T., Stedmon, C. A., and Munk, P.: Oceanographic regime shift during 1997 in Disko Bay, Western Greenland, Limnol. Oceanogr., 57, 634–644, https://doi.org/10.4319/lo.2012.57.2.0634, 2012. a Holland, D. M., Thomas, R. H., de Young, B., Ribergaard, M. H., and Lyberth, B.: Acceleration of Jakobshavn Isbræ triggered by warm subsurface ocean waters, Nat. Geosci., 1, 659–664, https://doi.org/10.1038/ngeo316, 2008a. a, b, c, d Holland, P. R., Jenkins, A., and Holland, D. M.: The response of Ice shelf basal melting to variations in ocean temperature, J. Climate, 21, 2558–2572, https://doi.org/10.1175/2007JCLI1909.1, 2008b. Howat, I. M., Ahn, Y., Joughin, I., Van Den Broeke, M. R., Lenaerts, J. T. M., and Smith, B.: Mass balance of Greenland's three largest outlet glaciers, 2000–2010, Geophys. Res. Lett., 38, 1–5, https://doi.org/10.1029/2011GL047565, 2011. a, b, c Howat, I. M., Negrete, A., and Smith, B. E.: The Greenland Ice Mapping Project (GIMP) land classification and surface elevation data sets, The Cryosphere, 8, 1509–1518, https://doi.org/10.5194/tc-8-1509-2014, 2014. a, b, c, d Ingölfsson, Ö., Frich, P., Funder, S., and Humlum, O.: Paleoclimatic implications of an early Holocene glacier advance on Disko Island, West Greenland, Boreas, 19, 297–311, https://doi.org/10.1111/j.1502-3885.1990.tb00133.x, 1990. a Jamieson, S. S., Vieli, A., Livingstone, S. J., Ó Cofaigh, C., Stokes, C., Hillenbrand, C.-D., and Dowdeswell, J. 
A.: Ice-stream stability on a reverse bed slope, Nat. Geosci., 5, 799–802, https://doi.org/10.1038/NGEO1600, 2012. a, b, c Jamieson, S. S. R., Vieli, A., Cofaigh, C. Ó., Stokes, C. R., Livingstone, S. J., and Hillenbrand, C. D.: Understanding controls on rapid ice-stream retreat during the last deglaciation of Marguerite Bay, Antarctica, using a numerical model, J. Geophys. Res.-Earth, 119, 247–263, https://doi.org/10.1002/2013JF002934, 2014. a, b, c, d Jeffries, S.: KAGA Site Information https://www.unavco.org/instrumentation/networks/status/pbo/overview/KAGA (last access: 13 February 2018), 2014. a Jenkins, A.: Convection-driven melting near the grounding lines of ice shelves and tidewater glaciers, J. Phys. Oceanogr., 41, 2279–2294, https://doi.org/10.1175/JPO-D-11-03.1, 2011. a, b, c Jóhannesson, T., Raymond, C. F., and Waddington, E. D.: A Simple Method for Determining the Response Time of Glaciers, Glac. Quat. G., 6, 343–352, https://doi.org/10.1007/978-94-015-7823-3, 1989. a Joughin, I., Abdalati, W., and Fahnestock, M.: Large fluctuations in speed on Greenland's Jakobshavn Isbræ glacier, Nature, 432, 608–610, https://doi.org/10.1038/nature03130, 2004. a, b, c, d, e, f Joughin, I., Smith, B. E., Howat, I. M., Floricioiu, D., Alley, R. B., Truffer, M., and Fahnestock, M.: Seasonal to decadal scale variations in the surface velocity of Jakobshavn Isbrae, Greenland: Observation and model-based analysis, J. Geophys. Res.-Earth, 117, 1–20, https://doi.org/10.1029/2011JF002110, 2012. a, b, c Joughin, I., Smith, B. E., Shean, D. E., and Floricioiu, D.: Brief Communication: Further summer speedup of Jakobshavn Isbræ, The Cryosphere, 8, 209–214, https://doi.org/10.5194/tc-8-209-2014, 2014. a, b, c, d, e Joughin, I., Smith, B. E., and Howat, I. M.: A complete map of Greenland ice velocity derived from satellite data collected over 20 years, J. Glaciol., 64, 1–11, 2017. a Khan, S. A., Aschwanden, A., Bjørk, A. A., Wahr, J., Kjeldsen, K. K., and Kjær, K. 
H.: Greenland ice sheet mass balance: a review, Rep. Prog. Phys., 78, 046801, https://doi.org/10.1088/0034-4885/78/4/046801, 2015. a, b, c, d Krabill, W.: Greenland Ice Sheet: High-Elevation Balance and Peripheral Thinning, Science, 289, 428–430, https://doi.org/10.1126/science.289.5478.428, 2000. a Lea, J. M., Mair, D. W. F., Nick, F. M., Rea, B. R., van As, D., Morlighem, M., Nienow, P. W., and Weidick, A.: Fluctuations of a Greenlandic tidewater glacier driven by changes in atmospheric forcing: observations and modelling of Kangiata Nunaata Sermia, 1859–present, The Cryosphere, 8, 2031–2045, https://doi.org/10.5194/tc-8-2031-2014, 2014. a Lloyd, J., Moros, M., Perner, K., Telford, R. J., Kuijpers, A., Jansen, E., and McCarthy, D.: A 100 yr record of ocean temperature control on the stability of Jakobshavn Isbrae, West Greenland, Geology, 39, 867–870, https://doi.org/10.1130/G32076.1, 2011. a, b Long, A. J., Roberts, D. H., and Rasch, M.: New observations on the relative sea level and deglacial history of Greenland from Innaarsuit, Disko Bugt, Quaternary Res., 60, 162–171, https://doi.org/10.1016/S0033-5894(03)00085-1, 2003. a Luckman, A. and Murray, T.: Seasonal variation in velocity before retreat of Jakobshavn Isbræ, Greenland, Geophys. Res. Lett., 32, 1–4, https://doi.org/10.1029/2005GL022519, 2005. a Lüthi, M., Funk, M., Iken, A., Gogineni, S., and Truffer, M.: Mechanisms of fast flow in Jakobshavn Isbrae, West Greenland: Part III. Measurements of ice deformation, temperature and cross-borehole conductivityin boreholes to the bedrock, J. Glaciol., 48, 369–385, https://doi.org/10.3189/172756502781831322, 2002. a, b Moon, T., Joughin, I., Smith, B., and Howat, I.: 21st-Century Evolution of Greenland Outlet Glacier Velocities, Science, 336, 576–578, https://doi.org/10.1126/science.1219985, 2012. a Morlighem, M., Rignot, E., Mouginot, J., Seroussi, H., and Larour, E.: Deeply incised submarine glacial valleys beneath the Greenland ice sheet, Nat. 
Geosci., 7, 18–22, https://doi.org/10.1038/ngeo2167, 2014. a, b, c Morlighem, M., Bondzio, J., Seroussi, H., Rignot, E., Larour, E., Humbert, A., and Rebuffi, S.: Modeling of Store Gletscher's calving dynamics, West Greenland, in response to ocean thermal forcing, Geophys. Res. Lett., 43, 2659–2666, 2016. a Motyka, R. J., Hunter, L., Echelmeyer, K. A., and Connor, C.: Submarine melting at the terminus of a temperate tidewater glacier, LeConte Glacier, Alaska, USA, Ann. Glaciol., 36, 57–65, https://doi.org/10.3189/172756403781816374, 2003. a Motyka, R. J., Truffer, M., Fahnestock, M., Mortensen, J., Rysgaard, S., and Howat, I.: Submarine melting of the 1985 Jakobshavn Isbrae floating tongue and the triggering of the current retreat, J. Geophys. Res.-Earth, 116, 1–17, https://doi.org/10.1029/2009JF001632, 2011. a, b, c, d Muresan, I. S., Khan, S. A., Aschwanden, A., Khroulev, C., Van Dam, T., Bamber, J., van den Broeke, M. R., Wouters, B., Kuipers Munneke, P., and Kjær, K. H.: Modelled glacier dynamics over the last quarter of a century at Jakobshavn Isbræ, The Cryosphere, 10, 597–611, https://doi.org/10.5194/tc-10-597-2016, 2016. a, b, c Nick, F. M., Vieli, A., Howat, I. M., and Joughin, I.: Large-scale changes in Greenland outlet glacier dynamics triggered at the terminus, Nat. Geosci., 2, 110–114, https://doi.org/10.1038/ngeo394, 2009. a, b, c Nick, F. M., Van Der Veen, C. J., Vieli, A., and Benn, D. I.: A physically based calving model applied to marine outlet glaciers and implications for the glacier dynamics, J. Glaciol., 56, 781–794, https://doi.org/10.3189/002214310794457344, 2010. a, b, c, d, e, f, g, h, i Nick, F. M., Vieli, A., Andersen, M. L., Joughin, I., Payne, A., Edwards, T. L., Pattyn, F., and van de Wal, R. S. W.: Future sea-level rise from Greenland's main outlet glaciers in a warming climate, Nature, 497, 235–238, https://doi.org/10.1038/nature12068, 2013. 
a, b, c, d, e Nye, J.: The distribution of stress and velocity in glaciers and ice-sheets, P. Roy. Soc. Lond. A Mat., 239, 123–133, https://doi.org/10.1098/rspa.1957.0026, 1957. a Nye, J. F.: The Response of Glaciers and Ice-Sheets to Seasonal and Climatic Changes, P. Roy. Soc. Lond. A Mat., 256, 559–584, 1960. a Pattyn, F., Schoof, C., Perichon, L., Hindmarsh, R. C. A., Bueler, E., de Fleurian, B., Durand, G., Gagliardini, O., Gladstone, R., Goldberg, D., Gudmundsson, G. H., Huybrechts, P., Lee, V., Nick, F. M., Payne, A. J., Pollard, D., Rybak, O., Saito, F., and Vieli, A.: Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP, The Cryosphere, 6, 573–588, https://doi.org/10.5194/tc-6-573-2012, 2012. a Pollard, D., Deconto, R. M., and Alley, R. B.: Potential Antarctic Ice Sheet retreat driven by hydrofracturing and ice cliff failure, Earth Planet Sc. Lett., 412, 112–121, https://doi.org/10.1016/j.epsl.2014.12.035, 2015. a, b Reeh, N., Thomsen, H. H., Higgins, A. K., and Weidick, A.: Sea ice and the stability of north and northeast Greenland floating glaciers, Ann. Glaciol., 33, 474–480, https://doi.org/10.3189/172756401781818554, 2001. a Rignot, E. and Kanagaratnam, P.: Changes in the Velocity Structure of the Greenland Ice Sheet, Science, 311, 986–990, https://doi.org/10.1126/science.1121381, 2006. a, b, c Rignot, E. and Mouginot, J.: Ice flow in Greenland for the International Polar Year 2008–2009, Geophys. Res. Lett., 39, 1–7, https://doi.org/10.1029/2012GL051634, 2012. a, b Rignot, E., Velicogna, I., Van Den Broeke, M. R., Monaghan, A., and Lenaerts, J.: Acceleration of the contribution of the Greenland and Antarctic ice sheets to sea level rise, Geophys. Res. Lett., 38, 1–5, https://doi.org/10.1029/2011GL046583, 2011. a Robel, A. A.: Thinning sea ice weakens buttressing force of iceberg mélange and promotes calving, Nat. Commun., 8, 14596, https://doi.org/10.1038/ncomms14596, 2017. 
a Schoof, C.: Ice sheet grounding line dynamics: Steady states, stability, and hysteresis, J. Geophys. Res.-Earth, 112, 1–19, https://doi.org/10.1029/2006JF000664, 2007. a, b, c, d Schoof, C., Davis, A. D., and Popa, T. V.: Boundary layer models for calving marine outlet glaciers, The Cryosphere, 11, 2283–2303, https://doi.org/10.5194/tc-11-2283-2017, 2017. a Sciascia, R., Straneo, F., Cenedese, C., and Heimbach, P.: Seasonal variability of submarine melt rate and circulation in an East Greenland fjord, J. Geophys. Res.-Oceans, 118, 2492–2506, https://doi.org/10.1002/jgrc.20142, 2013. a Shapero, D. R., Joughin, I. R., Poinar, K., Morlighem, M., and Gillet-Chaulet, F.: Basal resistance for three of the largest Greenland outlet glaciers, J. Geophys. Res.-Earth, 121, 168–180, https://doi.org/10.1002/2015JF003643, 2016. a, b Small, D., Smedley, R. K., Chiverrell, R. C., Scourse, J. D., Ó Cofaigh, C., Duller, G. A., McCarron, S., Burke, M. J., Evans, D. J., Fabel, D., Gheorghiu, D. M., Thomas, G. S. P., Xu, S., and Clark, C. D.: Trough geometry was a greater influence than climate-ocean forcing in regulating retreat of the marine-based Irish-Sea Ice Stream, Geol. Soc. Am. Bull., https://doi.org/10.1130/B31852.1, 2018. a Sohn, H., Jezek, K. C., and van der Veen, C. J.: Jakobshavn Glacier, west Greenland: 30 years of spaceborne observations, Geophys. Res. Lett., 25, 2699–2702, https://doi.org/10.1029/98GL01973, 1998. a Sole, A. J., Mair, D. W. F., Nienow, P. W., Bartholomew, I. D., King, M. A., Burke, M. J., and Joughin, I.: Seasonal speedup of a Greenland marine-terminating outlet glacier forced by surface melt-induced changes in subglacial hydrology, J. Geophys. Res.-Earth, 116, 1–11, https://doi.org/10.1029/2010JF001948, 2011. 
a Steiger, N., Nisancioglu, K., Åkesson, H., de Fleurian, B., and Nick, F.: Flowline model output for Jakobshavn Isbræ from 1850–2014 with different changes in forcing parameters [Data set], Norstore, available at: https://doi.org/10.11582/2018.00018, 2018. Straneo, F. and Cenedese, C.: The Dynamics of Greenland's Glacial Fjords and Their Role in Climate, Rev. Mar. Sci., 7, 89–112, https://doi.org/10.1146/annurev-marine-010213-135133, 2015. a Straneo, F. and Heimbach, P.: North Atlantic warming and the retreat of Greenland's outlet glaciers, Nature, 504, 36–43, https://doi.org/10.1038/nature12854, 2013. a, b Straneo, F., Heimbach, P., Sergienko, O., Hamilton, G., Catania, G., Griffies, S., Hallberg, R., Jenkins, A., Joughin, I., Motyka, R., Pfeffer, W. T., Price, S. F., Rignot, E., Scambos, T., Truffer, M., and Vieli, A.: Challenges to understanding the dynamic response of Greenland's marine terminating glaciers to oc eanic and atmospheric forcing, B. Am. Meteol. Soc., 94, 1131–1144, https://doi.org/10.1175/BAMS-D-12-00100.1, 2013. a Streuff, K., Cofaigh, C. Ó., Hogan, K., Jennings, A., Lloyd, J. M., Noormets, R., Nielsen, T., Kuijpers, A., Dowdeswell, J. A., and Weinrebe, W.: Seafloor geomorphology and glacimarine sedimentation associated with fast-flowing ice sheet outlet glaciers in Disko Bay, West Greenland, Quaternary Sci. Rev., 169, 206–230, 2017. a Tedstone, A. J., Nienow, P. W., Gourmelen, N., Dehecq, A., Goldberg, D., and Hanna, E.: Decadal slowdown of a land-terminating sector of the Greenland Ice Sheet despite warming, Nature, 526, 692–695, https://doi.org/10.1038/nature15722, 2015. a Thomas, R. H., Abdalati, W., Frederick, E., Krabill, W. B., Manizade, S., and Steffen, K.: Investigation of surface melting and dynamic thinning on Jakobshavn Isbræ, Greenland, J. Glaciol., 49, 231–239, https://doi.org/10.3189/172756503781830764, 2003. a, b Todd, J. 
and Christoffersen, P.: Are seasonal calving dynamics forced by buttressing from ice mélange or undercutting by melting? Outcomes from full-Stokes simulations of Store Glacier, West Greenland, The Cryosphere, 8, 2353–2365, https://doi.org/10.5194/tc-8-2353-2014, 2014. a Todd, J., Christoffersen, P., Zwinger, T., Råback, P., Chauché, N., Benn, D., Luckman, A., Ryan, J., Toberg, N., Slater, D., and Hubbard, A.: A Full-Stokes 3-D Calving Model Applied to a Large Greenlandic Glacier, J. Geophys. Res.-Earth, 123, 410–423, 2018. a van der Veen, C. J.: Tidewater calving, J. Glaciol., 42, 375–385, https://doi.org/10.1179/102453311X13127324303399, 1996. a van der Veen, C. J.: Fracture propagation as means of rapidly transferring surface meltwater to the base of glaciers, Geophys. Res. Lett., 34, 1–5, https://doi.org/10.1029/2006GL028385, 2007. a van der Veen, C. J. and Whillans, I. M.: Model experiments on the evolution and stability of ice streams, Ann. Glaciol., 23, 129–137, 1996. a van der Veen, C. J., Plummer, J. C., and Stearns, L. A.: Controls on the recent speed-up of Jakobshavn Isbræ, West Greenland, J. Glaciol., 57, 770–782, https://doi.org/10.3189/002214311797409776, 2011.  a Vieli, A. and Nick, F. M.: Understanding and Modelling Rapid Dynamic Changes of Tidewater Outlet Glaciers: Issues and Implications, Surv. Geophys., 32, 437–458, https://doi.org/10.1007/s10712-011-9132-4, 2011. a, b Vieli, A. and Payne, A. J.: Assessing the ability of numerical ice sheet models to simulate grounding line migration, J. Geophys. Res.-Earth, 110, 1–18, https://doi.org/10.1029/2004JF000202, 2005. a, b, c Vieli, A., Funk, M., and Blatter, H.: Flow dynamics of tidewaterg laciers: A numerical modelling approach, J. Glaciol., 47, 595–606, https://doi.org/10.3189/172756501781831747, 2001. a Walter, J. I., Box, J. E., Tulaczyk, S., Brodsky, E. E., Howat, I. M., Ahn, Y., and Brown, A.: Oceanic mechanical forcing of a marine-terminating greenland glacier, Ann. 
Glaciol., 53, 181–192, https://doi.org/10.3189/2012AoG60A083, 2012. a, b Warren, C. R.: Terminal environment, topographic control and fluctuations of West Greenland glaciers, Boreas, 20, 1–15, 1991. a Weertman, J.: Stability of the junction of an ice sheet and an ice shelf, J. Glaciol., 13, 3–11, 1974. a Weidick, A. and Bennike, O.: Quaternary glaciation history and glaciology of Jakobshavn Isbrae and the Disko Bugt region, West Greenland: a review, Geol. Surv. Den. Greenl., 14, 1–78, 2007. a Xu, Y., Rignot, E., Menemenlis, D., and Koppes, M.: Numerical experiments on subaqueous melting of greenland tidewater glaciers in response to ocean warming and enhanced subglacial discharge, Ann. Glaciol., 53, 229–234, https://doi.org/10.3189/2012AoG60A139, 2012. a Xu, Y., Rignot, E., Fenty, I., Menemenlis, D., and Flexas, M. M.: Subaqueous melting of Store Glacier, west Greenland from three-dimensional, high-resolution numerical modeling and ocean observations, Geophys. Res. Lett., 40, 4648–4653, https://doi.org/10.1002/grl.50825, 2013. a
# Rings

Recall from the Groups page that a group is a set $G$ with a (closed) binary operation $\cdot$, denoted $(G, \cdot)$, such that:

1. The operation $\cdot$ is associative, that is, for all $a, b, c \in G$ we have that $a \cdot (b \cdot c) = (a \cdot b) \cdot c$.
2. There exists an element $e \in G$ such that for all $a \in G$ we have that $a \cdot e = a$ and $e \cdot a = a$, to which we call $e$ the unique identity of $\cdot$ in $G$.
3. For each $a \in G$ there exists an $a^{-1} \in G$ such that $a \cdot a^{-1} = e$ and $a^{-1} \cdot a = e$, to which we call $a^{-1}$ the inverse of $a$.

We will now describe a new type of structure called a ring.

Definition: If $+$ and $\cdot$ are (closed) binary operations on a (nonempty) set $R$, then $R$ is called a **Ring** under $+$ and $\cdot$, denoted $(R, +, \cdot)$, if $R$ under $+$ and $\cdot$ satisfies the following properties:

1. For all $a, b, c \in R$, $a + (b + c) = (a + b) + c$ (associativity of elements in $R$ under $+$).
2. There exists a $0 \in R$ such that for all $a \in R$ we have that $a + 0 = a$ and $0 + a = a$ (the existence of an identity element $0$ of $R$ under $+$).
3. For all $a \in R$ there exists a $-a \in R$ such that $a + (-a) = 0$ and $(-a) + a = 0$ (the existence of inverses for each element in $R$ under $+$).
4. For all $a, b \in R$ we have that $a + b = b + a$ (commutativity of elements in $R$ under $+$).
5. For all $a, b, c \in R$, $a \cdot (b \cdot c) = (a \cdot b) \cdot c$ (associativity of elements in $R$ under $\cdot$).
6. There exists a $1 \in R$ such that for all $a \in R$ we have that $a \cdot 1 = a$ and $1 \cdot a = a$ (the existence of an identity element $1$ of $R$ under $\cdot$).
7. For all $a, b, c \in R$ we have that $a \cdot (b + c) = (a \cdot b) + (a \cdot c)$ and $(a + b) \cdot c = (a \cdot c) + (b \cdot c)$ (distributivity of $\cdot$ over $+$).

The operation $+$ is commonly referred to as addition, while the operation $\cdot$ is commonly referred to as multiplication.
In the definition above, instead of using $e$, we use $0$ to denote the identity element of the operation $+$, commonly referred to as the Additive Identity, and we use $1$ to denote the identity element of the operation $\cdot$, commonly referred to as the Multiplicative Identity. Sometimes the definition of a ring omits axiom (6). In this case, the term "Ring with Unit" is used to describe our definition above, and the term "Ring Without Unit" describes axioms (1)–(5) and (7) above.

## Example 1

One example of a ring is the set of rational numbers $\mathbb{Q}$ with the operations $+$ of standard addition and $\cdot$ of standard multiplication. Let's verify this.

It's easy to verify that $\mathbb{Q}$ is closed under $+$ and that $+$ is associative. Furthermore, the additive identity is $0 \in \mathbb{Q}$. For each $x \in \mathbb{Q}$ we have that $x = \frac{a}{b}$ for $a, b \in \mathbb{Z}$ and $b \neq 0$, and the additive inverse of $x$ is $-x = -\frac{a}{b} \in \mathbb{Q}$ since $\displaystyle{x + (-x) = \frac{a}{b} + \left ( -\frac{a}{b} \right ) = 0}$. Furthermore, it is obvious that the addition of rational numbers is commutative.

If $x, y \in \mathbb{Q}$ where $\displaystyle{x = \frac{a}{b}}$ and $\displaystyle{y = \frac{c}{d}}$ with $a, b, c, d \in \mathbb{Z}$ and $b, d \neq 0$, then $\mathbb{Q}$ is closed under $\cdot$ since $ac, bd \in \mathbb{Z}$ and $bd \neq 0$:

(1)
\begin{align} \quad x \cdot y = \frac{a}{b} \cdot \frac{c}{d} = \frac{ac}{bd} \in \mathbb{Q} \end{align}

It can be easily verified that $\cdot$ is an associative operation on $\mathbb{Q}$. The multiplicative identity is the rational number $\displaystyle{1 = \frac{1}{1}}$.

Lastly, it's not hard to show that the distributivity property holds. Let $x = \frac{a}{b}, y = \frac{c}{d}, z = \frac{e}{f} \in \mathbb{Q}$.
For left distributivity we have:

(2)
\begin{align} \quad x \cdot (y + z) = \frac{a}{b} \cdot \left ( \frac{c}{d} + \frac{e}{f} \right ) = \frac{ac}{bd} + \frac{ae}{bf} = x \cdot y + x \cdot z \end{align}

For right distributivity we have:

(3)
\begin{align} \quad (x + y) \cdot z = \left ( \frac{a}{b} + \frac{c}{d} \right ) \cdot \frac{e}{f} = \frac{ae}{bf} + \frac{ce}{df} = x \cdot z + y \cdot z \end{align}

Therefore $(\mathbb{Q}, +, \cdot)$ is a ring.

## Example 2

It is not hard to check that $(\mathbb{C}, +, \cdot)$, $(\mathbb{R}, +, \cdot)$, and $(\mathbb{Z}, +, \cdot)$ are also rings.
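The ring axioms above can be spot-checked mechanically. The sketch below (not part of the original article) uses Python's exact-rational `fractions.Fraction` type to test each axiom on a small sample of rationals; a finite check like this is only a sanity test of the identities, not a proof.

```python
from fractions import Fraction
from itertools import product

# A small sample of rationals on which to test the ring axioms of Q.
sample = [Fraction(a, b) for a, b in [(1, 2), (-3, 4), (5, 1), (0, 1), (-7, 3)]]

for x, y, z in product(sample, repeat=3):
    assert x + (y + z) == (x + y) + z        # axiom 1: associativity of +
    assert x + 0 == x and 0 + x == x         # axiom 2: additive identity
    assert x + (-x) == 0                     # axiom 3: additive inverses
    assert x + y == y + x                    # axiom 4: commutativity of +
    assert x * (y * z) == (x * y) * z        # axiom 5: associativity of ·
    assert x * 1 == x and 1 * x == x         # axiom 6: multiplicative identity
    assert x * (y + z) == x * y + x * z      # axiom 7: left distributivity
    assert (x + y) * z == x * z + y * z      # axiom 7: right distributivity

print("all ring axioms hold on the sample")
```

Because `Fraction` does exact arithmetic (no floating-point rounding), the equality tests here correspond directly to the identities in the definition.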
Volume 23, Issue 3, pp 233–249

# Spherical combustion clouds in explosions

## Abstract

This study explores the properties of spherical combustion clouds in explosions. Two cases are investigated: (1) detonation of a TNT charge and combustion of its detonation products with air, and (2) shock dispersion of aluminum powder and its combustion with air. The evolution of the blast wave and ensuing combustion cloud dynamics are studied via numerical simulations with our adaptive mesh refinement combustion code. The code solves the multi-phase conservation laws for a dilute heterogeneous continuum as formulated by Nigmatulin. Single-phase combustion (e.g., TNT with air) is modeled in the fast-chemistry limit. Two-phase combustion (e.g., Al powder with air) uses an induction time model based on Arrhenius fits to Boiko's shock tube data, along with an ignition temperature criterion based on fits to Gurevich's data, and an ignition probability model that accounts for multi-particle effects on cloud ignition. Equations of state are based on polynomial fits to thermodynamic calculations with the Cheetah code, assuming frozen reactants and equilibrium products. Adaptive mesh refinement is used to resolve thin reaction zones and capture the energy-bearing scales of turbulence on the computational mesh (ILES approach). Taking advantage of the symmetry of the problem, azimuthal averaging was used to extract the mean and rms fluctuations from the numerical solution, including thermodynamic profiles, kinematic profiles, and reaction-zone profiles across the combustion cloud. Fuel consumption was limited to $\sim 60$–$70\,\%$, due to the limited amount of air a spherical combustion cloud can entrain before the turbulent velocity field decays away.
Turbulent kinetic energy spectra of the solution were found to have both rotational and dilatational components, due to compressibility effects. The dilatational component was typically about 1 % of the rotational component; both seemed to preserve their spectra as they decayed. Kinetic energy of the blast wave decayed due to the pressure field. Turbulent kinetic energy of the combustion cloud decayed due to enstrophy $\overline{\omega^{2}}$ and dilatation $\overline{\Delta^{2}}$.

Communicated by L. Bauwens.

This paper is based on work that was presented at the 23rd International Colloquium on the Dynamics of Explosions and Reactive Systems, Irvine, California, USA, July 24–29, 2011.
# How to factor $x^4-7x^2-18$

I am not sure how I would factor this. The $x^4$ and $x^2$ are really throwing me off. Can someone explain how I would factor this?

Let $y=x^2$, then factor the expression in $y$. – Joel Reyes Noche Apr 19 '13 at 4:33

I've heard this method colloquially referred to as 'chunking'. It works in other situations which might throw you off, e.g. $e^{2x}+ae^x+b$ becomes $y^2+ay+b$ with $y=e^x$. Solving the quadratic equation in $y$ and substituting back in is far easier than any alternative. – Ian Coley Apr 19 '13 at 4:35

Even if you don't recognize immediately that you can substitute $y=x^2$, you can work to that as follows. Note that the polynomial is even in $x$: replace $x$ with $-x$ and the polynomial stays the same. So, if $a$ is a root, then $-a$ is a root. So, it factors to $(x-a)(x+a)(x-b)(x+b)$ for some complex $a$ and $b$. Collect related factors to get $(x^2 - a^2)(x^2 - b^2)$. Give $a^2$ and $b^2$ simpler names, say $c$ and $d$, where these are possibly complex. It should then be clear. – Eric Jablow Apr 19 '13 at 5:23

---

Let $y=x^2$. You then get $y^2-7y-18$. Can you factor it now?

---

Since all of the powers of $x$ in this polynomial are even ($18$ counts as $18 \cdot x^0$), you would make a substitution of $t = x^2$. Since $x^4 = (x^2)^2$, you can write your polynomial as $t^2 - 7t - 18$. How would you factor that?

---

Solution 1.
\begin{eqnarray*} x^4-7x^2-18&=&(x^4+2x^2)-(9x^2+18)\\ &=&x^2(x^2+2)-9(x^2+2)\\ &=&(x^2+2)(x^2-9)\\ &=&(x^2+2)(x-3)(x+3) \end{eqnarray*}

Solution 2.
\begin{eqnarray*} x^4-7x^2-18&=&(x^4-9x^2)+(2x^2-18)\\ &=&x^2(x^2-9)+2(x^2-9)\\ &=&(x^2-9)(x^2+2)\\ &=&(x-3)(x+3)(x^2+2) \end{eqnarray*}

Solution 3.
\begin{eqnarray*} x^4-7x^2-18&=&(x^4-81)-(7x^2-63)\\ &=&(x^2+9)(x^2-9)-7(x^2-9)\\ &=&(x^2-9)(x^2+9-7)\\ &=&(x-3)(x+3)(x^2+2) \end{eqnarray*}

---

Hint: For this one, note that $x$ only appears as an even power. Substitute $y$ for $x^2$ and see if you can do it.
Hint: If the $x^4$ and $x^2$ are confusing, a very useful trick is to replace them. More precisely, if we let "$y$" mean $x^2$, then the polynomial is $$y^2-7y-18.$$ Can you factor this? After you have done that, you can replace $y$ with $x^2$ and keep going.

---

As most other people have commented, the most sensible thing to do is probably make the substitution $y=x^2$. In this way, $$x^4-7x^2-18\tag{1}$$ becomes $$y^2-7y-18\tag{2}$$ We can then factor $(2)$ as follows: $y^2-7y-18 = (y-9)(y+2)$. Since $y=x^2$, we see that $(1)$ factors as follows: $$x^4-7x^2-18=(x^2-9)(x^2+2)=(x-3)(x+3)(x^2+2).$$ This is probably the most straightforward, easy way of going about it.
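The substitution trick the answers describe can also be carried out programmatically. The sketch below (mine, not from any answer) finds the integer roots of the quadratic in $y$ by exhaustive search and then substitutes $x^2$ back in; the helper `integer_roots` is a hypothetical name introduced for illustration.

```python
# Set y = x^2, factor y^2 - 7y - 18 by searching for integer roots,
# then substitute x^2 back in.

def integer_roots(b, c):
    """Integer roots of y^2 + b*y + c = 0, by exhaustive search.
    Any integer root divides c, so |b| + |c| + 1 is a generous bound."""
    bound = abs(b) + abs(c) + 1
    return [r for r in range(-bound, bound + 1) if r * r + b * r + c == 0]

# y^2 - 7y - 18 has b = -7, c = -18
roots = integer_roots(-7, -18)
print(roots)  # [-2, 9], i.e. y^2 - 7y - 18 = (y + 2)(y - 9)

# Substituting y = x^2 gives the factors (x^2 + 2) and (x^2 - 9);
# x^2 - 9 splits further as (x - 3)(x + 3) since 9 is a perfect square.
for r in roots:
    if r > 0 and int(r ** 0.5) ** 2 == r:
        s = int(r ** 0.5)
        print(f"(x - {s})(x + {s})")   # from the factor x^2 - {r}
    else:
        print(f"(x^2 - ({r}))")        # irreducible over the reals when r < 0
```

For this polynomial the search recovers $y = 9$ and $y = -2$, reproducing the factorization $(x-3)(x+3)(x^2+2)$ given in the answers.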
Statnote 34: the Kolmogorov-Smirnov (KS) test Anthony Hilton, Richard Armstrong Research output: Contribution to specialist publication › Article Abstract The Kolmogorov-Smirnov (KS) test is a non-parametric test which can be used in two different circumstances. First, it can be used as an alternative to chi-square (χ²) as a 'goodness-of-fit' test to compare whether a given 'observed' sample of observations conforms to an 'expected' distribution of results (KS, one-sample test). An example of the use of the one-sample test to determine whether a sample of observations was normally distributed was described previously. Second, it can be used as an alternative to the Mann-Whitney test to compare two independent samples of observations (KS, two-sample test). Hence, this statnote describes the use of the KS test with reference to two scenarios: (1) to compare the observed frequency (Fo) of soil samples containing cysts of the protozoan Naegleria collected each month for a year with an expected equal frequency (Fe) across months (one-sample test), and (2) to compare the abundance of bacteria on cloths and sponges sampled in a domestic kitchen environment (two-sample test).
Original language: English
Pages: 28-30
Number of pages: 3
Volume: 14
Specialist publication: Microbiologist
Publication status: Published - Sep 2013
Keywords
• statistics
• microbiology
• Kolmogorov-Smirnov test
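The two-sample KS statistic discussed in the abstract is simply the largest vertical gap between the two empirical cumulative distribution functions. A minimal pure-Python sketch (function name and sample data are illustrative, not taken from the article):

```python
def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic:
    D = sup_x |F_a(x) - F_b(x)| over the two empirical CDFs.
    Since ECDFs are step functions, the supremum occurs at a sample point."""
    points = sorted(set(a) | set(b))
    d = 0.0
    for x in points:
        # Empirical CDF value = fraction of observations <= x.
        fa = sum(1 for v in a if v <= x) / len(a)
        fb = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(fa - fb))
    return d

# Identical samples give D = 0; fully separated samples give D = 1.
assert ks_two_sample([1, 2, 3], [1, 2, 3]) == 0.0
assert ks_two_sample([1, 2, 3], [10, 11, 12]) == 1.0
```

This O(n²) version is for clarity only; a production implementation (e.g. in a statistics library) would merge the two sorted samples in a single pass and also compute a p-value for D.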
https://cs.stackexchange.com/questions/75722/why-is-np-hard-not-necessarily-np/75724
# Why is NP-hard not necessarily NP? A problem X is NP-hard if every problem in NP can be reduced to X. But every problem in NP has a polynomial time verification algorithm, so then does that not mean that I can also verify X in polynomial time because every NP problem is reducible to X? Can someone please explain or give an example. Here is an abstract definition of the concepts NP-hard and NP-complete. If $A,B$ are two decision problems, say that $A \leq B$ if there is a polytime reduction from $A$ to $B$, that is, if there exists a polytime function $f$ such that $x \in A$ iff $f(x) \in B$. Then: \begin{align*} &\mathsf{NP\text{-}hard} = \{ B : A \leq B \text{ for all } A \in \mathsf{NP} \}, \\ &\mathsf{NP\text{-}complete} = \mathsf{NP\text{-}hard} \cap \mathsf{NP}. \end{align*} Here is a different example. Consider the universe of all subsets of $\mathbb{N}$, ordered under $A \leq B$ if $A \subseteq B$. Let $\mathsf{X}$ consist of all subsets of some fixed set $X$. Define $\mathsf{X\text{-}hard}$ and $\mathsf{X\text{-}complete}$ just as above. You can check that \begin{align*} &\mathsf{X\text{-}hard} = \{ S : S \supseteq X \}, \\ &\mathsf{X\text{-}complete} = \{ X \}. \end{align*} In a similar way, NP-hardness is a lower bound on the difficulty, whereas NP-completeness is both a lower bound and an upper bound. In contrast to the second example above, there are many NP-complete problems, all of them having the same level of difficulty (two problems $A,B$ have the same level of difficulty if $A \leq B \leq A$). Finally, here is a concrete problem which is NP-hard but not NP-complete: the halting problem. The halting problem is NP-hard. Let $A$ be any computable decision problem, say computed by a Turing machine $M$ which always halts. Let $M'$ be the Turing machine which simulates $M$, halts if $M$ accepts, and runs into an infinite loop if $M$ rejects. Define a polytime function $f$ by $f(x) = (\langle M' \rangle,x)$. 
Then $x \in A$ iff $M$ accepts $x$ iff $M'$ halts on $x$ iff $f(x) \in \mathsf{HALT}$. The halting problem is not in NP. The halting problem is not computable, and in particular not in NP. • Maybe you meant "NP-hardness is an upper-bound". If that's the case, ping me and I will remove this comment. – nbro May 23 '17 at 14:50 • @nbro No, NP-hardness is a lower bound on hardness. The problem is at least as difficult as all problems in NP. – Yuval Filmus May 23 '17 at 18:23 • Ok. Seen from another perspective. I probably should have read your answer more carefully. – nbro May 23 '17 at 18:58 But every problem in NP has a polynomial time verification algorithm, so then does that not mean that I can also verify X in polynomial time because every NP problem is reducible to X? No. If a problem $P$ is polynomial-time reducible to $X$, it means that a solution for $X$ can be used to solve problem $P$ (in no more time than is required to find the solution for $X$), and not necessarily that a solution for $P$ may be used to solve problem $X$ (unless $X$ is also polynomial-time reducible to $P$). Provided, clearly, that those problems require at least a polynomial-time algorithm to find their solution. In other words, $X$ is at least as hard as $P$, that is, we have an upper bound on the hardness of $P$. You need to think about the reduction as a mapping: whenever you have a solution for $X$, you also have a solution for $P$. A more accurate definition of what a reduction is can be found here. • It's a bit misleading when you say that a solution for X can be used to solve problem P without further time, since (1) the reduction takes time, and (2) the reduction could blow up the instance (by a polynomial amount). – Yuval Filmus May 23 '17 at 18:25 • @YuvalFilmus I'm actually saying in parenthesis: "in no more time than is required to find the solution for X". – nbro May 23 '17 at 23:26 An NP-hard problem can be beyond NP. 
The polynomial-time reductions from problems in NP to your X do not necessarily have polynomial-time inverses. If the inverse direction is harder, then verification is harder. An NP-complete problem, on the other hand, is one that is NP-hard and itself in NP. For these, of course, there exist polynomial-time verifications. Can someone please explain or give an example? I'll give an example. The Clique Problem: A clique in an undirected graph $$G=(V,E)$$ is a subset $$V' \subseteq V$$ of vertices, each pair of which is connected by an edge in $$E$$. In other words, a clique is a complete subgraph of $$G$$. The size of a clique is the number of vertices it contains. The clique optimization problem is the problem of finding a clique of maximum size in a graph. The clique decision problem is simply whether or not a clique of size $$k$$ (given as input) exists in the graph. The clique decision problem is NP-complete, while the clique optimization problem is NP-hard. We can reduce from the clique decision problem to the clique optimization problem in polynomial time. This is easy: we simply solve the optimization problem, then check that the size of the maximum clique is greater than or equal to $$k$$. This works because of a simple relation: any clique of size $$n$$ contains a clique of size $$n-1$$. We cannot verify the clique optimization problem in polynomial time. If the optimization problem is given a certificate, we can verify that the certificate is a clique in polynomial time (similar to the decision problem), but we cannot verify that the certificate is a maximum clique in polynomial time. Thus it is NP-hard, but not in NP. Generally speaking, a lot of NP-hard problems are optimization problems, because it is difficult to verify that a certificate for the problem is in fact optimal. It is actually enough if one NP-complete problem X can be reduced in polynomial time to a problem Y to make Y NP-hard. 
(This is because every problem in NP can be reduced to any NP-complete problem.) Most problems can be viewed as special cases of a harder problem. If problem A is a special case of a harder problem B, then there is a reduction from problem A to problem B that isn't just polynomial time, but zero time. Think about the consequence of what you are saying: If Y had to be in NP, then it would be impossible to have an NP-complete problem that is a special case of a more general problem that is not in NP.
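The asymmetry in the clique answer above is easy to see in code: checking that a given vertex set is a clique of size k is a quick polynomial-time loop, whereas no polynomial-time check is known for "this clique is maximum". A sketch of the easy half (the edge-set graph encoding is my own choice):

```python
from itertools import combinations

def is_clique_of_size(edges, vertices, k):
    """Polynomial-time certificate check for the clique decision problem:
    `vertices` must contain k distinct members, every pair joined by an edge."""
    vs = set(vertices)
    if len(vs) != k:
        return False
    # Check all k*(k-1)/2 pairs; edges may be stored in either orientation.
    return all((u, v) in edges or (v, u) in edges
               for u, v in combinations(vs, 2))

# Triangle {1,2,3} plus a pendant vertex 4: {1,2,3} is a 3-clique, {1,2,4} is not.
E = {(1, 2), (2, 3), (1, 3), (3, 4)}
assert is_clique_of_size(E, [1, 2, 3], 3)
assert not is_clique_of_size(E, [1, 2, 4], 3)
```

This loop runs in O(k²) time, which is what puts the decision problem in NP; verifying maximality would require ruling out every larger clique, and no polynomial-time way to do that is known.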
https://cs.stackexchange.com/questions/106345/counting-the-number-of-subsets-with-positive-sum
# Counting the number of subsets with positive sum I have some constant vector $$\mathbf{s}$$ on $$n$$ dimensions, where every element of $$\mathbf{s}$$ is a real number, and I would like to multiply it by every possible $$n$$-dimensional binary vector $$\mathbf{v}_j\ (0\leq j < 2^n)$$. I only care about whether $$\mathbf{s}\cdot \mathbf{v}_j > 0$$. I would like to count all of the times this condition holds. Question: As the naive algorithm has complexity $$O(2^n)$$, it blows up fast. What are my options for optimizing this for large $$n$$? • So, you basically want to know how many subsets of $\mathbf{s}$ sum to a positive value? – ryan Apr 1 at 22:53 • You want to determine the expected value of a linear threshold function (LTF). There is some literature on that. – Yuval Filmus Apr 2 at 4:20 • You can use a meet-in-the-middle approach to get an $O(2^{n/2})$ algorithm: calculate all sums of each half of the vector, then run a two pointer algorithm. – Yuval Filmus Apr 2 at 4:22 • Additionally, if the real numbers can be expressed as rationals with some small common denominator, expressing it as a knapsack problem will help. If it is acceptable to round to such numbers (e.g., to three digits after the decimal point), the same applies. – Gassa Apr 2 at 11:03 • @ryan Thanks for reformulating -- that is correct. – Jake Brukhman Apr 2 at 14:53 Suppose that you could solve this in $$T(n)$$. Given a list of positive integers $$a_1,\ldots,a_n$$ and a target $$T$$, consider the two instances $$a_1,\ldots,a_n,-T$$ and $$a_1,\ldots,a_n,-T+1$$. Denoting by $$N(\cdots)$$ the number of positive sums, we get: • $$N(a_1,\ldots,a_n,-T)$$ is the number of subsets of $$\{a_1,\ldots,a_n\}$$ whose sum is larger than $$T$$. • $$N(a_1,\ldots,a_n,-T+1)$$ is the number of subsets of $$\{a_1,\ldots,a_n\}$$ whose sum is at least $$T$$. By comparing these two numbers, you can solve SUBSET-SUM in time $$T(n+1)$$. There is a simple $$O(2^{n/2})$$ algorithm, which proceeds as follows. 
We break the array $$s_1,\ldots,s_n$$ into two equal halves. We compute an ordered list $$A_1,\ldots,A_{2^{n/2}}$$ of all sums of the first half, and an ordered list $$B_1,\ldots,B_{2^{n/2}}$$ of all sums of the second half. This takes time $$O(2^{n/2})$$ if done carefully (by iterative merging). We put a pointer $$j$$ at $$B_{2^{n/2}}$$, and decrease it until $$A_1 + B_j \le 0$$. The value of $$j$$ then determines the number of pairs $$(1,j')$$ with $$j' > j$$, all of which satisfy $$A_1 + B_{j'} > 0$$. Then we do the same for $$A_2$$ — note that we can start the scan at the current value of $$j$$; and so on. This phase also takes $$O(2^{n/2})$$.
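The meet-in-the-middle idea above can be sketched as follows. For clarity this version sorts one half's subset sums with a library sort and uses binary search instead of the two-pointer scan, so it runs in O(n·2^(n/2)) rather than O(2^(n/2)); the counting logic is the same:

```python
from bisect import bisect_right
from itertools import combinations

def subset_sums(vals):
    # All 2^len(vals) subset sums, built incrementally.
    sums = [0.0]
    for v in vals:
        sums = sums + [s + v for s in sums]
    return sums

def count_positive_subsets(s):
    """Count subsets of s (empty subset included, though its sum 0 never
    counts) whose sum is strictly positive, via meet-in-the-middle."""
    half = len(s) // 2
    left = subset_sums(s[:half])
    right = sorted(subset_sums(s[half:]))
    # For each left sum x, count right sums y with x + y > 0, i.e. y > -x.
    return sum(len(right) - bisect_right(right, -x) for x in left)

def brute_force(s):
    # O(2^n) reference implementation for checking.
    return sum(1 for r in range(len(s) + 1)
               for c in combinations(s, r) if sum(c) > 0)

vals = [3.5, -1.25, 2.0, -4.0, 0.75]
assert count_positive_subsets(vals) == brute_force(vals)
```

Note the strict inequality is handled by `bisect_right`: it returns the index of the first element strictly greater than `-x`, which is exactly where the two-pointer scan in the answer would stop.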
https://math.libretexts.org/TextMaps/Applied_Mathematics/Book%3A_Introduction_to_the_Modeling_and_Analysis_of_Complex_Systems_(Sayama)/2%3A_Fundamentals_of_Modeling/2.5%3A_A_Historical_Perspective
# 2.5: A Historical Perspective

As the final section in this chapter, I would like to present some historical perspective of how people have been developing modeling methodologies over time, especially those for complex systems (Fig. 2.5). Humans have been creating descriptive models (diagrams, pictures, physical models, texts, etc.) and some conceptual rule-based models since ancient times. More quantitative modeling approaches arose as more advanced mathematical tools became available. In the descriptive modeling family, descriptive statistics is among such quantitative modeling approaches. In the rule-based modeling family, dynamical equations (e.g., differential equations, difference equations) began to be used to quantitatively formulate theories that had remained at conceptual levels before. During the second half of the 20th century, computational tools became available to researchers, which opened up a whole new area of computational modeling approaches for complex systems modeling. The first of this kind was cellular automata, a massive number of identical finite-state machines that are arranged in a regular grid structure and update their states dynamically according to their own and their neighbors' states. 
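The update scheme just described — identical finite-state cells on a grid, each reading its own state and its neighbors' — can be illustrated with a one-dimensional, two-state cellular automaton (the choice of rule 110 below is an arbitrary illustrative one, not a rule discussed in the text):

```python
def step(cells, rule):
    """One synchronous update of a 1-D binary cellular automaton.
    Each cell's next state is looked up from the 3-bit neighborhood
    (left, self, right) in the 8-bit rule table; the row wraps around."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Run a few steps of rule 110 from a single live cell.
row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    row = step(row, 110)
```

Every cell applies the same lookup table simultaneously — exactly the "identical finite-state machines on a regular grid" the text describes, in its simplest one-dimensional form.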
Cellular automata were developed by John von Neumann and Stanisław Ulam in the 1940s, initially as a theoretical medium to implement self-reproducing machines [11], but later they became a very popular modeling framework for simulating various interesting emergent behaviors and also for more serious scientific modeling of spatio-temporal dynamics [18]. Cellular automata are a special case of dynamical networks whose topologies are limited to regular grids and whose nodes are usually assumed to be homogeneous and identical. Dynamical networks formed the next wave of complex systems modeling in the 1970s and 1980s. Their inspiration came from artificial neural network research by Warren McCulloch and Walter Pitts [19] as well as by John Hopfield [20, 21], and also from theoretical gene regulatory network research by Stuart Kauffman [22]. In this modeling framework, the topologies of systems are no longer constrained to regular grids, and the components and their connections can be heterogeneous with different rules and weights. Therefore, dynamical networks include cellular automata as a special case within them. Dynamical networks have recently merged with another thread of research on topological analysis that originated in graph theory, statistical physics, social sciences, and computational science, to form a new interdisciplinary field of network science [23, 24, 25]. Finally, further generalization was achieved by removing the requirement of explicit network topologies from the models, which is now called agent-based modeling (ABM). In ABM, the only requirement is that the system is made of multiple discrete "agents" that interact with each other (and possibly with the environment), whether they are structured into a network or not. Therefore ABM includes network models and cellular automata as its special cases. The use of ABM became gradually popular during the 1980s, 1990s, and 2000s. 
One of the primary driving forces for it was the application of complex systems modeling to ecological, social, economic, and political processes, in fields like game theory and microeconomics. The surge of genetic algorithms and other population-based search/optimization algorithms in computer science also took place at about the same time, which also had synergistic effects on the rise of ABM. I must be clear that the historical overview presented above is my own personal view, and it hasn’t been rigorously evaluated or validated by any science historians (therefore this may not be a valid model!). But I hope that this perspective is useful in putting various modeling frameworks into a unified, chronological picture. The following chapters of this textbook roughly follow the historical path of the models illustrated in this perspective. Exercise 2.6 Do a quick online literature search to find a few scientific articles that develop or use mathematical/computational models. Read the articles to learn more about their models, and map them to the appropriate locations in Fig. 2.5.
https://www.aimsciences.org/article/doi/10.3934/nhm.2020029
# American Institute of Mathematical Sciences

September 2020, 15(3): 519-542. doi: 10.3934/nhm.2020029

## Kinetic modelling of multiple interactions in socio-economic systems

1 Department of Mathematics "F. Casorati", University of Pavia, Via A. Ferrata 5, 27100 Pavia, Italy
2 Department of Mathematical Sciences "G. L. Lagrange", Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129 Torino, Italy

Received October 2019. Revised March 2020. Published September 2020.

Unlike the classical kinetic theory of rarefied gases, where microscopic interactions among gas molecules are described as binary collisions, the modelling of socio-economic phenomena in a multi-agent system naturally requires to consider, in various situations, multiple interactions among the individuals. In this paper, we collect and discuss some examples related to economic and gambling activities. In particular, we focus on a linearisation strategy of the multiple interactions, which greatly simplifies the kinetic description of such systems while maintaining all their essential aggregate features, including the equilibrium distributions.

Citation: Giuseppe Toscani, Andrea Tosin, Mattia Zanella. Kinetic modelling of multiple interactions in socio-economic systems. Networks & Heterogeneous Media, 2020, 15 (3) : 519-542. doi: 10.3934/nhm.2020029

Figure captions:

The asymptotic wealth variance $\Sigma^\infty$ vs. the taxation rate $\alpha$ for both constant and $\alpha$-dependent mean wealth $m$ of the background.

Evolution at times $t = 1,\,5,\,25$ of the multiple-interaction model (8), (9) with $N = 5$, $N = 100$ and of its linearised version (12), (13).

Comparison between the large time solution ($t = 25$) of the linearised model (12), (13) and the equilibrium distribution $f^\infty$ (17) computed from the Fokker-Planck equation (16) in the quasi-invariant regime. Top row: $\kappa = 0.1$, bottom row: $\kappa = 0.01$. 
The right column displays the log-log plots of the graphs in the left column.

Comparison between the equilibrium distribution $f^\infty$ (17) and the large time solution of the multiple-interaction model with $N = 10^2$ and $N = 10^3$ for $\kappa = 0.1$. The right panel is the log-log plot of the graph in the left panel, which gives a closer insight into the tails of the compared distributions.

Evolution at times $t = 5,\,10,\,25$ of the multiple-interaction model (18), (20) with $N = 2$, $N = 5$, $N = 100$ and of its linearised version (22), (23). The bottom row displays the log-log plots of the graphs in the top row to better appreciate the approximation of the tail of the distribution.

Left: comparison of the equilibrium distribution (30) and the large time distribution of the linearised model (22), (24) in the quasi-invariant limit. Right: comparison of the evolution of the energy of the multiple-interaction model (20) with $N = 10$ for decreasing $\lambda$ and the solution to (28) obtained in the quasi-invariant limit.
https://engineering.stackexchange.com/questions/12114/what-happens-when-the-surface-ends-at-the-laminar-zone-of-the-boundary-layer
# What happens when the surface ends at the laminar zone of the boundary layer?

When a body travels through any fluid (like aircraft and cars), the viscosity of the fluid causes a boundary layer to form around the surface of the body, across which the flow velocity transitions from that at the surface of the object to the free-stream velocity of the fluid. The boundary layer consists of three zones: laminar, transition, and turbulent, respectively. Is it compulsory for all three zones to exist in a boundary layer? If we consider a small smooth surface, is it possible that the surface ends at the laminar zone of the boundary layer and there is no turbulent zone in the boundary layer over the surface? And if it is possible, will there be a low pressure zone at the end of the surface or something?

• Useful search term: "Reynolds number". – user_1818839 Nov 2 '16 at 11:00

It is certainly possible to achieve this in certain circumstances, for example http://www.aviation-history.com/theory/lam-flow.htm Typically it is a lot easier to cleanly accelerate a flow around the leading edge of a surface than to keep it attached as the flow converges again. So the best chance of achieving this is a long tapering teardrop shape, as in sprint cycling helmets and some solar-powered endurance vehicles. However, in practice achieving completely laminar flow tends to compromise practical design to the extent that it is not useful, and most practical aerodynamics is more about managing turbulence and vortex generation than attempting to eliminate it entirely. For example, in F1 cars a lot of engineering effort goes into making vortices do a useful job (particularly in separating the turbulent wash from the tyres from the laminar flow under the floor/diffuser).

Is it compulsory for all the three zones to exist in a boundary layer? In practice, yes. 
If we consider a small smooth surface, is it possible that the surface ends at the laminar zone of the boundary layer and there is no turbulent zone in the boundary layer over the surface? If you have a sufficiently smooth surface and a sufficiently slow velocity, the Navier-Stokes equations (NSE) simplify (I use the term loosely) into a PDE that can be solved analytically (Stokes flow around a cylinder or sphere, for example). The fact that they can be solved analytically indicates that the flow is not turbulent! (Again, this is theoretical.) And if it is possible, will there be a low pressure zone at the end of the surface or something? I'd have to crack open my fluid mechanics books before I can give a definite answer.
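As the comment suggesting the search term "Reynolds number" hints, whether the boundary layer transitions before the surface ends can be estimated from the local Reynolds number. A rough sketch (the flat-plate critical value Re_x ≈ 5×10⁵ is a common textbook rule of thumb, and the sample numbers below are illustrative only):

```python
def reynolds_x(rho, u, x, mu):
    """Local Reynolds number Re_x = rho * U * x / mu at distance x
    from the leading edge (rho: density, u: free-stream speed,
    mu: dynamic viscosity)."""
    return rho * u * x / mu

# Typical flat-plate transition threshold (rule of thumb, not a hard limit).
RE_CRIT = 5e5

# Air at roughly sea-level conditions over a short, slow surface:
re = reynolds_x(rho=1.225, u=5.0, x=0.1, mu=1.8e-5)
laminar_to_the_end = re < RE_CRIT  # the plate ends while still laminar
```

So a 10 cm surface in a 5 m/s airstream sits around Re_x ≈ 3×10⁴, well below the rule-of-thumb threshold, which is consistent with the answer above: small, smooth, slow surfaces can indeed end before any turbulent zone develops.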
https://puzzling.meta.stackexchange.com/questions/5420/sharing-and-rewarding-what-went-into-making-a-good-puzzle
# Sharing and rewarding what went into making a good puzzle

I'd like to put forth an idea which I think could simultaneously address two heavily discussed topics on Puzzling.SE:

1. Adding more content about puzzle creation itself (in addition to being a repository of high-quality puzzles)
2. Rewarding good puzzles by awarding the questioner more than +5 per vote

While Puzzling.SE originally started out as a site to learn about creating and solving puzzles, and some meta-posts state that posting and solving puzzles created by others is kind of a way to learn about puzzles, there isn't actually much insight on offer on what really went into making those puzzles. There are several great puzzles on this site, and it would be interesting as well as educative for others to learn how their designers created them, such as:

• The source(s) of inspiration
• How the puzzle started and evolved, ideas that were discarded or added along the way, and why
• Any resources or programs that were used to research or produce the content
• What the OP learnt from solvers' responses to the puzzle
• Etc.

## Proposal

The proposal is to allow the OP to share their story by (optionally) posting a self-answer with this description. Each such answer could have a brief standardized heading, possibly with a meta-post link, to explain what it is and that it's not an answer to the puzzle itself. This post would typically be posted after the puzzle is solved.

## Rewarding good puzzles

Now anybody who reads this special answer and appreciates the work behind the puzzle can choose to upvote it, thereby awarding an additional +10 points. For puzzles that were copied (with due accreditation) from another source such as a puzzle book, no such narration is possible, and thus good but unoriginal puzzles would not be unduly awarded additional points. Similarly, regular questions that aren't puzzles would only receive the standard +5 points.

## Pros and cons

1.
This mechanism would work without requiring any site-specific code changes.
2. Moreover, it allows a finer level of control than a blind +10 per upvote (as described above).
3. Bounties can also be awarded to this answer in lieu of the question. [Thanks, @Ian MacDonald, for this suggestion.]

The few potential concerns that I can think of are:

1. This additional post should be genuinely interesting and not degrade to something like "Please upvote this answer if you liked this puzzle"
2. It shouldn't be abused to place hints to the answer of the puzzle itself
3. A really interesting post could get more upvotes than the accepted answer (i.e. solution) and earn the "Populist" badge
4. If this proposal goes through, there may be an initial spurt of people trying to mass-edit their old questions.

I think all of these can be dealt with, and overall this approach would make puzzling more interesting and rewarding.

## Samples

A sandbox for sample commentaries on puzzle creation has been created. There is also a question on how these samples should be standardized in format.

• Wonderful idea, and might not need much formalization because behind-the-puzzle "answers" could refer to this meta question for support. I am extremely interested in the sparks behind many puzzles and appreciate every little tidbit someone allows (e.g.). – humn Sep 6 '16 at 6:13
• KeyboardWielder, many of your puzzles hinge on playful parallels that make me wonder what circumstances brought them together (threepersonalfaves). Why not treat us to a peek-post behind one/some of your puzzles (and link it/them to and from here)? – humn Sep 6 '16 at 6:23
• @humn: I was thinking of this example. I'm thinking of waiting for a few more days for general feedback, and then creating a temporary sandbox on Meta where anyone can post their behind-the-scenes samples - which should avoid bumping the main site prematurely. – KeyboardWielder Sep 6 '16 at 20:00
• Might not need to approach this so gingerly.
Considering how few posers answer even in comments when I ask about a puzzle's background, a sandbox stage might be safe to skip. For me, an about-this-puzzle post and any edits would be just as relevant as most posts here. (Would be nice, in any case, to have a general way to keep from interloping on the "front page": Making an edit without bumping the puzzle up?, Is there a way to edit a question without bumping it to the front page?) – humn Sep 6 '16 at 20:43
• If someone provides a commentary on the puzzle-creation process for an old puzzle, is it a good thing or a bad thing if it gets bumped? Bumping it would increase the visibility of the puzzle-creation process and people may get to see high-quality puzzles that they missed before, but it would further crowd out the high-quality unanswered puzzles. There might be an initial influx of commentaries, but I expect that the volume would reduce after a short time. – Gordon K Sep 7 '16 at 15:05
• @GordonK and humn: By "bumping the main site" I meant throwing the idea into the wild before it was sharpened. I went ahead and created the sandbox. I'd personally love to read (and write) commentaries on old puzzles. ;) – KeyboardWielder Sep 7 '16 at 19:04
• A bounty could also be awarded to the puzzler's inspiration "answer". – Ian MacDonald Sep 7 '16 at 20:58
• @IanMacDonald: Good suggestion! Edited in. – KeyboardWielder Sep 13 '16 at 17:09
• Time for a stamp of approval? Seems to have passed the shake-out stage and posers are already posting wrap-ups in situ. – humn Sep 16 '16 at 19:39
• @humn: Not waiting for anything... Actually, I don't know how the process for suggestions at Meta goes. Does me accepting an answer to my own suggestion indicate community approval? Well, I'll go ahead and hit the tick button on this one (not quite sure what to do with the sandbox though).
– KeyboardWielder Sep 22 '16 at 19:06
• This is not a bad idea, but a potential negative is that this was actually proposed by Shog9 as a way of rewarding good puzzles. If this is implemented, then we may never see some of the other things we wanted, such as awarding bounties to questions and awarding +10 rep for each upvote on a question. – pacoverflow Sep 23 '16 at 7:37

An outline for wrap-up posts can be found at Wrap-up posts: What should the formal part of it contain? Please copy the outline's header, being sure to include the words wrap-up, making and poser, so that wrap-ups are easier to find and distinguish from other kinds of posts. Content below the header can be free-form, and the outline includes many excellent ideas. Examples of wrap-up posts may be found by searching for keywords such as wrap up making poser. (Experimental wrap-up posts, made while this topic was under discussion, may be found at Sample commentaries on puzzle creation.)

### Full answer, which motivated community approval

Just to clarify I understood the idea correctly:

• this is an optional thing
• the OP posts an answer (call it a wrap-up) to his/her own puzzle-question on the main site. For now, as this idea is tested, the wrap-up is posted on the meta site sandbox only
• the wrap-up is posted only after the puzzle is solved (i.e. when an accepted answer already exists)
• the wrap-up never becomes the accepted answer
• there is some format standardization of wrap-up answers

I very much like this idea! And it is similar to some of the "community-wrapup-answers" I've done in the past, but still I think your suggestion is better than this community-wiki post. (And they could co-exist.)

1. This additional post should be genuinely interesting and not degrade to something like "Please upvote this answer if you liked this puzzle"

Self-solving, as one could easily downvote bad wrap-up posts (and comment on them).

2.
It shouldn't be abused to place hints to the answer of the puzzle itself

I think it should only be posted after the puzzle has an accepted answer, so this problem then does not exist. Note that for too-difficult/never-solved puzzles, there is still the possibility to "self-answer-solve" a puzzle with a community-wiki post. So the wrap-up idea does not interfere.

3. A really interesting post could get more upvotes than the accepted answer (i.e. solution) and earn the "Populist" badge

I see this as a benefit, not as a problem.

4. If this proposal goes through, there may be an initial spurt of people trying to mass-edit their old questions.

True. Not really a huge issue - and maybe a wave of very interesting information sparking new puzzles! - but we might add to the proposal that old posts should be wrapped up at a slow pace. (One a day per user?)
http://www.esrf.eu/UsersAndScience/Publications/Highlights/2006/XAMS/xams08
The idea of achieving room-temperature ferromagnetism in diluted magnetic semiconductors (DMS) attracts a lot of interest due to the importance of DMS for developing future "spintronic" devices. One of the most promising systems is GaN diluted with Mn. A ferromagnetic ordering of (Ga,Mn)N at room temperature has been observed by several groups, while others have reported on its paramagnetic behaviour. State-of-the-art characterisation techniques and single-phase (Ga,Mn)N layers are required to clarify the electronic and magnetic properties of Mn atoms, and consequently to reveal the microscopic origin of the magnetism that occurs in this DMS.

Fig. 111: Experimental configuration for XLD measurements. Normalised experimental XANES spectra (left scale) for linearly polarised X-ray light oriented parallel (red line) and perpendicular (green line) to the c axis and XLD (blue line, right scale) signal recorded at the Mn and Ga K edges for a (Ga,Mn)N epilayer with 6.3% Mn.

X-ray linear dichroism (XLD) and X-ray magnetic circular dichroism (XMCD) spectroscopies have been used to study the structural, electronic, and magnetic properties of DMS thin films. Since these techniques are element specific, they are the most appropriate tools for directly probing the electronic structure and magnetism of the diluted transition metal. A wurtzite (Ga,Mn)N epilayer film with 6.3% Mn was grown using plasma-assisted molecular beam epitaxy, with standard effusion cells for Ga and Mn, and a radio-frequency plasma cell to supply active nitrogen. XLD and XMCD experiments were performed at beamline ID12. XLD spectra were recorded at both the Mn and the Ga K edge using a diamond quarter-wave plate to convert circularly polarised X-rays into linearly polarised ones. Using the geometry of the experiment shown in Figure 111, the XLD measures the anisotropy of the unoccupied density of states of the 4p shell of the Ga and Mn atoms in the (a,c) plane of the wurtzite structure.
The Mn K-edge XLD spectral shape (Figure 111) is very similar to the one observed at the Ga K edge (identical to the Ga K-edge XLD of a GaN single crystal), but its amplitude is 1.8 times smaller. This is precisely what one would observe for Mn atoms in substitution for Ga atoms, whereas for other possible Mn site occupations, e.g. N-substituted or interstitial sites, the spectral shape and amplitude of the XLD signal should be drastically different [1]. Within the detection limit of the method, the presence of any secondary phases or metallic clusters has not been revealed.

The XANES spectra at the Mn K edge, in contrast with the XLD spectra, are quite different from those at the Ga K edge. Two additional peaks at the low-energy side of the absorption edge are observed. These two peaks originate from both quadrupolar (1s → 3d) and dipolar (1s → 4p) transitions reflecting hybridisation of the Mn impurity 4p band with the 3d band located in the GaN gap. The presence of these two peaks in the XANES spectra indicates that the valence state of Mn is mainly 3+ (d4) rather than 2+ (d5), where only one peak is usually observed.

To elucidate the microscopic origin of the magnetic behaviour of (Ga,Mn)N, we have performed XMCD measurements at the Mn K edge (Figure 112). A very intense XMCD signal (1.6% with respect to the edge jump) is observed mainly at the first peak of the XANES spectrum. Given the fact that only the orbital magnetisation of the absorbing atom gives rise to the K-edge XMCD signal, our result provides another strong argument in favour of the Mn3+ valence state in wurtzite (Ga,Mn)N. Indeed, in the case of Mn2+, where the 3d and 4p orbital moments are vanishingly small, the XMCD signal is at least one order of magnitude smaller. Moreover, the negative sign of the XMCD signal suggests that the Mn orbital magnetic moment is antiparallel to the applied field and, therefore, to the sample magnetisation.
The inset of Figure 112 shows the magnetisation curve recorded by monitoring the Mn K-edge XMCD signal as a function of applied field at 7 K. This magnetisation curve is a typical signature of a ferromagnetic system in the vicinity of a transition temperature. In conclusion, thanks to the element specificity of XLD and XMCD, we have demonstrated that wurtzite (Ga,Mn)N with 6.3% Mn is an intrinsic ferromagnetic DMS with a rather low Curie temperature of only 8 K.

Fig. 112: Isotropic XANES spectrum (black line, left scale) and corresponding XMCD signal (red line, right scale) recorded at the Mn K edge measured under a 6 Tesla in-plane field and at 7 K for a (Ga,Mn)N film with 6.3% Mn. Inset: magnetisation curve measured at 7 K by monitoring the amplitude of the Mn K-edge XMCD signal.

Reference
[1] F. Wilhelm, E. Sarigiannidou, E. Monroy, A. Rogalev, N. Jaouen, H. Mariette and J. Goulon, AIP Conference Proceedings 879, 1675 (2007).

Principal Publications and Authors
E. Sarigiannidou (a,d), F. Wilhelm (b), E. Monroy (a), R.M. Galera (c), E. Bellet-Amalric (a), A. Rogalev (b), J. Goulon (b), J. Cibert (c), and H. Mariette (a), Phys. Rev. B 74, 041306(R) (2006).
(a) CEA-CNRS-UJF group "Nanophysique et Semiconducteurs", CEA-Grenoble (France)
(b) ESRF
(c) Institut Néel, Grenoble (France)
(d) INPG, Grenoble (France)
https://baas.aas.org/pub/aas236-342p03-bandyopadhyay/release/1
# Expansion of the Universe and its correlation with Dark Energy

Published on Jun 01, 2020

The redshift of distant galaxies proves that the Universe is expanding, and at a good rate. The trouble is not with the expansion but with the force that is driving it. The four forces that we know of (the Gravitational, Electromagnetic, Weak and Strong forces) unfortunately do not help in understanding why the Universe is still expanding even 13.8 billion years after the Big Bang. Initially, it was thought that the Universe underwent an exponential expansion just after the Big Bang and that this would slow down before gravity started contracting the Universe. This theory suffered a setback after the redshift of the galaxies showed that the Universe is still expanding. This means that the gravitational force is not able to draw galaxies towards one another. Scientists have argued that some force is pushing the Universe apart, but understanding this force has been difficult until now. What this Dark Energy is remains a haunting question in today's world, because still not too much is known about this force. Only about 5% of the observable Universe is known, whereas around 95% is still a mystery to us; of that 95%, around 68% is Dark Energy. So understanding this force is the need of the hour. The issue, as of now, is that Dark Energy is hypothetical in nature. The idea of Dark Energy goes some way to explain the expansion of the Universe if Dark Energy is taken as an anti-gravitational force. Is Dark Energy the fifth fundamental force that has been ignored for so long? Einstein and Schrödinger interacted with one another after Einstein had understood, through the theory presented by Hubble, that the Universe was expanding. To reconcile the fact that space-time changes with matter, he had proposed a constant by the name of the Cosmological Constant.
Later he took the constant away, stating that it was his blunder not to understand that the Universe was expanding. Schrödinger had proposed to put the Cosmological Constant on the right side of the equation; Einstein later did not agree with the idea. Still, it can be considered that both of them were talking about an extra force but could not come to any conclusion about it. This paper tries to analyze the concept of Dark Energy as non-interacting supermassive energy (NISE). The paper will try to see the relationship between the expanding Universe and Dark Energy. The paper will try to develop a new spectrum that can make Dark Energy visible and understandable. The paper will also look at the relationship between Dark Energy and the photon. In the later stage of the paper, it will be analyzed whether Dark Energy can be used as fuel for interstellar travel.
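The remark about moving the Cosmological Constant to "the right side of the equation" can be made concrete. This is the standard textbook presentation, not the abstract's own derivation: in Einstein's original form, Λ sits on the geometry (left) side of the field equations, while Schrödinger's reading places it with the source terms, where it behaves like an energy component — the modern "dark energy" interpretation:

```latex
% Lambda on the left: part of the geometry
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}

% Lambda moved to the right: part of the energy content,
% an effective stress tensor  T^{(\Lambda)}_{\mu\nu} = -\frac{c^4 \Lambda}{8\pi G}\, g_{\mu\nu}
G_{\mu\nu} = \frac{8\pi G}{c^4}\left( T_{\mu\nu} + T^{(\Lambda)}_{\mu\nu} \right)
```

The two forms are algebraically identical; the difference is only whether Λ is read as a property of space-time or as an energy source, which is exactly the ambiguity the abstract attributes to the Einstein–Schrödinger exchange.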
https://tex.stackexchange.com/questions/180583/vectorized-text-effects-transformations-stretching-expanding-wordart?noredirect=1
Vectorized Text Effects/Transformations: Stretching & Expanding (WordArt)

I'd like to apply a text transformation similar to what the Inkscape extension 'Modify Path -> Envelope' does within LaTeX:

In-source documentation of the envelope extension: This extension implements Bezier enveloping. It takes an arbitrary path (the "letter") and a 4-sided path (the "envelope") as input. The envelope must be 4 segments long. Unless the letter is to be rotated or flipped, the envelope should begin at the upper left corner and be drawn clockwise.

Presumably this can only be done with one of the more advanced drawing packages:

• tikz (pgf/tikz library decorations.text, basic layer canvas transformation)
• pstricks (possibly based on \psWarp from pst-text (v.1.01) together with a PostScript font)
• asymptote
• MetaPost

• So you want an example with TikZ? Where is the question? – percusse Jul 23 '14 at 19:13
• @percusse: thanks for catching up. The question is: transforming the text having an envelope which is not just a rectangle (see the second picture). My answer works only for the simpler case. I am happy with tikz, but other tools are equally good.
I don't have a preference – Hotschke Jul 23 '14 at 19:18

2 Answers

Needs work (and not a TikZ text effect):

\documentclass[border=5pt, tikz]{standalone}
\usetikzlibrary{positioning,calc}
\begin{document}
\begin{tikzpicture}
\coordinate (o) at (0,0);
\def\j{o}
\foreach \i [count=\ino] in {T,E,X,T,-,E,F,F,E,C,T} {
  \node (\ino) [scale=\ino*\ino, right=0pt of \j, inner sep=0pt, outer sep=0pt] {\i};
  \global\let\j\ino
}
\path [draw] (1.north east) -- (\j.north west) -| (\j.south east) -- (\j.south west) -- (1.south west) -- cycle;
\end{tikzpicture}
\end{document}

The question Stretching text vertically offers a solution for applying a rectangular envelope, so to speak: using \scalebox{⟨h-scale⟩}[⟨v-scale⟩]{⟨text⟩} of the package graphics ($ texdoc grfguide):

\documentclass[varwidth]{standalone}
\usepackage{graphics}
\begin{document}
HELLO\\[1ex]
\scalebox{2}[1]{HELLO}
\end{document}
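The question's bullet list names the pgf/tikz decorations.text library as one candidate. It bends text along a path rather than applying a true four-sided Bezier envelope, so it only approximates the Inkscape effect, but it is the closest built-in TikZ mechanism. A minimal sketch (the path and text are illustrative):

\documentclass[border=5pt]{standalone}
\usepackage{tikz}
\usetikzlibrary{decorations.text}
\begin{document}
\begin{tikzpicture}
  % Place the string along a Bezier curve; this warps glyph placement
  % and rotation, not the glyph outlines themselves.
  \path [decorate,
         decoration={text along path,
                     text={TEXT-EFFECT},
                     text align=center}]
    (0,0) .. controls (1,1.5) and (3,1.5) .. (4,0);
\end{tikzpicture}
\end{document}

For a genuine envelope that deforms the outlines, converting the text to paths (e.g. in Inkscape) and importing the result remains the pragmatic route.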
https://de.maplesoft.com/support/help/maplesim/view.aspx?path=componentLibrary/electrical/machines/losses/dcMachines/Brush&L=G
Brush

Model considering voltage drop of carbon brushes

Description

The Brush component models the voltage drop and losses of carbon brushes. Note: The voltage drop $v$ is the total voltage drop of all series connected brushes. See also BrushParameters. If it is desired to neglect brush losses, set $V=0$ (this is the default).

Equations

$v=\left\{\begin{array}{cc}0& V\le 0\\ V& {I}_{\mathrm{linear}}<i\\ -V& i<-{I}_{\mathrm{linear}}\\ V\,i/{I}_{\mathrm{linear}}& \mathrm{otherwise}\end{array}\right.$

$v={v}_{p}-{v}_{n}$

$i={i}_{p}=-{i}_{n}$

$\mathrm{lossPower}=v\,i$

Variables

Name | Units | Description | Modelica ID
$i$ | $A$ | Current flowing from pin p to pin n | i
$v$ | $V$ | Voltage drop between the two pins | v
$\mathrm{lossPower}$ | $W$ | Loss power leaving component via heat port | lossPower

Connections

Name | Description | Modelica ID
$p$ | Positive pin | p
$n$ | Negative pin | n
$\mathrm{heatPort}$ | Heat port | heatPort

Parameters

General Parameters

Name | Default | Units | Description | Modelica ID
use heat port | $\mathrm{false}$ | | true means heatPort is enabled | useHeatPort

Losses Parameters

Name | Default | Units | Description | Modelica ID
${I}_{\mathrm{linear}}$ | $0.01$ | $A$ | Current indicating linear voltage region of brush voltage drop | ILinear
$V$ | $0$ | $V$ | Total voltage drop of brushes for currents > ILinear | V

Modelica Standard Library

The component described in this topic is from the Modelica Standard Library. To view the original documentation, which includes author and copyright information, click here.
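The piecewise voltage drop can be sketched as a plain function. This is a Python illustration of the equation for checking values, not MapleSim/Modelica code, and it renders the linear region around zero current without the smoothing the Modelica implementation applies at the segment joins:

```python
def brush_voltage_drop(i, V=0.0, I_linear=0.01):
    """Total brush voltage drop v as a function of current i.

    V: total drop of all series brushes for |i| > I_linear
       (V <= 0 disables brush losses, the default).
    I_linear: current bounding the linear region around i = 0.
    """
    if V <= 0:
        return 0.0               # brush losses neglected
    if i > I_linear:
        return V
    if i < -I_linear:
        return -V
    return V * i / I_linear      # linear region

def loss_power(i, V=0.0, I_linear=0.01):
    """lossPower = v * i, delivered to the heat port."""
    return brush_voltage_drop(i, V, I_linear) * i
```

For example, with `V=2` and the default `I_linear=0.01`, a current of 5 A gives a 2 V drop and 10 W of heat, while 5 mA sits halfway up the linear ramp at 1 V.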
http://math.stackexchange.com/questions/456613/does-the-map-f-emptyset-longrightarrow-0-exist/456619
# Does the map $f:\emptyset\longrightarrow \{0\}$ exist?

What function $f$ maps in the following way: $$f:\emptyset\longrightarrow \{0\}$$

- Does the cardinality of the domain count as a function? – Ataraxia Jul 31 '13 at 18:28

There is exactly one such function, which is the empty set.

Explanation: By definition, functions $f : A \to B$ are certain subsets of the cartesian product $A\times B$ (for the defining property, see below). You can imagine this definition as identifying $f$ with its graph. In your case, $A = \emptyset$, $B = X$ and $A \times B = \emptyset \times X = \emptyset$, whose only subset is the empty set.

The defining property is that for every $a\in A$ there is a unique element of $f$ having $a$ as its first component. In our case $f = \emptyset$, this condition is vacuously true.

-

The only subset of the cartesian product $\,\emptyset\times X\;,\;\;X$ any set, is the only function $\,\emptyset\to X\,$

-
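The defining property can be illustrated by treating a function literally as its set of pairs. A small Python sketch (the set-of-tuples representation is just for illustration, not part of the answer):

```python
from itertools import product

def is_function(f, A, B):
    """f is a set of (a, b) pairs. Check the defining property:
    f is a subset of A x B and every a in A appears exactly once
    as a first component."""
    if not f <= set(product(A, B)):
        return False
    return all(sum(1 for (a, b) in f if a == x) == 1 for x in A)

# The empty set is a function from the empty set to {0}:
print(is_function(set(), set(), {0}))   # True, vacuously
# ... but not a function from {1} to {0}, since 1 has no image:
print(is_function(set(), {1}, {0}))     # False
```

With `A = set()`, the `all(...)` quantifier ranges over nothing, which is exactly the vacuous truth the answer describes.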
https://www.middleprofessor.com/files/applied-biostatistics_bookdown/_book/best-practices-issues-in-inference.html
# Chapter 12 Best Practices – Issues in Inference I, II, S, M

## 12.2 multiple testing

Multiple testing is the practice of adjusting p-values (and less commonly confidence intervals) to account for the expected increase in the frequency of Type I error when there are multiple tests (typically Null Hypothesis Significance Tests). Multiple testing tends to arise in two types of situations:

1. Multiple pairwise contrasts among treatment levels (or combinations of levels) are estimated.
2. The effects of a treatment on multiple responses are estimated. This can arise if
    1. there are multiple ways of measuring the consequences of something – for example, an injurious treatment on plant health might affect root biomass, shoot biomass, leaf number, leaf area, etc.
    2. one is exploring the consequences of an effect on many, many outcomes – for example, the expression levels of 10,000 genes between normal and obese mice.

Despite the ubiquitous presence of multiple testing in elementary biostatistics textbooks, in the applied biology literature, and in journal guidelines, the practice of adjusting p-values for multiple tests is highly controversial among statisticians. My thoughts:

1. In situations like (1) above, I advocate that researchers do not adjust p-values for multiple tests. In general, it's a best practice to only estimate contrasts that you care about because of some a priori model of how the system works. If you compare all pairwise contrasts of an experiment with many treatment levels and/or combinations, expect to find some false discoveries.
2. In situations like (2a) above, I advocate that researchers do not adjust p-values for multiple tests.
3. In situations like (2b) above, adjusting for the False Discovery Rate is an interesting approach. But, recognize that tests with small p-values are highly provisional discoveries of patterns only and not a discovery of the causal sequelae of the treatment.
For that, one needs to do the hard work of designing experiments that rigorously probe a working, mechanistic model of the system.

Finally, recognize that anytime there are multiple tests, Type M errors will arise due to the vagaries of sampling. This means that in a rank-ordered list of the effects, those at the top have measured effects that are probably bigger than the true effect. An alternative to adjusted p-values is a penalized regression model that shrinks effects toward the mean effect.

### 12.2.1 Some background

#### 12.2.1.1 Family-wise error rate

The logic of multiple testing goes something like this: the more tests that a researcher does, the higher the probability that a false positive (Type I error) will occur; therefore a researcher should adjust p-values so that the Type I error over the set (or "family") of tests is 5%. This adjusted Type I error rate is the "family-wise error rate".

If a researcher carries out multiple tests of data in which the null hypothesis is true, what is the probability of finding at least one Type I error? This is easy to compute. If the frequency of Type I error for a single test is $$\alpha$$, then the probability of no Type I error is $$1 - \alpha$$. For two tests, the probability of no Type I error in either test is the product of the probability for each test, or $$(1 - \alpha)^2$$. By the same logic, for $$m$$ tests, the probability of no Type I error in any of the tests is $$(1 - \alpha)^m$$. The probability of at least one Type I error, across the $$m$$ tests, then, is $$1 - (1 - \alpha)^m$$. A table of these probabilities for different $$m$$ is given below. If the null is true in all tests, then at least one Type I error is more likely than not if there are 14 tests, and close to certain if there are more than 50 tests. Don't skip over this paragraph – the logic is important even if I don't advocate adjusting for multiple tests.
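The family-wise error rate formula can be checked directly. A short sketch in Python (the book's own code is R; this is just an illustration of $$1 - (1 - \alpha)^m$$):

```python
def fwer(alpha, m):
    """Probability of at least one Type I error across m independent
    tests when every null hypothesis is true: 1 - (1 - alpha)^m."""
    return 1 - (1 - alpha) ** m

# Reproduces the table below, and the 14-test "more likely than not" claim:
for m in (1, 3, 6, 10, 14, 50, 100):
    print(m, round(fwer(0.05, m), 2))
```

With alpha = 0.05, the probability first exceeds 0.5 at m = 14, matching the statement in the text.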
Table 12.1: Probability of at least one Type I error within the set of multiple tests, for data in which the null hypothesis is true. The Type I error rate for a single test is 0.05. The number of tests is m. The probability is p.

| m   | p    |
|----:|-----:|
| 1   | 0.05 |
| 3   | 0.14 |
| 6   | 0.26 |
| 10  | 0.40 |
| 50  | 0.92 |
| 100 | 0.99 |

#### False discovery rate

If a researcher carries out thousands of tests to "discover" new facts, and uses $$p < 0.05$$ as evidence of discovery, then what is the frequency of false discoveries?

#### 12.2.1.2 p-value filter I – Inflated effects

If a researcher carries out many tests, and ranks the effects by magnitude or p-value, then the effect sizes of the largest effects will be inflated. Before explaining why, let's simulate this using an experiment of allelopathic effects of the invasive garlic mustard (Alliaria petiolata) on gene expression in the native American ginseng (Panax quinquefolius). In the treated group, we have ten pots, each with an American ginseng plant grown in a container with a mustard plant. In the control group, we have ten pots, each with an American ginseng plant grown in a container with another American ginseng. I've simulated the response of 10,000 genes. The treatment has a true effect in 10% of the 10,000 genes, but most effects are very small.
```r
set.seed(4)
p <- 10^4   # number of genes
pt <- 0.1*p # number of genes with true response to treatment
n <- 10

# sample the gene effects from an exponential distribution
theta <- .3
beta <- c(rexp(pt, rate=1/theta), rep(0, (p-pt))) # the set of 10,000 effects

# sample the variance of the expression level with a gamma, and set a minimum
sigma <- rgamma(p, shape=2, scale=1/4) + 0.58
# quantile(sigma, c(0.001, 0.1, 0.5, 0.9, 0.999))

Y1 <- matrix(rnorm(n*p, mean=0, sd=rep(sigma, each=n)), nrow=n)
Y2 <- matrix(rnorm(n*p, mean=rep(beta, each=n), sd=rep(sigma, each=n)), nrow=n)

# check
# use n <- 10^4 to check
# apply(Y2, 2, mean)[1:5]
# beta[1:5]

x <- rep(c("cn","tr"), each=n)
bhat <- numeric(p)
p.value <- numeric(p)
sigma_hat <- numeric(p)
for(j in 1:p){
  fit <- lm(c(Y1[,j], Y2[, j]) ~ x)
  bhat[j] <- coef(summary(fit))["xtr", "Estimate"]
  p.value[j] <- coef(summary(fit))["xtr", "Pr(>|t|)"]
  sigma_hat[j] <- sqrt(sum(fit$residuals^2)/fit$df.residual)
}
```

Table 12.2: The top 10 genes ranked by p-value. Rank is the rank of the true effect, from large to small.

| effect | estimate | sigma | sd   | p.value   | relative true effect | rank |
|-------:|---------:|------:|-----:|----------:|---------------------:|-----:|
| 2.23   | 2.67     | 0.81  | 0.55 | 0.0000000 | 1.00                 | 1    |
| 1.59   | 1.92     | 0.62  | 0.57 | 0.0000005 | 0.71                 | 7    |
| 1.46   | 1.86     | 0.84  | 0.71 | 0.0000159 | 0.65                 | 10   |
| 1.47   | 1.78     | 0.83  | 0.74 | 0.0000409 | 0.66                 | 9    |
| 0.00   | 1.95     | 1.10  | 0.85 | 0.0000717 | 0.00                 | NA   |
| 0.48   | 1.26     | 0.67  | 0.56 | 0.0000816 | 0.21                 | 212  |
| 0.97   | 1.32     | 0.69  | 0.60 | 0.0001004 | 0.43                 | 45   |
| 0.54   | 1.68     | 1.06  | 0.78 | 0.0001321 | 0.24                 | 173  |
| 0.00   | 2.05     | 1.39  | 0.96 | 0.0001488 | 0.00                 | NA   |
| 0.43   | -1.78    | 1.33  | 0.84 | 0.0001733 | 0.19                 | 244  |

The table above lists the top 10 genes ranked by p-value, using the logic that the genes with the smallest p-values are the genes we should pursue with further experiments to understand the system. Some points:

1. Six of the ten genes with the biggest true effects are not on this list. And in the list are three genes with true effects that have relatively low ranks based on true effect size (column "rank") and two genes that have no true effect at all.
Also in this list is one gene with an estimated effect (-1.78) that is opposite in sign to the true effect (but look at the p-value!).

2. The estimates of the effect size for all top-ten genes are inflated. The average estimate for these 10 genes is 1.47, while the average true effect for these 10 genes is 0.92.

3. The sample standard deviation (sd) for all top-ten genes is less than the true standard deviation (sigma), in some cases substantially. The consequence of an inflated estimate of the effect and a deflated estimate of the variance is a large t (not shown) and a small p.

What is going on is that an individual gene's estimated effect and standard deviation are functions of 1) the true value and 2) a random sampling component. The random component will be symmetric: some effects will be overestimated and some underestimated. When we rank the genes by the estimate of the effect, or by t or p, some of the genes that have "risen to the top" will be there because of a large, positive, sampling (random) component of the effect and/or a large, negative, sampling component of the variance. Thus some genes' high rank is artificial in the sense that it is high because of a random fluke. If the experiment were re-done, the genes at the top because of a large random component would (probably) fall back to a position closer to their expected rank (regression to the mean again). In the example here, all genes at the top have inflated estimates of the effect because of the positive, random component.

This inflation effect is a function of the signal-to-noise ratio, which is controlled by theta and sigma in the simulation. If theta is increased (try theta=1), or if sigma is decreased, the signal-to-noise ratio increases (try it and look at the histogram of the new distribution of effects) and both 1) the inflation and 2) the rise-to-the-top phenomenon decrease.
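The rise-to-the-top phenomenon does not even require heterogeneous true effects. In this minimal sketch (separate from the gene simulation above; the numbers are mine), every test has the identical true effect, yet the top-ranked estimates are far larger than the truth:

```r
set.seed(1)
m <- 1000
true_effect <- 0.5                            # identical for every test
estimate <- true_effect + rnorm(m, sd = 0.5)  # estimate = truth + sampling noise
top10 <- sort(estimate, decreasing = TRUE)[1:10]
mean(top10)  # far larger than 0.5 -- these ranks are pure sampling luck
```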
### 12.2.2 Multiple testing – working in R

#### 12.2.2.1 Tukey HSD adjustment of all pairwise comparisons

The adjust argument in emmeans::contrast() controls the method for p-value adjustment. The default is "tukey".

1. "none" – no adjustment; in general, my preference.
2. "tukey" – Tukey's HSD, the default
3. "bonferroni" – the standard Bonferroni, which is conservative
4. "fdr" – the false discovery rate
5. "mvt" – based on the multivariate t distribution, using the covariance structure of the variables

The data are those from Fig. 2D of "Data from The enteric nervous system promotes intestinal health by constraining microbiota composition". There is a single factor with four treatment levels. The response is neutrophil count.

```r
m1 <- lm(count ~ donor, data=exp2d)
m1.emm <- emmeans(m1, specs="donor")
m1.pairs.none <- contrast(m1.emm, method="revpairwise", adjust="none")
summary(m1.pairs.none, infer=c(TRUE, TRUE))

##  contrast       estimate   SE df lower.CL upper.CL t.ratio p.value
##  gf - wt          -1.502 1.48 58    -4.47     1.47  -1.013  0.3153
##  sox10 - wt        4.679 1.23 58     2.23     7.13   3.817  0.0003
##  sox10 - gf        6.182 1.45 58     3.29     9.08   4.276  0.0001
##  iap_mo - wt      -0.384 1.53 58    -3.45     2.68  -0.251  0.8025
##  iap_mo - gf       1.118 1.71 58    -2.31     4.54   0.654  0.5159
##  iap_mo - sox10   -5.064 1.49 58    -8.05    -2.07  -3.391  0.0013
##
## Confidence level used: 0.95
```

Tukey HSD:

```r
m1.pairs.tukey <- contrast(m1.emm, method="revpairwise", adjust="tukey")
summary(m1.pairs.tukey, infer=c(TRUE, TRUE))

##  contrast       estimate   SE df lower.CL upper.CL t.ratio p.value
##  gf - wt          -1.502 1.48 58    -5.43     2.42  -1.013  0.7426
##  sox10 - wt        4.679 1.23 58     1.44     7.92   3.817  0.0018
##  sox10 - gf        6.182 1.45 58     2.36    10.01   4.276  0.0004
##  iap_mo - wt      -0.384 1.53 58    -4.43     3.66  -0.251  0.9944
##  iap_mo - gf       1.118 1.71 58    -3.41     5.64   0.654  0.9138
##  iap_mo - sox10   -5.064 1.49 58    -9.01    -1.11  -3.391  0.0067
##
## Confidence level used: 0.95
## Conf-level adjustment: tukey method for comparing a family of 4 estimates
## P value adjustment: tukey method for comparing a family of 4 estimates
```

## 12.4 Inference when data are not Normal

No real data are normal, although many are pretty good approximations of a normal distribution. I'll come back to this point, but first, let's back up. Inference in statistical models (standard errors, confidence intervals, p-values) is a function of the modeled distributions of the parameters (for linear models, this parameter is the conditional (or error) variance $$\sigma^2$$); if the data do not approximate the modeled distribution, then inferential statistics might be too liberal (standard errors are too small, confidence intervals are too narrow, Type I error is more than nominal) or too conservative (standard errors are too large, confidence intervals are too wide, Type I error is less than nominal).

Linear models assume that "the data" (specifically, the conditional response or, equivalently, the residuals from the model) approximate a Normal distribution. Chapter xxx showed how to qualitatively assess how well residuals approximate a Normal distribution using a Q-Q plot. If the researcher concludes that the data poorly approximate a normal distribution because of outliers, the researcher can use robust methods to estimate the parameters. If the approximation is poor because the residuals suggest a skewed distribution or one with heavy or light tails, the researcher can choose among several strategies:

1. continue to use the linear model; inference can be fairly robust to non-normal data, especially when the sample size is not small.
2. use a generalized linear model (GLM), which is appropriate if the conditional response approximates any of the distributions that can be modeled using GLM (Chapter xxx)
3. use the bootstrap for confidence intervals and permutation tests for p-values
4. transform the data in a way that makes the conditional response more closely approximate a normal distribution.
5.
use a classic non-parametric test – methods that do not assume a particular distribution

This list is roughly in the order of how I would advise researchers, although the order of 1–3 is pretty arbitrary. I would rarely advise a researcher to use (4) and never advise (5). Probably the most common strategies in the biology literature are (4) and (5). The first strategy is also common, but probably more from a lack of recognition of the issues, or because a "test of normality" failed to reject normality.

On this last point, do not use the p-value from a "test for normality" (such as a Shapiro-Wilk test) to decide between using the linear model (or t-test or ANOVA) and an alternative such as a generalized linear model (or transformation or non-parametric test). No real data are normal. Tests of normality will tend to "not reject" normality (p > 0.05) when the sample size is small and "reject" normality (p < 0.05) when the sample size is very large. But again, a "not rejected" hypothesis test does not mean the null (in this case, that the data are normal) is true. More importantly, where the test for normality tends to fail to reject (encouraging a researcher to use parametric statistics) is where parametric inference performs the worst (because of small n), and where the test for normality tends to reject (encouraging a researcher to use non-parametric statistics) is where parametric inference performs the best (because of large sample size) (Lumley xxx).

### 12.4.1 Working in R

The data for demonstrating different strategies are from Fig. 4A of "Data from The enteric nervous system promotes intestinal health by constraining microbiota composition". There is a single factor with two treatment levels. The response is neutrophil count.

A linear model to estimate the treatment effect and 95% confidence interval:
```r
m1 <- lm(count ~ treatment, data=fig4a)
m1_emm <- emmeans(m1, specs="treatment")
summary(contrast(m1_emm, method="revpairwise"), infer=c(TRUE, TRUE))

##  contrast   estimate   SE  df lower.CL upper.CL t.ratio p.value
##  sox10 - wt     5.16 1.75 174      1.7     8.62   2.947  0.0037
##
## Confidence level used: 0.95
```

### 12.4.2 Bootstrap Confidence Intervals

A bootstrap confidence interval is computed from the distribution of a statistic from many sets of re-sampled data. The basic algorithm is:

1. compute the statistic for the observed data; assign this to $$\theta_1$$
2. resample $$n$$ rows of the data with replacement. "With replacement" means that every draw is taken from the complete set of rows, not just the rows that have yet to be sampled. $$n$$ is the original sample size; by resampling $$n$$ rows with replacement, some rows will be sampled more than once, and some rows will not be sampled at all.
3. compute the statistic for the resampled data; assign these to $$\theta_{2..m}$$
4. repeat 2 and 3 $$m-1$$ times
5. given the distribution of $$m$$ estimates, compute the lower interval as the $$\frac{\alpha}{2}$$th percentile and the upper interval as the $$1 - \frac{\alpha}{2}$$th percentile. For 95% confidence intervals, these are the 2.5th and 97.5th percentiles.

Let's apply this algorithm to the Fig. 4A neutrophil count data in the coefficient table above. The focal statistic in these data is the difference in mean count between the sox10 and wild type groups (the coefficient for $$treatment$$ in the linear model). The script below, which computes the 95% confidence intervals of this difference, resamples within strata, that is, within each group; it does this to preserve the original sample size within each group.
```r
n_iter <- 5000
b1 <- numeric(n_iter)
# the rows for the first iteration are all rows, so b1[1] is the observed effect
inc <- 1:nrow(fig4a)
for(i in 1:n_iter){
  b1[i] <- coef(lm(count ~ treatment, data=fig4a[inc, ]))["treatmentsox10"]
  # inc creates the index of rows to resample, preserving the sample size
  # specific to each group
  inc <- c(sample(which(fig4a[, treatment] == "wt"), replace=TRUE),
           sample(which(fig4a[, treatment] == "sox10"), replace=TRUE))
}
ci <- quantile(b1, c(0.025, 0.975))
c(contrast = b1[1], ci[1], ci[2])

## contrast     2.5%    97.5%
## 5.163215 2.077892 8.264316
```

The intervals calculated in step 5 are percentile intervals. A histogram of the re-sampled differences helps to visualize the bootstrap (this is a pedagogical tool, not something you would want to publish).

#### 12.4.2.1 Some R packages for bootstrap confidence intervals

Percentile intervals are known to be biased, meaning the intervals are shifted. The boot package computes a bias-corrected interval in addition to a percentile interval. boot is a very powerful bootstrap package but requires the researcher to write functions to compute the parameter of interest. simpleboot provides functions for common analyses that do this for you (in R speak, we say that simpleboot is a "wrapper" to boot). The function simpleboot::two.boot computes a boot-like object that returns, among other values, the distribution of $$m$$ statistics. The simpleboot object is then fed to boot::boot.ci to get bias-corrected intervals.
```r
bs_diff <- two.boot(fig4a[treatment=="sox10", count],
                    fig4a[treatment=="wt", count],
                    mean, R=5000)
boot.ci(bs_diff, type="bca")

## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 5000 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = bs_diff, type = "bca")
##
## Intervals :
## Level      BCa
## 95%   ( 2.087,  8.410 )
## Calculations and Intervals on Original Scale
```

### 12.4.3 Permutation test

A permutation test effectively computes the probability that a random assignment of a response to a particular value of X generates a test statistic as large or larger than the observed statistic. If this probability is small, then this "random assignment" is unlikely. From this we infer that the actual assignment matters, which implies a treatment effect. The basic algorithm is:

1. compute the test statistic for the observed data; assign this to $$\theta_1$$
2. permute the response
3. compute the test statistic for the permuted data; assign these to $$\theta_{2..m}$$
4. repeat 2 and 3 $$m-1$$ times
5. compute $$p$$ as

$$p_{perm} = \frac{N_{\theta_i \ge \theta_{1}}}{m}$$

This is easily done with a for loop in which the observed statistic is the first value in the vector of statistics. If this is done, the minimum value in the numerator for the computation of $$p_{perm}$$ is 1, which ensures that $$p_{perm}$$ is not zero.

The test statistic depends on the analysis. For a simple comparison of means, a simple test statistic is the difference in means. This is the numerator of the test statistic in a t-test. The test has more power if the test statistic is scaled (Manley xxx), so a better test statistic is t, which scales the difference by its standard error. Here, I implement this algorithm. The test is two-tailed, so the absolute value of t is recorded. The first value computed is the observed absolute t.
```r
set.seed(1)
n_permutations <- 5000
d <- numeric(n_permutations)
# create a new column which will contain the permuted response
# for the first iteration, this is the observed order
fig4a[, count_perm := count]
for(i in 1:n_permutations){
  d[i] <- abs(t.test(count_perm ~ treatment, data = fig4a)$statistic)
  # permute the count_perm column for the next iteration
  fig4a[, count_perm := sample(count)]
}
p <- sum(d >= d[1])/n_permutations
p

## [1] 0.002
```

#### 12.4.3.1 Some R packages with permutation tests

lmPerm::lmp generates permutation p-values for the parameters of any kind of linear model. The test statistic is the sum of squares of the term scaled by the residual sum of squares of the model.

```r
set.seed(2)
coef(summary(lmp(count ~ treatment, perm="Prob", Ca=0.01, data=fig4a)))

## [1] "Settings:  unique SS "
##              Estimate Iter Pr(Prob)
## (Intercept) 13.694815 5000   0.0042
## treatment1  -2.581608 5000   0.0042
```

### 12.4.4 Non-parametric tests

1. In general, the role of a non-parametric test is a better-behaved p-value, that is, one whose Type I error is well controlled. As such, non-parametric tests are more about Null Hypothesis Significance Testing and less (or not at all) about estimation.
2. In general, classic non-parametric tests are only available for fairly simple experimental designs.

Classic non-parametric tests include

- Independent sample (Student's) t-test: Mann-Whitney-Wilcoxon
- Paired t-test: Wilcoxon signed-rank test

One rarely sees non-parametric tests for more complex designs that include covariates or multiple factors, but for these, one could 1) convert the response to ranks and fit the usual linear model, or 2) implement a permutation test that properly preserves exchangeability. Permutation tests control Type I error and are powerful. That said, I would recommend a permutation test as a supplement to, and not a replacement of, inference from a generalized linear model.
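A minimal sketch of option (1) – rank-transform the response, then fit the usual linear model. The two-factor design and fake data here are hypothetical (not the chapter's fig4a):

```r
set.seed(2)
# fake right-skewed response in a design with a treatment and a nuisance factor
dat <- data.frame(
  treatment = rep(c("cn", "tr"), each = 20),
  batch     = rep(c("a", "b"), times = 20),
  y         = rexp(40, rate = 1/10)
)
dat$y[dat$treatment == "tr"] <- dat$y[dat$treatment == "tr"] + 5  # true shift

# rank-transform the response, then fit the usual linear model
fit_rank <- lm(rank(y) ~ treatment + batch, data = dat)
coef(summary(fit_rank))["treatmenttr", ]
```

The p-value for the rank-scale treatment coefficient behaves much like a Kruskal-Wallis-style test, but the machinery (covariates, multiple factors) is the familiar linear model.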
A non-parametric (Mann-Whitney-Wilcoxon) test of the fig4a data used above:

```r
wilcox.test(count ~ treatment, data=fig4a)

## Wilcoxon rank sum test with continuity correction
##
## data:  count by treatment
## W = 2275, p-value = 0.001495
## alternative hypothesis: true location shift is not equal to 0
```

### 12.4.5 Log transformations

Many response variables within biology, including count data and almost anything that grows, are right-skewed and have variances that increase with the mean. A log transform of a response variable with this kind of distribution will tend to make the residuals more approximately normal and the variance less dependent on the mean. At least two issues arise:

1. if the response is count data and the data include counts of zero, then a fudge factor has to be added to the response, since log(0) doesn't exist. The typical fudge factor is to add 1 to all values, but this is arbitrary, and results do depend on the magnitude of this fudge factor.
2. the estimates are on the log scale and do not have the units of the response. The estimates can be back-transformed by taking the exponent of a coefficient or contrast, but this itself produces problems. For example, the back-transformed mean of the log-transformed response is not the mean on the original scale (the arithmetic mean) but the geometric mean. Geometric means are smaller than arithmetic means, appreciably so if the data are heavily skewed. Do we want our understanding of a system to be based on geometric means?

#### 12.4.5.1 Working in R – log transformations

If we fit a linear model to a log-transformed response, then the resulting coefficients and predictions are on the log scale. To make interpretation of the analyses easier, we probably want to back-transform the coefficients or the predictions to the original scale of the response, which is called the response scale.
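Before the worked example, the claim in (2) above – that back-transformed means are geometric means, and that these are smaller than arithmetic means for skewed data – is easy to check numerically with fake data (not the chapter's dataset):

```r
set.seed(3)
y <- rexp(10^5, rate = 1/10)  # heavily right-skewed; arithmetic mean ~ 10
mean(y)                       # arithmetic mean
exp(mean(log(y)))             # geometric mean = back-transformed mean of
                              # log(y); smaller than the arithmetic mean
```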
```r
m2 <- lm(log(count + 1) ~ treatment, data=fig4a)
(m2_emm <- emmeans(m2, specs="treatment", type = "response"))

##  treatment response    SE  df lower.CL upper.CL
##  wt            8.22 0.965 174      6.5     10.3
##  sox10        12.59 0.934 174     10.9     14.6
##
## Confidence level used: 0.95
## Intervals are back-transformed from the log(mu + 1) scale
```

The emmeans package is amazing. Using the argument type = "response" not only back-transforms the means to the response scale but also subtracts the 1 that was added to all values in the model. What about the effect of treatment on count?

```r
summary(contrast(m2_emm, method="revpairwise", type = "response"), infer=c(TRUE, TRUE))

##  contrast   ratio    SE  df lower.CL upper.CL t.ratio p.value
##  sox10 / wt  1.47 0.185 174     1.15     1.89   3.100  0.0023
##
## Confidence level used: 0.95
## Intervals are back-transformed from the log scale
## Tests are performed on the log scale
```

It isn't necessary to back-transform the estimated marginal means prior to computing the contrasts, as this can be done in the contrast function itself. Here, the type = "response" argument in the contrast function is redundant, since this was done in the computation of the means. But it is transparent, so I want it there.

Don't skip this paragraph. Look at the value in the "contrast" column – it is "sox10 / wt" and not "sox10 - wt". The back-transformed effect is a ratio instead of a difference. A difference on the log scale is a ratio on the response scale because of the equality

$$\mathrm{exp}(\mu_2-\mu_1) = \frac{\mathrm{exp}(\mu_2)}{\mathrm{exp}(\mu_1)}$$

The interpretation is: if $$b^*$$ is the back-transformed effect, then, given a one unit increase in $$X$$, the expected value of the response increases $$b^* \times$$. For a categorical $$X$$, this means the back-transformed effect is the ratio of back-transformed means – it's what you have to multiply the mean of the reference group by to get the mean of the treated group.
And, because it is the response that is log-transformed, these means are not arithmetic means but geometric means. Here, this is complicated by the model – the response is not a simple log transformation but log(response + 1). It is easy enough to get the geometric mean of the treated group – multiply the back-transformed intercept by the back-transformed coefficient and then subtract 1 – but because of this subtraction of 1, the interpretation of the back-transformed effect is awkward at best (recall that I told you that a linear model of a log-transformed response, and especially the log of the response plus one, leads to difficulty in interpreting the effects).

```r
# backtransformed control mean -- a geometric mean
mu_1 <- exp(coef(m2)[1])
# backtransformed effect
b1_star <- exp(coef(m2)[2])
# product minus 1
mu_1*b1_star - 1

## (Intercept)
##    12.59357

# geometric mean of treatment group
exp(mean(log(fig4a[treatment=="sox10", count + 1]))) - 1

## [1] 12.59357
```

Back-transformed intercept and effect:

```r
exp(coef(m2))

## (Intercept) treatmentsox10
##    9.219770       1.474394
```

### 12.4.6 Performance of parametric tests and alternatives

#### 12.4.6.1 Type I error

If we are going to compute a $$p$$-value, we want it to be uniformly distributed "under the null". A simple way to check this is to compute Type I error. If we set $$\alpha = 0.05$$, then we'd expect 5% of tests of an experiment with no effect to have $$p < 0.05$$.
```r
# first create a matrix with a bunch of null data sets, each in its own column
n <- 10
n_sets <- 4000
fake_matrix <- rbind(matrix(rnegbin(n*n_sets, mu=10, theta=1), nrow=n),
                     matrix(rnegbin(n*n_sets, mu=10, theta=1), nrow=n))
treatment <- rep(c("cn", "tr"), each=n)
tests <- c("lm", "log_lm", "mww", "perm")
res_matrix <- matrix(NA, nrow=n_sets, ncol=length(tests))
colnames(res_matrix) <- tests
for(j in 1:n_sets){
  res_matrix[j, "lm"] <- coef(summary(lm(fake_matrix[,j] ~ treatment)))[2, "Pr(>|t|)"]
  res_matrix[j, "log_lm"] <- coef(summary(lm(log(fake_matrix[,j] + 1) ~ treatment)))[2, "Pr(>|t|)"]
  res_matrix[j, "mww"] <- wilcox.test(fake_matrix[,j] ~ treatment, exact=FALSE)$p.value
  res_matrix[j, "perm"] <- coef(summary(lmp(fake_matrix[,j] ~ treatment, perm="Prob", Ca=0.01)))[2, "Pr(Prob)"]
}
apply(res_matrix, 2, function(x) sum(x < 0.05)/n_sets)

##      lm  log_lm     mww    perm
## 0.04150 0.05250 0.04350 0.04675
```

Type I error is computed for the linear model, the linear model with a log-transformed response, the Mann-Whitney-Wilcoxon test, and the permutation test. For data that look like those modeled, the linear model, Mann-Whitney-Wilcoxon, and permutation tests are slightly conservative, while the log-transformed linear model is very slightly liberal. Among the conservative tests, the computed Type I error of the permutation test is closest to the nominal value of 0.05.

#### 12.4.6.2 Power

Power is the probability of a test to reject the null hypothesis if the null hypothesis is false (that is, if an effect exists):

$$\mathrm{Power} = \mathrm{Prob}(p < \alpha \, | \, \mathrm{effect} \neq 0)$$

If all we care about is a $$p$$-value, then we want to use the most powerful test. But while power is defined using $$\alpha$$, we can care about power even if we don't consider $$\alpha$$ to be a very useful concept, because whatever increases power also increases the precision of an estimate (that is, narrows confidence intervals).
```r
# first create a matrix with a bunch of data sets, each in its own column
n <- 5
n_sets <- 4000
fake_matrix <- rbind(matrix(rnegbin(n*n_sets, mu=10, theta=1), nrow=n),
                     matrix(rnegbin(n*n_sets, mu=20, theta=1), nrow=n))
treatment <- rep(c("cn", "tr"), each=n)
tests <- c("lm", "log_lm", "mww", "perm")
res_matrix <- matrix(NA, nrow=n_sets, ncol=length(tests))
colnames(res_matrix) <- tests
for(j in 1:n_sets){
  res_matrix[j, "lm"] <- coef(summary(lm(fake_matrix[,j] ~ treatment)))[2, "Pr(>|t|)"]
  res_matrix[j, "log_lm"] <- coef(summary(lm(log(fake_matrix[,j] + 1) ~ treatment)))[2, "Pr(>|t|)"]
  res_matrix[j, "mww"] <- wilcox.test(fake_matrix[,j] ~ treatment, exact=FALSE)$p.value
  res_matrix[j, "perm"] <- coef(summary(lmp(fake_matrix[,j] ~ treatment, perm="Prob", Ca=0.01)))[2, "Pr(Prob)"]
}
apply(res_matrix, 2, function(x) sum(x < 0.05)/n_sets)

##      lm  log_lm     mww    perm
## 0.09200 0.12525 0.08375 0.10600
```

As above, power is computed for the linear model, the linear model with a log-transformed response, Mann-Whitney-Wilcoxon, and the permutation test, by simulating a "low power" experiment. The effect is huge (twice as many cells), but the power is low because the sample size is small ($$n = 5$$). At this sample size, and for this model of fake data, all tests have low power. The power of the log-transformed linear model is the largest. A problem is that this is not a test of the means but of the means of the log-transformed response plus 1. The power of the permutation test is about 15% larger than that of the linear model and about 25% larger than that of the Mann-Whitney-Wilcoxon test. An advantage of the permutation test is that its p-value is for the difference in means. A good complement to this p-value would be bootstrapped confidence intervals. Repeat this simulation using $$n=40$$ to see how the relative power among the four tests changes in a simulation of an experiment with more power.
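One way to run that check is to wrap a simplified version of the simulation in a function of sample size. This sketch keeps only the plain linear model; the function name and defaults are mine, not the chapter's:

```r
library(MASS)  # for rnegbin

# power of the plain linear model as a function of sample size per group,
# using the same negative binomial fake-data recipe as above
power_lm <- function(n, n_sets = 1000, mu_cn = 10, mu_tr = 20, theta = 1) {
  treatment <- rep(c("cn", "tr"), each = n)
  p <- replicate(n_sets, {
    y <- c(rnegbin(n, mu = mu_cn, theta = theta),
           rnegbin(n, mu = mu_tr, theta = theta))
    coef(summary(lm(y ~ treatment)))[2, "Pr(>|t|)"]
  })
  mean(p < 0.05)
}

set.seed(5)
power_lm(5)    # low, as in the n = 5 simulation above
power_lm(40)   # much higher
```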