# How to prove Bloch function is periodic in reciprocal lattice?
I saw in some textbooks this formula: $$\Psi_{\mathbf{k}} (\mathbf{r}) = \sum_{\mathbf{G}} c_{\mathbf{k}+\mathbf{G}}e^{i(\mathbf{k}+\mathbf{G})\cdot \mathbf{r}}$$ which makes the statement of this question obvious. ($\mathbf{G}$ is reciprocal lattice vectors)
But I don't understand this formula. I know $$\Psi_{\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}u_{\mathbf{k}}(\mathbf{r})$$ and $u_{\mathbf{k}}(\mathbf{r})$ is periodic function of lattice, therefore can be written in Fourier series: $$u_{\mathbf{k}}(\mathbf{r}) = \sum_{\mathbf{G}} c_{\mathbf{k},\mathbf{G}}e^{i\mathbf{G}\cdot\mathbf{r}}$$ Now I don't understand why $c_{\mathbf{k},\mathbf{G}}$ can be written as $c_{\mathbf{k}+\mathbf{G}}$ ?
Because the index of summation only relates to $G$, you can forget about $k$; also $k = G + k$ (that shows the translational symmetry). And look here.
• G=G+k ? Do you mean k = k+G ? – Tim Jan 26 '15 at 20:51
• Also your argument is not true: a function of two arguments $c_{k,G}$ does not necessarily depend only on $k+G$ – Tim Jan 26 '15 at 20:54
• yes, corrected my statement! – user71065 Jan 26 '15 at 20:54
Because the reciprocal lattice is $G$-periodic, the state with wave vector $k+G$ describes the same state as that with wave vector $k$. You can therefore reduce your study to the first Brillouin zone ($-\pi/a<k\leq\pi/a$). This means that the coefficient in your Fourier expansion depends only on where you are within this zone. You can add or subtract $G$ from your $k$ vector as many times as you like, and the result will stay the same, at least in simple descriptions where no further corrections make the Bloch theorem only an approximation.
I'm also not happy with the exposition found in most (solid state physics) textbooks and think one cannot rigorously prove this without group theory. The argument would be the following in a setting with periodic boundary conditions (Born-von Karman) where $$\Psi (x + Na) = \Psi (x)$$ (for simplicity in 1d):
Using $[H, T] = 0$, where the translation operator is defined as $T f(x) = f(x + a)$, the label $k$ distinguishes the $N$ unique solutions $\Psi_k$ under $T$, which satisfy $$T \Psi_k (x) = {\rm e}^{{\rm i} k a} \Psi_k (x) \quad \text{with}\quad k \in \left\{ \frac{2 \pi n}{N a} : n = 0, 1, \dots, N-1\right\}~.$$ Now, for any $\Psi_{k'}$ with $k' = k + G$, where $G = m \cdot 2\pi / a$ is an integer multiple of the reciprocal lattice vector $b = 2\pi / a$, we would find $$T \Psi_{k+G}(x) = {\rm e}^{{\rm i} k a} \Psi_{k+G} (x)~,$$ i.e. $\Psi_{k'}$ yields the same eigenvalue of $T$ as $\Psi_k$ and is therefore not distinguishable from $\Psi_k$, as it belongs to the same irreducible representation. We can therefore define $$\Psi_{k + G} (x) \equiv \Psi_k (x) \quad\text{for any}\quad G = m \cdot 2\pi/a~.$$ All the properties of the Fourier representation of $\Psi_k$ are a consequence of this and not the other way round.
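As a small numerical sanity check of this argument (not part of the original answer; a sketch assuming a 1d lattice with $a=1$ and $N=6$ sites), one can diagonalize the translation operator on a ring and verify that its eigenvalues are exactly the $N$ phases ${\rm e}^{{\rm i}ka}$, so that $k$ and $k+G$ give identical eigenvalues:

```python
# Sketch: translation operator on N sites with Born-von Karman boundaries.
import numpy as np

N = 6
a = 1.0

# T shifts f(x) -> f(x + a): on N discrete sites this is a cyclic
# permutation matrix, (T f)[i] = f[(i + 1) mod N].
T = np.roll(np.eye(N), -1, axis=0)

# Its eigenvalues are the N phases exp(i k a) with k = 2*pi*n/(N*a).
eigvals = np.linalg.eigvals(T)
expected = np.exp(1j * 2 * np.pi * np.arange(N) / N)
for ev in expected:
    assert np.isclose(eigvals, ev).any()

# Shifting k by G = m * 2*pi/a leaves exp(i k a) unchanged, so k and
# k + G label the same eigenvalue (the same irreducible representation).
k = 2 * np.pi * 2 / (N * a)
G = 3 * 2 * np.pi / a
assert np.isclose(np.exp(1j * k * a), np.exp(1j * (k + G) * a))
```

Since $T$ has only $N$ distinct eigenvalues, any $k' = k + G$ necessarily reproduces one of them, which is the point of the group-theoretic argument above.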
Literature:
• Dresselhaus, Group Theory, Chp. 10.2
• Zee, Group Theory in a Nutshell, Chp. III.1
• I have the same doubt. Thank you very much for your help. I hope you can answer one question: you wrote "Ψk′ yields the same eigenvalues of T as Ψk and is therefore not distinguishable from Ψk". Is this a property of the eigenfunctions, or is it part of group theory? – Who Nov 6 '20 at 11:21
• It is a property of the operator $T$ which only has $N$ eigenvalues and therefore eigenfunctions labelled by the $N$ values of $k$. Does that help? – Floyd4K Nov 19 '20 at 15:53
• Thank you for your response. Sorry, it is not yet clear to me. So the argument is that the operator T has only one set of eigenvalues, and since both states have the same eigenvalues they must be the same state. But how do we know this must be the case? What lets us conclude this? How can we be sure they aren't different states with the same eigenvalues? – Who Nov 21 '20 at 1:49
• I didn't and wouldn't say must, but can, in the following sense: the two functions will transform the same way under translations, so they won't be able to represent states differing in their translation properties. Of course this doesn't mean you cannot have more than one physical state at a given $k$. On the contrary, that is why we have a band structure. But all wavefunctions at this $k$ will transform the same way under translation by $a$. – Floyd4K Dec 8 '20 at 9:55
Bloch functions are not necessarily periodic in reciprocal space. By the translation symmetry of the lattice, the wave function $\psi_{nk}(r)$ must satisfy the Bloch condition:
$$\psi_{nk}(r-R) = e^{-ik\cdot R}\psi_{nk}(r)$$ where $R$ is a lattice vector. Now this is generically satisfied by a function of the form $$\psi_{nk}(r) = e^{ik\cdot r}u_{nk}(r)$$ where $u_{nk}(r-R)=u_{nk}(r)$. But the choice of $u_{nk}(r)$ is not unique. There is a gauge freedom, meaning that we can take $u_{nk}(r)\mapsto e^{-iG\cdot r}u_{nk}(r)$ and the new wavefunction will still satisfy the Bloch condition. So does it matter which one we choose?
Well, the convention is to choose the so-called periodic gauge condition, i.e. we choose to have the wavefunction $\psi_{nk}$ be periodic in reciprocal space: $\psi_{n,k+G}(r)=\psi_{nk}(r)$. For this to be true, we must choose a $u_{nk}(r)$ which satisfies
$$u_{n,k+G}(r) = e^{-iG\cdot r} u_{nk}(r)$$
So this is what makes $\psi_{nk}(r)$ periodic in reciprocal space. We do not have to satisfy this condition, but it is conventional and convenient.
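To see concretely why indexing the coefficients as $c_{k+G}$ makes $\psi_k$ periodic in $k$, here is a small numeric sketch (not from the answers above; the lattice constant, the value of $k$, and the coefficient table are all hypothetical, chosen only so that the in-principle infinite sum over $G$ can be truncated exactly). Shifting $k$ by a reciprocal lattice vector merely relabels the summation index:

```python
# Sketch: psi_k(r) = sum_G c(k+G) exp(i (k+G) r) with a 1d lattice, a = 1.
import numpy as np

a = 1.0
b = 2 * np.pi / a           # primitive reciprocal lattice vector
k = 0.3 * b                 # some k in the first Brillouin zone

# Hypothetical coefficient function c(q), nonzero only for q = k + m*b
# with |m| <= 2, so the sum over G truncates exactly.
def c(q):
    table = {-2: 0.1, -1: 0.5, 0: 1.0, 1: 0.5, 2: 0.1}
    m = int(round((q - k) / b))
    return table.get(m, 0.0) if np.isclose(q, k + m * b) else 0.0

def psi(kk, r, mmax=10):
    Gs = b * np.arange(-mmax, mmax + 1)
    return sum(c(kk + G) * np.exp(1j * (kk + G) * r) for G in Gs)

r = 0.37
G0 = 3 * b
# Shifting k by G0 only relabels the summation index G -> G - 3,
# so psi_{k+G0}(r) = psi_k(r): the periodic gauge.
assert np.isclose(psi(k, r), psi(k + G0, r))
```

If one instead used coefficients $c_{k,G}$ that depend on $k$ and $G$ separately (not only through $k+G$), this relabeling argument would fail, which is exactly the gauge freedom discussed above.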
With 5 distinct nodes, the maximum no. of binary trees that can be formed is _____
5 distinct nodes implies the nodes are labelled.
The Catalan number gives the count of structurally distinct binary trees, i.e. trees whose nodes cannot be distinguished from each other (unlabelled nodes); only the structure of the tree differs.
For each structurally distinct binary tree, there is exactly one BST and exactly one unlabelled binary tree.
To get a labelled binary tree, we can permute the labels among the nodes of a structure in n! ways.
Here the nodes are distinct (A, B, C, D, E), and the total number of permutations of them is 5! = 120.
So, each tree structure accounts for 120 labelled trees.
Hence, the maximum number of binary trees = Catalan number × 120 = 42 × 120 = 5040
If the question said $5$ nodes instead of 5 distinct nodes, then the answer would be 42, right?
Yes, i.e. the count would be as for binary search trees.
I think the answer should be 42.
We have to maximise the number of binary trees, so go with labelled trees.
So, the 5th Catalan number multiplied by 5 factorial:
$\frac{10\cdot 9\cdot 8\cdot 7\cdot 6}{6\cdot 5\cdot 4\cdot 3\cdot 2}\cdot 5! = 42\cdot 120 = 5040$
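The arithmetic above can be checked in a few lines (a quick sketch using only Python's standard library):

```python
# Count binary trees on n = 5 distinct (labelled) nodes:
# (number of shapes) * (number of labelings) = Catalan(5) * 5!
from math import comb, factorial

n = 5
catalan = comb(2 * n, n) // (n + 1)  # Catalan number: unlabelled tree shapes
labelled = catalan * factorial(n)    # distinct nodes -> multiply by n!

assert catalan == 42
assert labelled == 5040
```

The same formula with `n` left as a parameter answers the "5 nodes, not distinct" variant (just `catalan` alone, i.e. 42).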
These are applications of the remainder and factor theorems. If a polynomial $P(x)$ is divided by $(x - \alpha)$, then the remainder is equal to $P(\alpha)$.
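A short sketch of the remainder theorem in code, using a hypothetical cubic $P(x) = x^3 - 2x + 5$ and synthetic division (both chosen purely for illustration):

```python
# Remainder theorem check: dividing P(x) by (x - alpha) leaves P(alpha).
def P(x):
    return x**3 - 2 * x + 5

def divide_by_linear(coeffs, alpha):
    """Synthetic division of a polynomial (coefficients highest degree
    first) by (x - alpha). Returns (quotient coefficients, remainder)."""
    q = [coeffs[0]]
    for c in coeffs[1:]:
        q.append(c + alpha * q[-1])
    return q[:-1], q[-1]

alpha = 3
quotient, remainder = divide_by_linear([1, 0, -2, 5], alpha)
assert remainder == P(alpha)  # 27 - 6 + 5 = 26
```

Here the quotient comes out as `[1, 3, 7]`, i.e. $x^2 + 3x + 7$, and the remainder 26 equals $P(3)$, as the theorem predicts.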
Europe/Berlin
Online event
# We warmly invite the neutron scattering community!
This year, the MLZ User Meeting will take place from December 8th to 9th and is directly linked to the German Neutron Scattering Conference DN2020 on December 9th and 10th, which is hosted by the FRM II/MLZ for the second time after 2008. A common poster session will link the two meetings and give the participants of both events the opportunity for exchange.
Due to the current restrictions regarding the coronavirus pandemic, the MLZ User Meeting as well as the DN2020 will take place as online events; the planned programme will not be affected by this!
Participation is only possible for registered participants, who will receive all further technical details well in advance. Keep an eye on the page "Technical details"!
We are looking forward to seeing you online in December!
Contact
• Tuesday, December 8
• 1:00 PM 2:30 PM
MLZ Users 2020 - Materials Science: Part 1/3
Conveners: Michael Hofmann, Ralph Gilles
• 1:00 PM
Development of novel Co-base superalloys for turbine applications by advanced characterization techniques 40m
Superalloys are key materials for energy conversion in jet engines, rockets and power plants. For more than 60 years, Ni-based superalloys have been in use. Due to their unique two-phase microstructure, they retain their strength up to 70% of their melting temperature. In 2006, a new ternary Co3(Al,W) compound was discovered that enabled the development of Co-based superalloys with microstructures similar to those of the conventional Ni-based superalloys.
In the following years, we developed compositionally complex Co-based superalloys with significantly improved properties, starting from the simple ternary Co-Al-W alloys. In this talk, it will be shown how various advanced characterization techniques, such as in-situ high-temperature neutron diffraction at the beamline SPODI and small-angle neutron scattering at SANS-1, together with transmission electron microscopy and atom probe tomography, helped to understand the observed microstructures and the resulting mechanical properties. It was found that the matrix is under tension and the precipitates under compression due to a positive lattice misfit of up to 0.8% between both phases, which is larger than in conventional Ni-based superalloys. Additionally, the volume fraction of the intermetallic precipitate phase is exceptionally high (up to 70%). These findings were essential for developing polycrystalline Co-based wrought alloys that show enhanced creep properties compared to conventional Ni-based wrought alloys.
Speaker: Steffen Neumeier (Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg)
• 1:40 PM
On the Determination of Residual Stresses in AM Lattice Structures 25m
The determination of residual stresses becomes more complicated with increasing complexity of the structures investigated. Unlike most fabrication techniques, laser powder bed fusion allows the production of lattice structures without any additional manufacturing step. These lattice structures consist of thin struts and are thus susceptible to internal-stress-induced deformation. In the best case, the internal stresses remain in the structures as residual stress (RS). The determination of the residual stress in lattice structures through non-destructive neutron diffraction is described in this work. In the case of lattice structures, we show how to overcome two formidable difficulties: a) the proper alignment of the filigree structures within the neutron beam; b) the proper determination of the RS field in a representative part of the structure. The magnitude and the direction of the residual stress are discussed. The residual stress in the struts was found to be uniaxial and to follow the orientation of the strut, while the residual stress in the knots is more hydrostatic. We show that strain measurements in at least seven independent directions are necessary for the estimation of the principal stress directions. The measurement directions should be chosen according to the sample geometry and an informed choice of the possible strain field. Indeed, we finally show that if the most prominent direction is not measured, the error in the calculated stress magnitude increases considerably.
Speaker: Dr Tobias Fritsch (Bundesanstalt für Materialforschung und -prüfung)
• 2:05 PM
Neutron PDF for insight into hydration shells around iron oxide nanoparticles 25m
Interfaces between iron oxide nanoparticles (IONP) and water are of great importance in various fields spanning biomedicine, wastewater treatment and catalysis. Recently, we could distinguish adsorbed water species and extended hydration layers around IONPs via a double-difference X-ray pair distribution function (dd-PDF) analysis [1]. Details of the interfacial hydrogen bond network shall now be addressed with neutrons.
Here we present neutron total scattering data on IONP powders and their aqueous dispersions (H2O/D2O), to which we apply our dd-PDF strategy [1]. IONPs of 7 nm size are synthesized in basic diethylene glycol and capped with citrate or phosphocholine. We developed a transferable, robust combination of TGA, AAS and elemental analysis to determine the exact composition of the powders, especially the amount of organic capping agents, which is important for the absolute normalization of the neutron data [2]. Additionally, contributions of surface-OH (-OD) groups of wet powders with varying amounts of surface water layers are investigated following Ref. [3].
Finally, we aim at elucidating interfacial structures like surface hydroxyls, ligand coordination and possibly contributions from in-plane co-adsorbed water molecules, via a contrast match study to bridge the gap between insight into wet powders and colloidal dispersions.
1. Thomä, S. L. J. et al., Nat. Commun. 2019, 10(1), 995.
2. Eckardt, M. et al., in preparation.
3. Wang, H. et al., J. Am. Chem. Soc. 2013, 135, 6885-6895.
Speaker: Sabrina Thomä (Universität Bayreuth)
• 1:00 PM 2:30 PM
MLZ Users 2020 - Neutron Methods: Part 1/3
Conveners: Peter Link, Sebastian Busch (GEMS at MLZ, HZG)
• 1:00 PM
Imaging with fast neutrons - improvements in spatial resolution and quantification 40m
Fast neutron imaging is a technique to investigate large objects for which X-rays or thermal neutrons face limitations due to their comparatively low penetration capabilities. Compared to thermal neutrons, where thin scintillators (<100 µm) generally provide good detection efficiencies (>>1%) at high spatial resolutions in the tens-of-microns range, fast neutrons currently require mm-thick scintillator materials with low detection efficiencies (~1%) at spatial resolutions in the mm range, often blurring important details in radiographs, such as cracks or small inclusions.
Additionally, the predominant interaction mechanism for fast neutrons is nuclear scattering in a mostly forward direction. This furthermore complicates the traditional imaging approach of placing objects very close to the detector surface to reduce geometrical blurring. In a collaboration between the Paul Scherrer Institut (PSI), the Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), Los Alamos National Laboratory and the company RC Tritec, measurements were performed at the FRM II and LANL to characterize the impact of the mentioned challenges and to find a pathway for quantification and standardization of imaging setups at different facilities, as well as for improving the resolution and efficiency of the technique in a collaborative effort.
Here we present the results of these measurements, including a break-through in improved spatial resolution by use of a new scintillator concept for fast neutrons.
Speaker: Dr Eberhard Lehmann (PSI)
• 1:40 PM
Multimodal Imaging using Neutrons and Gammas at the NECTAR Instrument 25m
NECTAR is a unique beam-line with access to fission neutrons for the non-destructive inspection of large and dense objects, where thermal neutrons or X-rays face limitations due to their comparatively low penetration. With the production of fission neutrons at the instrument, as well as neutrons interacting with the beamline geometry, such as the collimator, gamma rays are inevitably produced in the same process. Furthermore, these gamma rays are highly directional because they are constrained by the same beam-line geometry, and they come with a divergence similar to that of the neutrons. While difficult to shield, it is possible to utilize them by using gamma-sensitive scintillator screens in place of the neutron scintillators, viewed by the same camera and swapped out in-situ.
Here we present the advantages of combining the information gained from neutron imaging in conjunction with gamma imaging at the NECTAR beam-line, providing a unique probe with unparalleled isotope identification capabilities.
Speaker: Dr Adrian Losko (Technische Universität München, Forschungs-Neutronenquelle MLZ (FRMII))
• 2:05 PM
Chlorine determination in archaeological iron artefacts by PGAA 25m
Archaeological iron finds often undergo a secondary destructive corrosion process after excavation. Since chlorine is supposed to play a major role in this process, it is important to have a method for determining the chlorine content of such objects non-destructively, both for assessing the danger of corrosion and to verify the efficiency of methods trying to remove the chlorine. Neutron activation analysis (NAA), and prompt gamma activation analysis (PGAA) in particular, is the method of choice for the Cl determination in archaeological iron artefacts. By PGAA, sizeable pieces can be studied in a non-destructive manner with a detection limit of about 10 ppm. Since hardly any long-lived radioactivity is produced, the objects can be returned to museum collections within weeks after the analysis. We will report on studies of a large number of mainly Celtic iron artefacts from Bavaria that were excavated in the past 150 years and are in various states of preservation. Space-resolved Cl determination helps to understand details of the corrosion process. The efficiency of the removal of Cl from ancient objects by heating and leaching processes has been studied. The Cl removal from artificially corroded iron specimens prepared in the laboratory was studied in order to obtain a better understanding of the chemical bonding of the Cl that can be removed by the different methods.
Speaker: Dr Friedrich Wagner (Physik-Department, Technische Universität München)
• 1:00 PM 2:30 PM
MLZ Users 2020 - Nuclear, Particle, and Astrophysics: Part 1/1
Convener: Bastian Märkisch (Physik Department, TU München)
• 1:00 PM
Frequency-based decay electron spectroscopy 40m
Precision measurements of $\beta$-decay spectra can provide exquisitely sensitive tests of various predictions and underlying symmetry assumptions of the Standard Model (SM) of Particle Physics. Possible symmetry violations can alter the shape of $\beta$-decay spectra in characteristic ways. Beyond-SM physics causes, e.g., the finite masses of neutrinos that alter the $\beta$-decay spectrum of tritium in a predictable but still undetectable way. As a first step towards designing an experiment with a sensitivity of $40\,\mathrm{meV/c^2}$ to the neutrino mass scale, the Project 8 collaboration has recently demonstrated a novel, frequency-based electron spectroscopy technique. Cyclotron Radiation Emission Spectroscopy (CRES) determines the electron's kinetic energy from the feeble cyclotron radiation emitted by an electron spiralling in a magnetic trap. I will present the basics of CRES and results obtained with mono-energetic conversion electrons from $^{83\mathrm{m}}\mathrm{Kr}$ as well as preliminary results from measurements using molecular tritium. I will discuss the prospects of CRES in the context of precision $\beta$-decay experiments of the next generation, in particular with a focus on the neutron decay spectrum.
This work has been supported by the Cluster of Excellence "PRISMA+" (EXC 2118/1) funded by the German Research Foundation (DFG) within the German Excellence Strategy (Project ID 39083149), by the US DOE and NSF, and by internal investments at all collaborating institutions.
Speaker: Martin Fertl (Johannes Gutenberg Universität Mainz)
• 1:40 PM
Neutron optics for PERC 25m
The PERC experiment is currently under construction at the new beam port MEPHISTO at the FRM II. It aims to measure correlation parameters in neutron beta decay with an accuracy improved by one order of magnitude to a level of $10^{-4}$.
Inside the PERC instrument, an 8 m long neutron guide contains the decay volume in a magnetic field of 1.5 T and is fed by a highly polarized cold neutron beam. In order to ensure a depolarization of the neutron beam on the level of $10^{-4}$ per bounce, completely non-magnetic coatings preferably made of diamagnetic materials are required. We present measurements of new supermirrors made from copper and titanium layers with excellent reflectivity. Despite the well-known high mobility of copper, which leads to degradation of the reflectivity caused by interdiffusion, our supermirrors are highly resistant to baking-out needed to fulfill the requirement of low residual gas pressure.
We also present results on solid-state neutron polarizers made with iron/silicon coatings. These polarizers are based on neutron transmission through the polarizer substrate. This opens the opportunity to choose a substrate material with a neutron optical potential higher than that of Fe for neutrons with spin antiparallel to the magnetic holding field, which eliminates total reflection of the unwanted spin component even in the low-q region. The main advantages are a high degree of polarization over a wide angular range as well as a very compact construction.
Speaker: Alexander Hollering
• 2:05 PM
T-odd effects in the binary fission of uranium induced by polarized neutrons 25m
T-odd effects in the fission of heavy nuclei have been extensively studied for more than a decade in order to probe the dynamics of the process. A collaboration of Russian and European institutes discovered the effects in ternary fission in a series of experiments performed at the ILL reactor [1-2], and the effects were carefully measured for a number of fissioning nuclei. The analogous effects for gammas and neutrons in the fission of 235U and 233U were also measured [2-5] after the observation of T-odd effects for ternary particles accompanying the reaction 235U(n,f) induced by cold polarized neutrons. All experiments up to now were performed with cold polarized neutrons, which implies a mixture of several spin states of the compound nucleus, the relative contributions of which are not well known. Measurements of gamma and neutron asymmetries in an isolated resonance of uranium are important in order to get "clean" data. The present work describes a number of our team's measurements, including results on T-odd effects in the fission of uranium isotopes by polarized neutrons with different energies at the POLI facility and the MEPHISTO beamline of the FRM II reactor in Garching.
[1] P. Jesinger et al., Phys. At. Nucl. 62, 1608 (1999)
[2] Y. Kopatch et al., EPJ Web of Conf. 169, 00010 (2018)
[3] G. Danilyan et al., Phys. At. Nucl. 72, 1812 (2009)
[4] G. Danilyan et al., Phys. At. Nucl. 74, 671 (2011)
[5] G. Danilyan et al., Phys. At. Nucl. 74, 671 (2011)
Speaker: Mr Gadir Ahmadov (Joint Institute for Nuclear Research, 141980 Dubna, Russia, National Nuclear Research Centre, Baku, Azerbaijan)
• 1:00 PM 2:30 PM
MLZ Users 2020 - Quantum Phenomena: Part 1/3
Conveners: Robert Georgii, Yixi Su (JCNS-MLZ)
• 1:00 PM
Impact of anisotropy on the control of spin textures 40m
Magnetic anisotropy does not only play a vital role in the formation and stability of long-range magnetic orders but also affects the ability to manipulate such spin structures. Via case studies, I show how competition of single-ion anisotropies at different magnetic sites can lead to unconventional magnetic orders and how modulation vectors of magnetic spirals can be controlled by tuning anisotropy.
Speaker: Istvan Kezsmarki (Uni Augsburg)
• 1:40 PM
Putative spin-nematic phase in the square lattice compound BaCdVO(PO4)2 25m
We report neutron-scattering and ac magnetic susceptibility measurements of the two-dimensional spin-1/2 frustrated magnet BaCdVO(PO4)2. At temperatures well below TN ≈ 1 K, we show that only 34% of the spin moment orders in an up-up-down-down stripe structure. Dominant magnetic diffuse scattering and comparison to published muon-spin-rotation measurements indicate that the remaining 66% is fluctuating. This demonstrates the presence of strong frustration, associated with competing ferromagnetic and antiferromagnetic interactions, and points to a subtle ordering mechanism driven by magnon interactions. On applying a magnetic field, we find that at T = 0.1 K the magnetic order vanishes at 3.8 T, whereas magnetic saturation is reached only above 4.5 T. We argue that the putative high-field phase is a realization of the long-sought bond-spin-nematic state.
Speaker: Dr Markos Skoulatos (TUM)
• 2:05 PM
Spin-liquid-like state in anion-disordered Gd$_2$Hf$_2$O$_7$ 25m
Pyrochlore antiferromagnets (AFM) Gd$_2T_2$O$_7$ ($T$: tetravalent metal elements) are prototypical materials for realizing classical spin liquid states. However, most of them have been observed to show long-range magnetic order. Previous studies show that Gd$_2$Hf$_2$O$_7$ has a Curie-Weiss temperature $\approx -7.3$ K and a tiny sharp peak on top of a large broad maximum in the specific heat data, indicating long-range AFM order. Here we present our investigation of the nuclear and magnetic structures of Gd$_2$Hf$_2$O$_7$. Using neutron diffraction, we found that the sample has $\sim 8\%$ oxygen Frenkel defects with undetectable Gd/Hf antisite defects. The polarized neutron diffuse scattering pattern shows liquid-like scattering at 30 mK without any magnetic Bragg peaks, evidencing a spin-liquid-like ground state. The pattern was further analyzed using the reverse Monte Carlo method together with unsupervised machine learning techniques, which reveals a Palmer-Chalker order over the range of a single unit cell. Bond disorder due to oxygen anion disorder may be responsible for the absence of long-range order.
Speaker: Jianhui Xu (MLZ, TUM)
• 1:00 PM 2:30 PM
MLZ Users 2020 - Soft Matter: Part 1/3
Conveners: Henrich Frielinghaus (JCNS), Michaela Zamponi (Forschungszentrum Jülich GmbH, Jülich Centre for Neutron Science at Heinz Maier-Leibnitz Zentrum)
• 1:00 PM
Polyanion Diffusion in Polyelectrolyte Multilayers 40m
Coatings of oppositely charged macromolecules (proteins, DNA, polyelectrolytes) are used for surface modification and functionalization. Yet, it remains a challenge to control the position and mobility of the molecules within the coating. As a model system, polyelectrolyte multilayers were used, which were prepared by the sequential adsorption of oppositely charged polyions. With neutron reflectivity, the diffusion constant of the polyanion PSS was measured. Two parameters were found to be important: (i) the conformation of the polyelectrolytes, which depends on the ion concentration in the deposition solution and (ii) the molecular weight of the polycation; the latter was the dominant parameter. Thus, the diffusion coefficient of PSS could be varied by five orders of magnitude; the observed scaling laws are consistent with sticky reptation.
An important question concerns the relationship between the polyelectrolyte composition in the multilayer and the deposition solution. Multilayers were prepared from binary mixtures of long deuterated PSS$_\text{long}$ and short protonated PSS$_\text{short}$. A small amount of PSS$_\text{long}$ in the deposition solution led to a disproportionate increase of PSS$_\text{long}$ in the film, consistent with the higher diffusion coefficient of PSS$_\text{short}$. The results provide insight into the parameters which influence polyelectrolyte mobility and furthermore demonstrate how polyelectrolyte mobility influences film composition.
Speaker: Christiane Helm
• 1:40 PM
Distortion of amphiphile lamellar phases induced by surface roughness 25m
The structure of concentrated solutions of tetraethyleneglycol dodecyl ether has been compared at a smooth surface and at one with a roughness of the order of the lamellar spacing. This was done in order to investigate the role perturbations play in the overall lamellar order when they have length scales of the order of the interactions between neighboring lamellae. The results showed that the surfactant forms a well-ordered and aligned structure at a smooth surface, extending to a depth of several micrometers from the interface. Increasing the temperature of the sample and subsequent cooling promotes alignment and increases the number of oriented layers at the surface. Against a rough surface, the same sample forms a significantly less aligned structure, which does not align to the same extent even after heating. The perturbation of the structure caused by thermal fluctuations was found to be much less than that imposed by a small surface roughness.
• 2:05 PM
Nanoscale Structural Rearrangements in Ultrathin Nanocellulose Films induced by Water 25m
Cellulose nanofibrils (CNF) as a sustainable biomaterial are excellent building blocks for mechanically exceptional materials and functional coatings. Yet, the water uptake and response to humidity still poses a challenge. We first demonstrate a facile route to prepare large-scale cellulose-based nanostructured thin films with a low surface roughness down to 2.5 nm on (20 × 100) mm$^2$ substrates. We employ in situ grazing incidence small-angle neutron scattering to study the morphological features within the ultra-smooth CNF thin films under as-prepared conditions as well as their rearrangement under humidification. Increasing CNF surface charge is highly beneficial for the layering mechanism as it directly influences the self-assembly process, which results in a low roughness of the densely packed CNF network. We observe distinct domains of smaller cellulose bundles and larger bundles or agglomerates within the thin film. During in situ humidification and drying of the CNF film, the domains reversibly change from cylindrical to spherical appearance. With decreasing values of surface roughness corresponding to increasing surface charge densities of CNF films, the surface free energy is observed to be tunable. This knowledge can be used to promote the use of polar solvents in applications such as organic solar cells and to further enhance physical properties and materials lifetime.
Speaker: Stephan Roth (DESY / KTH)
• 1:00 PM 2:30 PM
MLZ Users 2020 - Structure Research: Part 1/3
Conveners: Anatoliy Senyshyn, Martin Meven (RWTH Aachen University, Institute of Crystallography - Outstation at MLZ)
• 1:00 PM
The hunt for enzyme isoform specific ligand binding: using neutron crystallography to elucidate selective inhibition of carbonic anhydrase by saccharin-based ligands. 40m
Up-regulation of carbonic anhydrase IX (CA IX) expression is an indicator of cancer metastasis and is associated with poor cancer patient prognosis. As such, CA IX has emerged as an attractive cancer target for diagnosis, cancer staging, imaging, and also treatment. However, due to the high level of sequence conservation between human variants of the enzyme, the development of isoform-specific inhibitors has been largely unsuccessful. In this study, a CA IX-mimic construct was designed that mimics the CA IX active site while maintaining the CA II characteristics that make it amenable to crystallography. The mimic construct is based on CA II but with seven point mutations introduced to match the greater active site region with >96% identity to that of CA IX. The structures of the CA IX-mimic, unbound and in complex with saccharin (SAC) and a saccharin-glucose conjugate (SGC), were determined using joint X-ray and neutron protein crystallography. Previously, SAC and SGC had been shown to display CA isoform inhibitor selectivity in assays, but X-ray crystal structures failed to reveal the basis of this selectivity. Joint X-ray and neutron crystallographic studies have shown which active site residues play a role and how solvent displacement and H-bonding reorganization occur prior to, or upon, SAC and SGC binding. Specifically, these observations highlighted the importance of residues 67 (Asn in CA II, Gln in CA IX) and 130 (Asp in CA II, Arg in CA IX) in selective CA inhibitor targeting.
Speaker: Zoe Fisher (European Spallation Source ERIC)
• 1:40 PM
Interdependent scaling of long-range oxygen and magnetic ordering in non-stoichiometric Nd2NiO4.10 25m
Hole doping in Nd2NiO4.00 can be achieved either by substituting the trivalent Nd atoms with bivalent alkaline earth metals or by oxygen doping, yielding Nd2NiO4+δ. While the alkaline earth metal atoms are statistically distributed on the A-cation sites, the extra oxygen atoms on interstitial lattice sites remain mobile down to ambient temperature and allow complex ordering scenarios depending on δ and T. Thereby the oxygen ordering, usually setting in far above room temperature, adds an additional degree of freedom on top of the charge, spin and orbital ordering, which appear at much lower temperatures. In this study, we investigated the interplay between oxygen and spin ordering for a low oxygen doping concentration, i.e. Nd2NiO4.10. The presence of a complex 3D modulated structure related to oxygen ordering already at ambient temperature was evidenced by single crystal neutron diffraction, the modulation vectors being ±2/13a ± 3/13b, ±3/13a ± 2/13b and ±1/5a ± 1/2c with satellites up to fourth order. The coexistence of oxygen and magnetic ordering below TN ≃ 48 K was evidenced, with magnetic satellite reflections adopting the same modulation vectors as found for the oxygen ordering, evidencing a unique coexistence of 3D modulated ordering for spin and oxygen in Nd2NiO4.10. Temperature-dependent measurements of magnetic intensities suggest two magnetic phase transitions, below 48 K and 20 K, indicating two distinct onsets of magnetic ordering for the Ni and Nd sublattices, respectively.
Speaker: Werner Paulus (Université de Montpellier, Sud de France)
• 2:05 PM
Insights into the (de)lithiation mechanism of core-shell layered Li(Ni,Co,Mn)O2 cathode materials during cycling 25m
Layered LiNixCoyMn1-x-yO2 (NCM) oxides with core-shell morphology have been found to be prospective cathode candidates for advanced lithium-ion batteries. The electrochemical performance of NCM cathodes is tied to the relative transition metal ratios, making it difficult to determine the real structure of core-shell NCM materials and to understand the synergistic effect of core and shell upon cycling. Herein, high-resolution neutron powder diffraction at the instrument SPODI was used to investigate the structure of the synthesized NCM compound. The results show that the as-prepared NCM material consists of an inner Ni-rich core and a Mn-rich shell on the secondary particle level. Both core and shell possess a layered α–NaFeO2–type structure with the same space group (R-3m) but a slight difference in lattice parameters. The (de)lithiation mechanism of core-shell NCM cathode materials was investigated by in situ synchrotron-based X-ray diffraction and absorption spectroscopy. These findings contribute to the preparation of layered Ni-based oxides with good electrochemical performance.
Speaker: Weibo Hua (Karlsruhe Institute of Technology (KIT))
• 2:30 PM 3:00 PM
Break 30m
• 3:00 PM 4:40 PM
MLZ Users 2020 - Materials Science: Part 2/3
Conveners: Michael Hofmann, Ralph Gilles
• 3:00 PM
Understanding mechanical behaviour of Nb3Sn superconducting magnet coils by combined neutron diffraction and macroscopic stress strain measurements 25m
Reliable mechanical materials data are required for predicting the strain and stress state evolution during assembly, thermal cycling and powering of superconducting magnets. The ingredients for thermomechanical modelling of linear elastic and isotropic magnet materials behaviour are often available. However, taking into account anisotropic mechanical properties, the yielding and flowing of fully annealed Cu, the brittleness of Nb3Sn, and the non-linear and irreversible thermal expansion of the Nb3Sn conductor during reaction heat treatment is particularly challenging. The Nb3Sn conductor block Young’s modulus anisotropy and mechanical behaviour are explained based on in-situ neutron diffraction loading strain measurements at the MLZ Stress-Spec diffractometer. It is shown that the conductor block behaves like a fibre reinforced composite, with iso-strain and iso-stress in the conductor constituents under axial and transverse loading, respectively. The potential of different coil characterisation methods, notably digital image analysis and indentation hardness maps in metallographic coil cross sections, and residual strain mapping of collared coil assemblies by neutron diffraction, are compared.
Speaker: Christian Scheuerlein (CERN)
• 3:25 PM
Impact of Sulfur on the melt dynamics of glass forming Ti75Ni25−xSx 25m
Bulk metallic glasses combine a spectrum of favorable mechanical and chemical properties. Titanium-based bulk metallic glasses in particular are in demand for lightweight construction and for medical devices. However, the presence of toxic beryllium and the limited casting thickness restrict the production of titanium-based bulk metallic glasses. Recently, sulfur was recognized as an alloying element for bulk metallic glass production. In Ti75Ni25 the substitution of nickel by sulfur leads to bulk metallic glass formation at 8 at.% sulfur.
In order to identify the origin of the enhanced glass forming ability, we examined the melt dynamics of Ti75Ni25-xSx (x = 0, 5, 8) on different length scales [1]. The mean Ti/Ni self-diffusion coefficients were probed by quasielastic neutron scattering on the time-of-flight spectrometer TOFTOF. Since titanium-based melts are highly reactive, we applied containerless processing techniques to perform our experiments. We observe a slowing of the melt dynamics, in both viscosity and self-diffusion, upon sulfur addition. This is accompanied by a decrease of the melt packing fraction. Neither a reduction of the liquidus temperature nor a dense melt packing can explain the enhanced glass forming ability. Apparently, chemical interactions that lead to the development of a complex melt structure are involved.
[1] J. Wilden, F. Yang, D. Holland-Moritz, S. Szabó, W. Lohstroh, B. Bochtler, R. Busch, A. Meyer (2020) Applied Physics Letters, 117(1), 013702.
Speaker: Johanna Wilden
• 3:50 PM
Nowadays, lead acid batteries still offer a reliable and cost-effective solution compared to lithium-ion batteries and can be adapted to different types of energy storage applications. After more than 150 years of use, the energy density of these batteries still presents substantial room for improvement. Our research group is monitoring the processes which occur inside lead acid batteries (ad hoc manufactured small batteries and commercial batteries) in an operando manner. To study their behaviour, we have used thermal and fission neutrons, as well as gamma radiation, to perform both radiography and tomography of lead acid batteries, with the goal of better understanding their function and subsequently improving their electrochemical efficiency. Each type of radiation resolves different information. For example, thermal neutron radiography shows that electrolyte stratification is difficult to detect, because the neutron transmission does not change appreciably in the working concentration range of the electrolyte. However, by focusing on the electrodes, evidence for structural and electrochemical evolution in the active materials can be detected based on compositional change. Here, we present a summary of the recent advances we have obtained thus far from experiments performed at DINGO (OPAL reactor, Sydney) and NECTAR (FRM-II reactor, Munich).
Speaker: Jose Miguel Campillo (University of the Basque Country)
• 4:15 PM
In-operando neutron reflectometry reveals the solid electrolyte interface formation on surface coated silicon based anodes for lithium-ion batteries 25m
Silicon anodes for lithium ion batteries (LIBs) exhibit a high theoretical capacity of 3590 mA h g$^{-1}$ – one order of magnitude higher than commonly used graphite – but they suffer a large volume expansion of around 300 % during cycling. The formation and composition of the solid electrolyte interface (SEI) in LIBs has a huge impact on the stability and performance of the cell. Coatings of only 10 nm have a large influence on the SEI and therefore on the stability of the silicon based anode, and hence of the cell.[1] Static time-of-flight neutron reflectometry (TOF NR) measurements prove that the first three cycles are sufficient to form the SEI using metallic lithium as counter electrode. Carbon or TiO$_2$ surface coatings on Si$_{85}$Ti$_{15}$ alloy anodes significantly influence the composition and thickness of the SEI. In-operando TOF NR measurements during cycling lead to a better fundamental understanding of the formation and growth of the SEI on these high-performance LIB anodes.
References
(1) Xie, H.; Sayed, S. Y.; Kalisvaart, W. P.; Schaper, S. J.; Müller-Buschbaum, P.; Luber, E. J.; Olsen, B. C.; Haese, M.; Buriak, J. M. Adhesion and Surface Layers on Silicon Anodes Suppress Formation of c -Li 3.75 Si and Solid-Electrolyte Interphase. ACS Appl. Energy Mater. 2020, 3, 1609–1616.
Speaker: Simon J. Schaper
• 3:00 PM 4:40 PM
MLZ Users 2020 - Neutron Methods: Part 2/3
Conveners: Peter Link, Sebastian Busch (GEMS at MLZ, HZG)
• 3:00 PM
Effect of Si content within Silicon-Graphite anodes on performance and Li concentration profiles using NDP and conventional electrochemical techniques 25m
Although the addition of silicon to a conventional pure graphite anode leads to a large increase in energy density, the profound morphological changes associated with repeated (de-)lithiation may lead to rapid degradation in cell performance. Reversible (de-)lithiation of Li-ions and the formation of a homogeneous SEI layer in the initial cycles are therefore crucial.
In this work, we use conventional electrochemical techniques to quantify in-situ the amount of active Li-ions. The electrochemical analysis was conducted using coin half-cells made from different silicon-graphite (SiG) combinations against a lithium chip as the counter electrode.
Furthermore, we utilized neutron depth profiling (NDP) as an ex situ technique to quantify the lithium content accumulated in the SEI as inactive lithium in different electrode combinations. Here, the coin half-cells were brought to the desired depth-of-discharges (DODs) using a constant current rate of 0.05 h-1. The electrodes were then extracted and dried under argon atmosphere before conducting the NDP measurements.
The focus lies on the delithiation phase after fully lithiating the electrode samples. Finally, a comparison of the Li contents extracted from the two methods is presented. The results show the Li density profiles across the electrode coatings (surface and bulk) for each SiG combination.
Speaker: Mr Erfan Moyassari Sardehaei (Institute for Electrical Energy Storage Technology (EES), Technical University of Munich (TUM))
• 3:25 PM
The Robot Positioning System at the Materials Science Diffractometer STRESS-SPEC 25m
The diffractometer STRESS-SPEC is optimised for fast strain mapping and pole figure measurements. Our group pioneered the use of industrial robots for sample handling at neutron diffractometers. However, the current robot is limited in its use by absolute positioning errors of up to ±0.5 mm. Usually, an absolute positioning accuracy of 10% of the smallest gauge volume size – which in the case of modern neutron diffractometers is in the order of 1×1×1 mm^3 – is necessary to allow accurate strain tensor determination and correct centering of local texture measurements. Therefore, the original robot setup at the neutron diffractometer STRESS-SPEC is currently being upgraded to a high accuracy positioning/metrology system. We will present the complete measurement process chain for the new robot environment. To achieve a spatial accuracy of 50 µm or better during strain measurements, the sample position will be tracked by an optical metrology system and actively corrected. The additional use of radial collimators creates more space in the sample environment and enhances the residual stress analysis capabilities for large complex parts. Finally, a newly designed laser furnace can be mounted on the robot flange to conduct texture measurements at elevated temperatures of up to 1300 °C. A brief overview of the STRESS-SPEC instrument and its capabilities as well as first commissioning experiments using the new setup will be given.
Speaker: Martin Landesberger (TUM)
• 3:50 PM
Compact Clamp Cells for High Pressure Neutron Scattering at Low Temperatures and High Magnetic Fields at MLZ 25m
The combination of high pressure, low temperature and high magnetic fields with neutron scattering is of great interest for the study of a wide range of materials, e.g. quantum phenomena where competing magnetic interactions are tuned by pressure. The basic requirement for such experiments is the availability of suitable pressure devices. The most common type of device for high-pressure neutron experiments is the clamp cell: the pressure is applied and fixed ex-situ, allowing an independent use of the same cell/sample in various setups.
Here we report on the development of dedicated compact clamp cells for neutron scattering experiments in the closed-cycle cryostats and high-field magnets on the beamlines DNS, MIRA, and POLI. The cell has been produced in CuBe and NiCrAl variants, working up to about 1.1 GPa and 1.5 GPa, respectively, in good agreement with theoretical predictions. The use of nonmagnetic materials allows measurements of magnetic properties of the sample in both cells, even using polarized neutrons.
First tests in the CuBe cell have been successfully performed for the load/pressure calibration curve, cell attenuation and background measurement both with cold and hot neutrons, and to test thermal behavior, measuring magnetic reflections at very low temperatures. The results of these tests will be presented. The new cells are well suited for high pressure measurements at ultra-low temperatures and in combination with an applied magnetic field.
Speaker: Mr Andreas Eich (Forschungszentrum Jülich GmbH; RWTH Aachen)
• 4:15 PM
(Elastic) neutron scattering on hydrogen rich samples 25m
Estimating the resolution of instruments amounts to predicting their capabilities. Of course, those estimates are only as good as the assumed simplifications are justified. One of the most significant assumptions for SANS or NR is that the scattering is solely elastic. In this context, the interaction of neutrons with hydrogen rich samples is of particular interest, especially due to the numerous neutron scattering experiments investigating soft condensed matter. Without doubt, the contribution of elastically scattered neutrons is by far dominant. However, resolving scattering features which require a high signal-to-noise ratio or q-resolution is limited by in- and/or quasi-elastically scattered neutrons. The same circumstances might also restrict the significance of the outcome of commonly used contrast variation experiments, owing to the strong incoherent scattering length and fast dynamics of 1H compared to deuterium. Here we will present SANS and NR experiments showing inelastic/quasi-elastic scattering, partially compare them to spectroscopic investigations in the same scattering geometry, and elaborate their impact on data quality.
Speaker: Olaf Soltwedel
• 3:00 PM 4:40 PM
MLZ Users 2020 - Nuclear, Particle, and Astrophysics: Collaboration meeting PERC
• 3:00 PM 4:40 PM
MLZ Users 2020 - Quantum Phenomena: Part 2/3
Conveners: Robert Georgii, Yixi Su (JCNS-MLZ)
• 3:00 PM
Inward shift in the spin wave dispersion of a stripe discommensurated Pr-based 214-nickelate 25m
Magnetic excitations in the spin-stripe phases of La-based 214-nickelates have been vigorously explored using INS for almost three decades and remain an exciting research field, especially for understanding their differences from the high-T$_c$ 214-cuprates despite their structural similarities. In view of the reported two-dimensional antiferromagnetic nature, out-of-plane magnetic excitations are generally not expected in 214-nickelates. From INS measurements of magnetic excitations in the stripe discommensurated Pr$_{3/2}$Sr$_{1/2}$NiO$_4$ with magnetic incommensurability $\epsilon=$ 0.4, we present here very compelling evidence for a sizable out-of-plane interaction ($\sim$2.2 meV), which was crucial to explain the observed shift of the spin wave dispersions towards the magnetic zone centers.
Reference: A. Maity, R. Dutta, and W. Paulus, Phys. Rev. Lett. 124, 147202 (2020).
Speaker: Avishek Maity
• 3:25 PM
Dipolar interactions and spin dynamics in the itinerant ferromagnets Fe and Ni 25m
Inelastic neutron scattering studies of the spin dynamics of archetypical ferromagnets have been conducted since the invention of those methods. However, the results were limited to relatively large momentum transfers q by experimental difficulties, mainly the coarse resolution of modern TAS or TOF instruments. Utilizing a modern method, the neutron resonance spin echo technique, we investigated the spin-wave dispersion in iron and the paramagnetic spin fluctuations in nickel at small momentum and energy transfers with very high resolution.
The spin wave dispersion of an isotropic ferromagnet is comprehensively described by the Holstein-Primakoff theory, which takes dipolar interactions into account. As expected, the dispersion follows a quadratic form for large q values $E_{SW}\propto q^2$, whereas for small q the dispersion shows linear behavior. This is attributed to the long-range dipolar interaction between the magnetic moments. The subtle influence of these interactions on the magnon spectrum can be expressed by the material specific dipolar wave vector $q_D$. Hence, the dipolar interactions are primarily probed for $q\leq q_D$.
Our results show excellent agreement with previously conducted triple-axis measurements by Collins et al. in the overlapping q regime, validating the experimental approach, while extending the investigated range of the spin wave dispersion down to a momentum transfer of $q=0.015$ Å$^{-1}$ with unprecedented energy resolution.
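The crossover described above can be summarized in a compact form. The following is a sketch of the commonly quoted zero-field Holstein-Primakoff dispersion for an isotropic ferromagnet with dipolar interactions (assumptions: no applied field, no anisotropy gap; $D$ denotes the spin-wave stiffness, $\theta_{\mathbf{q}}$ the angle between $\mathbf{q}$ and the magnetization, $M_{\mathrm{s}}$ the saturation magnetization):

```latex
E_{\mathrm{SW}}(\mathbf{q})
  = \sqrt{D q^{2}\left(D q^{2} + \Delta_{\mathrm{dip}}\sin^{2}\theta_{\mathbf{q}}\right)},
\qquad
\Delta_{\mathrm{dip}} = g\mu_{\mathrm{B}}\mu_{0}M_{\mathrm{s}} = D q_{D}^{2}.
```

For $q \gg q_D$ this reduces to the quadratic form $E_{\mathrm{SW}} \approx D q^{2}$, while for $q \ll q_D$ it becomes linear, $E_{\mathrm{SW}} \approx q\sqrt{D\,\Delta_{\mathrm{dip}}}\,|\sin\theta_{\mathbf{q}}|$, which is why the dipolar interactions are primarily probed for $q \lesssim q_D$.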
Speaker: Lukas Beddrich (Heinz Maier-Leibnitz Zentrum (MLZ))
• 3:50 PM
Magnetic structure of the Mn moment in the magnetic Weyl semimetal Mn3Sn 25m
In the last few years, Mn3Sn has attracted large interest in the condensed matter physics community due to the Weyl semimetallic nature of this compound. Owing to the emergent Berry flux from the Weyl points, Mn3Sn shows interesting properties like the anomalous Hall effect, the chiral magnetic effect, and other non-local transport properties.
Along with its exotic transport properties, this material shows a temperature-dependent magnetic structure. To understand the connection between the Weyl properties and the magnetic structure, we have performed single crystal neutron diffraction on a Mn3.17Sn sample at the HEiDi instrument at FRM II. Our diffraction experiment confirms that between 274 K and TN = 420 K the Mn moments order in an inverse triangular antiferromagnetic structure in the a-b plane. In the temperature range 50 K to 274 K, the Mn moments order in a spiral magnetic structure. The same spiral magnetic structure persists below 50 K down to 4 K, where a spin-glass state was reported. The direct correlation between the magnetic structure and the anomalous Hall effect (AHE) is still unclear: some groups claimed that no AHE is observed in the incommensurate region (50 K to 190 K) [1], while other groups found AHE in this region [2]. We have observed AHE in the incommensurate region with an amplitude comparable to the published reports.
Reference:
1. N. H. Sung et al. Applied Physics Letter 112, 132406 (2018).
2. S. Nakatsuji et al. Nature 527, 212 (2015).
• 4:15 PM
Destruction of long-range magnetic order in Cu$_2$GaBO$_5$ and Cu$_2$AlBO$_5$ ludwigites by an external magnetic field. 25m
Ludwigites are oxyborate compounds with the general formula $M_2^{2+}M^{\prime{\kern.5pt}3+}$BO$_5$. Their structure consists of low-dimensional zigzag walls with triangular motifs, making them an interesting playground for the realization of magnetic frustration on quasi-low-dimensional lattices. Of particular interest are copper ludwigites, in which the divalent transition-metal ion is Cu$^{2+}$, carrying a quantum spin 1/2, whereas the trivalent ion is nonmagnetic. The Cu$_2$GaBO$_5$ and Cu$_2$AlBO$_5$ ludwigites have been carefully characterized. Both compounds order antiferromagnetically, with $T_\text{N}\approx4.1$ K and 3 K, respectively. The propagation vector of Cu$_2$GaBO$_5$, determined by neutron diffraction, is $(0.45~0~-\!0.7)$. We also collected $\mu$SR data as a function of temperature and weak longitudinal magnetic field. They indicate a decoupling in weak fields of about 2000 gauss, which suggests that the internal field experienced by the muon is unusually weak. On the other hand, a magnetic field also induces a very fast depolarization of a small fraction of the muons, leading to a decrease in initial asymmetry, which is consistent with field-induced magnetic disorder. We also present inelastic neutron scattering measurements evidencing diffuse low-energy spin fluctuations associated with such a crossover. We suggest that these investigations help to understand the magnetic ordering and constitute a further step towards understanding this quantum spin system.
Speaker: Anton Kulbakov
• 3:00 PM 4:40 PM
MLZ Users 2020 - Soft Matter: Part 2/3
Conveners: Henrich Frielinghaus (JCNS), Michaela Zamponi (Forschungszentrum Jülich GmbH, Jülich Centre for Neutron Science at Heinz Maier-Leibnitz Zentrum)
• 3:00 PM
Inorganic nanoparticles challenging lamellar and non-lamellar lipid membranes: the role of curvature in nano-bio interactions 25m
The interaction of engineered nanomaterials with living systems is mediated by biological barriers, which determine their biological fate and cytotoxicity. Understanding the interaction of nanoparticles (NPs) with biological interfaces is the key to filling the gap between NP development and end-use application. Lipid-based synthetic membranes can be used to mimic natural interfaces under simplified conditions, in order to identify key determinants in nano-bio interactions. While most investigations so far have focused on lamellar models, far less attention has been given to curved-bilayer structures, which are ubiquitous in cells under certain conditions. Here, we address the interaction of inorganic NPs with biomimetic bilayers of lamellar and non-lamellar nature, from flat membranes to the cubic architectures encountered in diseased cells. With a library of gold NPs, we explore the effect of NP shape and surface coating as a function of membrane curvature. Through an ensemble of structural techniques (SAXS, GISANS and neutron reflectivity), we found that highly curved membranes are associated with an enhanced structural stability towards NPs. Moreover, confocal laser scanning microscopy analysis highlights that cubic and lamellar phases interact with NPs according to two distinct mechanisms. These results are the first attempt to systematically study the role of membrane curvature in the interaction with NPs, disclosing new perspectives on the understanding and application of nano-bio interfaces.
Speaker: Ms Lucrezia Caselli (University of Florence)
• 3:25 PM
Nano-Structure Development of Oral Pharmaceutical Formulations in Simulated Intestine – D-contrast SANS and DLS 25m
After patient intake, pharmaceutical drug formulations for oral delivery undergo a stepwise structural development: disintegration to micro-particles, dissolution of drug nano-complexes, interaction with bile and lipids, and uptake by the intestinal membrane proteins (receptors). These processes are critical for the therapy and the applicability of drug and formulation, especially with hydrophobic or poorly soluble drugs.
The processing of oral drug formulations was studied by small-angle neutron scattering (SANS) with D-contrast variation, combined with DLS, using a simulator device of the human gastro-intestinal tract with SANS+DLS observation of drug nanoparticles and intermediates. A set of drugs for which oral delivery is a challenge was investigated, e.g. fenofibrate, amphotericin B, danazol, griseofulvin, carbamazepine and curcumin, in combination with lipids and detergents. The biocompatibility was estimated with cell cultures. The drugs were embedded in nanoparticles and liposomes of 50-100 nm size and dissolved stepwise in artificial intestinal fluid and bile. The dissolution and formation of intermediate nanoparticles and excipient-drug complexes was analyzed with time resolved SANS and DLS. Substructures (domains) were localized by solvent deuterium contrast variation. The results feed into the development of novel formulations of difficult drugs based on structure investigation by SANS plus DLS in a feedback process.
Speaker: Thomas Nawroth (Gutenberg-University, Pharmaceutical Technology, Staudingerweg 5)
• 3:50 PM
Establishing deuteration services for MLZ users at the JCNS 25m
Neutron scattering experiments involving soft matter materials often require specific contrast to observe different parts of the materials. In order to increase the availability of deuterium labelled materials, we are establishing deuteration support for MLZ users. At this stage, we are focusing on a limited number of projects, but in the future a proposal-based deuteration service will be available in GhOST in combination with a proposal for neutron beamtime. Furthermore, the JCNS deuteration efforts are embedded in the LENS deuteration initiative, with the objective of providing in the future a source-independent deuteration support together with ILL, ESS and ISIS.
Our main synthetic focus at JCNS-1 is in the area of polymer and organic chemistry. Anionic and controlled radical polymerization techniques allow the synthesis of e.g. polydienes, polyethylene oxide, polybutylene oxide, polyacrylates and polymethacrylates with narrow molecular weight distributions for well-defined samples. The polymers obtained this way can be functionalized afterwards to attach diverse functional groups or molecules. Organic techniques are used for the production of ionic liquids, surfactants, lipids, monomers and other compounds. The presentation summarizes the synthetic expertise available at JCNS-1 and outlines the planned process to establish the deuteration support.
Speaker: Lisa Fruhner (JCNS-1, Forschungszentrum Jülich GmbH)
• 4:15 PM
Structure and relaxation dynamics in porous systems: A neutron scattering view 25m
Fluids play a major role in determining the final structural and transport properties of several solvated systems such as hydraulic binders [1], hydrogels [2], organogels [3] and colloids in general. Thanks to their peculiar neutron-sample interaction, neutrons are the probe of choice for studying many hydrogen-rich systems. For this reason, neutron scattering techniques are unique in defining porous matrix topology at the nanoscale and relaxation properties in the ps-ns regime. This presentation reviews a few examples where neutron scattering, even in time-resolved mode, can be of great advantage for the intrinsic understanding and improvement of the material.
1. F. Ridi, M. Tonelli, E. Fratini, S.-H. Chen, P. Baglioni, Langmuir 34, 2205−2218 (2018). DOI: 10.1021/acs.langmuir.7b02304
2. D. Noferini, A. Faraone, M. Rossi, E. Mamontov, E. Fratini, P. Baglioni J. of Phys. Chem. C. 123, 19183-19194 (2019). DOI: 10.1021/acs.jpcc.9b04212.
3. H.D. Santan, C. James, I. Martínez, E. Fratini, C. Valencia, M.C. Sánchez and J.M. Franco, Industrial Crops and Products 121, 90-98 (2018). DOI: 10.1016/j.indcrop.2018.05.012
Speaker: Prof. Emiliano Fratini (University of Florence and CSGI)
• 3:00 PM 4:40 PM
MLZ Users 2020 - Structure Research: Part 2/3
Conveners: Anatoliy Senyshyn, Martin Meven (RWTH Aachen University, Institute of Crystallography - Outstation at MLZ)
• 3:00 PM
Characterization of Phosphide-Based Lithium Ion Conductors by Neutron Powder Diffraction 25m
In order to attain fast lithium conducting solid electrolytes for the development of high-energy-density all-solid-state batteries, phosphide-based materials have recently gained much interest. With the phosphidotetrelates Li8TtP4 (Tt = Si, Ge) and Li14TtP6 (Tt = Si, Ge, Sn), several lithium conducting materials have already been discovered which achieve conductivities of up to 1.7 mS/cm at RT.[1-4] Recently we extended this material class with the novel superionic conductor Li9AlP4, which as an undoped material has a remarkably fast ionic conductivity of ~3 mS/cm at RT and a low activation energy of ~29 kJ/mol.[5] Neutron powder diffraction analysis confirms the Li sub-lattice in Li9AlP4 with partially occupied and even split lithium positions. Temperature-dependent measurements reveal the phase transformation from an ordered into a disordered modification, which exhibits the same structure type as found in Li14SiP6. Furthermore, the crystal structure of the new compound Li8SnP4 was thoroughly analyzed by neutron powder diffraction and other methods. Maximum entropy and one-particle potential evaluations of nuclear density maps give insights into the 3D lithium ion diffusion.
[1] L. Toffoletti et al., Chem. Eur. J. 2016, 22, 17635
[2] S. Strangmüller et al., J. Am. Chem. Soc. 2019, 141, 14200
[3] H. Eickhoff et al., Chem. Mater. 2018, 30, 6440
[4] S. Strangmüller et al., Chem. Mater. 2020, 10.1021/acs.chemmater.0c02052
[5] T. M. F. Restle et al., Angew. Chem. Int. Ed. 2020, 59, 5665
Speaker: Stefan Strangmüller
• 3:25 PM
Learning from structure solution: An enhanced solid-state Mg electrolyte 25m
All-solid-state batteries based on magnesium are considered for use in mobile applications as well as for storing energy from “renewable” intermittent energy sources. Recently, a solid state magnesium ion conductor, Mg(en)1(BH4)2 (en stands for ethylenediamine), obtained from a 2:1 mixture of Mg(BH4)2 and [Mg(en)3(BH4)2], was reported to have an exceptionally high magnesium ion conductivity of up to 6·10−5 S·cm−1 at 70 °C. Here we show that this synthesis actually yields a mixture of Mg(en)1.2(BH4)2 and amorphous Mg(BH4)2. The latter was often neglected in previous investigations, though it was shown recently that its dynamics have a positive influence on the conductivity. The structure of Mg(en)1.2(BH4)2 has been solved from single crystal X-ray diffraction in space group P-1 and confirmed by neutron powder diffraction on isotopically substituted Mg(en)1.2(11BD4)2. The structure shows three Mg atoms with coordination numbers 4, 5 and 6, the BH4 groups behaving as terminal and bridging ligands, and en chelating and bridging the Mg atoms. This complexity makes the structure solution virtually impossible from powder diffraction data. Thermal decomposition of Mg(en)1.2(BH4)2 proceeds through intermediate formation of the previously unknown Mg(en)2(BH4)2, whose structure was solved from synchrotron X-ray powder diffraction, complemented by DFT optimization.
Speaker: Michael Heere
• 3:50 PM
Investigation of orthorhombic and tetragonal phases of Cs2CuCl4-xBrx mixed system 25m
The Cs2CuCl4-xBrx mixed system exists in orthorhombic and tetragonal polymorphs and is an example of a low-dimensional quantum spin system. The different Cu2+ environments and their influence on the magnetic properties are important for understanding the change of magnetic behaviour under an applied magnetic field. The orthorhombic mixed system was studied by neutron single crystal diffraction with and without magnetic field. It shows a rich magnetic phase diagram consisting of four regimes, depending on the Br concentration, that is characterised by different exchange coupling mechanisms. Inelastic neutron scattering experiments on MIRA for the compound from regime III (2 < x < 3.2) with x = 2.2 show dynamical correlations at a temperature around 50 mK, giving evidence for a spin liquid phase [1].
The space group I4/mmm has been used to describe the tetragonal polymorphs. The magnetic behaviour of such tetragonal compounds can be described as that of quasi-2D antiferromagnets with a transition temperature TN between 9 K and 11 K, depending on the Br content [2]. New single-crystal neutron diffraction experiments on RESI indicate a very small orthorhombic distortion at low temperature. The structure solution shows a subgroup relationship for the investigated composition of this mixed system.
[1] N. van Well et al., Ann. Phys. (Berlin), 2000147 (2020)
[2] N. van Well et al., Cryst. Growth Des., 19, 11, 6627-6635 (2019)
Speaker: Prof. Natalija van Well (Department of Earth and Environmental Sciences, Crystallography Section, Ludwig-Maximilians-University Munich)
• 4:15 PM
Hybridized crystal field–phonon bound state in cerium-113 compounds 25m
The coupling between elementary excitations in condensed matter can give rise to novel functional properties and exotic states, such as superconductivity, multiferroicity, or various types of polar order. We are particularly interested in the CeTAl3 (T is a transition metal), family of compounds, for which an unusual bound state was reported in CeCuAl3 and CeAuAl3. It arises due to the magnetoelastic coupling between the crystal field excitation (CEF) and phonons. Although it was observed in few other compounds, e.g. Tb2Ti2O7 [3] or PrNi2 [4], it was reported as an interaction of CEF and an optical phonon, while for CeAuAl3 we have observed an interaction of CEF and strongly dispersive acoustic phonon. This points to a different character of this phenomenon, and awaits a microscopic explanation. We are investigating this effect in other compounds, CePtAl3 and CePdAl3, and its connection with crystal structure and physical properties. In addition we want to determine the influence of the magnetoelastic coupling in Ce-113 compounds on their magnetic ordering and dynamics.
We have conducted various neutron diffraction and spectroscopy measurements on Ce-113 compounds. Our measurements show that CePtAl3 exhibits a modulated antiferromagnetic ordering below TN = 3.35 K, with a modulation vector q = (2/3 0 0), while CePdAl3 orders antiferromagnetically at TN = 5.61 K. Magnetic properties, models of the magnetic structure and first results on spin and lattice dynamics will be discussed.
Speaker: Michal Stekiel (Technische Universität München)
• 4:40 PM 5:10 PM
Break 30m
• 5:10 PM 6:00 PM
MLZ Users 2020 - Materials Science: Part 3/3
Conveners: Michael Hofmann, Ralph Gilles
• 5:10 PM
Search for vacancies in concentrated solid-solution alloys 25m
Concentrated solid-solution alloys (CSAs) with no principal alloying element but a single randomly populated crystal structure exhibit attractive material properties, e.g., high ductility at low temperatures or high irradiation resistance. To understand such phenomena in these alloys, often also called high-entropy alloys, an assessment of atomic transport, including the formation and migration of vacancies, is indispensable. Here, results of positron annihilation lifetime spectroscopy (PALS) are reported that quantify the concentration of quenched-in thermal vacancies for CSAs with fcc structure after quenching from temperatures close to their onset of melting. This vacancy concentration decreases with an increasing number of components. For alloys with 3 constituents in non-equimolar fractions (CrFeNi), vacancy concentrations in the $10^{−5}$ range were determined. However, for alloys with 4 (CoCrFeNi) and 5 constituents (CoCrFeMnNi, AlCoCrFeNi), a vacancy-specific positron lifetime was not detected. Thus, the concentration of quenched-in vacancies must be $10^{−6}$ or less. This indicates either that only a vanishingly small fraction of vacancies is present at very high temperatures or that the generated vacancies are inherently unstable. For an unambiguous proof, in-situ positron studies during heating and cooling between room temperature and high temperatures are necessary. Such experiments are planned using a positron beam in the longstanding collaboration with Chr. Hugenschmidt (NEPOMUC beamline at FRM II).
Speaker: Mrs L. Resch (Graz Institute of Technology)
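The conversion from a PALS lifetime fit to a vacancy concentration, as in the abstract above, typically relies on the standard two-state trapping model. The following sketch illustrates that model; the lifetimes, intensity and specific trapping coefficient `mu_per_s` are assumed illustrative values, not data from the abstract.

```python
# Hedged illustration of the standard two-state positron trapping model used
# to convert PALS spectra into vacancy concentrations. All numerical inputs
# below are assumptions typical for fcc metals, not measured values.

def vacancy_concentration(tau_bulk_ps, tau_vac_ps, intensity_vac, mu_per_s=1e15):
    """Estimate the atomic vacancy fraction from a two-component lifetime fit.

    tau_bulk_ps   : bulk positron lifetime (ps)
    tau_vac_ps    : vacancy-trapped positron lifetime (ps)
    intensity_vac : relative intensity of the vacancy component (0..1)
    mu_per_s      : specific trapping coefficient (s^-1 per vacancy fraction)
    """
    tau_b = tau_bulk_ps * 1e-12
    tau_v = tau_vac_ps * 1e-12
    # trapping rate kappa from the two-state trapping model
    kappa = (intensity_vac / (1.0 - intensity_vac)) * (1.0 / tau_b - 1.0 / tau_v)
    return kappa / mu_per_s

# With assumed lifetimes of 110 ps (bulk) and 180 ps (vacancy) and a 50%
# vacancy component, the estimate lands in the 10^-6 to 10^-5 range quoted
# in the abstract:
c_v = vacancy_concentration(110.0, 180.0, 0.5)
```

The sensitivity floor mentioned in the abstract (no detectable vacancy component implying concentrations below ~10^-6) follows from the same model: for smaller vacancy fractions the trapped-positron intensity becomes too weak to resolve in the fit.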
• 5:35 PM
Lithium Quantification in Lithium-Ion Batteries Using Operando Neutron Depth Profiling 25m
Commercial lithium-ion battery (LIB) cells are mostly based on graphite as anode material. During the first intercalation of Li into graphite, the electrolyte gets reduced at the anode, forming a nm-thick surface layer, the so-called solid electrolyte interphase (SEI). The SEI stops further electrolyte reduction but consumes Li during its formation. Neutron depth profiling (NDP) is a non-destructive technique and a suitable tool to measure Li concentrations as a function of depth. When the sample is irradiated with a cold neutron beam, 6Li nuclides emit charged particles after neutron capture. The residual energy and the signal rate of the emitted 3H particles are correlated with the depth and amount of Li. Thus, SEI growth and Li (de-)intercalation in graphite anodes can be studied up to a depth of ca. 30 µm. Here, we present operando NDP data for the first charge/discharge cycle of a graphite anode vs. a LiFePO4 cathode, using a custom-designed coin cell casing with 0.5 mm diameter holes sealed with a 7.5 µm Kapton window. We will demonstrate that the cycling performance of the operando cell is comparable to a standard laboratory cell, show that it was possible to quantitatively track the Li concentration across the graphite electrode during cycling, and thus to correlate the amount of Li in the SEI layer with the first-cycle irreversible capacity [1].
[1] Linsenmann, Trunk, Rapp, Werner, Gernhäuser, Gilles, Märkisch, Revay, Gasteiger, J. Electrochem. Soc. 167 (2020) 100554.
Speaker: Fabian Linsenmann (Chair of Technical Electrochemistry, Department of Chemistry and Catalysis Research Center, Technical University of Munich)
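The correlation between residual triton energy and depth mentioned in the abstract can be sketched with a minimal energy-loss calculation. This assumes a constant stopping power, whereas real NDP analyses use tabulated, energy-dependent stopping powers (e.g. from SRIM); the stopping-power value below is an illustrative assumption.

```python
# A minimal sketch of how residual 3H energy maps to emission depth in NDP.
# The 6Li(n,alpha)3H reaction releases a triton with 2727 keV; the deeper it
# is created, the more energy it loses on its way out of the electrode.
# Assumption: constant stopping power (real analyses use S(E) tables).

E_TRITON_KEV = 2727.0  # initial 3H energy from the 6Li(n,alpha)3H reaction

def depth_um(residual_energy_kev, stopping_kev_per_um=90.0):
    """Depth (micrometres) at which a triton with the given residual energy
    was created, for an assumed constant stopping power of the electrode."""
    if residual_energy_kev > E_TRITON_KEV:
        raise ValueError("residual energy exceeds the initial triton energy")
    energy_loss = E_TRITON_KEV - residual_energy_kev
    return energy_loss / stopping_kev_per_um

# A triton detected with 1827 keV lost 900 keV on its way out, i.e. it
# originated from ~10 um depth under the assumed stopping power:
d = depth_um(1827.0)
```

With energy-dependent stopping this mapping becomes an integral over 1/S(E), which is why the accessible depth range (ca. 30 µm here) depends on the electrode material.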
• 5:10 PM 6:00 PM
MLZ Users 2020 - Neutron Methods: Part 3/3
Conveners: Peter Link, Sebastian Busch (GEMS at MLZ, HZG)
• 5:10 PM
MIASANS at the longitudinal resonant spin echo spectrometer RESEDA 25m
The RESEDA (Resonant Spin-Echo for Diverse Applications) instrument has been optimized for the measurement of quasi-elastic and inelastic processes over a wide parameter range. One spectrometer arm of RESEDA is configured for the MIEZE (Modulation of Intensity with Zero Effort) technique, where two precisely tuned radio-frequency (RF) flippers prepare the neutron beam such that it yields a signal of time-varying neutron intensity oscillations. With MIEZE, all of the spin manipulations are performed before the beam reaches the sample, and thus the signal from sample scattering is not disrupted by any depolarizing conditions there (e.g. magnetic materials). Currently a project is underway to optimize the MIEZE spectrometer for the requirements of small-angle neutron scattering (MIASANS), a versatile combination of the spatial and dynamical resolving power of both techniques. These upgrades include (i) installing new superconducting solenoids as part of the RF flippers to significantly extend the dynamic range, (ii) designing and installing modular options for both reflecting guides and evacuated flight paths with absorbing walls for background reduction, and (iii) installing a new detector on a translation stage within a vacuum vessel for flexibility in selecting both angular coverage and resolution. Current progress on each of these components will be presented.
Speaker: Jonathan Leiner (Technical University of Munich)
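The dynamic range mentioned above is set by the MIEZE (Fourier) time, which grows with the RF frequency difference, the cube of the neutron wavelength, and the sample-detector distance. A back-of-the-envelope sketch with assumed (not RESEDA-specific) parameters:

```python
# Hedged sketch of the MIEZE Fourier time,
#   tau_MIEZE = 2 * df * m_n^2 * lambda^3 * L_SD / h^2,
# where df is the frequency difference of the two RF flippers and L_SD the
# sample-detector distance. The instrument parameters below are assumptions.

M_N = 1.67492749804e-27  # neutron mass (kg)
H = 6.62607015e-34       # Planck constant (J s)

def mieze_time(delta_f_hz, wavelength_m, l_sd_m):
    """Fourier (spin-echo) time of a MIEZE measurement, in seconds."""
    return 2.0 * delta_f_hz * M_N**2 * wavelength_m**3 * l_sd_m / H**2

# e.g. a 100 kHz frequency difference, 6 angstrom neutrons and a 3 m
# sample-detector distance give a Fourier time of order 1 ns:
tau = mieze_time(1e5, 6e-10, 3.0)
```

The cubic wavelength dependence is why extending the accessible frequency range of the RF flippers (upgrade (i) above) directly extends the dynamic range of the spectrometer.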
• 5:35 PM
MIEZEFOC 25m
Neutrons are ideal probes to study the static and dynamic properties of magnetic materials and of materials containing both heavy and light elements. With the increasing interest in studying slow processes at large spatial scales, for example (i) diffusion processes in soft-matter samples and (ii) the spin dynamics near quantum and thermal phase transitions, it is important to develop instrumentation with very high energy resolution, down to the neV regime. TAS or ToF spectrometers are not suitable to achieve such high energy resolutions because of the large loss of intensity involved in the selection of small energy bands. An elegant solution to circumvent this intensity problem is the use of the Larmor precession of neutrons. Despite the significant progress made with Larmor precession techniques, it is still difficult to investigate small samples and samples under extreme conditions, such as high pressure and large magnetic fields, because of the small intensity. We present a new MIEZE concept using focusing optics before and within the MIEZE setup that allows the illumination of small samples, i.e. volumes of a few mm$^3$. The major advantages are an increase of the intensity at the sample position and a strong decrease of the background, because only the sample is illuminated and not the sample environment. Thus, the signal-to-background ratio can be strongly improved.
Speaker: Christoph Herb (TUM)
• 5:10 PM 6:00 PM
MLZ Users 2020 - Nuclear, Particle, and Astrophysics: Collaboration meeting PERC
• 5:10 PM 6:00 PM
MLZ Users 2020 - Quantum Phenomena: Part 3/3
Conveners: Robert Georgii, Yixi Su (JCNS-MLZ)
• 5:10 PM
Neutron Larmor diffraction on nickelate powder samples 25m
Recently, the discovery of superconductivity in the Sr-doped nickelates RNiO$_2$ (R = Pr, Nd) has attracted widespread attention. The synthesis of the RNiO$_2$ compounds has been achieved by topotactic reduction of the non-superconducting perovskite phase RNiO$_3$, removing oxygen from the crystal lattice in a controlled fashion. Remarkably, new electronic and magnetic phases can also occur in oxygen-deficient phases RNiO$_{3-x}$ with intermediate oxygen content 0 < x < 1. For instance, while LaNiO$_3$ remains paramagnetic down to the lowest temperatures, long-range antiferromagnetic order emerges in LaNiO$_{2.5}$. However, it has not yet been clarified whether the new electronic and magnetic phases are accompanied by structural phase transitions. Hence, we aim to use the highly sensitive neutron Larmor diffraction (LD) technique to investigate structural changes that possibly coincide with the electronic and magnetic transitions emerging in oxygen-deficient phases of RNiO$_{3-x}$. As a first step, we report here that LD is capable of detecting the known subtle structural phase transition in PrNiO$_3$ at 120 K, while no transitions were detected in the LaNiO$_3$ sample at low temperatures. Furthermore, a newly introduced analysis technique for LD data allows us to account for resolution effects that originate from small-angle scattering from powder samples.
Speaker: Dr Matthias Hepting (MPI für Festkörperforschung)
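The sensitivity of Larmor diffraction to the subtle structural transitions discussed above rests on a simple relation: the total Larmor precession phase accumulated by a neutron is proportional to the lattice spacing d, so a measured phase shift translates directly into a relative lattice-spacing change. A minimal sketch with illustrative numbers (not values from the experiment):

```python
# Basic Larmor-diffraction relation: since the total precession phase is
# proportional to the lattice spacing d, a phase shift gives
#   delta_d / d = delta_phi / phi_total.
# The numbers below are illustrative assumptions.

def relative_d_change(delta_phi_rad, phi_total_rad):
    """Relative lattice-spacing change from a measured Larmor phase shift."""
    return delta_phi_rad / phi_total_rad

# With a total accumulated phase of 6000 rad, resolving a 0.06 rad shift
# corresponds to delta_d/d = 1e-5, the kind of sensitivity needed to detect
# subtle structural phase transitions:
eps = relative_d_change(0.06, 6000.0)
```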
• 5:35 PM
The singlet-triplet gap structure of the noncentrosymmetric superconductor Ru7B3 25m
Noncentrosymmetric (NCS) materials present an interesting environment for superconductivity, as parity is no longer a conserved quantity, leading to the possibility of superconducting systems with a superposition of s-wave and p-wave states. Such systems are predicted to have unusual properties, such as large Pauli limiting fields and 'helical' vortex states. The temperature dependence of the superfluid density is determined by the gap function; measuring it is therefore one method to probe the mixed superconducting states predicted for NCS superconductors. We have used small-angle neutron scattering from the vortex lattice on the SANS-1 instrument at MLZ to investigate this in the NCS superconductor Ru7B3. This system has already demonstrated highly unusual vortex behaviour: the structure of the vortex lattice depends on the field history of the system in a manner not explicable by any established theory or by a mechanism such as vortex pinning. Our measurement of the temperature dependence of the superfluid density indicates that Ru7B3 is not a pure s-wave superconductor and in fact demonstrates the predicted s-wave and p-wave admixture from a recent theoretical model, which has shown success in describing other NCS superconductors.
Speaker: Alistair Cameron (TU Dresden)
• 5:10 PM 6:00 PM
MLZ Users 2020 - Soft Matter: Part 3/3
Conveners: Henrich Frielinghaus (JCNS), Michaela Zamponi (Forschungszentrum Jülich GmbH, Jülich Centre for Neutron Science at Heinz Maier-Leibnitz Zentrum)
• 5:10 PM
Aqueous foams stabilized by PNIPAM microgels: A SANS study 25m
Probing the internal structure of macroscopic liquid foams, such as their film thickness, is very difficult or even impossible with optical methods, since foams strongly scatter light in the visible range. To overcome this problem, small-angle neutron scattering (SANS) can be used, as already demonstrated by Axelos et al. [1].
This contribution addresses foams stabilized by poly(N-isopropylacrylamide) (PNIPAM) microgels. These foams are very stable at temperatures below the volume phase transition temperature (VPTT) and can be destabilized on demand by increasing the temperature. The internal structure of these foams is investigated with SANS experiments, which allow for the determination of the thickness of the foam films inside the foam.
Four microgels with varying cross-linker concentration were used to study the influence of particle stiffness on the foam film thickness. Furthermore, each foam was probed at three different heights inside the foaming column, corresponding to different times after foam formation, in order to follow the evolution of the film thickness over time.
These findings, combined with the knowledge about the mechanical properties of individual microgels, are used to explain the macroscopic foam properties, namely foamability and foam stability.
[1] M. Axelos and F. Boué, Langmuir, 2003, 6598.
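The film-thickness determination mentioned above can be sketched as follows: the two parallel interfaces of a foam film produce an interference modulation in the SANS signal, and the fringe period Δq gives the thickness via t = 2π/Δq. The numbers below are illustrative assumptions, not data from the study.

```python
# Minimal sketch of extracting a foam-film thickness from the period of the
# interference modulation in a SANS curve. A film of thickness t between two
# interfaces produces fringes spaced by delta_q = 2*pi / t.
import math

def film_thickness_nm(delta_q_inv_nm):
    """Film thickness (nm) from the fringe period delta_q (in 1/nm)."""
    return 2.0 * math.pi / delta_q_inv_nm

# An assumed fringe period of 0.2 nm^-1 corresponds to a film of ~31 nm:
t = film_thickness_nm(0.2)
```

In practice the modulation is fitted with a form-factor model rather than read off directly, but the inverse relation between fringe period and thickness is what makes SANS sensitive to nm-scale films inside a macroscopic foam.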
• 5:35 PM
Altering of PNIPAM microgels: pressure vs. temperature 25m
Poly(N-isopropylacrylamide) (PNIPAM) is a classical representative of stimuli-responsive polymers in various polymer systems such as microgels, brushes and micelles [1-2]. The application of external stimuli such as temperature or pressure induces structural alterations of these polymer systems, which makes them promising candidates for various applications. However, the polymer parameters and the polymer phase transition strongly depend on the applied trigger, and a detailed understanding of the stimuli-induced processes is in high demand.
Since PNIPAM is known to be thermo-responsive, the T-induced transition of homopolymer gels as well as of PNIPAM-based microgels has been widely studied. In contrast, the knowledge of the pressure-driven transition of PNIPAM microgels is still limited. We thus present a structural investigation of cross-linked PNIPAM microgels over a wide pressure range (0-5 kbar) at different temperatures by means of VSANS, using a sapphire-window high-pressure cell for liquids at the KWS-3 instrument (JCNS at MLZ). Increasing temperature and pressure changes the structural parameters of the PNIPAM microgels; for instance, a difference between the p- and T-driven transitions was found. The results on the effect of temperature and pressure on the above-mentioned system will be presented, and the temperature dependence of the pressure point of the phase transition will be discussed.
[1]T. Kyrey et al., Soft Matter (2019) 15, 6536
[2]J. Witte et al., Soft Matter (2019) 15, 1053
Speaker: Dr Tetyana Kyrey (JCNS at MLZ, Forschungszentrum Jülich GmbH)
• 5:10 PM 6:00 PM
MLZ Users 2020 - Structure Research: Part 3/3
Conveners: Anatoliy Senyshyn, Martin Meven (RWTH Aachen University, Institute of Crystallography - Outstation at MLZ)
• 5:10 PM
Multiple phase transitions in HoFeO3 determined by single crystal neutron diffraction in applied magnetic field 25m
The scientific interest in the rare-earth orthoferrites RFeO3, known for decades, has revived significantly in recent years due to the discovery of multiferroicity and the magnetocaloric effect in this family of compounds. Their remarkable magnetic properties result from complex interactions between the 3d electrons of the transition metal and the 4f electrons of the rare earth. HoFeO3 is one of the most interesting representatives of the RFeO3 family, with strong magnetic interactions and a number of reorientation transitions. It has the centrosymmetric space group Pnma, and the Fe sub-lattice orders antiferromagnetically at TN = 640 K. At zero field the Fe sub-lattice starts to polarize the Ho magnetic order at about 60 K. The magnetic structure has several different phases described by magnetic irreducible representations: Γ4 = Γ_(4+)Fe ⊕ Γ_(4-)Ho, Γ1 = Γ_(1+)Fe ⊕ Γ_(1-)Ho, Γ2 = Γ_(2+)Fe ⊕ Γ_(2-)Ho. Our results, obtained under application of a magnetic field along the crystal axis c, show that at low field the transition from phase Γ4 to Γ1, in which the Fe3+ moments rotate from the c to the a direction, takes place not directly in the ac plane but via an intermediate phase with moments along the b axis, breaking the centrosymmetry. The magnetic phase Γ1 disappears completely in magnetic fields above 2.5 T. A further intermediate magnetic phase in the temperature range of 8-25 K is suppressed by magnetic fields above 1 T. This behaviour of HoFeO3 in weak magnetic fields makes it a good candidate for research on the magnetocaloric effect.
Speaker: Aleksandr Ovsianikov
• 5:35 PM
Cool Pressure: Implementing cryogenic temperatures at the high-pressure instrument SAPHiR 25m
SAPHiR is a 6-6 multi-anvil instrument in the new eastern neutron guide hall of FRM II dedicated to high-pressure, high-temperature neutron powder diffraction and radiography, which is currently operated offline. Pressures of more than 15 GPa are available, with an internal resistance furnace providing a temperature range between room temperature and ca. 2000 °C. However, experiments below room temperature have previously not been possible. Here, we present a new cooling system that is capable of cooling samples to ca. 120 K, thereby allowing controlled static and deformation experiments at variable P and T under cryogenic conditions. The system consists of cooling rings that enclose the base of each of the secondary anvils. The rings are flushed with liquid nitrogen, cooling the secondary anvils and the sample to 120 K in 40 min, which undercuts the previous low-temperature record of multi-anvil presses by 100 K. Applications of this new system include static and deformation experiments on high-pressure ices and clathrates suggested to exist in the interiors of icy moons such as Titan, fundamental crystallography, the mapping of phase diagrams, and materials science. Neutron measurements will commence when the infrastructure is completed, but the instrument is already available to external users for offline test experiments.
Speaker: Christopher Howard (BGI)
• Wednesday, December 9
• 9:00 AM 12:30 PM
MLZ Users 2020: Plenary talks
Convener: Juergen Neuhaus
• 9:00 AM
Update on MLZ 30m
The MLZ directors will give an update about the MLZ during 2020 and the plans for 2021.
Speakers: Peter Müller-Buschbaum (TU München, Physik-Department, LS Funktionelle Materialien), Stephan Förster (Forschungszentrum Jülich)
• 9:30 AM
Structural disorder in photovoltaic absorber materials: insights by neutron diffraction 45m
Photovoltaics, the direct conversion of sunlight into electricity, has developed into a mature technology in the recent past.
Quaternary chalcogenide semiconductors have received increasing attention as absorber materials in thin-film solar cells, because their constituents are abundant and non-toxic. The best-performing devices are obtained with off-stoichiometric kesterite-type Cu$_{2}$ZnSn(S,Se)$_{4}$. The strong stoichiometry deviation causes structural disorder, which crucially influences the optoelectronic properties of the semiconductor.
Ternary nitrides (ZnSnN$_{2}$, ZnGeN$_{2}$) have attracted attention as potential earth-abundant alternatives to III-V absorber materials. ZnGeN$_{2}$ is reported to crystallise in the $\beta$-NaFeO$_{2}$-type structure, in which the Zn$^{2+}$ and Ge$^{4+}$ cations are ordered. A variable degree of cation disorder was reported$^{1}$, up to full disorder (wurtzite structure). The situation becomes even more complex when taking oxygen into account: the effect of oxygen has to be disentangled carefully from that of exclusive cation disorder.
In order to study the structural disorder in quaternary chalcogenides and ternary nitrides we applied neutron powder diffraction to differentiate the isoelectronic cations Zn$^{2+}$, Cu$^{+}$ and Ge$^{4+}$ as well as nitrogen and oxygen, deriving correlations between cation disorder, off-stoichiometry and band gap energy.
[1] C. L. Melamed et al, J. Mater. Chem. C, 2020, 8, 8736
Speaker: Susan Schorr (Helmholtz-Zentrum Berlin für Materialien und Energie)
• 10:15 AM
A study of vacancy-type defects in wide-gap semiconductors by means of positron annihilation spectroscopy 45m
Positron annihilation is a non-destructive tool for investigating vacancies in materials. In solids, a positron annihilates with an electron and emits gamma rays. Their energy distribution is broadened by the momentum component of the annihilating electrons. A positron can be trapped by a vacancy because of the Coulomb repulsion from the ion cores. Because the momentum distribution of the electrons in the defects differs from that of electrons in the bulk, the defects can be detected by measuring the energy distribution of the annihilation radiation. The electron density in vacancies is lower than that in the bulk, which increases the lifetime of positrons. Thus, the measurement of positron lifetimes is also a useful method to detect vacancies.
Using monoenergetic positron beams constructed at the University of Tsukuba and at TUM FRM II (NEPOMUC), we have characterized vacancies in Mg-implanted GaN. The depth distributions of the defects, their annealing behaviour, and their interactions with impurities were studied with the goal of achieving p-type GaN using ion implantation. The carrier-trapping phenomena caused by vacancies were also studied. A study of native defects in AlN and their introduction mechanisms during growth will also be presented in the talk.
This work was supported by the MEXT “Program for research and development of next-generation semiconductor to realize energy-saving society”, and a part of this work was also supported by JSPS KAKENHI (Grant No. 16H06415).
Speaker: Prof. Akira Uedono (University of Tsukuba)
• 11:00 AM
Break 15m
• 11:15 AM
How much information is in my scattering data? Some recent approaches to the structure of microgels, polymers and nanoparticles. 45m
Recent progress with soft nanostructures will be reviewed. Traditionally, data analysis follows two approaches, roughly depending on your geographic position with respect to the Rhine river. While “inversion” predominates in the east, “modeling” is more western. In short, “inversion” minimizes the use of a-priori knowledge, while modeling starts with an idea of what the structure might be, which may be wrong … and fit perfectly. Of course many implementations ignoring geography have been developed, and we advocate a mixed approach based on known ingredients: e.g., assembling nanoparticles in nanocomposites, or monomers within microgels.
In polymer nanocomposites, we will show that SANS can be used to analyze the polymer interfacial region within a nm of the NPs, which impacts the dynamics as measured by BDS and NSE. On micron scales, thousands of NPs are embedded in the polymer, and their dispersion affects both I(q) and the mechanics of the material. A statistical method based on RMC for this many-parameter problem will be presented, showing that key features like percolation can be described. Finally, the structure of core-shell microgels has been studied by SANS using deuteration. A model describing the polymer density profiles has been developed, and the surprising result is that the shell may not necessarily be where the intuition of the synthetic chemist located it. This leads to new nanostructures with striking mechanical properties, the study of which is an ongoing endeavour.
Speaker: Julian Oberdisse (CNRS U Montpellier)
• 12:00 PM
What has the MLZ User Committee been doing? How can it help? 30m
This presentation will review the work of the MLZ User Committee by describing how it has been operating. A short survey of the major points that have been raised will also be presented. The major purpose of the talk is to open discussions on what users of MLZ want, and on how these objectives can be supported by actions from the User Committee and MLZ.
• 12:30 PM 1:00 PM
Break 30m
• 1:00 PM 2:30 PM
DN2020: Welcome and Wolfram Prandl-Prize
Convener: Christine Papadakis (Technische Universität München, Physik-Department, Fachgebiet Physik weicher Materie)
• 2:30 PM 6:00 PM
Joint poster session of MLZ User Meeting and DN2020
Online poster session.
• 2:30 PM
The Materials Science group at Heinz Maier-Leibnitz Zentrum (MLZ) 20m
The Materials Science group consists of more than 50 members recorded in the mailing list, working in a variety of fields related to applied materials science. Members of this group belong to neutron scattering or positron spectroscopy instruments, including staff acquired through third-party funding and the group for fuel-cell development. Each month a group meeting is organized to exchange information on the activities of the group members, especially their scientific work. In the meetings, short presentations are given by group members or invited external scientists to introduce the methods and scientific topics of their studies.
Typical tools applied in the group are diffraction, small-angle scattering, prompt gamma activation analysis, radiography/tomography, inelastic scattering with the time-of-flight method, and neutron depth profiling. Besides developments in neutron scattering instrumentation (neutron depth profiling at the PGAA instrument, implementation of a testing machine for the STRESS-SPEC and SANS-1 instruments, positron beam experiments, radiography and spectroscopy instruments at ESS), the topics of our scientific studies are: high-performance alloys, energy-related materials (batteries, hydrogen storage), the electronic structure of correlated materials, fundamental properties of plasmas, archaeological objects and, last but not least, the development of a future MEU fuel element for FRM II.
Speaker: Dr Ralph Gilles
• 2:50 PM
Progress of the MEPHISTO beamline 20m
The author will present the ongoing progress at the MEPHISTO beamline. The upcoming site acceptance test of the PERC experiment, including a new helium liquefier for the superconducting magnet, and a temporary setup of the auxiliary components in the neutron guide hall east will be presented. An outlook will be given on the final installation of the auxiliary components on a new attic above the east hall.
Speaker: Dr Jens Klenke (FRM II)
• 3:10 PM
Neutron diffraction study of the in-situ tension deformation behaviour of SiCp/Mg-Zn composite 20m
The work hardening and softening behaviour of SiCp/Mg-5Zn composites as influenced by the PDZ (particle deformation zone) size was analysed and discussed using in-situ tensile deformation neutron diffraction experiments at STRESS-SPEC. The peak broadening evolution was interpreted as a modification of the dislocation density, revealing the effect of dislocations on the work hardening behaviour of the composite. For this study, three SiCp/Mg-5Zn composites with PDZ sizes of 5 µm, 10 µm and 20 µm were fabricated by a semi-solid-stirring-assisted ultrasonic treatment method. The unique tension rig at STRESS-SPEC was used at room temperature.
The results show that the work hardening rate of the SiCp/Mg-5Zn composites increased with the enlargement of the PDZ size, which was attributed to the grain size of the composites increasing with the PDZ size. Moreover, the stress-reduction (ΔPi) values increased continuously during the in-situ tensile test, because the stored energy produced during plastic deformation increased, providing a driving force for the softening effect. The stress-reduction (ΔPi) values produced by the softening effect of the SiCp/Mg-5Zn composites are affected by the grain size and by the stored energy produced during the in-situ tensile deformation. However, the influence of the grain size on the softening effect is greater than that of the stored energy.
Speaker: Weimin Gan (Helmholtz-Zentrum Geesthacht)
• 3:30 PM
In-situ sputter deposition of Al electrodes on active layers of non-fullerene organic solar cells 20m
Organic solar cells (OSCs) have undergone significant improvements via both novel organic synthesis and facile fabrication methods. However, the peeling-off of the top electrode fabricated by thermal evaporation (TA) leads to an intrinsic device degradation, which is one of the main reasons for the performance losses of OSCs. TA has the drawback of establishing only a soft contact at the electrode/functional-layer interface. Another disadvantage is the inevitably high temperature during the evaporation process, which can be harmful to organic materials and is energy intensive. To overcome these challenges, the magnetron sputtering technique appears very promising.
To understand the mechanism of the metal cluster growth, we use in-situ GISAXS to observe the morphology changes during the sputtering process. In detail, the active layer of the organic solar cells is composed of the polymer donor PffDT4T-2OD and the small-molecule acceptor EH-IDTBR. These were dissolved in 1,2,4-TMB and CB, respectively, to obtain different morphologies of the printed films. Then 10 nm of MoO3 was deposited on their surface, which acts as the electron-blocking layer for the inverted solar cell device. A 20 nm Al layer is sputtered on top as the top electrode. Notably, the formation of the Al electrode on MoO3 is slower than on the active layer without MoO3 deposition. In addition, GISAXS, SEM and AFM measurements indicate that the morphology impacts the Al growth significantly.
Speaker: Ms Xinyu Jiang
• 3:50 PM
Identification of Vacancy Defects in Lead Halide Perovskites 20m
Metal halide perovskites are remarkable optoelectronic materials: within a decade, photovoltaic (PV) power-conversion efficiencies have risen from a few percent to exceed 25%. Yet the advance toward the theoretical Shockley-Queisser limit has slowed, and this has been attributed to defect-assisted nonradiative recombination. First-principles calculations provide detailed insight into the point defect structure and electronic properties, and into their role in the fundamental mechanisms that govern material performance. While experiments have clearly identified the presence of deep defects, there has been no report of an experimental microscopic identification of a point defect. Here we detect and identify the presence of Pb cation monovacancies in the prototypical CH3NH3PbI3 (MAPbI3) using positron lifetime spectroscopy supported by density functional theory. Measurements on thin-film and single-crystal materials all exhibit positron trapping, approaching saturation, at Pb vacancy defects with a density estimated to be greater than ~ 3 x 10^15 cm-3. No trapping at MA cation vacancies was detected. These results demonstrate the capability to experimentally identify and quantify cation vacancy and vacancy-cluster point defects in metal halide perovskite materials.
Speaker: David Keeble (University of Dundee)
• 4:10 PM
Surface distortion of Fe dot-decorated TiO2 nanotubular templates using time-of-flight grazing incidence small angle scattering 20m
The physical properties of nanoclusters, nanostructures and self-assembled nanodots, which depend concomitantly on the morphological properties, can be modulated for functional purposes. Here, magnetic nanodots of Fe on semiconducting TiO2 nanotubes (TNTs) are investigated with time-of-flight grazing incidence small-angle neutron scattering (TOF-GISANS) as a function of wavelength, for a set of three TNT templates with different correlation lengths. The results are corroborated by localized scanning electron microscopy (SEM) images. Probing the inside and the near-surface region of the Fe-dotted TNTs with respect to their homogeneity, surface distortion and long-range order using TOF-GISANS, we identify gradual aberrations at the top of the near-surface region. Magnetization measurements as a function of temperature and field do not show a typical ferromagnetic behaviour but rather a supermagnetic one, as expected from a nonhomogeneous distribution of Fe dots in the intertubular crevasses.
Scientific Reports 10, 4038 (2020), https://doi.org/10.1038/s41598-020-60899-2
Speaker: Amitesh Paul
• 4:30 PM
Localized strain induced abnormal growth of cube oriented grain in a graphene nanosheets (GNS) reinforced copper matrix composite 20m
Graphene nanosheet (GNS) reinforced copper (Cu) matrix composites were fabricated through electrophoretic deposition (EPD) and a vacuum hot-pressing sintering process. The bulk texture of the as-sintered pure Cu and the GNS/Cu shows that a strong cube component formed in the GNS/Cu, while the pure Cu sintered with the same method exhibits coarse grains with random orientations. Thereafter, the evolution of the microstructure and texture during sintering was characterized ex-situ by SEM-EBSD and neutron diffraction, and the macro and local strain of the Cu during the sintering process was investigated in-situ using a dilatometer at the high-energy synchrotron radiation beamline HEMS, DESY. The primary results indicate that the microstrain in the GNS/Cu, contributed by the thermal-expansion mismatch between GNS and Cu during sintering, can enhance the growth of the cube-oriented grains, which finally leads to a strong cube texture inside the GNS/Cu composite.
Speaker: Hailong Shi
• 4:50 PM
Co-Nonsolvency Transition of PNIPMAM-based Block Copolymer Thin Films in Water/Acetone Mixtures 20m
Co-nonsolvency occurs when a mixture of two good solvents causes the collapse or demixing of polymers into a polymer-rich and a solvent-rich phase in a certain range of compositions of the two solvents. The nonionic thermo-responsive polymer poly(N-isopropylmethacrylamide) (PNIPMAM), which features a lower critical solution temperature (LCST) in aqueous solution, has been widely used to investigate this collapse transition behavior in a mixture of two competing good solvents. However, the co-nonsolvency response of its block copolymer containing the zwitterionic poly(sulfobetaine) poly(4-((3-methacrylamidopropyl)dimethylammonio)butane-1-sulfonate) (PSBP), which exhibits an upper critical solution temperature (UCST) and shows a strong swelling transition in aqueous media, is newly studied. We focus on the co-nonsolvency behavior of PSBP-b-PNIPMAM thin films in a series of deuterated binary mixtures by in situ time-of-flight neutron reflectometry (TOF-NR) and spectral reflectance (SR). Furthermore, Fourier-transform infrared (FTIR) spectroscopy is applied to investigate the interactions between the polymer thin film and water/co-solvent, which are closely related to their deuteration level.
Speaker: Peixi Wang (Workgroup Polymer Interfaces, TUM Department of Physics, Technical University of Munich)
• 5:10 PM
Phase Transformation in AlTiNbVW High Entropy alloy 20m
High entropy alloys (HEAs), which comprise more than five principal elements, are presently of great interest in materials science and engineering. A prediction by CALPHAD has been performed for a new AlTiNbVW HEA, which shows that this alloy consists of two similar bcc phases in the as-cast condition. The current work studies the phase composition of this multicomponent alloy system at equiatomic composition using neutron/synchrotron diffraction under heat treatment. The chemical composition and microstructure of the two bcc phases have been determined using Energy Dispersive X-Ray Analysis (EDX) and Backscattered-Electron (BSE) Imaging. A diffusion-controlled phase transformation between the two bcc phases has been found to take place between 1000°C and 1700°C. The kinetics of this bcc1 to bcc2 transformation has been studied systematically using in-situ neutron/synchrotron diffraction in combination with a dilatometer.
Speaker: Xiaohu Li (FRM2, Physik, TU München)
• 5:30 PM
Highly ordered titania films with incorporated germanium nanocrystals annealed in different atmospheres for photoanodes 20m
Mesoporous titania films with ordered nanostructures show great promise in various applications, e.g. solar cells. To optimize solar cell performance, pre-synthesized crystalline germanium (Ge) nanocrystals of around 10 nm are introduced into mesoporous titania films. The influence of different annealing atmospheres (air and argon) on the morphology and properties of the titania/Ge composite films is studied. The resulting surface and inner morphology changes are investigated by scanning electron microscopy and grazing incidence small-angle X-ray scattering (GISAXS), respectively. The elemental composition of the titania/Ge composite films annealed in air and argon atmosphere is compared via X-ray photoelectron spectroscopy. The crystalline and optical properties are examined by X-ray diffraction, transmission electron microscopy and ultraviolet–visible spectroscopy. Through the incorporation of germanium nanoparticles with varied weight percent and annealing under different atmospheres, an optimized morphology and properties of the titania/Ge composite films will be obtained, providing a promising candidate for solar cell photoanodes.
Speaker: Nian Li
• 5:35 PM
In Operando Neutron Reflectometry Study of SEI Formation on Lithium Metal Anodes Modified with PS-b-PEO Thin Films 25m
Due to the high theoretical capacity (3860 mAh g-1) and the lowest discharge/charge potential (-3.04 V vs. standard hydrogen electrode) of the lithium metal anode, rechargeable lithium metal batteries have been identified as one of the most advanced energy storage systems and hold great promise for practical applications. However, lithium metal batteries suffer from serious safety concerns and poor cycling stability, which can be attributed to the uncontrolled growth of lithium dendrites and the unstable formation of the solid electrolyte interphase (SEI). Interface engineering by surface modification of the lithium metal electrode is one of the most promising routes to improve the electrochemical performance of lithium metal batteries. Using amphiphilic block copolymers to modify the lithium metal anode is regarded as an effective method to enhance its electrochemical performance. Therefore, we aim to study the effect of the amphiphilic block copolymer modification and its morphology on the SEI formation and the final electrochemical performance. Due to its sensitivity to the light elements in the SEI compounds, neutron reflectometry (NR) is an ideal technique to investigate the layer thickness, roughness, and scattering length density (SLD) of the SEI. By comparing the experimental data, the effect of the block copolymer modification layer on the Li metal anode on the formation of the SEI can be clearly elucidated.
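The scattering length density fitted in such NR experiments follows directly from a material's composition and mass density, SLD = rho * N_A / M * sum(b_i). A minimal sketch (the lithium values are standard tabulated numbers, not data from this study):

```python
AVOGADRO = 6.02214076e23  # 1/mol

def sld(density_g_cm3, molar_mass_g_mol, b_coh_fm_sum):
    """Scattering length density in units of 1e-6/angstrom^2 from the mass
    density, the molar mass, and the summed coherent scattering lengths
    (in fm) of one formula unit."""
    n = density_g_cm3 * AVOGADRO / molar_mass_g_mol  # formula units per cm^3
    sld_per_cm2 = n * b_coh_fm_sum * 1e-13           # fm -> cm
    return sld_per_cm2 * 1e-16 * 1e6                 # 1/cm^2 -> 1e-6/A^2

# Lithium metal (b_coh = -1.90 fm, rho = 0.534 g/cm3, M = 6.94 g/mol):
print(f"{sld(0.534, 6.94, -1.90):.3f}")  # negative SLD, a hallmark of Li metal
```

The negative SLD of Li metal is what makes the anode/SEI contrast in NR so favorable for this system.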
Speaker: Mr Suzhe Liang (E13, Physik-Department, TUM)
• 5:35 PM
Temperature-dependent Phase Behavior of the Thermoresponsive Polymer Poly(N-isopropylmethacrylamide) in Aqueous Solution 25m
Compared to the widely investigated poly(N-isopropylacrylamide) (PNIPAM), poly(N-isopropylmethacrylamide) (PNIPMAM) has a higher phase transition temperature (43 °C instead of 32 °C). PNIPMAM has a similar chemical structure as PNIPAM, but the additional methyl groups on its backbone may lead to steric hindrance and weaker intramolecular interactions. To understand how these effects affect the thermal and structural behavior of PNIPMAM aqueous solutions, we investigate the phase behavior of PNIPMAM in D2O using turbidimetry, differential scanning calorimetry, Raman spectroscopy, and small-angle and very small-angle neutron scattering (at KWS-1 and KWS-3 at MLZ). The PNIPMAM solutions first undergo a macroscopic phase transition, but the PNIPMAM chains only dehydrate 2-3 °C above the cloud point TCP. The methyl groups in PNIPMAM give rise to a more compact local chain conformation than in PNIPAM. Moreover, physical crosslinks and loosely packed large-scale inhomogeneities appear already in the one-phase state. We assign these differences to enhanced attractive intermolecular interactions resulting from the hydrophobic methyl groups. In the two-phase state, PNIPMAM mesoglobules are larger and more hydrated than PNIPAM mesoglobules. This is attributed to the steric hindrance caused by the methyl groups, which weakens the intrapolymer interactions. Thus, the methyl groups in PNIPMAM chains play a crucial role in the thermal and structural behavior around the phase transition.
Speaker: Chia-Hsin Ko (E13, Physik-Department, Technische Universität München.)
• 5:40 PM
3D-printed humidity chamber for neutron scattering on thin films 20m
The investigation of thin soft matter films with neutrons allows a non-destructive probe with good scattering statistics. It is used in a broad field of scientific interest that studies structures and performance of various soft matter systems such as hydrogels or organic solar cells. However, soft matter samples are very sensitive to humidity and temperature and require well-defined ambient conditions. As such, specialized sample environments are needed which provide stable control over the thermodynamic parameters at the sample position. In the framework of the FlexiProb project, a quickly interchangeable sample environment for experiments at the European Spallation Source (ESS) is designed. We focus on the design and fabrication of a specialized sample environment for the investigation of thin film samples with grazing incidence small-angle neutron scattering (GISANS). Its core is a 3D-printed humidity chamber that offers the necessary control of thermodynamic parameters such as temperature and humidity. The spherical chamber design has well-distributed fluidic channels inside its walls, which provide a stable and rapidly adjustable temperature. The control over the atmospheric composition around the sample is realized by a remote-controlled gas-flow array that mixes up to three different humidified or dry air streams. The novel chamber design provides a first step towards 3D-printed sample environments for neutron instrumentation.
Speaker: Tobias Widmann (TU München, Physik Department, LS Funktionelle Materialien)
• 5:40 PM
A buffer-gas trap for the NEPOMUC high-intensity low-energy positron beam 20m
The APEX collaboration aims to produce a neutral pair plasma, comprised of equal quantities of electrons and positrons, confined by the magnetic field of a levitated dipole. More than $10^{10}$ positrons are needed to achieve a short-Debye-length plasma with a volume of 10 litres and a temperature of $\sim 1$ eV, which necessitates new advances in positron accumulation. Buffer-gas positron traps have dramatically extended the scope for atomic and non-neutral plasma physics experiments involving antimatter. In these devices, a continuous beam of positrons enters a Penning-Malmberg trap, wherein inelastic collisions with low-density molecular gases promote the efficient capture of the antiparticles. We present our plans for the installation of a buffer-gas trap at the NEPOMUC neutron-induced positron source in Munich. Beyond the pair plasma experiments, an intense trap-based positron beam will also facilitate new applications, for example, the background-free measurement of positron-annihilation-induced Auger-electron spectra.
• 5:40 PM
A GISANS study of bio-hybrid films: Influence of pH on spray-coated ß-lactoglobulin:TiO2 film morphology for bio-templated titania nanostructures 20m
Nanostructured metal oxides such as TiO2 play a major role in hybrid photovoltaics. They can serve as the inorganic charge acceptor of the active layer. For this, a designed structure is of high importance to address different challenges on different length scales. This includes mesoscopic pores for an eased backfilling of the organic donor material and a high interfacial area between donor and acceptor domains, having domain sizes of tens of nanometers for efficient charge carrier separation. A hierarchical morphology of high surface-to-volume ratio is hence beneficial for the device performance. Diblock copolymer directed sol-gel chemistry offers a way to fabricate templated TiO2 films on an industrially relevant scale, e.g. by spray-coating. However, involved organic solvents lead to a restricted potential in environmentally friendly processing. To overcome this issue, we investigate water-based sol-gel templating with the use of biopolymers. The bovine whey protein ß-lactoglobulin is known to form differently structured aggregates by denaturing at different pH values. In combination with a water-based TiO2 precursor, different bio-hybrid film morphologies are obtained by spray-coating. The influence of pH on the film morphology is investigated by bulk and surface-sensitive grazing incidence small-angle neutron scattering (GISANS). The obtained results are complemented by real-space imaging with scanning electron (SEM) and atomic force microscopy (AFM).
Speaker: Julian Eliah Heger
• 5:40 PM
A new high resolution detector system at ANTARES 20m
The water management in polymer electrolyte membrane fuel cells (PEMFCs) has been studied extensively with neutron imaging. In contrast, for anionic electrolyte membrane fuel cells (AEMFCs), which offer a high economic potential because no noble metal catalysts need to be employed, very few studies of water management exist to date.
A main limitation of investigating the water transport in the area of the membrane is the limited spatial resolution of neutron imaging detectors. Several approaches have been pursued to improve the spatial resolution below the 10 μm regime. In this poster we present a novel detector concept which is currently being developed for the ANTARES beamline at FRM II; it will be based on the detection of single neutron events and will employ a centroiding technique to increase the spatial resolution down to 1 μm.
This project is funded by the BMBF in the framework of ErUM-Pro under the grant number 05K19WO2.
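The single-event centroiding named above can be illustrated as an intensity-weighted centre of gravity over the pixels of one detected light spot, which yields a sub-pixel event position. A minimal sketch (the 3x3 spot is purely illustrative, not detector data):

```python
import numpy as np

def centroid(spot):
    """Sub-pixel (x, y) position of a single neutron event from the
    intensity-weighted centre of gravity of its light spot."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (xs * spot).sum() / total, (ys * spot).sum() / total

# A 3x3 light spot whose brightest pixel sits at (1, 1) but whose light
# leaks more to the right: the centroid shifts to x > 1.
spot = np.array([[0, 1, 0],
                 [1, 8, 3],
                 [0, 1, 0]], dtype=float)
x, y = centroid(spot)
```

Accumulating such sub-pixel positions event by event is what pushes the effective resolution well below the physical pixel size.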
Speaker: Yiyong Han (Heinz Maier-Leibnitz Zentrum)
• 5:40 PM
A study of Linear and Nonlinear Aging in Lithium-Ion Cells by Neutron Diffraction 20m
Commercial 18650-type C/LiNi0.33Mn0.33Co0.33O2 lithium-ion cells were exposed to different charging, discharging and resting conditions to understand their influence on the aging behaviour. When cycled with a standard 1C charging and discharging rate and different resting times, the cells show a nonlinear capacity fade after a few hundred equivalent full cycles. By increasing the discharging current or decreasing the charging current, the lifetime improves and results in a linear capacity fade. The neutron diffraction experiment reveals a loss of lithium inventory as the dominant aging mechanism for both linearly and nonlinearly aged cells. Neither structural degradation of the electrode materials nor their deactivation was observed. With ongoing aging, we observe an increasing capacity loss in the edge area of the electrodes. Whereas growth of the solid electrolyte interphase governs the early-stage, linear aging, marginal lithium deposition is thought to cause the later-stage, nonlinear aging.
Speaker: Neelima Paul (Technical University of Munich, Heinz Maier-Leibnitz Zentrum (MLZ))
• 5:40 PM
A tensile rig for neutron imaging 20m
Electrical steel sheets are used as the magnetic cores of electric engines. Stress in such sheets causes energy loss during the reversal of magnetization due to the magneto-elastic effect, which can be used to guide the magnetic flux. Such changes in the magnetic properties can be detected by neutron grating interferometry (nGI), which allows mapping ferromagnetic domains in bulk materials [1].
Previously, the effects of residual stress in electrical steel sheets, introduced through embossing, were investigated [2,3].
Now a more realistic case for electric engines is planned to be tested by replicating the effects of alternating magnetic fields and centrifugal forces.
Hence, a custom tensile rig for the nGI setup of the ANTARES beamline was built.
The new tensile rig in combination with a newly constructed magnetic yoke allows placing sheet metal samples in the nGI setup at varying levels of mechanical strain while simultaneously applying static or alternating magnetic fields. Therefore, the effects of overlapping centrifugal tensile strain and residual stress from embossing of electrical steel sheets can be evaluated.
The tensile rig can furthermore be used with different inserts to accommodate arbitrarily shaped samples.
[1] C. Grünzweig et al., APL 93, 112504 (2008)
[2] S. Vogt et al., Production Engineering 13.2 (2019), pp. 211-217
[3] H. A. Weiss et al., Journal of Magnetism and Magnetic Materials 474 (2019), pp. 643-653
Speaker: Simon Sebold (MLZ)
• 5:40 PM
An insight into the local structure and dynamics of A2Zr2O7 20m
$A_2Zr_2O_7$ oxides have been studied partly because of their possible use in the storage of nuclear waste (1) or as photochemical catalyst materials (2). The overall $A_2Zr_2O_7$ structure is cubic (of pyrochlore type for light rare earths, of defect-fluorite type for heavy rare earths). The pyrochlore zirconates are thoroughly investigated quantum spin ice candidates, whereas the heavy rare earth zirconates remain understudied; mainly high-temperature and application-focused studies have been reported. The material behaviour important for the above-mentioned applications strongly depends on the actual crystal structure. We present results on (i) the bulk properties of an $Er_2Zr_2O_7$ single crystal investigated by means of specific heat, magnetization and AC susceptibility, all revealing a glass-like anomaly at 2 K; (ii) the microscopic properties investigated by total neutron scattering, where the pair distribution function shows deviations from the long-range defect-fluorite structure; and (iii) muon spin rotation spectroscopy performed to reveal the nature of the anticipated spin glass state at low temperature, showing persisting strong spin dynamics not consistent with classical spin-glass systems. The results are discussed in the wider context of frustrated and magnetically diluted systems.
(1) K. E. Sickafus, L. Minervini, R. W. Grimes, J. A. Valdez et al., Science 289, 748 (2004).
(2) T. Omata and S. Otsuka-Yao-Matsuo, J. Electrochem. Soc. 148, E475 (2001).
Speaker: Ms Kristina Vlášková (Kristina)
• 5:40 PM
Angular Distribution of Neutrons Around Thick Beryllium Target of Accelerator-Based 9Be(d, n) Neutron Source 20m
Experimental work and simulations were carried out to determine the angular distributions and yields of neutrons from the 9Be(d, n) reaction over the full angular range (360°) on a thick beryllium target as an accelerator-based neutron source at an incident deuteron energy of 13.6 MeV. The neutron activation method was used in the experimental part, with aluminum and iron foils as detectors to calculate the neutron flux. The Monte Carlo neutral-particle code MCNP5 was used to simulate the neutron distribution and to compare it with the experimental results. The neutron energy spectrum was computed using the projection angular-momentum coupled evaporation code PACE4 (LISE++), and this spectrum was adopted in the MCNP5 code. Two experimental configurations were used, one with and one without the beryllium target, to evaluate the neutron flux emitted by the beryllium target alone. Typical computational results are presented and compared with previous experimental data to evaluate the computational model as well as the characteristics of the emitted neutrons produced by the 9Be(d, n) reaction with a thick Be target. Moreover, the results can be used to optimize the shielding and collimating system for neutron therapy.
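The foil-activation step described above rests on the standard activation equation, which relates the measured foil activity to the neutron flux. A minimal sketch (function name and example numbers are illustrative, not the values used in this experiment):

```python
import math

def neutron_flux(activity_bq, n_atoms, sigma_cm2, half_life_s, t_irr_s, t_decay_s):
    """Neutron flux (n/cm^2/s) inferred from the measured foil activity via
        A = phi * N * sigma * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_decay),
    where lambda = ln(2)/T_half accounts for build-up during irradiation
    and decay before counting."""
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)   # build-up toward saturation
    decay = math.exp(-lam * t_decay_s)            # decay between end of
    return activity_bq / (n_atoms * sigma_cm2 * saturation * decay)  # irradiation and counting
```

Repeating this for foils placed at different angles around the target yields the angular flux distribution.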
Speaker: Mr Abdullah Shehada (National Research Tomsk Polytechnic University)
• 5:40 PM
Bambus: introducing a new inelastic neutron multianalyser for Panda at MLZ 20m
Cold-neutron triple-axis spectrometers (TAS) are focused on the investigation of low-energy excitations within condensed matter physics, covering a broad selection of topics from superconductivity to magnetism. The original design of this type of spectrometer measures an individual point in a large (Q, E) space for each instrument setting. In order to increase the useful signal on TAS, a new form of detector design was recently envisioned which allows measuring multiple points in (Q, E) space simultaneously. Accordingly, the multianalyser BAMBUS is being developed and constructed at the spectrometer PANDA at MLZ, in cooperation with TU Dresden and with financial support from the BMBF. The concept is to collect data at multiple points along curved paths in reciprocal space at multiple energy transfers by using multiple analysers with fixed positions, building up a broad map in a comparatively short period of time. This is intended as a complementary option to the traditional setup within single experiments, so a fast switching time between the two options is envisioned, improving both the versatility and the data collection of the spectrometer PANDA.
Speaker: Alistair Cameron (TU Dresden)
• 5:40 PM
Boron-lined tubes and readout electronics for low count-rate environments 20m
With Boron-10 converters replacing helium-3, the total sensitive detector area per instrument has increased due to the lower efficiency per layer. However, commonly used substrate alloys contain a significant amount of radioisotopes, which leads to an undesired background counting rate. For detector applications exposed to a low flux, as in our case of measuring environmental neutrons generated by cosmic-ray particles, this background can easily increase the error of the signal. The tubes we have developed feature B4C coatings of up to 0.2 m2 on high-purity copper substrates. Furthermore, the geometry and the pressure have been designed for a dE/dx suppression of unwanted contributions from gammas, electrons, muons and also heavy-isotope decays such as from remains of radon. In combination with the form factor, our pulse shaping electronics determines pulse length and height, which allows discrimination against other particle species. The main goal of this development is to provide a detector system largely free of intrinsic background at considerably lower cost.
Speaker: Markus Köhli (University of Heidelberg)
• 5:40 PM
Calibration of p-XRF on ancient pottery using NAA results 20m
The chemical fingerprint of a representative corpus of sherds from Central Europe, North Africa, Western and Central Asia was identified using neutron activation analysis (NAA) at the FRM II. A first batch of 30 homogenized pottery samples from archaeological field projects of LMU researchers was analysed using standard procedures following both short- and long-time irradiation and measured on gamma detectors after different decay times. 40 elements, including many trace elements, could be determined. The NAA results were then compared with the results of portable XRF instruments which are routinely used on archaeological excavations in Germany and abroad. Properly calibrated with securely identified reference material, such portable equipment allows for the serial screening of ancient pottery, which in turn informs us about raw material acquisition, production cycles and, ultimately, the role of imports and the supply networks of a given society. The set of analyses carried out therefore constitutes an important step in the improvement of a research methodology. The FRM II samples will constitute the basis of a specialized calibration for ancient pottery analysis that shall subsequently be established as a reference standard for other laboratories and researchers working with p-XRF.
Speaker: Michaela Schauer (LMU München)
• 5:40 PM
CHARM – A fast, high resolution curved 3He-based Multiwire- Proportional Chamber for the powder diffractometers DMC and ERWiN 20m
The upcoming high-intensity powder diffractometer ERWIN at MLZ and the cold-neutron powder diffractometer DMC at the Paul Scherrer Institut, Switzerland, will be equipped with new fast and high-resolution two-dimensional position-sensitive curved 3He-based Multi-Wire Proportional Chambers (MWPC) covering 130° horizontal and 14° vertical acceptance. The fully modular design is adopted from a development at Brookhaven National Laboratory (BNL) and consists of nine individual MWPC segments mounted seamlessly inside a common pressure vessel filled with a gas mixture of 6.5 bar 3He + 1.5 bar CF4. The device, with a radius of curvature R = 800 mm, aims at 75% detection efficiency for thermal neutrons, 1.6 mm x 1.6 mm position resolution (FWHM) and about 200 kHz count rate capability per MWPC segment at 10% event loss. Single-channel induced-charge readout using a Time-over-Threshold detection method is applied to the 1152 x 1152 individual cathode wires and strips, respectively. For each detected neutron, an FPGA-based signal processing electronics developed in-house will provide 2D position information applying a Centre-of-Gravity algorithm and time stamping with 80 ns time resolution.
First results of measurements performed with a 30°-prototype using a collimated beam of 4.73 Angstrom neutrons at the TREFF instrument at FRM II and the present status of the construction of the full size detectors will be presented.
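The Centre-of-Gravity reconstruction on one cathode plane can be sketched as a charge-weighted average over adjacent wires, here with the quoted 1.6 mm pitch and with Time-over-Threshold values standing in for the induced charges (the charge values themselves are illustrative):

```python
def wire_position(charges, pitch_mm=1.6, first_wire_mm=0.0):
    """Centre-of-gravity position (mm) along one cathode plane from the
    induced charge (e.g. Time-over-Threshold values) on adjacent wires."""
    total = sum(charges)
    cog_index = sum(i * q for i, q in enumerate(charges)) / total
    return first_wire_mm + pitch_mm * cog_index

# Charge peaks on wire 1 but is skewed toward wire 2, so the
# reconstructed position lies between the two wires:
pos = wire_position([2, 10, 4])
```

Doing this on both cathode planes gives the 2D event position; in the real detector the same arithmetic runs in the FPGA.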
Speaker: Dr Karl Zeitelhack (MLZ)
• 5:40 PM
Chemical analysis with neutrons at MLZ 20m
MLZ offers several instruments for chemical analysis. Prompt Gamma Activation Analysis (PGAA) is located in the neutron guide hall and uses the strongest cold neutron beam in the world. PGAA is based on radiative neutron capture and is used for the non-destructive determination of major and minor components and several trace elements in samples. The method has proved to be unique in the determination of light elements, especially H and B. Many trace elements can be analyzed with much higher sensitivities when using the in-beam activation analysis option.
The same beam is used for Neutron Depth Profiling (NDP), where the concentration profile of certain elements (B, Li) can be determined from the energy loss of neutron induced charged particles. This method has been successfully used in lithium-battery research.
When irradiating the samples in high-flux channels in the reactor, Neutron Activation Analysis (NAA) offers detection limits at the ppb-ppt level for many trace elements. This method has recently been integrated into our analytical arsenal and has been used for the characterization of meteorites and archaeological objects.
Fast neutrons also generate characteristic gamma radiation, which can be used for the analysis of nearly all elements with similar sensitivities. FaNGaS (Fast Neutron Induced Gamma Spectrometry) was installed this year and is now available to users. We expect it to become useful in the analysis of heavy-metal alloys.
Speaker: Dr Zsolt Révay
• 5:40 PM
Commissioning of the ‘Energy research with Neutrons’ option at MLZ. 20m
The Energy Research with Neutrons (ErwiN) instrument is intended for the investigation of energy storage materials, including materials integrated in complete components and studied under real operating conditions. It is thus possible to scan a large parameter space (e.g. temperature, state of charge, charge rate, fatigue degree) for the investigation of modern functional materials in kinetic and time-resolved experiments. Diffraction data will be obtained from the entire sample volume or in a spatially resolved mode from individual parts of the sample.
The commissioning of the ErwiN instrument is presented here. The commissioning and integration of ErwiN will enhance its attractiveness for a wider community in energy research as well as materials science, while novel methods for the neutron science community will be developed.
Speaker: Michael Heere
• 5:40 PM
Comparison of guide systems for instruments at the high brilliance source (HBS) 20m
Compact accelerator-based neutron sources (CANS) have the potential to generate neutron beams for scattering studies comparable to research reactors. Such a source is currently being developed at the Jülich Centre for Neutron Science (JCNS). It is expected to provide thermal and cold neutrons with high brilliance and is therefore called "High Brilliance Source" (HBS). In this framework, the performance of neutron guide systems for the instruments is studied. The guide for a medium-resolution time-of-flight diffractometer for nano-scaled and disordered materials, suggested for the HBS, was chosen as a typical example. Different shapes of this guide have been simulated, namely an elliptical shape and ballistic shapes with elliptical diverging/converging sections of two different lengths. The moderator-guide distance has been varied between 30 and 140 cm for two different entry cross-sections, using the CANS feature of bringing the optics very close to the slow neutron source.
The results show that the neutron beam properties at the sample position depend strongly on the geometry of the guide system, especially on the distance from moderator to guide entry. Also, under these conditions (small source and short moderator-guide distance), a ballistic guide with a long elliptical converging/diverging part has a performance comparable to that of an elliptical guide and is thus the most promising candidate for such a diffractometer.
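The elliptical guide geometries compared above are defined by the cross-section of an ellipse whose long axis lies on the optical axis. A minimal sketch of such a profile (dimensions are illustrative, not the simulated HBS guide):

```python
import math

def elliptic_half_width(z, z_center, semi_major, semi_minor):
    """Half-width of an elliptical guide at position z along the beam axis;
    the ellipse's long axis coincides with the optical axis, so the width
    is maximal at the center and shrinks toward both focal points."""
    u = (z - z_center) / semi_major
    return semi_minor * math.sqrt(max(0.0, 1.0 - u * u))

# Sample the profile of a notional 50 m guide with a 6 cm maximal half-width:
profile = [elliptic_half_width(z, 25.0, 26.0, 0.06) for z in range(0, 51, 10)]
```

A ballistic guide replaces the central part of this profile with a straight section, keeping only the elliptical diverging and converging ends.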
Speaker: Zhanwen Ma (Jülich Centre for Neutron Science JCNS Forschungszentrum Jülich GmbH)
• 5:40 PM
Complementarity of PNR and XMCD for monolayer-magnetism in hetero-epitaxial Fe on Cu(001) 20m
We have combined two complementary techniques, element-sensitive ex situ X-ray magnetic circular dichroism (XMCD) and in situ polarized neutron reflectivity (i-PNR), to determine the evolving magnetic moments of a low-symmetry system of hetero-epitaxial Fe monolayers (MLs) as a function of thickness. The samples were grown by magnetron sputtering on face-centered-cubic (fcc) Cu(001)/Si(001). Within experimental errors, the modulated moments from XMCD and the magnetic anisotropies from magnetization measurements corroborate those obtained earlier from layer-by-layer i-PNR measurements. Furthermore, analyzing the depth-sensitive i-PNR profile of a bulk-like film, we developed a model characterized by monotonic magnetism involving collinear spins. The results have been compared with existing theoretical parameterized tight-binding calculations, with satisfactory agreement. This study distinguishes the variation of monolayer magnetism owing to the growth morphology from the layer-by-layer investigation vis-à-vis depth profiling of a bulk-like film. At the same time, it also demonstrates the general possibility of depth profiling with i-PNR in other complex multilayered systems at high-flux neutron sources.
Journal of Magnetism and Magnetic Materials 505 (2020) 166701 https://doi.org/10.1016/j.jmmm.2020.166701
Speaker: Dr Amitesh Paul
• 5:40 PM
Conductivity stability of EMIM-DCA post-treated semi-conducting PEDOT:PSS polymer thin films under elevated temperatures 20m
Nowadays, thermoelectric generators are considered a promising technique for waste heat recovery, as they enable a direct conversion of a temperature gradient into electrical power. Nevertheless, so far these devices are made of inorganic semiconducting bulk alloy materials like Bi2Te3, which typically contain rare and toxic elements and are difficult and expensive to process. Therefore, increasing research interest lies in the development of organic TE materials, as these are normally low- or non-toxic, lightweight, flexible and enable large-scale, low-cost solution-based processability. However, recent organic thermoelectric devices cannot yet compete with the inorganic systems that have been refined over many years. Hence, in this work we investigate different treatment methods to improve the thermoelectric properties of conducting polymers and try to find a morphology–function relation by measuring parameters such as the Seebeck coefficient, electrical conductivity, absorbance and layer thickness, and by determining the structure. Hereby, we also focus on the effect of different ambient conditions, like temperature or humidity, on the thermoelectric performance.
Speaker: Anna-Lena Oechsle (TU München, Physik-Department, LS Funktionelle Materialien)
• 5:40 PM
Conformational and Characteristic Modulation of Prothymosin Alpha following the Addition of Guanidinium Chloride investigated with X-ray / Neutron Scattering Techniques 20m
Prothymosin Alpha (ProTa) is a peculiar intrinsically disordered protein. It is strongly negatively charged and directly involved in various cellular mechanisms such as chromatin modification. Previous studies on this protein using single-molecule techniques revealed structural changes when it is exposed to the strong denaturant guanidinium chloride (GndCl). Additionally, the emergence of internal friction relevant for protein chain motion has been observed. Here, it is studied similarly using small-angle X-ray scattering (SAXS) and neutron spin echo spectroscopy (NSE). The SAXS experiment shows a first structural collapse at 1 M GndCl and a subsequent expansion at higher GndCl concentrations, indicating a dual functionality of the denaturant. Additionally, despite reaching similar levels of expansion, ProTa at 0 M and 6 M differs in its degree of flexibility. Static quantities such as the persistence length and the characteristic ratio show enhanced flexibility with increasing GndCl concentration. This is in agreement with the dynamic rigidity probed by NSE, which also distinguishes between the two species. Moreover, in contrast to the previous study using FCS, NSE also reveals the existence of internal friction within the peptide chain regardless of GndCl concentration. Finally, a comparison with independent studies of different proteins in different denaturants at 6 M concentration suggests a potential universality in the behavior of strongly denatured proteins.
Speaker: Luman Haris (Forschungszentrum Jülich)
• 5:40 PM
Cononsolvency-Induced Collapse Transitions in Thin PMMA-b-PNIPAM and PMMA-b-PNIPMAM Films 20m
Stimuli-responsive thin films combine the advantages of bulk polymers, i.e., their increased stability, with those of polymer solutions, i.e., their fast response, and are therefore attractive for a wide range of applications. Towards future applications, we investigate the not yet well understood phenomenon of cononsolvency. For this we prepared thin films of the thermoresponsive diblock copolymers PMMA-b-PNIPAM and PMMA-b-PNIPMAM, which exhibit cononsolvency-induced collapse transitions when organic cosolvents, like acetone or methanol, are introduced into the surrounding atmosphere. The chemical structures of NIPAM (N-isopropylacrylamide) and NIPMAM (N-isopropylmethacrylamide) differ by an additional methyl group, which is able to influence the film collapse kinetics on a macroscopic scale. The macroscopic changes during the swelling and collapse transitions were investigated by spectral reflectance (SR) and verified through time-of-flight neutron reflectometry (ToF-NR) measurements. On a more molecular level, we further elucidate the underlying mechanism by in situ Fourier-transform infrared spectroscopy (FTIR) measurements to gain further insight into the origin of the cononsolvency effect.
Speaker: Julija Reitenbach (Technical University of Munich, Chair of Functional Materials)
• 5:40 PM
Cryo-TEM – A Complementary Technique for Neutron Scattering 20m
The neutron instrumentation at the MLZ, in particular small angle neutron scattering (SANS), reflectometry and macromolecular crystallography, allows the investigation of structures in the range from one up to several hundred nm in reciprocal space with high statistical accuracy.
In soft matter and biology the neutron contrast between hydrogen and deuterium is used to gain deep and quantitative insights about the shape and interactions of the objects forming the investigated structure.
Transmission electron microscopy may yield real-space pictures of soft matter systems, particularly in a cryogenic environment, in terms of size measurements and distribution of particles, shape, self-assembly systems and aggregates; it may complement and enhance virtually any SANS investigation on soft matter. Both techniques allow the detection of structural changes occurring in relatively large structures on the nano-scale.
In order to provide our users with the possibility to complement their neutron scattering data, a cryogenic transmission electron microscope (Cryo-TEM) is available at the Jülich Centre for Neutron Science at MLZ in the JCNS building.
The instrument as well as the extended suite of preparation equipment will be described, and several examples of soft matter research, in particular on nanocomposites, fuel cells, microemulsions, liposomes and polymer self-assembly, will be presented.
Speaker: Marie-Sousai APPAVOU (Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich GmbH)
• 5:40 PM
CSPEC – a cold time-of-flight spectrometer for the ESS 20m
The European Spallation Source (ESS) is expected to be the world’s most powerful neutron source. Among the endorsed instruments foreseen for day-one operation at the ESS is the cold time-of-flight spectrometer CSPEC, a collaboration between the Technische Universität München and the Laboratoire Léon Brillouin. CSPEC will probe the structures, dynamics, and functionality of large hierarchical systems as they change or operate. Hierarchical systems include liquids, colloids, polymers, foams, gels, and granular and biological materials, as well as the ever-complex low-energy dynamics of energy and magnetic materials. The unique pulse structure of the ESS, with its long pulse duration (2.86 ms) and a repetition rate of 14 Hz, requires new concepts for the instrumentation to make optimum use of the available source time frame. The energy resolution can be tuned in the range ΔE/E = 6 - 1%, and CSPEC will utilize cold neutrons in the range $\lambda$ = 2 - 20 Å with a focus on the cold part of the spectrum. The large detector array, covering 5 - 140 degrees at a radius of 3.5 m and 3.5 m in height, as is typical for a chopper spectrometer, will be designed with optimal energy and Q resolution in mind while maintaining the highest signal-to-noise ratio. CSPEC is now in the detailed design phase, and we will present the current status and the expected performance.
Speaker: Wiebke Lohstroh (Heinz Maier-Leibnitz Zentrum (MLZ), Technische Universität München)
• 5:40 PM
Current Status of PERC 20m
The PERC (Proton and Electron Radiation Channel) facility is currently under construction at the MEPHISTO beamline of the FRM II, Garching. It will serve as an intense and clean source of electrons and protons from neutron beta decay for precision studies. It aims to improve the measurements of the properties of the weak interaction by one order of magnitude and to search for new physics via new effective couplings. PERC's central component is a 12 m long superconducting magnet system. It hosts an 8 m long decay region in a uniform field. An additional high-field region selects the phase space of the electrons and protons that can reach the downstream detector, thereby controlling the systematic uncertainties.
The downstream main detector and the two upstream backscattering detectors will initially be scintillation detectors with (silicon) photomultiplier readout. In a later upgrade, the downstream detector will be replaced by a pixelated silicon detector.
Delivery of the magnet system is scheduled before the end of this year. We present details on PERC’s current status.
Speaker: Mr Manuel Lebert (Physik-Department ENE, TUM)
• 5:40 PM
Curvature effects on the stability of lipid bicontinuous cubic phase films interacting with gold nanoparticles 20m
Non-lamellar lipid membranes are highly relevant in many biological processes such as exo-/endocytosis and cell division; an interesting case is represented by inverse bicontinuous cubic phase membranes. By designing biologically inspired synthetic bicontinuous cubic phase membranes, it is possible to exploit the amphiphilic nature of their lipid components to encapsulate hydrophobic, hydrophilic and bioactive nanoparticles (NPs) (Nanoscale, 2018, 10, 3480-3488; JCIS 541 (2019): 329-338). This feature makes them promising candidates as matrices for biosensing applications and for the development of NP-based therapeutic systems. In contrast to the case of flat lamellar membranes (J. Microscopy, Ridolfi A., 2020), the interaction of NPs with highly curved cubic membranes has not been extensively addressed yet. We herein present a neutron reflectivity (NR) and grazing incidence small angle neutron scattering (GISANS) study on the different structural effects produced by AuNPs of different shapes when interacting with both cubic and lamellar lipid films. We investigate how variations in the curvatures of both the lipid matrix (lamellar versus cubic phase) and the AuNPs (spheres versus rods) influence the stability of the film architecture and the kinetics of the NP interaction. In particular, we found that cubic phase films display an increased stability against AuNP injection compared to lamellar phase films, while rod-like AuNPs have a more disruptive effect than spherical ones.
Speaker: Andrea Ridolfi (National Research Council, Institute of Nanostructured Materials (CNR-ISMN); University of Florence)
• 5:40 PM
Dehydration of thermoresponsive molecular brushes with block or random copolymer side chains 20m
Molecular brushes with thermoresponsive copolymer side chains have attracted attention for drug delivery purposes because of their elongated shape and their versatility. In the present work, two molecular brushes having copolymer side chains composed of poly(propylene oxide) (P) and poly(ethylene oxide) (E) are studied in aqueous solution. The side chains are either a diblock (PbE) or a random copolymer (PrE). Their structures, dehydration and aggregation behavior around the cloud point, Tcp, are investigated using small-angle neutron scattering (SANS) at KWS-1, MLZ [1].
At 25 °C, the brushes are elongated and feature a core-shell structure with a polymer-rich core and a water-rich shell. Upon heating to Tcp, PbE dehydrates only weakly, and the shell shrinks slightly. Above Tcp, large aggregates of strongly interpenetrating brushes are formed, which is due to the high mobility of the still rather hydrated side chains. In contrast, PrE undergoes a rod-to-disk shape transformation along with a severe decrease of the water content already before aggregation sets in at Tcp. Above Tcp, the PrE brushes form small aggregates of loosely connected brushes, which is a result of the low side chain mobility. Thus, the choice of the side chain architecture allows control not only of the inner structure and shape, but also of the transition behavior.
[1] J.-J. Kang, C. M. Papadakis et al., Macromolecules 53, 4068 (2020)
Speaker: Jia-Jhen Kang (Physik-Department, Technische Universität München)
• 5:40 PM
Development of a Sample Environment for in-situ Dynamic Light Scattering in Combination with Small Angle Neutron Scattering for the Investigation of Soft Matter at the European Spallation Source 20m
The European Spallation Source ESS, the most brilliant and most powerful neutron source in the world, is currently being built in Lund. Within the project “FlexiProb”, three modular sample environments for the investigation of soft matter samples are being developed to maximize the potential of the ESS with regard to its very high neutron flux.
These are sample environments for small angle neutron scattering (SANS) with in-situ dynamic light scattering (DLS) and under grazing incidence (GISANS), developed at TU Munich, and on free-standing liquid films and foams, developed at TU Darmstadt. All sample environments are built on a universal carrier system to ensure high repeatability and flexibility as well as a minimum switching time between different sample environments and SANS machines.
The in-situ DLS & SANS module developed in our subproject will provide additional control parameters, in particular on the sample stability, during the SANS measurements. For that, the module allows the simultaneous measurement of SANS and DLS at two different scattering angles and the instant evaluation of the sample size distribution. To accommodate the high neutron flux at the ESS, we developed a special sample holder suitable for the precise temperature control of about 40 samples.
Speaker: Lars Wiehemeier
• 5:40 PM
Development of an indirect spectrometer Mushroom 20m
Mushroom is a concept for an indirect neutron spectrometer whose secondary spectrometer is based on a super flat-cone analyser made of highly oriented pyrolytic graphite with an array of position-sensitive detectors (PSDs) below it. This combination of analyser and PSDs gives the complete information on the outgoing wave vectors from each detected point on the PSDs. The idea was first presented by R. Bewley for a new spectrometer at the spallation source ISIS in the UK. We aim to adapt the Mushroom concept to the reactor source at FRM II, such that a much higher count rate can be reached than at a traditional triple-axis spectrometer (TAS). This is possible thanks to the special analyser in Mushroom covering a solid angle of up to 2π steradian, while the value is ca. 0.001 steradian at a TAS. This allows an overview of the dispersion relation and/or diffuse scattering with only a few scans. We report on theoretical calculations matching the resolution function of the secondary to the primary spectrometer using monochromatised neutrons from one of the neutron guides of FRM II. In addition, first McStas simulations are presented showing predictions of the instrument performance.
Speaker: Mr Ran Tang (Technical University of Munich)
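The count-rate argument above can be put into numbers with a simple solid-angle ratio; the sketch below uses only the two coverage figures quoted in the abstract and ignores all efficiency and resolution differences, so it is a rough upper bound rather than a predicted gain:

```python
import math

# Analyser solid-angle coverage quoted in the abstract:
mushroom_sr = 2 * math.pi   # up to 2*pi steradian for the Mushroom analyser
tas_sr = 0.001              # ca. 0.001 steradian at a traditional TAS

# Naive upper bound on the count-rate gain from coverage alone:
coverage_gain = mushroom_sr / tas_sr
print(f"coverage gain ~ {coverage_gain:.0f}x")  # on the order of 6000x
```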
• 5:40 PM
Distortions and Superstructure in Inverse Perovskite Nitrides 20m
We discovered nitrogen/defect ordering leading to elpasolite-type superstructures in inverse perovskites with the general composition (A3Nx)Tt (A = Ca, Sr, Ba; Tt = Si, Ge, Sn, Pb). Due to the large scattering length of nitrogen, neutron powder diffraction is crucial for elucidating the nature of these superstructures.
For example, high quality powder X-ray diffraction patterns of nitrogen-deficient (Ca3Nx)Sn and (Ca3Nx)Pb feature barely visible reflections indicating an elpasolite-type ordering, but yield next to no information as to the extent of this ordering. The problem is compounded when looking at (Ba3Nx)Sn and (Ba3Nx)Pb, where the slight shift of barium atoms toward occupied nitrogen sites contributes much more strongly to the superstructure reflections in X-ray patterns than the occupation of the nitrogen sites itself. By contrast, the same reflections are among the strongest peaks in neutron diffraction patterns of these compounds and it is mostly the nitrogen ordering itself eliciting them.
In addition to the superstructure, (Ba3Nx)Sn and (Ba3Nx)Pb, as well as the hitherto unknown compounds (Ca3Nx)Si, (Sr3Nx)Ge and (Ba3Nx)Ge, feature distortions of the perovskite structure due to octahedral tilting. This makes the crystals undergo multiple twinning processes upon cooling down from the temperature of synthesis, which renders analysis by single crystal diffraction exceedingly difficult.
• 5:40 PM
DMPC-glycyrrhizin model membranes in the absence and presence of cholesterol: From small unilamellar vesicles to flat disc structures 20m
The saponin glycyrrhizin is the main sweet-tasting component of the liquorice root and is often used as a sweetener and emulsifier. It is known to interact strongly with cholesterol and has anti-viral activity. Using small unilamellar lipid vesicles (SUVs) as model membranes, we study the mixing properties of glycyrrhizin with the phospholipid 1,2-dimyristoyl-$sn$-glycero-phosphocholine (DMPC) by small-angle neutron scattering (SANS). Due to the phase transition temperature of DMPC at $T_m$≈24 °C, the fluid-like state (above $T_m$: 40 °C) and the solid-like state (below $T_m$: 10 °C) of the DMPC bilayers were studied. SANS measurements show that DMPC vesicles with and without cholesterol (10 mol%) generate a vesicle-like form factor. The interaction of glycyrrhizin with the DMPC bilayer can be differentiated into three regimes based on the concentration of glycyrrhizin. Below 7 mol%, glycyrrhizin is incorporated into the bilayer (with and without cholesterol) in both states of the bilayer. From 10 to 30 mol% aggregation occurs, and above 30 mol% the form factor indicates the presence of smaller structures for the solid-like state of the bilayer. In the presence of cholesterol, aggregation is observed only above 15 mol% glycyrrhizin and no nanodiscs are formed. For the membrane in the fluid-like state, aggregation occurs up to 40 mol% glycyrrhizin; beyond this value no aggregation can be observed, and small structures are only found from 60 mol% onwards.
Speaker: Friederike Gräbitz-Bräuer (Universität Bielefeld, PCIII)
• 5:40 PM
DNS - diffuse neutron scattering spectrometer at MLZ 20m
DNS is a polarised high-intensity cold-neutron time-of-flight spectrometer at MLZ. It is situated between MIRA and SPHERES on neutron guide 6 and uses wavelengths between 2.4 Å and 6 Å. DNS allows the unambiguous separation of nuclear coherent, spin incoherent and magnetic scattering contributions simultaneously by polarization analysis over a large range of scattering vectors.
It is mainly used for the studies of complex magnetic correlations in frustrated quantum magnets, strongly correlated electron systems, and nanoscale magnetic systems. DNS has a number of unique features such as wide-angle polarization analysis which can be used in parallel to the non-polarization-analyzing position-sensitive detector array covering 1.9 sr.
A 300 Hz disc chopper system for inelastic experiments was commissioned in 2018 and allows efficient measurements in all four dimensions of S(Q,E). A newly installed Fe/Si-based polariser increased the polarized flux at the sample position by about 50%, largely due to optimal focussing.
Speaker: Thomas Mueller (JCNS)
• 5:40 PM
Effect of the protein size on the diffusion of proteins in a cell-like environment - first results from BATS 20m
The investigation of protein diffusion is essential for a comprehensive understanding of living cells.
Recently, the volume fraction dependence of the short-time center-of-mass self-diffusion of immunoglobulin (IgG) in naturally crowded environments has been reported. Simulations in remarkable agreement with experiment explained why the volume fraction dependence in pure IgG solutions matches that in the naturally crowded samples [1]: the agreement is due to the comparable size of IgG and the average size of the constituents of the cellular lysate serving as crowding agent.
New neutron backscattering measurements with the BATS option of IN16B [2,3] allowed us to investigate differently sized proteins in the presence of lysate. Given the increased energy range of BATS, while maintaining a good energy resolution, it is possible to separate the global diffusion from the internal diffusion. A dependence of the diffusion on the protein size in the presence of deuterated lysate functioning as a crowder has been observed. Global fits, taking the $q$ dependence and the energy transfers into account, reveal different hierarchies of the internal diffusion. The comparison with pure protein solutions at the same volume fractions allowed us to investigate the effects of the lysate on the internal diffusion and thus offers significant insights into the protein properties.
[1] M. Grimaldo et al, JPCL, 2019
[2] C. Beck et al, Physica B, 2019
[3] M. Appel et al, Sci. Rep., 2018
Speaker: Christian Beck (Institut Laue Langevin)
• 5:40 PM
Engineering of the thermal moderator for a Compact Accelerator driven Neutron Source (CANS) 20m
In a CANS, in contrast to a research reactor or a spallation source, the primary neutrons are produced inside a volume below 200 cm$^3$. One of the main requirements for a well-optimized research neutron source is to convert this compact fast-neutron cloud into a compact thermal-neutron cloud, with a geometry suitable for the extraction of neutron beams towards the instruments. In addition, the time structure of the pulsed proton beam impinging on the target is convoluted with the time constants of the moderation, absorption, and diffusion processes inside the moderator material.
In this presentation, we show simulations performed with exact MCNPX Monte Carlo methods and with approximated analytical diffusion-based models. We will show the effect of different moderator materials (hydrogen-, deuterium-, beryllium-, and carbon-based), of dilution, poisoning, and combinations of different materials inside the thermal moderator, and of reflectors surrounding the thermal moderator on the peak flux and the time structure of the neutron beams delivered to a potential instrument.
Speaker: Ulrich Rücker (JCNS, Forschungszentrum Jülich)
• 5:40 PM
Enhancing the High-Temperature Strength of a Co-Base Superalloy by Optimizing the γ/γ' Microstructure 20m
The newly developed polycrystalline Co-base superalloy CoWAlloy2 has a high potential for application as a wrought alloy due to the large gap between the solidus and γ' solvus temperatures along with a high γ' volume fraction. The scope of this study was to maximize the high-temperature strength and to optimize the γ/γ' microstructure by adjusting the multi-step heat treatments.
The microstructure and mechanical properties were investigated by scanning electron microscopy (SEM), transmission electron microscopy (TEM), and compression and hardness tests. In-situ high-temperature small angle neutron scattering (SANS) helped to understand the microstructural evolution during the different applied heat treatments. The size of the γ' precipitates increases with increasing annealing time and temperature of the first annealing step. As a result, the hardness of the alloy increases until a maximum is reached after 4 h of annealing. The reason is an optimum γ' precipitate size in the range of about 30-40 nm, which can be explained by the model for shearing of the γ' precipitates by weakly and strongly coupled dislocations. A second annealing step leads to a further increase of the yield strength due to an exceptionally high γ' volume fraction of about 70%. The room temperature yield strength of the optimized condition is 140 MPa higher than that of the other heat-treated condition.
Speaker: Daniel Hausmann
• 5:40 PM
Establishing deuteration services for MLZ users at the JCNS 20m
Neutron scattering experiments involving soft matter materials often require specific contrast to observe different parts of the materials. In order to increase the availability of deuterium-labelled materials, we are establishing deuteration support for MLZ users. At this stage, we are focusing on a limited number of projects, but in the future a proposal-based deuteration service will be available in GhOST in combination with a proposal for neutron beamtime. Furthermore, the JCNS deuteration efforts are embedded in the LENS deuteration initiative, with the objective of providing, in the future, source-independent deuteration support together with ILL, ESS and ISIS.
Our main synthetic focus at JCNS-1 is in the area of polymer and organic chemistry. Anionic and controlled radical polymerization techniques allow the synthesis of e.g. polydienes, polyethylene oxide, polybutylene oxide, polyacrylates and methacrylates with narrow molecular weight distributions for well-defined samples. The polymers obtained in this way can be functionalized afterwards to attach diverse functional groups or molecules. Organic techniques are used for the production of ionic liquids, surfactants, lipids, monomers and other compounds. The presentation summarizes the synthetic expertise available at JCNS-1 and outlines the planned process to establish the deuteration support. We are looking forward to answering your questions at our poster!
Speaker: Lisa Fruhner (JCNS-1, Forschungszentrum Jülich GmbH)
• 5:40 PM
Estimation of lithiated cathode loss for cycled 18650-type battery by in situ neutron powder diffraction 20m
18650-type cells comprising LiNi0.54Co0.15Mn0.31O2 and LiNi0.87Co0.07Al0.06O2 as cathodes and graphite as the anode are cycled over various SoC ranges. An inconsistent capacity fade is found: cycling in the medium SoC range, with less capacity loss, performs better than in the high and low SoC ranges, while cycling in the low SoC range tends to show a nonlinear capacity fade. The non-destructive method of in situ neutron powder diffraction (NPD) at SPODI is used to obtain detailed structural information about the cathode and anode materials without opening the cell. The crystalline phases in the cell are identified and their lattice parameters and respective weight fractions are calculated. The loss of lithium inventory (LLI) is calculated from the relative weight ratio of LiC6 and LiC12, indicating that LLI is the dominant degradation factor behind the inconsistent capacity fade of the cells. The lithiated cathode loss is caused by the trapping of lithium in the cathode within the controlled battery cycling voltage window (2.5 V - 4.2 V), which leads to the loss of mobile lithium and cathode material. The lithiated cathode loss is estimated from the observed change of the cathode unit cell volume and accounts for approximately half of the total LLI. The pulverization of the cathode particles is observed by SEM. The disintegration of the cathode particles causes contact loss and renders active material inactive, which is the main reason for the lithiated cathode loss.
Speaker: Jiangong Zhu
• 5:40 PM
Event-Mode Imaging for Improved Spatial Resolution in Fast Neutron Imaging 20m
Event-mode imaging is a method where the final image is obtained as a summation of individually acquired particle interactions. In fast neutron scintillators, scattering of the high-energy neutrons generates recoil protons. Ionization by the protons leads to the creation of visible light in the form of a cone. The low spatial resolution in fast neutron imaging results from the blurring introduced by this cone-shaped emission of light, with the spatial resolution roughly proportional to the thickness of the scintillation material. To overcome the problem of spatial locality, a center-of-mass method is used to find the most probable location of the light spot for each neutron interaction, potentially allowing an increase in the spatial resolution and efficiency of the method.
Here we present a parametric study for event-based imaging to computationally obtain optimal parameters through simulations, assessing for example the impact of noise, the size of the light spot, and the deviations of the center-of-mass methodology. This is done by random probabilistic sampling of pixels in a grayscale input image, simulating the particle interaction, and applying a Gaussian blur patch to the sampled pixel value with a kernel size chosen to replicate the problem of low spatial resolution. Furthermore, we present initial results for data acquired with an event-based detector system, with each event processed by the proposed methodology.
Speaker: Mr Prabha Shankar (NECTAR (FRM II), TUM)
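The center-of-mass localization described in the abstract above can be sketched as follows; the 3×3 light-spot array and the threshold value are illustrative assumptions, not the actual detector data format:

```python
def centroid(spot, threshold=0.0):
    """Intensity-weighted center of mass of a light-spot image.

    spot: 2D list of pixel intensities; pixels at or below the
    threshold are ignored to suppress background noise.
    Returns the (row, column) position with sub-pixel precision.
    """
    total = row_sum = col_sum = 0.0
    for r, row in enumerate(spot):
        for c, value in enumerate(row):
            if value > threshold:
                total += value
                row_sum += r * value
                col_sum += c * value
    return row_sum / total, col_sum / total

# A symmetric blurred spot centered on pixel (1, 1):
spot = [[0.1, 0.2, 0.1],
        [0.2, 1.0, 0.2],
        [0.1, 0.2, 0.1]]
r, c = centroid(spot)  # -> (1.0, 1.0)
```

Summing many such per-event positions, rather than the raw blurred spots, is what lets the final image beat the thickness-limited resolution of the scintillator.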
• 5:40 PM
Evolution of the structure and dynamics of bovine serum albumin induced by thermal denaturation 20m
Studying thermal protein denaturation provides valuable information on structural and dynamic aspects related to protein function. Here, we use small-angle and quasielastic neutron scattering (SANS and QENS) to shed light on the denaturation of bovine serum albumin (BSA) in the presence and absence of NaCl.
SANS reveals the temperature-dependent formation of a crosslinked BSA network. The sensitivity to NaCl-mediated protein charge screening is furthermore shown to decrease with increasing BSA concentration. A comparison of the dynamical signal from ratios of inelastic and elastic fixed-window scans (FWS) with the dynamical confinement obtained from the apparent mean-squared displacement [1] suggests that the signatures of denaturation observed on nanosecond time scales are dominated by temperature-induced changes of the dynamics. Changes of the local confinement, on the other hand, only contribute weakly. After denaturation, the dynamics is slowed down in the presence of NaCl, while the stability and dynamics of the native solution do not appear to be affected by salt.
Our approach offers a framework for a comprehensive, multi-method characterisation of thermal protein denaturation [2].
[1] Hennig et al., Soft Matter (8), 2012, 1628-1633.
[2] Matsarskaia et al., PCCP, 2020, Accepted Manuscript. DOI: 10.1039/D0CP01857K
Speaker: Olga Matsarskaia (Institut Laue-Langevin, Grenoble, France)
• 5:40 PM
Fabrication of Plasmonic Nanostructures in Photoelectronic Devices 20m
Plasmonics encompasses various aspects of surface plasmons, which arise from light-metal interactions. In applications, surface plasmon polaritons (SPPs) and the near field of localized surface plasmon resonances (LSPRs) can be beneficial for the light absorption as well as the electrical characteristics of photoelectronic devices. The utilization of plasmonic metal nanoparticles (NPs) is frequently proposed as a means to further enhance the light absorption over a broad wavelength range as well as to facilitate charge collection and transport in photoelectronic devices. It is therefore of crucial importance to fabricate suitable plasmonic nanostructures and investigate their fundamentals in photoelectronic devices. Advanced scattering methods such as grazing incidence small- and wide-angle X-ray scattering (GISAXS and GIWAXS) were used to study plasmonic structures implemented in photoelectronic devices.
Speaker: Mr Tianfu Guan
• 5:40 PM
Feasible tuning of microstacking structure and oxidation level in PEDOT: PSS thin films via sequential post-treatment 20m
Organic semiconductors have attracted intense attention because of their potential use in mechanically flexible, lightweight, and inexpensive electronic devices. In particular, PEDOT:PSS is the most studied conducting polymer system in thermoelectric devices due to its intrinsically high electrical conductivity, low thermal conductivity, and high mechanical flexibility. The energy conversion efficiency of a TE material is evaluated by the dimensionless figure of merit ZT = S²σT/k, where S is the Seebeck coefficient, σ is the electrical conductivity, T is the absolute temperature, k is the thermal conductivity, and S²σ is defined as the power factor (PF). However, it is generally difficult to obtain a high ZT value, since the parameters S, σ, and k are interdependent as a function of the carrier concentration and hard to optimize simultaneously. To date, post-treatment is regarded as one promising approach to significantly enhance the ZT values of PEDOT:PSS. Herein, PEDOT:PSS thin films are overcoated with salt/DMSO mixtures in order to optimize their TE performance. Subsequently, the surface morphology and the inner morphology are probed using atomic force microscopy (AFM) and grazing-incidence small/wide-angle X-ray scattering (GISAXS/GIWAXS), respectively. Additionally, UV-Vis and Raman spectroscopy are used to investigate the mechanism behind the TE performance enhancement.
Speaker: Suo Tu (Institute of Functional Materials)
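The figure of merit defined in the abstract above can be evaluated with a few lines; the numerical values below are illustrative assumptions for a PEDOT:PSS-like film, not measured results from this work:

```python
def figure_of_merit(seebeck, sigma, kappa, temperature):
    """ZT = S^2 * sigma * T / k, with S in V/K, sigma in S/m,
    k in W/(m K) and T in K; S^2 * sigma is the power factor (PF)."""
    power_factor = seebeck ** 2 * sigma
    return power_factor * temperature / kappa

# Illustrative (assumed) values: S = 70 uV/K, sigma = 9e4 S/m,
# k = 0.3 W/(m K), T = 300 K:
zt = figure_of_merit(seebeck=70e-6, sigma=9e4, kappa=0.3, temperature=300)
```

The interdependence noted in the abstract shows up directly here: raising σ by doping typically lowers S and raises k, so the three inputs cannot be tuned independently.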
• 5:40 PM
Field Dependence of Magnetic Disorder in Nanoparticles 20m
Being intrinsic to nanomaterials, disorder effects crucially determine the properties of magnetic nanoparticles, such as their heating performance [1-3]. However, despite the great technological relevance and fundamental importance, a quantitative interpretation of the three-dimensional magnetic configuration and the nanoscale distribution of spin disorder within magnetic nanoparticles remains a key challenge.
Here, I will present our recent studies on the intraparticle magnetization distribution in ferrite nanoparticles [4]. In contrast to the classical, static picture of a collinearly magnetized particle core with a shell of structurally and magnetically disordered surface spins, we establish a significant field dependence of the nanoparticle moment and demonstrate how magnetic order overcomes structural disorder. In our study, polarized SANS [5] extends the traditional macroscopic characterization by revealing the local magnetization response and allows us to quantitatively separate surface spin disorder from intraparticle disorder. Finally, we elucidate the intraparticle distribution of the spin disorder energy, giving indirect insight into the structural defect density in magnetic nanoparticles.
[1] P. Bender et al., J. Phys. Chem. C 122, 3068 (2018).
[2] A. Lak et al., Nano Lett. 18, 6856 (2018).
[3] A. Lappas et al., Phys. Rev. X 9, 041044 (2019).
[4] D. Zákutná et al., Phys. Rev. X 10, 031019 (2020).
[5] S. Mühlbauer et al., Rev. Mod. Phys. 91, 015004 (2019).
Speaker: Dominika Zakutna (Institut Laue-Langevin)
• 5:40 PM
FLUKA and MCNP simulation benchmark for neutron yield measurement in HBS project 20m
The High Brilliance Neutron Source (HBS) project was initiated at the Jülich Centre for Neutron Science of the Forschungszentrum Jülich (JCNS). This project aims to develop an accelerator-driven pulsed neutron source operating at low energy (below the spallation threshold) with high current ion beams (~100 mA) and optimized to deliver high brilliance neutron beams to a variety of neutron instruments.
In the framework of the HBS-project the neutron production in beryllium, vanadium and tantalum targets irradiated with protons of various energies (22, 27, 33 and 42 MeV) delivered by JULIC (Juelich Light Ion Cyclotron) was investigated. The neutron yield was determined via gamma-ray spectrometry measuring the count rate of 2.23 MeV prompt gamma-line of hydrogen induced by thermal neutron capture in the polyethylene moderator surrounding the target. For calibration, measurement with an AmBe source of well-known neutron emission was carried out. Corrections for neutrons escaping the moderator as well as for the spatial extension of the 2.23 MeV -gamma source inside the moderator were numerically performed using the Monte Carlo codes FLUKA and MCNP6.
In this work, the results of the simulations obtained with FLUKA and MCNP6, including the neutron yields of the targets and the neutron and gamma correction factors needed to assess the experimental neutron yields, are presented and discussed. Finally, the simulated neutron yields are compared with the experimental ones.
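The relative calibration against the AmBe source can be illustrated with a minimal sketch. This is not the authors' analysis code, and all numerical values below are invented for demonstration; only the structure (ratio of count rates, known AmBe emission rate, simulated correction factors) follows the description above.

```python
def neutron_yield(count_rate_target, count_rate_ambe, ambe_emission,
                  corr_target, corr_ambe):
    """Neutron yield (n/s) of the target relative to the AmBe reference.

    count_rate_* : 2.23 MeV gamma count rates (1/s)
    ambe_emission: known AmBe neutron emission rate (n/s)
    corr_*       : Monte Carlo correction factors (neutron escape and
                   spatial extension of the gamma source) -- hypothetical
    """
    # The detector efficiency for the 2.23 MeV line cancels in the ratio.
    return ambe_emission * (count_rate_target / count_rate_ambe) \
                         * (corr_ambe / corr_target)

# Hypothetical example values:
Y = neutron_yield(count_rate_target=5.0e3, count_rate_ambe=1.0e2,
                  ambe_emission=2.2e6, corr_target=0.95, corr_ambe=0.90)
print(f"neutron yield ~ {Y:.3e} n/s")
```

The point of the ratio method is that the absolute detection efficiency for the 2.23 MeV line drops out, leaving only the simulated correction factors as model-dependent input.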
Speaker: Mr Jiatong Li (Nanjing University of Aeronautics and Astronautics, China)
• 5:40 PM
Following the interface formation during sputter deposition on perovskite films 20m
Perovskite solar cells (PSCs) are promising for future and sustainable power production because they can be processed via up-scalable industrial deposition techniques such as printing or spray casting. Sputtering is a common technique for large-scale metal electrode deposition. Understanding and controlling the interface formation during the sputtering process on perovskite is therefore important for the large-scale production of PSCs. In the present study, we sputtered gold on methylammonium lead iodide perovskite thin films. During the sputter deposition, we performed in-situ grazing-incidence small-angle X-ray scattering (GISAXS) to gain insight into the detailed steps of aggregation and growth of the sputtered metal layer. Thereby, GISAXS offers a way to follow the time evolution during the crucial steps of interface formation. Interestingly, the layer formation kinetics during sputtering are found to be very different for two samples of different surface roughness: the perovskite surface morphology seems to influence gold aggregation. On the smooth surface, aggregates of a certain size and spacing form first, then grow and merge until a closed layer is eventually formed. In contrast, the rougher surface seems to cause a broader size distribution of the gold seeds.
Speaker: Lennart Reb (TUM E13)
• 5:40 PM
Germanium-based nanostructure synthesis guided by amphiphilic diblock copolymer templating 20m
Latest research in the field of hybrid photovoltaics focuses on combining the benefits of inorganic and organic materials. Flexibility, low cost and large-scale production are the most valuable properties of the organic components, whereas the inorganic components add chemical and physical stability. So far, thin films based on titanium dioxide are well investigated, whereas less is known about germanium-based compounds. In this work, we analyze thin films with optical, electrical and morphological measurement techniques to understand and control the corresponding properties. Amphiphilic diblock copolymer templating with polystyrene-b-polyethylene oxide (PS-b-PEO) and a metal-semiconductor precursor are used to prepare thin films via sol-gel synthesis. The copolymer templating results in nanoporous, foam-like, germanium-based thin films. In the present study, thin films with different polymers with varied molar weights of polystyrene and polyethylene oxide are prepared and analyzed. As the major technique for real-space imaging in this research field, SEM can only provide information about the surface. Therefore, grazing-incidence small-angle X-ray scattering (GISAXS) experiments are carried out and have to be validated with grazing-incidence small-angle neutron scattering (GISANS) to understand the formation of the inner structure morphology.
Speaker: Christian L. Weindl (TUM Physik)
• 5:40 PM
High-resolution powder diffractometer SPODI 20m
In this contribution we present an overview on specifications, applications and recent developments at high-resolution powder diffractometer SPODI. The presentation includes the various setups and sample environmental devices which are available for in-situ materials characterization. Another key aspect is the illustration of current research areas, supported by statistics on publications. Finally, an outlook for future developments is presented.
Speaker: Markus Hoelzel
• 5:40 PM
High-resolution spectroscopy and diffraction at TRISP 20m
We present the capabilities of TRISP both for high-resolution spectroscopy and diffraction and show typical experimental examples. TRISP is a thermal three axis spectrometer incorporating the resonant spin-echo technique. Typical applications include the measurement of linewidths of phonons and spin excitations in an energy range 0.5-50meV, and the energy width of quasi-elastic scattering, originating, for example, from critical magnetic fluctuations. Neutron Larmor diffraction (LD) is a high-resolution technique which permits the measurement of lattice spacings $d_{hkl}$ and their distribution $\Delta d_{hkl}$. The latter arises, for example, from micro-strains, magnetostriction, structural and magnetic domains, or from a small splitting of Bragg peaks, resulting from distortions of the crystal lattice. The resolution of Larmor diffraction at TRISP is $10^{-6}$ (relative) for the lattice spacing and one order of magnitude less for the distribution width.
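The quoted resolution values translate directly into the smallest resolvable changes in lattice spacing. A small back-of-the-envelope illustration (the lattice spacing of 2 Å is an example value, not measured data):

```python
# Larmor diffraction resolution at TRISP, as quoted in the abstract:
rel_res_spacing = 1e-6   # relative resolution for the lattice spacing d_hkl
rel_res_width = 1e-5     # one order of magnitude less for the width Δd_hkl

d = 2.000  # example lattice spacing in Å (hypothetical)

smallest_dd = d * rel_res_spacing    # smallest resolvable change of d_hkl
smallest_width = d * rel_res_width   # smallest resolvable distribution width

print(f"Δd detectable:      {smallest_dd:.1e} Å")
print(f"width detectable:   {smallest_width:.1e} Å")
```

For a 2 Å spacing this corresponds to detecting changes of order 2·10⁻⁶ Å, i.e. micro-strains at the 10⁻⁶ level.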
Speaker: Dr Keller Thomas (MPI outstation at the FRM II)
• 5:40 PM
Hot Neutron Diffraction Experiments under Extreme Conditions on Single Crystals with HEiDi 20m
Diffraction with neutrons is one of the most versatile tools for detailed structure analysis on various hot topics related to physics, chemistry and mineralogy. The single-crystal diffractometer (SCD) HEiDi at the Heinz Maier-Leibnitz Zentrum (MLZ) offers high flux, high resolution, a large q-range, low absorption and high sensitivity for light elements by using the hot source of the FRM II.
At very high temperatures, studies on Brownmillerite structures such as Nd2NiO4+δ and Pr2NiO4+δ concerning their oxygen diffusion pathways reveal anharmonic displacements of the apical oxygens pointing towards the interstitial vacancy sites, which create a quasi-continuous shallow-energy diffusion pathway between apical and interstitial oxygen sites [M. Ceretti et al., J. Mater. Chem. A 3, 21140-21148, 2015]. A new DFG project extends these studies, including the development of a special mirror furnace, to optimize experiments not only at temperatures > 1300 K but also in atmospheres with various oxygen contents and different gas pressures, which is going to reveal more details on this topic.
A recently finished BMBF project shows that studies on tiny samples < 1 mm³ and high-pressure (HP) experiments with diamond anvil cells (DACs) can be performed at HEiDi [A. Grzechnik et al., J. Appl. Cryst. 53 (2020), 1-6]. They can also be combined with low temperatures down to 3 K. A new BMBF project was launched in 2019 to improve the efficient use of DACs at HEiDi and to build optimized HP cells for other instruments at MLZ (POLI, MIRA, DNS).
Speaker: Dr Martin Meven (RWTH Aachen University, Institute of Crystallography - Outstation at MLZ)
• 5:40 PM
Hybrid high-performance computing to convert molecular dynamics simulations to neutron and X-ray data 20m
The world of computing always strives for faster solutions. The current work made an effort to speed up the program sassena, which calculates neutron and X-ray scattering data from atomistic simulations such as molecular dynamics (MD). This can be achieved using different parallelization strategies, e.g. vectorization, thread-based parallelism and distributed-memory parallelism. Aided by different analysis tools available on the market, this work tried to find such opportunities for parallelization within sassena and ensured the correctness of the code while implementing each kind of parallelization. The main goal was to build upon the advantages of the different parallelism strategies and to compensate the disadvantages of one strategy by the advantages of the others. A gain in performance is expected from this hybrid high-performance computing approach. Furthermore, this work plans to benefit from the achieved performance gain and apply the solution to validate simulations of hydrogen storage materials against scattering data.
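The kind of computation sassena performs, and the payoff of the vectorization strategy mentioned above, can be sketched with a toy example: the coherent elastic intensity I(q) = |Σⱼ bⱼ exp(i q·rⱼ)|² for one MD frame, computed once with nested loops and once vectorized. This is an illustrative sketch with random positions and unit scattering lengths, not sassena's actual code or API.

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.random((100, 3)) * 10.0   # atom positions in Å (hypothetical frame)
b = np.ones(100)                  # scattering lengths, set to 1 for simplicity
q = rng.normal(size=(50, 3))      # 50 momentum-transfer vectors (hypothetical)

def intensity_loop(q, r, b):
    """Slow reference implementation: explicit sum over q-vectors and atoms."""
    out = np.empty(len(q))
    for i, qi in enumerate(q):
        amp = sum(bj * np.exp(1j * (qi @ rj)) for bj, rj in zip(b, r))
        out[i] = abs(amp) ** 2
    return out

def intensity_vec(q, r, b):
    """Vectorized: all q·r phases in one (n_q, n_atoms) matrix product."""
    amp = (b * np.exp(1j * (q @ r.T))).sum(axis=1)
    return np.abs(amp) ** 2

# Both implementations must agree; only the speed differs.
assert np.allclose(intensity_loop(q, r, b), intensity_vec(q, r, b))
```

The same idea extends to the other strategies named in the abstract: the loop over q-vectors (or trajectory frames) can be distributed over threads or MPI ranks, since each term is independent.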
Speaker: Mr Arnab Majumdar (Helmholtz Zentrum Geesthacht)
• 5:40 PM
Impact of ethylenediaminetetraacetate ligands on CdS nanoparticle formation mechanism 20m
Organic ligands are commonly employed to stabilize nanoparticle sizes, shapes and long-term colloidal stability in dispersions. For cadmium chalcogenides, ethylenediaminetetraacetate (EDTA) seems a good candidate due to its strong chelating action towards Cd2+. Furthermore, EDTA-capped CdS nanoparticles were proven to be stable in aqueous dispersion at room temperature over months.[1,2]
Without ligands, the CdS nanoparticles nucleate via a two-step formation mechanism involving Cd13S4(SH)18 precursor particles and a diffusion-driven growth process to ca. 5 nm particles within 2.5 ms.[3] Yet, up to now, no mechanistic insight into the CdS particle formation in the presence of EDTA has been provided.
Here we evidence the formation of ca. 5 nm sized EDTA-capped CdS particles from CdCl2/EDTA and Na2S stock solutions with SANS and laboratory SAXS. The mixing speed and / or solvent (H2O / D2O) seem to impact the particle diameter. Contrast matching in SANS not only accesses the ligand shell, but also reveals an unexpected superstructure formation on a time scale of hours. pH-dependent studies and multinuclear and multidimensional solid-state NMR spectroscopy complement insight into the EDTA binding.[4]
[1] G. H. Reed, et al, Inorg. Chem. 1971, 10
[2] A. A. Rempel, et al, Russ. Chem. Bull. 2013, 62, 398
[3] A. Schiener, et al, Nanoscale 2015, 7, 11328
[4] S. W. Krauss, et al, in preparation
Speaker: Mr Mirco Eckardt (University of Bayreuth)
• 5:40 PM
In situ light scattering techniques at neutron instruments at the MLZ - experiences made and challenges ahead 20m
What is well established at many synchrotron beamlines is still in the development phase at neutron instruments: in-situ light scattering techniques for on-beam sample control. Biological samples often show a sufficiently broad spectral range in which light absorption does not play a dominant role. This enables in-situ sample control using dynamic and static light scattering techniques. Many biological samples undergo a slow aggregation process during the comparatively long neutron data collection times. If the aggregates remain few in number and/or if their form factor has decayed enough in the relevant q-range, the neutron measurement can be continued. If not, a fresh sample can be used.
Candidates for neutron instruments to be equipped with an in-situ light scattering set-up are small-angle scattering, spin-echo, time-of-flight and backscattering instruments operating sample environments near or at room temperature. We routinely provide in-situ dynamic light scattering with one fixed scattering angle at the instrument KWS-2 at MLZ to interested users. For the Jülich spin-echo spectrometer J-NSE we have developed a temperature-controlled sample environment which includes two laser colours and three light scattering angles. This enables not only dynamic light scattering but also static light scattering at six different q-values.
This contribution discusses the experiences made with these in situ set-ups and looks into future developments and improvements.
Speaker: Dr Tobias E. Schrader (Juelich Centre for Neutron Science (JCNS))
• 5:40 PM
In situ neutron dilatometry investigation of βo→β phase transformation in TiAl alloys 20m
Intermetallic TiAl alloys represent a novel class of lightweight high-temperature materials for applications in the aero and automobile industries. One of the most impressive examples of their use is replacing the twice-as-dense Ni-based turbine blades in the last stages of the aero engines of the Airbus A320neo family, yielding a decrease in noise and CO2 emissions.
Nevertheless, there is need for improvements, e.g. in processing, for higher cost effectiveness and better material behavior at the working temperatures. Advanced TiAl alloys are particularly well suited for hot working due to the presence of the ductile, disordered, body-centered cubic (bcc) β phase (A2 structure) at high temperatures. However, the challenge to be met is to avoid the presence of the so-called "ordered beta" βo phase (B2 structure), which is brittle at service temperature and decreases the turbine blade lifetime.
Our current project is a fundamental investigation of the βo→β phase transformation in TiAl and its dependence on different β-stabilizing elements. We used the new dilatometer DIL 805AD as an in-situ sample environment at STRESS-SPEC (FRM II, Garching bei München) for stepwise heating experiments in the temperature range from 1000 °C up to 1400 °C. The results unambiguously determined the presence of the βo phase and the transformation temperatures of βo→β. The results will be compared with synchrotron measurements performed with the same type of dilatometer with better time resolution.
Speaker: Victoria Kononikhina (Helmholtz-Zentrum Geesthacht)
• 5:40 PM
In Situ Printing: Insights into the Morphology Formation and Optical Property Evolution of Slot-Die-Coated Active Layers Containing Low Bandgap Polymer Donor and Nonfullerene Small Molecule Acceptor 20m
Printing of active layers for application in organic solar cells with a meniscus-guided slot-die coating technique is a promising approach to overcome the up-scaling challenge, which is one of the main drawbacks in the field of organic photovoltaics on the way to marketability. Thin films of the conjugated high-efficiency polymer PBDB-T-SF and the non-fullerene small-molecule acceptor IT-4F, which can achieve a power conversion efficiency of 13%, are printed with a meniscus-guided slot-die coater. As the solar cell performance is influenced significantly by the morphology of the active layer, it is important to understand the mechanism of structure formation during printing and drying of the active layers to enable further optimization of the solar cell performance. Meniscus-guided slot-die coating of PBDB-T-SF:IT-4F is studied in situ with grazing-incidence small-angle X-ray scattering (GISAXS), optical microscopy and UV/Vis spectroscopy to give insight into the morphology evolution during drying of the active layers.
Speaker: Kerstin Wienhold
• 5:40 PM
In-situ high temperature precipitation study in new alloy VDM 780 using SANS 20m
The new Ni-based superalloy VDM 780, developed for high-temperature applications that require good mechanical properties (such as gas turbines), shows the presence of only $\gamma$' hardening precipitates. The absence of the unstable $\gamma$'' hardening phase, which transforms into the $\delta$ phase at 650 °C resulting in a loss of creep resistance, will allow its use at higher operation temperatures. Due to the direct industrial application of this material, a thorough study of the precipitation process under various heat treatments will be fundamental for further improvement of the material.
The precipitation behavior of the VDM 780 Ni-base superalloy was investigated by small-angle neutron scattering (SANS) at high temperature. Atom probe tomography (APT) measurements were performed in order to obtain the real composition of the matrix and the precipitates, which allows the calculation of the scattering contrast between them. Two samples with different heat treatments were used in order to obtain materials at different precipitation steps. SANS allowed monitoring the formation of nano-precipitates and their evolution with temperature. It was found that after the first precipitation step at 720 °C, the second precipitation step at 620 °C produces almost no new precipitates. A sample in a final precipitation state, measured at the expected operation temperature of 750 °C, shows its stability with almost no precipitate growth.
Speaker: Cecilia Solis
• 5:40 PM
In-situ neutron diffraction studies on micro- and macrostrains in Ni-base superalloys 20m
Polycrystalline Ni-base superalloys are frequently used materials for high-temperature applications like turbine discs. To gain deep knowledge of the precipitation kinetics during the thermomechanical production process and under service conditions, a new testing machine has been built at the Research Neutron Source FRM II at the MLZ in Germany to perform tension and compression loading up to 100 kN at temperatures up to 1200 °C in the neutron beam. In this contribution, results of in-situ tensile tests are presented that were performed on bulk samples of polycrystalline Ni-base superalloys at temperatures up to 600 °C at the STRESS-SPEC neutron diffractometer. In-situ neutron diffraction made it possible to identify the fraction of the existing phases and their lattice parameters depending on mechanical and thermal loading. In particular, it was possible to determine the change in the preferred crystallographic orientation and the defect density in various phases in the elastic and plastic deformation regimes.
Speaker: Frank Kuemmel
• 5:40 PM
In-situ RheoSAXS: Relating Nanostructure to Macroscopic Properties Using A Laboratory Setup 20m
Material research in all its complexity continuously calls for new analysis solutions to solve sophisticated issues in one go.
Rheology deals with the flow and deformation of matter. Applying shear force to a material can result in orientation or crystallization. With small-angle X-ray scattering (SAXS), structural parameters of nanomaterials such as size, shape, inner structure, and orientation can be determined.
Relating the nanostructure of a material to its macroscopic mechanical properties requires in-situ characterization techniques such as rheology combined with SAXS. RheoSAXS experiments have so far only been conducted at synchrotron beam lines, mainly due to the insufficient X-ray flux of laboratory X-ray sources and the lack of a dedicated RheoSAXS laboratory setup.
In this contribution we present a novel experimental setup for performing combined RheoSAXS studies with the SAXSpoint 2.0 laboratory SAXS system.
The integrated RheoSAXS sample stage enables temperature-controlled rheological experiments with in-situ determination of shear-induced structural changes of nanostructured materials on a nanoscopic length scale (from approx. 1 nm to 200 nm) by SAXS. The RheoSAXS module includes a rheological sample compartment which is integrated in the evacuated SAXS measurement chamber. The rheometer measuring head comprises a high-precision air-bearing motor which holds and controls the rheological scattering measuring system in the SAXS instrument chamber.
Speaker: Jiri Kislinger (Anton Paar GmbH)
• 5:40 PM
Incommensurate magnetic systems studied with the three-axis spectrometer MIRA 20m
Incommensurate magnetic structures such as helimagnets and skyrmion lattices are currently being intensively studied. Due to their large periodicity they often show very low-lying excitations, with most of the interesting physics taking place below a few meV. The cold-neutron three-axis spectrometer MIRA is an instrument optimized for such low-energy excitations at small Q transfers. Its excellent intrinsic resolution makes it ideal for studying incommensurate magnetic systems. Here we present several examples of the dynamics of such structures, measured with MIRA.
Speaker: Robert Georgii
• 5:40 PM
Influence of benzocaine, propranolol and cholesterol on phospholipid bilayers 20m
Cell membranes play a fundamental role in protecting the cell from its surroundings, in addition to hosting many proteins with fundamental biological tasks. Drugs are able to perturb the structure of cell membranes, which can ultimately give rise to undesirable effects. Thus, a study of drug/lipid interactions is a necessary and important step in fully clarifying the role and action mechanism of active ingredients, and in shedding light on possible complications caused by drug overdosage. Here we present the results of our research focused on understanding the influence of the active principles benzocaine and propranolol on the structure of L-α-phosphatidylcholine-based membranes. The investigation has been performed by means of neutron reflectivity, grazing-incidence small-angle neutron scattering, and small/ultra-small-angle neutron scattering.
The investigations revealed a stiffening of the membranes and the formation of stalks caused by the presence of benzocaine; the addition of cholesterol increases the amount of stalks formed, if it is present up to a certain fraction (around 10 mol%). On the other hand, disordered bilayers (lamellar powders) and highly curved structures were found in the presence of propranolol. The results obtained may be rationalized in terms of the molecular structures of the drugs and may serve as a starting point for explaining the toxic behavior in long-term and overdosage scenarios.
Speaker: Gaetano Mangiapia (German Engineering Materials Science Centre (GEMS) am Heinz Maier-Leibnitz Zentrum (MLZ))
• 5:40 PM
Influence of salt (NaCl) on structure and dynamics of phospholipid membranes 20m
Phospholipid membranes are the construction material of cell membranes and solutions of phospholipid vesicles find a range of applications in technical, medical and biological applications.
We previously showed the structure (neutron reflectometry, GISANS) and the dynamic behavior (GINSES) of L-α-phosphatidylcholine (SoyPC) phospholipid membranes. [1,2] We established a multi-lamellar structure as well as a surface mode, attributed to transient waves in the membranes.
We extended those studies to investigate the influence of salt (NaCl) concentration on the system, both to ascertain the difference from the previously investigated strongly hydrophobic additives and to better represent an in-vivo biological membrane.
Two features of the membrane system were revealed: (1) The thickening of the membrane layers reported by SAXS measurements is due to an enriched counter-ion area close to the head groups of the phospholipid membranes, and not, as for hydrophobic molecules, an actual swelling of the membrane. (2) The in-plane dynamics of the membranes is enhanced by the addition of NaCl, while the previously reported surface mode is retained.
Those features can play an important role in the understanding of membrane functions, such as the formation of ion channels, and thus their biological function on a fundamental level.
[1] S. Jaksch, H. Frielinghaus et al, Phys. Rev. E 91(2), 2015, 022716.
[2] S. Jaksch, H. Frielinghaus et al, Scientific Reports 7(1), 2017, 4417.
Speaker: Sebastian Jaksch (Physicist)
• 5:40 PM
Influence of the scanning strategy on the residual stress state in IN 718 additive manufactured parts 20m
Laser Powder Bed Fusion (L-PBF) is an additive manufacturing technique enabling the design of complex geometries that are unrivalled by conventional production technologies. Nevertheless, the L-PBF process is known to induce a high amount of residual stress (RS) due to the high temperature gradients present during powder melting by the laser. High tensile residual stresses are found at the edges, whereas the bulk material shows balancing compressive RS. The literature shows that the RS state is highly sensitive to the process parameters. In particular, this study presents the characterization of the RS state in two L-PBF parts produced with a rastering scan vector that undergoes a 90° or 67° rotation between subsequent layers.
Speaker: Dr Itziar Serrano-Munoz (BAM)
• 5:40 PM
Injection of positrons into an electron space charge in a dipole field 20m
Towards the goal of magnetically confined low-energy electron-positron plasmas, the APEX collaboration has already demonstrated significant progress in injecting and confining the high-flux reactor-based positron beam produced at the NEPOMUC facility. As previous work had focused on the single-particle regime, questions on the role of collective effects on positron injection via ExB drift needed to be investigated. Therefore, a thermionic source installed on the equatorial plane of the dipole trap continuously injects electrons into the confinement volume and creates a negative space potential. Around 10% of the emitted electrons are confined in the magnetic field and contribute to establishing an additional potential as low as -58 V in the injection area of the 5 eV positron beam. Certain potential configurations increase the parameter range that allows successful positron injection while preserving the 100% efficiency. An overview of the APEX project, the effect of this additional potential on the injection process of positrons, as well as the diagnostic system consisting of target probes, gamma detectors and an emissive probe will be presented in this contribution.
Speaker: Markus Singer
• 5:40 PM
KOMPASS – the polarized cold neutron triple-axis spectrometer at the FRM II 20m
KOMPASS is a polarized cold-neutron three-axis spectrometer (TAS) currently in its final construction phase at the MLZ in Garching. The instrument is designed to work exclusively with polarized neutrons and is optimized for zero-field spherical neutron polarization analysis, measuring all elements of the polarization matrix. In contrast to other TASs, KOMPASS is equipped with a unique polarizing guide system. The static part of the guide system hosts a series of three polarizing V-cavities providing a highly polarized beam. The exchangeable straight and parabolic front-end sections of the guide system allow adapting the instrument resolution to any particular experiment and provide superior energy and Q resolution compared with existing conventional guide and instrument concepts [1, 2]. In combination with the end position of the cold neutron guide, the large doubly focusing HOPG monochromator and analyzer, and the V-cavity for polarization analysis of the scattered beam, KOMPASS will be very well suited to studying various types of weak magnetic order and excitations in a variety of complex magnetic structures; indeed, first successful experiments on chiral magnets and very small crystals have already been performed.
[1] M. Janoschek et al., Nucl. Instr. and Meth. A 613 (2010) 119.
[2] A. C. Komarek et al., Nucl. Instr. and Meth. A 647 (2011) 63.
The construction of KOMPASS is funded by the BMBF through the Verbundforschungsprojekt 05K19PK1.
Speaker: Dr Dmitry Gorkov (Universität zu Köln, Technische Universität München, FRMII)
• 5:40 PM
KWS-1 SANS instrument with polarization analysis 20m
The KWS-1 small-angle neutron scattering instrument is operated by JCNS at MLZ [1]. The instrument covers a q-range from 0.0007 to 0.5 Å⁻¹ with a selectable wavelength span from 4.7 to 20 Å. The maximum neutron flux on the sample is 1×10⁸ cm⁻² s⁻¹, making it one of the most intense SANS instruments in the world.
The instrument is equipped with a transmission supermirror polarizer, an adiabatic radio-frequency spin flipper and a recently obtained dedicated magnet and polarization analyzer. The three-channel V-cavity polarizer with Fe/Si-coated supermirrors (m = 3.6) has an average polarization > 93% and is positioned in a custom-designed changer of revolver type. The flipper provides a high flipping efficiency of more than 99.9% for all neutron wavelengths. A custom-designed hexapod allows heavy loads and precise sample positioning in the beam (also for grazing-incidence SANS under an applied magnetic field). For experiments with polarization analysis, a ³He analyzer is utilized. The new sample magnet allows close positioning of the ³He cell to the magnet. The magnet has two orthogonal horizontal accesses. For the maximum field of 3 T (parallel to the beam), the decay time T1 of the ³He cell, approximately 50 cm away from the center of the magnet, was 90 hours. The maximum analyzed q is 0.06 Å⁻¹.
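The practical meaning of the quoted T1 = 90 h can be sketched with the standard exponential decay model for the ³He cell polarization. T1 is taken from the text; the initial polarization and measurement time below are hypothetical example values.

```python
import math

T1 = 90.0   # hours -- decay time near the 3 T magnet, as quoted above
P0 = 0.75   # initial 3He polarization (hypothetical example value)

def polarization(t_hours, P0=P0, T1=T1):
    """3He cell polarization after t_hours, standard exponential decay model."""
    return P0 * math.exp(-t_hours / T1)

# Fraction of the initial polarization remaining after a 24 h experiment:
print(round(polarization(24.0) / P0, 3))  # exp(-24/90) ≈ 0.766
```

With this decay time, a cell retains roughly three quarters of its polarization over a full day of beamtime, which is what makes the close positioning of the cell to the 3 T magnet workable.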
All instrument components are running under a flexible instrument control system (NICOS).
[1] A.Feoktystov, H.Frielinghaus, Z.Di, et al., J. Appl. Cryst., 48, 61 (2015)
Speaker: Dr Artem Feoktystov (Forschungszentrum Jülich GmbH, JCNS at MLZ)
• 5:40 PM
KWS-2 the high Intensity / wide Q-range SANS diffractometer 20m
KWS-2 is a classical SANS diffractometer using a combination of pinholes with different neutron wavelengths and detector distances as well as a focusing mode with MgF2 lenses to reach a large q-range between 0.0002 and 0.5 Å⁻¹. A wide-angle detection option is currently planned to allow for measurements up to 2 Å⁻¹ by combining SANS and WANS methods.
The instrument is designed for high intensity studies with a broad q-range, covering mesoscopic structures and their changes due to kinetic processes in the fields of soft condensed matter, chemistry, and biology.
The high neutron flux and the possibility to measure samples with large diameter (up to 5 cm), employing lenses, allow for high intensity and time-resolved studies.
In special cases, the resolution can be improved by using a double-disc chopper with adjustable openings, reaching a wavelength spread between 2 and 20%. In this way, the instrument can be flexibly adjusted to the needs of different experiments. Furthermore, the effects of chromatic aberration of the lenses and of gravitation can be minimized. By using a secondary single-disc compact chopper in TOF mode, a good separation of the elastically, quasi-elastically and inelastically scattered neutrons from the sample is achieved. When only the quasi-elastically scattered neutrons are considered in the data analysis, a lower background level is obtained at high q, which makes the measurement of weak coherent signals more reliable.
Speaker: Christian Lang (Forschungszentrum Jülich GmbH)
• 5:40 PM
KWS-3 very small-angle neutron scattering focusing diffactometer at MLZ 20m
KWS-3 is a very-small-angle neutron scattering diffractometer operated by JCNS at the Heinz Maier-Leibnitz Zentrum (MLZ) in Garching, Germany. The principle of this instrument is one-to-one imaging of an entrance aperture onto a 2D position-sensitive detector by neutron reflection from a double-focusing toroidal mirror. In its current state, KWS-3 covers a Q-range between 3·10⁻⁵ and 2·10⁻² Å⁻¹ and is used for the analysis of structures between 30 nm and 20 μm for numerous materials from physics, chemistry, materials science and life science, such as alloys, diluted chemical solutions, hydrogels and membrane systems. Within the last few years we have finalized several big "evolutionary" projects: we have completely re-designed and commissioned the main components of the instrument (selector area, mirror positioning system, main sample station at 10 m, beam-stop system); implemented new sample stations at 3.5 and 1.3 m, a second (very-high-resolution) detector, and polarization and polarization-analysis systems; and adapted the instrument to almost any existing/requested sample environment, such as a 6-position Peltier furnace (-25 °C to 140 °C), high-temperature furnace (< 1600 °C), cryostats/inserts (> 20 mK), liquid pressure cell (< 5 kbar, 10-80 °C), CO2/CD4 gas pressure cell (< 0.5 kbar, 10-80 °C), humidity cell/generator (5-95%, 10-90 °C), magnets (horizontal < 3 T, vertical < 2.2 T), Bio-Logic® multimixer stopped flow (5-80 °C), rheometer RSA II (tangential/radial), etc.
Speaker: Vitaliy Pipich
• 5:40 PM
Liquid dynamics of phase-change materials 20m
Phase-change materials can be rapidly and reversibly switched between the amorphous and crystalline states in a few nanoseconds. They have been successfully employed for non-volatile phase-change memory applications. However, the dynamics of atomic rearrangement processes and their temperature dependence, which govern the ultrafast switching, are not fully understood. Here, using quasi-elastic neutron scattering, we investigate the liquid-state dynamics of the phase-change material Ge15Sb85. With time-of-flight spectroscopy, we measured dynamic structure factors as a function of temperature. The characteristic relaxation times can be extracted at the structure factor maximum, and the mean self-diffusivity of atoms is determined in the low-q range. The relaxation times of Ge15Sb85 are the smallest compared with other phase-change materials such as Ge2Sb2Te5, and its mean self-diffusivity is higher than that of Te-rich alloys. This indicates that Sb-rich alloys have faster liquid dynamics than Te-rich alloys, which may partially explain the difference in their crystal growth velocities. We show that the relaxation times extracted from neutron scattering are proportional to macroscopic viscosities. A breakdown of the Stokes-Einstein relation is observed in all investigated compositions, which can be attributed to the formation of locally favored structures. The latter is likely associated with the liquid-liquid transitions revealed by a recent femtosecond X-ray diffraction study.
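A Stokes-Einstein breakdown is diagnosed by comparing the measured diffusivity with the value the relation predicts from the viscosity, D_SE = k_B T / (6π η r). A minimal sketch of that check follows; all input values are illustrative, not data from the study.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_D(T, eta, r):
    """Diffusivity (m^2/s) predicted by the Stokes-Einstein relation
    from temperature T (K), viscosity eta (Pa·s) and particle radius r (m)."""
    return k_B * T / (6.0 * math.pi * eta * r)

# Hypothetical liquid-alloy values: T = 900 K, eta = 2 mPa·s, r = 1.4 Å
D_SE = stokes_einstein_D(900.0, 2e-3, 1.4e-10)
print(f"Stokes-Einstein prediction: {D_SE:.2e} m^2/s")

# A strong deviation of the QENS-measured self-diffusivity from D_SE
# at a given temperature would signal the breakdown discussed above.
```

The comparison is usually made over a temperature range: if D·η/T is constant, Stokes-Einstein holds; a systematic drift of that product marks the breakdown.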
Speaker: Prof. Shuai Wei (Aarhus University)
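The Stokes-Einstein comparison mentioned in the abstract above can be sketched numerically: the diffusivity extracted from scattering is compared with the hydrodynamic prediction k_B·T/(6πηr). The following is a minimal illustration; all numerical values are purely illustrative placeholders, not the measured Ge15Sb85 data.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K


def stokes_einstein_D(T, eta, r):
    """Stokes-Einstein diffusivity (m^2/s) of a sphere of radius r (m)
    in a fluid of viscosity eta (Pa*s) at temperature T (K)."""
    return K_B * T / (6 * math.pi * eta * r)


# Illustrative values only -- not the measured Ge15Sb85 data:
T = 900.0            # K
eta = 2.0e-3         # Pa*s, macroscopic viscosity
r = 1.5e-10          # m, assumed atomic radius
D_measured = 2.5e-9  # m^2/s, e.g. from quasi-elastic neutron scattering

# A ratio far from 1 would signal a breakdown of the Stokes-Einstein relation.
breakdown_factor = D_measured / stokes_einstein_D(T, eta, r)
```

For water-like conditions (T = 300 K, η = 1 mPa·s, r = 1 nm) the function gives roughly 2.2·10⁻¹⁰ m²/s, the familiar order of magnitude for small-molecule diffusion.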
• 5:40 PM
LLZO: Al, Ta, Nb, W – different dopants and their effect on microstructure and lithium diffusion 20m
To understand the impact of different dopants (Al, Ta, Nb, W) on the structure and ion conductivity of the solid electrolyte LLZO (Li7La3Zr2O12) on all length scales, we performed XRD, X-PDF, 6Li NMR and neutron diffraction experiments. The dopants Nb and Ta yielded cubic-structured LLZO with the highest ionic conductivities among this class of solid-state electrolytes. Additionally, we observed that mechanical treatment of these materials causes a symmetry reduction Ia3̅d → I4̅3d and a geometrically frustrated local structure. To understand the impact on the Li-ion conductivity, neutron powder diffraction and 6Li NMR were utilized, and impedance spectroscopy together with temperature-dependent 6Li NMR measurements was used to determine the Li-ion conductivity. Despite the finding that disorder can be beneficial for ionic conductivity in some materials, pulsed-field-gradient NMR measurements of the long-range transport up to 500 µm indicate a bulk Li+ diffusion barrier in the lower-symmetry structure. The geometric frustration and symmetry reduction can be cured and converted back into the higher-symmetry garnet structure by temperature treatment. The Li+-conductivity-enhancing effect of the temperature treatment is proven by impedance measurements of sintered pellets.
Speaker: Charlotte Fritsch
• 5:40 PM
Low-Energy Positron Beam for Near-Surface Doppler-Broadening Spectroscopy 20m
A new positron beam setup has been put into operation for Doppler-broadening experiments with low energy positrons in order to allow the investigation of surfaces and near-surface defect structures. Positrons provided by a $^{22}$Na source are moderated in a $1~\mu\textrm{m}$ single crystalline tungsten foil from which they are guided to the sample chamber by longitudinal and transverse magnetic fields. The positrons are accelerated electrostatically by a potential difference applied between moderator and sample. Kinetic energies as low as $1~\textrm{eV}$ and up to $30~\textrm{keV}$ are possible. Inside the UHV chamber the positron beam is focused onto the sample by an electrostatic single lens.
Instead of the standard sample holder a heatable one can be mounted.
This setup is intended to complement the positron instrument suite at NEPOMUC and expands capabilities in the field of defect studies at and near the surface. First experimental results on oxides will be presented.
Speaker: Lucian Mathes
• 5:40 PM
Macromolecular Neutron Diffraction at the Heinz Maier-Leibnitz Zentrum MLZ 20m
Neutron single crystal diffraction provides an experimental method for the direct location of hydrogen and deuterium atoms in biological macromolecules, thus providing important complementary information to that gained by X-ray crystallography. At the MLZ the neutron single crystal diffractometer BIODIFF, a joint project of the Forschungszentrum Jülich and the FRM II, is dedicated to structure determination of proteins. Typical scientific questions address the determination of protonation states of amino acid side chains, the orientation of individual water molecules and the characterization of the hydrogen bonding network between the protein active centre and an inhibitor or substrate. This knowledge is often crucial towards understanding the specific function and behaviour of an enzyme. BIODIFF is designed as a monochromatic diffractometer and is able to operate in the wavelength range of 2.4 Å to about 5.6 Å. This allows the wavelength to be adapted to the size of the unit cell of the sample crystal. Data collection at cryogenic temperatures is possible, allowing studies of cryo-trapped enzymatic intermediates. Some recent examples will be presented to illustrate the potential of neutron macromolecular crystallography.
Speaker: Andreas Ostermann
• 5:40 PM
Magnetic dynamics in the single-domain state of the cubic helimagnet ZnCr2Se4 20m
Anisotropic low-temperature properties of the cubic helimagnet ZnCr2Se4 in the single-domain spin-spiral state are investigated by a combination of neutron scattering, thermal conductivity, and dilatometry measurements. In an applied magnetic field, neutron spectroscopy shows a complex and nonmonotonic evolution of the spin-wave spectrum across the quantum-critical point that separates the spin-spiral phase from the field-polarized ferromagnetic phase at high fields. A tiny spin gap of the pseudo-Goldstone magnon mode, observed at wave vectors that are structurally equivalent but orthogonal to the propagation vector of the spin helix, vanishes at this quantum critical point, restoring the cubic symmetry in the magnetic subsystem. The anisotropy imposed by the spin helix has only a minor influence on the lattice structure and sound velocity, but has a much stronger effect on the heat conductivities measured parallel and perpendicular to the magnetic propagation vector. Anisotropic thermal transport is magnetic in origin and highly sensitive to an external magnetic field. We also report long-time thermal relaxation phenomena, revealed by capacitive dilatometry, which are due to magnetic domain motion related to the destruction of the single-domain magnetic state. Our results can be generalized to a broad class of helimagnetic materials in which a discrete lattice symmetry is spontaneously broken by the magnetic order.
Speaker: Dmytro Inosov (TU Dresden)
• 5:40 PM
Magnetic scattering of polarized neutrons on structures of reduced graphene oxide embedded in the polystyrene matrix 20m
The development of composite materials based on graphene included in polymer matrices of different nature, and the study of the relationship between their structure and properties using complementary research methods, are motivated by several reasons. The first is the search for new magnetic materials promising for spin electronics. The second is the interest in physical processes in highly defective nanostructured carbon materials, in which, according to literature data, magnetic and superconducting effects may occur. In this study, for the first time, the method of small-angle polarized neutron scattering (SAPNS) was used to assess the scale of the spin correlations arising in reduced graphene oxide (RGO), which was preliminarily surface-modified with 3-(trimethoxysilylpropyl)methacrylate (TMSPM) and copolymerized with styrene. Two-dimensional RGO structures functionalized by vinyl groups and embedded in the polystyrene matrix were measured using the SAPNS method (FRM II, KWS-1, Garching). The SAPNS experiments showed the presence of magnetic-nuclear interference both in the modified TMSPM carbon filler and in the polystyrene/RGO composite, which indicates the presence of magnetized areas on the 1000 Å scale and magnetic scattering with amplitude B≠0 in the systems under study.
The work was supported by RFBR grant № 20-02-00918 A.
Speaker: Alexander Bugrov (Institute of Macromolecular Compounds, Russian Academy of Science)
• 5:40 PM
Magnons in the collinear antiferromagnetic phase of Mn5Si3 20m
The antiferromagnetic compound Mn5Si3 is an interesting material for applications since it is hosting rich physics, such as the inverse magnetocaloric effect [1] and a large anomalous Hall effect [2]. Despite the intense research activity over the past decades [1-5], many open questions remain regarding the minimal magnetic model Hamiltonian, the role of the spin fluctuations in the magnetically ordered phases and which Mn site is responsible for them. We addressed some of these problems by combining polarized and unpolarized inelastic neutron scattering measurements and density functional theory calculations. We investigated the electronic and magnetic properties of the system and determined the magnetic exchange interactions and the biaxial magnetocrystalline anisotropy in the high temperature collinear antiferromagnetic phase of Mn5Si3. This provided the parameters for a Heisenberg model, from which we computed the spin-wave energies as a function of the external magnetic field applied perpendicular to the preferred axis. Our experimental data and theoretical results are in good agreement with each other.
[1] N. Biniskos et al., Phys. Rev. Lett. 120, 257205 (2018).
[2] C. Sürgers et al., Sci. Rep. 7, 42982 (2017).
[3] M. Gottschlich et al., J. Mater. Chem. 22, 15275 (2012).
[4] P. J. Brown et al., J. Phys.: Condens. Matter 4, 10025 (1992).
[5] P. J. Brown and J. B. Forsyth, J. Phys.: Condens. Matter 7, 7619 (1995).
Speaker: Dr Nikolaos Biniskos (Forschungszentrum Jülich GmbH, Jülich Centre for Neutron Science at MLZ, Lichtenbergstrasse 1, 85748 Garching, Germany)
• 5:40 PM
Manufacturing a safer world: Residual Stress in AM determined by diffraction techniques 20m
Additive manufacturing (AM) technologies are experiencing a rapid growth. They promise breakthrough in engineering design (tailored parts), efficiency (environmental impact), and performance (safety). Laser Powder Bed Fusion (LPBF) is an AM method permitting the fabrication of complex structures that cannot otherwise be conventionally produced. Nevertheless, the high cooling rates associated with the process result in the formation of complex residual stress (RS) fields, which can undermine the material safety. Diffraction-based methods using penetrating neutrons and high energy X-rays at large scale facilities offer a non-destructive method to spatially resolve surface and bulk RS in complex components. These techniques also allow tracking the changes of RS following applied thermal / mechanical loads. Therefore, they represent one of the most reliable methods to assess the materials integrity in structures.
This presentation gives an overview of some success stories of the use of large-scale facilities by BAM (the German Federal Institute for Materials Research and Testing) for the determination of RS in AM metallic alloys. In particular, the study of the influence of process parameters (e.g. scanning strategies) on the RS state and the relaxation of this stress through heat treatment is presented. It is also shown how such information is used to improve the safety of AM structures. Finally, some of the challenges for diffraction-based RS analysis in AM materials are discussed.
Speaker: Dr Alexander Evans (BAM)
• 5:40 PM
MARIA – The high-intensity polarized neutron reflectometer of JCNS 20m
The high-intensity reflectometer MARIA of JCNS is installed in the neutron guide hall of the FRM II reactor and uses a velocity selector (4.5 Å < λ < 40 Å) as a primary wavelength filter with 10% resolution. In combination with a Fermi chopper, the wavelength resolution can be increased to 1% or 3%. The beam is optionally polarized by a double-reflecting supermirror, and the elliptically focusing neutron guide increases the flux at the sample position, thus reducing the required sample size or measuring time. A flexible hexapod sample table can be equipped with an electromagnet (up to 1.1 T) or a cryomagnet (up to 5 T), a UHV chamber (10⁻¹⁰ mbar range) for the measurement of oxide MBE samples, and also with soft-matter solid/liquid interface cells connected to a “sample robot” for automatic solvent contrast. Together with the 400 x 400 mm² position-sensitive detector and a ³He polarization spin filter based on spin-exchange optical pumping, the instrument is well equipped for investigating specular reflectivity and off-specular scattering from magnetic layered structures. Furthermore, the GISANS option can be used to investigate lateral correlations in the nm range.
MARIA is a state-of-the-art reflectometer that offers the opportunity to investigate reflectivity over a dynamic range of up to 7-8 orders of magnitude, including off-specular scattering and GISANS. Furthermore, the high intensity allows for kinetic measurements down to a few seconds over a dynamic range of 4 orders of magnitude.
Speaker: Alexandros Koutsioumpas (JCNS)
• 5:40 PM
Micromechanical response of multi-phase Al-alloy matrix composites under uniaxial compression 20m
Aluminum alloys are extensively used in the automotive industry. In particular, squeeze-cast Al-Si alloys are employed in the production of metal matrix composites (MMC) for combustion engines. Such materials are of high interest since they combine improved mechanical properties with reduced weight and hence improve efficiency. Being multiphase materials, most MMCs show complex micromechanical behavior under different load conditions. In this work we investigated the micromechanical behavior of two MMCs, both based on a near-eutectic cast AlSi12CuMgNi alloy, one reinforced with 15 vol.% Al2O3 short fibers and the other with 7 vol.% Al2O3 short fibers + 15 vol.% SiC particles. The two MMCs have complex 3D microstructures consisting of four and five phases, respectively: Al-alloy matrix, eutectic Si, intermetallics, Al2O3 fibers and SiC particles.
The in-situ neutron diffraction compression experiments were carried out at the STRESS-SPEC beamline and revealed the evolution of internal phase-specific stresses in both composites. In combination with the damage mechanisms revealed by synchrotron X-ray computed tomography (SXCT) on plastically pre-strained samples, this allowed us to understand the role of each phase of the composite in the stress-partitioning mechanism. Finally, a micromechanical model based on the Maxwell scheme was utilized. The model rationalizes the experimental data very well and is able to predict the evolution of the principal stresses in each phase.
Speaker: Sergei Evsevleev
• 5:40 PM
Micromechanics near the yield point of Nickel based superalloys 20m
Previous studies found non-monotonous lattice strain evolutions at small plastic deformations in the Nickel-based superalloys Inconel 718 and Haynes 282. To study the micromechanical causes of this behaviour, we examined how these effects depend on dislocation density, deformation history, and temperature history. Haynes 282 was given special attention because its non-monotonous lattice strain evolutions are more readily observable than those of Inconel 718. In-situ bulk diffraction experiments in the regime of small plastic strains yielded repeatable, non-monotonous strain evolutions, repeatable peak sharpening during unloading, and no strong dependence on conditions typically expected to promote solute-atom segregation to dislocations. The observed micromechanical softening is accompanied by strain localization (observable through slip-band formation) and dynamic strain ageing at elevated temperatures.
Speaker: Mr Jonas von Kobylinski ( Lehrstuhl für Werkstoffkunde und Werkstoffmechanik TUM)
• 5:40 PM
Microstructural characterization of European historical swords through neutron imaging 20m
It is evident from several analyses performed on steel samples that the production of arms and armor used the cutting-edge technology of its time, so a study of such artefacts can give fundamental details about the technological skills of a specific area or period. In order to correlate similar samples of a specific age or provenance, it is important to build trustworthy classification parameters. Neutron imaging techniques allow us to determine the morphology and microstructure of composite steel artefacts, and thus to characterize the composition, steel quality, welds and thermal treatment.
We started a systematic study to characterize the production methods of European swords from the early Middle Ages to the 17th century. For this purpose, we began by analyzing three swords of great importance now belonging to the Bayerisches Nationalmuseum:
-Longsword, produced in Tyrol in the late 15th century, inv. W872.
-Hunting sword, produced by M. Diefstetter (bladesmith), Munich, c. 1550 (blade) (grip), inv. W579.
-Sword, produced in Northern Italy, possibly Milan, c. 1560, inv. W587.
White-beam tomography allowed us to detect several features in the bulk of the blades, such as multilayered structures, cracks and defects, and to determine the width and shape of the martensitic hardened edges. Energy-selective analysis allowed us to determine details of the steel composition and microstructure as well as to map the different low- and high-carbon areas.
Speaker: Francesco Grazzi (CNR-IFAC)
• 5:40 PM
MIEZETOP for the cold triple axis spectrometer (TAS) MIRA 20m
Neutron spin echo is a high-resolution technique that uses the neutron spin to record information. It is used to observe slow phenomena correlated with relaxation processes, e.g. correlations between atomic positions or spin orientations. These phenomena manifest themselves in an inelastic broadening of the structure factor S(Q), revealing time domains of inelastic processes that are orders of magnitude longer than those accessible to classical neutron spectrometers. One realization is MIEZE (Modulation of IntEnsity with Zero Effort), where the energy transfer appears as a contrast change of the oscillating signal. We implement this technique on our triple-axis instrument to obtain high resolution over a broad Q-range.
Speaker: Henrik Gabold
• 5:40 PM
Monte Carlo simulation and optimization for the micro-channel target of the HBS project 20m
The High Brilliance Neutron Source (HBS) project was initiated at the Jülich Centre for Neutron Science (JCNS) of Forschungszentrum Jülich. It aims to develop a medium-flux neutron source facility based on a linear accelerator, scalable up to 70 MeV proton energy and optimized to deliver high-brilliance neutron beams to a variety of neutron instruments. In the framework of this project, a compact micro-channel target was proposed for powerful high-flux, compact accelerator-driven neutron sources (CANS). Based on earlier simulations of fluid dynamics and structural mechanics, a preliminary design was developed. Due to the required compactness, heat dissipation and mechanical stability are the factors limiting the total neutron yield of the target. In order to find a compromise between high neutron yield and mechanical stability, the energy deposition as well as the neutron and proton spectra were simulated for different geometric parameters of the micro-channel target with the Monte Carlo code FLUKA. The details of the simulation and optimization will be presented at the workshop.
Speaker: Ms Qi Ding
• 5:40 PM
Morphology control of PS-b-P4VP templated monolayer mesoporous Fe2O3 thin films 20m
Mesoporous Fe2O3 thin films with large-area homogeneity demonstrate tremendous application potential in the photovoltaic industry, lithium-ion batteries, and gas or magnetic sensors. In the present work, the synthesis of morphology-controlled Fe2O3 thin films is realized with polystyrene-block-poly(4-vinylpyridine) (PS-b-P4VP) diblock-copolymer-assisted sol-gel chemistry. The effect of the solvent category and the polymer-to-FeCl3 ratio is systematically investigated during the sol-gel synthesis process. For both the DMF and the 1,4-dioxane solvent system, nanocluster structures are obtained at low PS-b-P4VP concentration, which is attributed to the weak phase-separation behavior and thereby the weak template effect of the block copolymer. When the concentration of PS-b-P4VP reaches the critical point of micellization, spherical and wormlike porous structures are formed in the DMF and 1,4-dioxane solvent systems, respectively. A further increase of the polymer-to-FeCl3 ratio leads to an enlargement of the spherical pore size in the DMF system and a shrinkage of the center-to-center distance of the wormlike structure in the 1,4-dioxane system.
Speaker: Shanshan Yin
• 5:40 PM
Morphology investigation of printed active layers of hybrid solar cells with grazing incidence neutron and x-ray scattering techniques 20m
One aspect of the development of non-conventional solar cells should be the sustainability of the device production process. Following this idea, we developed hybrid solar cells that can be processed from aqueous solution. The active layer of these devices is based on laser-processed titania nanoparticles dispersed in a water-soluble polythiophene. The active layers were produced with a home-built slot-die coater. With this printing technique, the thickness of the layers can be easily controlled, and the scale-up toward the coating of large areas requires little effort. We investigated the morphology of the deposited active layers with time-of-flight grazing-incidence small-angle neutron scattering (TOF-GISANS) and X-ray scattering. With GISAXS and GIWAXS we were able to follow the evolution of the morphology for different donor/acceptor ratios in situ during the printing process. The expected impact of the observed morphologies and crystallinity on the performance of corresponding devices is discussed.
Speaker: Volker Körstgens (TU München)
• 5:40 PM
Morphology of fullerene-free bulk heterojunction blends for photovoltaic applications 20m
Over the last decades, the focus of research has shifted towards the field of organic electronics due to advantageous properties such as low-cost manufacturing processes, versatility and flexibility, as well as tunable characteristics such as absorption and solubility. These properties open up a wide range of applications, especially in the field of photovoltaics. Hence, organic photovoltaics represent a promising alternative to conventional inorganic photovoltaics. Even though the power conversion efficiency is lower than that of conventional devices, values of over 16% have been reported, and the technology thus receives industrial attention for commercialization. We study the inner morphology of a low-band-gap, fullerene-free bulk heterojunction blend, namely PBDB-T and ITIC at different compositions, with grazing-incidence small-angle X-ray scattering (GISAXS). The obtained structural information is correlated with current density-voltage characteristics and the absorbance of the active layer in order to improve the efficiency.
Speaker: Sebastian Grott (TU München, Physik-Department, Lehrstuhl für Funktionelle Materialien)
• 5:40 PM
Multimodal Imaging from meV to MeV Neutrons combined with Gamma Imaging at the NECTAR Instrument 20m
Located at the SR10 at the FRM II, NECTAR is a versatile instrument and designed for the non-destructive inspection of various objects by means of fission neutron radiography and tomography. Compared to the Z-dependency of X-ray and gamma imaging, fission neutrons have the strong advantage of often providing similar contrast for heavy and light materials. Only few facilities around the world provide access to well collimated fast neutrons, with NECTAR at the FRM II being the only instrument that has a dedicated user program for fast neutron imaging. Aside from fast neutrons, thermal neutron as well as gamma imaging is possible by using different scintillator materials with the same detector system, extending NECTAR’s imaging capabilities to different modalities.
Here we present the advantages of combining the information gained from neutron imaging with gamma imaging at the NECTAR beamline, providing a unique probe with unparalleled isotope-identification capabilities, with examples from archaeology, batteries, industry components and scintillator materials. Furthermore, we provide an update on recent progress at NECTAR, with upgraded capabilities such as the addition of gamma and single-event-mode imaging.
Speaker: Dr Adrian Losko (Technische Universität München, Forschungs-Neutronenquelle MLZ (FRMII))
• 5:40 PM
Multiple Length Scales Hydration in Polymer Membranes for Fuel Cells 20m
The polymeric materials used in fuel cells either as proton (PEMFC) or anion (AEMFC) exchange membranes are characterized by a nanoscale phase separation into hydrophilic domains and hydrophobic crystalline regions, which enables a high conductivity and provides good chemical and mechanical stability of the membrane. Owing to its high proton conductivity and chemical stability, Nafion was established as the benchmark for PEMFC applications. However, due to its high cost and limitations in operating conditions, there is an intense search for low-cost alternative materials with similar conductive and chemo-mechanical properties. On the other hand, the high proton conductivity in PEMFC is achieved in an acidic environment, which requires acid-resistant precious-metal catalysts and impedes wide-scale commercialization. As an alternative technology, AEMFC use inexpensive, non-noble-metal catalysts. However, the AEM conductivity and long-term durability are still lower than those of the PEM, hence the interest in finding new high-performance materials. The conductivity in polymer electrolyte membranes depends on the water behavior inside the polymer network across the full range of length scales in the membrane. As we will present here, small-angle neutron scattering with contrast variation used at MLZ is a powerful technique for unraveling the hierarchical morphology and understanding the structure–property relationships in such polymeric membranes.
Speaker: Dr Aurel Radulescu (Forschungszentrum Jülich GmbH, Jülich Centre for Neutron Science at Heinz Maier-Leibnitz Zentrum)
• 5:40 PM
Nano-Structure Development of Oral Pharmaceutical Formulations in Simulated Intestine – D-contrast SANS and DLS 20m
After patient intake, pharmaceutical drug formulations for oral delivery undergo a stepwise structural development: disintegration into micro-particles, dissolution of drug nano-complexes, interaction with bile and lipids, and uptake by intestinal membrane proteins (receptors). These processes are critical for the therapy and the applicability of drug and formulation, especially with hydrophobic or poorly soluble drugs.
The processing of oral drug formulations was studied by small-angle neutron scattering (SANS) with D-contrast variation, combined with DLS, using a simulator device of the human gastro-intestinal tract with SANS+DLS observation of drug nanoparticles and intermediates. A set of drugs for which oral delivery is a challenge was investigated, e.g. Fenofibrate, Amphotericin B, Danazol, Griseofulvin, Carbamazepine and Curcumin, in combination with lipids and detergents. The biocompatibility was estimated with cell cultures. The drugs were embedded in nanoparticles and liposomes of 50-100 nm size and dissolved stepwise in artificial intestinal fluid and bile. The dissolution and formation of intermediate nanoparticles and excipient-drug complexes were analyzed with time-resolved SANS and DLS. Substructures (domains) were localized by solvent deuterium contrast variation. The results contribute to the development of novel formulations of difficult drugs based on structure investigation by SANS plus DLS in a feedback process.
Speaker: Thomas Nawroth (Gutenberg-University, Pharmaceutical Technology, Staudingerweg 5)
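The solvent deuterium contrast variation used in the work above rests on tuning the H2O/D2O ratio until the solvent scattering-length density (SLD) matches that of one component, rendering it invisible. A minimal sketch of the match-point arithmetic, using the standard SLD values of H2O and D2O and assuming linear interpolation; the component SLD in the usage note is a hypothetical example:

```python
# Standard neutron scattering-length densities (in 1/A^2):
SLD_H2O = -0.56e-6
SLD_D2O = 6.36e-6


def d2o_match_fraction(sld_component):
    """D2O volume fraction at which the H2O/D2O mixture has the same
    SLD as the component, assuming the mixture SLD interpolates
    linearly between pure H2O and pure D2O. Values outside [0, 1]
    mean the component cannot be contrast-matched by the solvent."""
    return (sld_component - SLD_H2O) / (SLD_D2O - SLD_H2O)
```

A component with an SLD of, say, 1.8·10⁻⁶ Å⁻² (hypothetical) would be matched at about 34% D2O; at that solvent composition it stops contributing to the scattering while other substructures remain visible.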
• 5:40 PM
Nested Optic for Neutron Focusing 20m
The investigation of small samples by neutron scattering is usually very time-consuming due to the low available neutron flux density and the small signals from the sample. Neutron guides were originally used to transport neutrons over large distances in order to make room for additional beamlines and to improve the signal-to-noise ratio. Elliptic neutron guides, first proposed to reduce the number of reflections and therefore the losses, are enjoying increasing popularity also for focusing neutron beams. However, elliptic guides do not image objects properly, owing to very strong coma aberrations. In order to overcome these aberrations, we propose using nested arrays of short elliptic mirrors.
In our contribution, we report on the investigation of a nested mirror optic at the beamline MIRA. The key properties of the optic are a large brilliance transfer of approximately 75% and the possibility of adjusting the beam size and the divergence of the neutron beam at the sample position by apertures placed before the nested mirror optic. Therefore, no beam shaping devices are required close to the sample position thus reducing the background significantly. Nested mirrors will also be particularly useful for the efficient extraction of neutrons from highly brilliant moderators such as at the ESS, because the common illumination losses associated with using neutron guides are avoided.
Speaker: Christoph Herb (TUM)
• 5:40 PM
Neutron Depth Profiling at the PGAA facility of MLZ 20m
Neutron Depth Profiling (NDP) is a non-destructive, isotope-specific, high-resolution nuclear analytical technique, often used to probe concentration profiles of lithium, nitrogen, boron, helium and several other light elements in different host materials. The N4DP experiment is located at the Prompt Gamma Activation Analysis (PGAA) facility of the Heinz Maier-Leibnitz Zentrum (MLZ), which provides a cold-neutron flux of up to $5\times10^{10}\,$s$^{-1}$cm$^{-2}$. When a neutron is captured by a $^{6}$Li nucleus, the system emits an alpha particle at a well-defined energy. The energy loss of the charged particle traveling through the host material is related to its depth of origin, with a resolution down to a few tens of nanometers.
After a short introduction to the existing N4DP facility, we will present the status of the ongoing upgrade towards its full functionality to study the lithium-ion concentration gradient in energy storage systems, i.e. Li-ion batteries. Here, NDP reveals new insights into the evolution of the lithium accumulation in different silicon-graphite anode compositions. The evolution of immobilized lithium could directly be measured, which is one of the main causes of battery lifetime limitation. This project is supported by the BMBF, Contract No. 05K19WO8.
Speaker: Robert Neagu
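The depth reconstruction underlying NDP can be illustrated with a toy calculation: the 6Li(n,α)3H reaction emits the alpha particle at 2055 keV, and the energy it loses on the way out of the sample encodes its depth of origin. The constant stopping power below is a hypothetical placeholder; real analyses use tabulated, energy-dependent stopping powers (e.g. from SRIM).

```python
E_ALPHA_0 = 2055.0  # keV, initial alpha energy from 6Li(n,alpha)3H

# Hypothetical constant stopping power of the host material for alphas.
# Real NDP analyses use tabulated, energy-dependent dE/dx values.
STOPPING_KEV_PER_UM = 220.0


def depth_of_origin(e_detected_kev):
    """Depth (micrometers) at which an alpha detected with energy
    e_detected_kev was created, in the constant-dE/dx approximation."""
    if e_detected_kev > E_ALPHA_0:
        raise ValueError("detected energy exceeds the emission energy")
    return (E_ALPHA_0 - e_detected_kev) / STOPPING_KEV_PER_UM
```

An alpha arriving with the full 2055 keV originates at the surface; one that lost 220 keV comes from about 1 µm depth in this toy model. The quoted tens-of-nanometers resolution follows from how precisely the residual energy can be measured.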
• 5:40 PM
Neutron guides and Ni/Ti multilayer supermirror coatings by the FRM II Neutron Optics group 20m
The MLZ makes extensive use of modern neutron guides to transport and distribute neutrons over large distances; these guides are installed and maintained by the neutron optics group. Adapted to the needs of the instruments with respect to wavelength distribution and angular dispersion, the guide elements are coated with 58Ni or Ni/Ti supermirror coatings with m-values up to 3.5, either procured externally or produced in our DC magnetron sputtering facility. The neutron-optical properties of the individual mirror plates are verified with our neutron reflectometer TREFF. In the last year, considerable effort was made to improve the quality of the supermirrors produced by the neutron optics group in terms of reducing mechanical stress and increasing their reflectivity. For this purpose, experiments were conducted to optimize different parameters of the production process. These experiments yielded supermirrors with better reflectivities and allowed us to explore the design and production of supermirrors with larger angles of total reflection beyond m = 3.5, which had historically been our limit. We present our in-house neutron guide production and other services for the instruments.
Speaker: Jose Manuel Gomez Guzman (Technische Universität München Heinz Maier-Leibnitz Zentrum (MLZ))
• 5:40 PM
Neutron imaging for the investigation of the lyophilisation of amorphous bulk solids 20m
Lyophilisation refers to the sublimation of ice below the triple point of water. It is employed for dehydrating biopharmaceuticals and high-value foods in the frozen state, as the structural and nutritional attributes are not affected. The sublimation front divides the dried area from the frozen area. Knowledge about the sublimation front is important to understand process characteristics and to ensure product quality. However, the development of the sublimation front, especially in particulate matter, is not yet fully understood, and the existing models are contradictory and based on different assumptions. No experimental validation of the existing models exists so far. The aim is therefore to study the sublimation front by in-situ neutron imaging.
The experiments were carried out at the ANTARES beamline at FRM II using maltodextrin particles of two different particle sizes and concentrations. Sublimation of the finer particles (x = 70 µm) was investigated by radiography. For the larger particles (x = 3550 µm), continuous tomographic measurements were carried out.
With the reconstructed 3D volumes, we could demonstrate the structure of the drying fronts, whereas with the radiographic images we could estimate the dynamic ingress of the sublimation front. It was shown that for small particles the sublimation front first occurred at the bottom of the particle bed and moved to the top. For large particles multiple sublimation fronts were found.
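As an aside, the radiographic estimate of the sublimation front ingress described above can be sketched numerically: the frozen region (hydrogen-rich ice) attenuates neutrons more strongly than the dried region, so the front can be located at the steepest change of a transmission line profile. All values below (profile shape, front position) are illustrative assumptions, not experimental data.

```python
import numpy as np

# Synthetic 1D transmission profile along the bed height. Assumption: the
# frozen region transmits less than the dried region, so the sublimation
# front shows up as a step in the profile.
z = np.linspace(0.0, 20.0, 201)                       # height in mm
front_true = 7.5                                      # assumed front position (mm)
transmission = np.where(z < front_true, 0.35, 0.80)   # frozen below, dried above

# Locate the front at the steepest change of the profile.
front_est = z[np.argmax(np.abs(np.gradient(transmission)))]
print(front_est)
```

In practice the measured profile is noisy, so the gradient would typically be taken after smoothing.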
Speaker: Petra Foerst (TUM)
• 5:40 PM
Neutron yield measurements for Be, V and Ta targets from 22-42 MeV proton beams 20m
The High Brilliance neutron Source (HBS) project aims to develop a scalable Compact Accelerator-driven Neutron Source (CANS) enabling neutron fluxes at the corresponding instruments comparable to medium-flux fission-based research reactors. For scalable CANS, the target material providing the largest neutron yield depends on the energy of the sub-100 MeV primary proton beam. Simulations based on the TENDL database suggest that low-Z materials, e.g. Beryllium and Vanadium, generate more neutrons at proton beam energies below 20 MeV while high-Z materials, e.g. Tantalum, generate more neutrons at proton beam energies above 20 MeV. In order to improve the reliability of the underlying databases, the neutron yield of $\mathrm{p}+\mathrm{Be}$, $\mathrm{p}+\mathrm{V}$ and $\mathrm{p}+\mathrm{Ta}$ for 22, 27, 33 and 42 MeV protons is indirectly determined by a novel method through the measurement of the 2.23 MeV gamma ray of hydrogen induced by thermal neutron capture in a polyethylene moderator. The neutron to gamma conversion rate is measured with an AmBe calibration neutron source. Corrections for escaped neutrons are applied via an MCNP simulation of the experiment. This contribution presents the experimental results and a comparison with the neutron yield obtained from simulations.
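The indirect yield determination described above amounts to a simple chain of conversions; a minimal sketch, with all numbers as made-up placeholders (in the real experiment the conversion rate comes from the AmBe calibration and the escape correction from MCNP):

```python
# Step 1: calibrate gammas-per-neutron with a source of known emission rate.
ambe_emission_rate = 2.2e6   # neutrons/s, assumed AmBe source strength
ambe_gamma_rate = 1.1e4      # counts/s of 2.23 MeV gammas with the AmBe source
conversion = ambe_gamma_rate / ambe_emission_rate  # gammas detected per neutron

# Step 2: apply the calibration to the beam-on-target measurement and
# correct for neutrons that escape the moderator (from simulation).
target_gamma_rate = 5.5e4    # counts/s with proton beam on target (assumed)
escape_fraction = 0.12       # assumed escape correction from an MCNP model
neutron_rate = target_gamma_rate / conversion / (1.0 - escape_fraction)

# Step 3: normalize to the number of incident protons.
proton_current = 1.0e-6                      # A, assumed
protons_per_s = proton_current / 1.602e-19   # elementary charge in C
yield_per_proton = neutron_rate / protons_per_s
print(yield_per_proton)
```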
Speaker: Marius Rimmler
• 5:40 PM
New analysis frameworks for the analysis of inelastic measurements from neutron backscattering spectrometers 20m
Recent developments in the instrument design of neutron backscattering spectrometers make it possible to measure the total scattering function $S(q,\omega)$ with quasi-continuous energy transfers, but also at specific fixed energy transfers, so-called elastic and inelastic fixed window scans (E/IFWS), with high energy resolution. While several models have been developed for the analysis of EFWS [1], there are only few approaches to analyzing IFWS.
By reducing the number of energy transfers observed, the corresponding measuring time can be significantly reduced, allowing samples to be investigated as a function of control parameters such as temperature, pressure or time.
In this contribution, several approaches to analyzing the I/EFWS are presented. These include the combination of several IFWS as "sparse QENS" [2], the extraction of generalized mean squared displacements [3], as well as the combination of EFWS and IFWS to extract the global diffusion of dissolved proteins [4,5].
The different methods will be analyzed for their suitability for different neutron spectrometers taking into account their resolutions, energy transfers and momentum transfers. Results from modeled data of complex dynamics will be compared to measurements from IN16b (ILL).
[1] D. Zeller, J. Chem. Phys., 2018
[2] K. Pounot, J. Phys. Chem. Lett., 2020
[3] Roosen-Runge et al., EPJ Web of Conf., 2015
[4] O. Matsarskaia et al., PCCP, 2020
[5] C. Beck, PhD Thesis, Univ. Tübingen, 2020
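For illustration, the extraction of a mean squared displacement from elastic intensities, one of the approaches listed above, can be sketched under the common Gaussian approximation $I_{el}(q) \propto \exp(-q^2 \langle u^2 \rangle / 3)$, using synthetic data:

```python
import numpy as np

# Synthetic elastic intensities following the Gaussian approximation;
# the assumed MSD is recovered from a linear fit of ln(I_el) versus q^2.
msd_true = 0.9                     # Å^2, assumed mean squared displacement
q = np.linspace(0.2, 1.8, 15)      # Å^-1, a typical backscattering q-range
I_el = np.exp(-q**2 * msd_true / 3.0)

slope, intercept = np.polyfit(q**2, np.log(I_el), 1)
msd_fit = -3.0 * slope             # slope of ln(I) vs q^2 is -<u^2>/3
print(msd_fit)
```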
Speaker: Christian Beck (Institut Laue Langevin)
• 5:40 PM
Non-destructive quantification of lithium and electrolyte losses in Li-ion batteries using neutron powder diffraction 20m
Lithium-ion batteries lose part of their capacity as they are cycled. This loss is due to various side effects, such as the formation of the solid-electrolyte interface (SEI), loss of active lithium, etc. The rates of these side effects are spatially non-uniform, owing to heterogeneously distributed parameters like temperature and current density. The loss of active lithium can be related to the formation of the SEI, whereas the role of the electrolyte in SEI formation and its correlation with lithium losses remains not fully clear.
The aim of the current study is a non-destructive quantification of lithium and electrolyte, their spatial distributions throughout the cell, and their concentration changes vs. cell fatigue. High-resolution neutron diffraction independently reveals a direct correlation between the losses of active lithium in the graphite anode and those of the liquid electrolyte. The non-uniform character of the losses is probed by spatially resolved neutron powder diffraction, displaying the non-trivial character of the active lithium/electrolyte losses and the complex dynamics of the capacity fading.
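As a hypothetical illustration of how refined phase fractions translate into a lithiation state (not the authors' actual analysis), the lithium content x in Li$_x$C$_6$ follows from the fractions of the lithiated graphite stages, since LiC$_6$ corresponds to x = 1 and LiC$_{12}$ to x = 0.5 per C$_6$ unit:

```python
# Assumed molar fractions of the graphite phases in the anode (illustrative):
f_LiC6, f_LiC12, f_graphite = 0.55, 0.30, 0.15

# Lithium per C6 unit: 1.0 for LiC6, 0.5 for LiC12, 0.0 for pristine graphite.
x = f_LiC6 * 1.0 + f_LiC12 * 0.5 + f_graphite * 0.0
print(x)
```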
Speaker: Dominik Petz
• 5:40 PM
Nondestructive determination of Li concentration and distribution in prismatic Li-ion battery 20m
The popularity of Li-ion batteries (LIBs) is a result of their outstanding characteristics, in particular high capacity, long lifetime and no memory effect. Among the different form factors, prismatic cells are mostly used for small electronics, but they have also become an attractive option for e-mobility applications in recent years. Among various experimental tools, powder diffraction has proven valuable for studies of Li-ion batteries. In particular, synchrotron and neutron radiation enable non-destructive probing of the structure of LIB components and their constituents under real and non-ambient operating conditions. The 2D Li distribution inside a prismatic cell, for both the anode and the cathode, can be traced utilizing high-energy X-ray diffraction. Employing neutron diffraction with a thermal monochromatic neutron beam, both ex-situ and in-situ/operando studies of prismatic LIBs are possible, as was recently reported.
In the current contribution, fresh and electrochemically aged commercial prismatic LIBs used in the iPhone 6 were inspected by both X-ray diffraction radiography and neutron powder diffraction. Neutron diffraction allowed the evaluation of the overall structural changes of the cell constituents in the prismatic LIB, which were correlated to the electrochemical treatment, while the changes of the 2D Li distribution were determined with high-energy X-ray diffraction.
Speaker: Volodymyr Baran
• 5:40 PM
NREX - neutron reflectometer with X-ray option 20m
The high resolution neutron/ X-ray contrast reflectometer NREX, operated by the Max Planck Institute for Solid State Research, is designed for the determination of structural and magnetic properties of surfaces, interfaces, and thin film systems.
The instrument is an angle-dispersive fixed-wavelength machine with a default wavelength of 4.28 Å. A horizontal focusing monochromator makes it possible to switch between "high intensity/relaxed resolution" and "high resolution/reduced intensity" modes and provides a beam especially for small samples (down to 5×5 mm² and below). A beryllium filter attenuates higher-order reflections. Transmission supermirrors (m = 3.5) with a polarizing efficiency of P > 99% and high-efficiency gradient RF field spin flippers are used for a full four-spin-channel polarization analysis.
The sample is aligned horizontally, and the incident angle is varied by tilting the sample. The detector arm can move horizontally for GISANS as well as vertically for specular and diffuse scattering measurements. Neutrons are detected with a 20 × 20 cm² position-sensitive detector or a pencil detector. An X-ray reflectometer can be mounted on the sample table orthogonal to the neutron beam. It allows for the in-situ characterization of sensitive soft matter samples and neutron/X-ray contrast variation experiments.
Speaker: Yury Khaydukov (Max-Planck Institute for Solid State Research)
• 5:40 PM
Oscillatory dynamics in simple systems at elevated temperatures -- beyond a perturbational treatment of anharmonicity 20m
The importance of anharmonicity for describing fundamental materials properties, starting from finite heat conductivity due to phonon-phonon scattering, can hardly be overemphasized. For crystalline matter, the principal microscopic gauge is constituted by the broadening in energy of the phonon dispersions, corresponding to q-dependent phonon lifetimes, which is also the main unknown for microscopic computations of heat conductivity.
Here the case of elemental Al at temperatures up to the melting point will be considered. I will present experimental data obtained by inelastic neutron scattering, with consideration of the steps in the data analysis necessary for extracting the inherent linewidths. Further, I will present calculations of q-dependent line broadenings on the basis of density-functional theory, both in the standard approach of perturbation theory and via ab initio molecular dynamics, and I will discuss why perturbation theory fails at elevated temperatures.
A. Glensk et al., Phys. Rev. Lett. 123, 235501 (2019)
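The linewidth extraction mentioned above is commonly based on a damped-harmonic-oscillator (DHO) lineshape; a bare-bones sketch (ignoring resolution convolution and detailed balance, which, as noted above, must be handled in a real analysis):

```python
import numpy as np

def dho(w, w0, gamma, amplitude):
    """Damped-harmonic-oscillator spectral shape; gamma is the damping."""
    return amplitude * gamma * w0**2 / ((w**2 - w0**2)**2 + (gamma * w)**2)

w = np.linspace(0.1, 60.0, 600)                        # energy transfer (meV)
spectrum = dho(w, w0=30.0, gamma=2.0, amplitude=1.0)   # assumed parameters

# For weak damping the peak lies very close to the bare phonon energy w0.
peak = w[np.argmax(spectrum)]
print(peak)
```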
Speaker: Michael Leitner
• 5:40 PM
Out-of-equilibrium processes during phase transitions: An in-situ crystallization study of hybrid perovskites 20m
Processes leading up to nucleation are generally understood to proceed via the emergence of a low-amplitude, long-wavelength instability throughout the material, creating the disturbances needed for a nucleation event to transpire. Owing to the thermodynamic instability of the high-surface-energy nanostructures, the nuclei concatenate to form higher-surface-area intermediates. The processes spanning from the initial disturbances to the formation of concatenated species occur in a matter of seconds, requiring high time resolution and sensitivity to register. Thereafter, the conversion of the stabilized concatenated species to the final crystalline material proceeds via dissolution-recrystallization, which requires further processing steps such as thermal annealing.
By combining in-situ optoelectronic and structural measurements in a custom-made analytical cell, we unveil previously inaccessible experimental data on non-trivial phase transition processes during the in-situ crystallization of a prototypical hybrid perovskite thin film.
Speaker: Shambhavi Pratap (Technische Universität München)
• 5:40 PM
PANDA - the cold neutron TAS at MLZ 20m
The cold three-axes spectrometer PANDA offers high neutron flux and high resolution in momentum (Q) and energy (E), combined with low instrumental background. The instrument allows the investigation of systems where only small samples are available, or samples with weak scattering cross sections. Specialized sample environment is available for experiments under extreme conditions, such as milli-Kelvin temperatures and magnetic fields up to 12 T. Furthermore, high temperatures up to 2100 K and high pressures are possible. The instrument is perfectly suited for investigations of magnetism and superconductivity on single crystals in the low-energy range. Typical experiments include quantum magnetism, heavy-fermion or low-dimensional systems, frustrated and multiferroic materials, and the investigation of magnetic excitations, lattice dynamics and their hybridization. Current scientific goals are often connected with discovering exotic spin states under extreme conditions, making PANDA an excellent tool for solving pressing questions in modern condensed matter physics.
Speaker: Astrid Schneidewind
• 5:40 PM
Pharmaceutical Drug Carriers organized in Nano-Domains – Study and Design upon Neutron Scattering with contrast variation, SAXS and DLS 20m
Target-specific nanoparticles for the therapy of cancer and other diseases were assembled from lipids, polymers, and pharmaceutical drugs or mRNA. For cell targeting, proteins were bound to the surface (corona). The structure in solution is analyzed by dynamic light scattering (DLS) combined with small-angle neutron scattering (SANS), SAXS and metal-specific X-ray scattering (ASAXS). Material sub-domains in the nanoscaled drug carriers (100 nm; polymer complexes, liposomes) were localized by contrast variation.
The nanoparticles, e.g. biodegradable polymers (PLGA, carbohydrates), intestinal lipid-bile nanoparticles, lipid particles, proteins and an optional bio-target domain, are amphiphilic. Thus the internal particle structure forms sub-domains of different material and scattering power, enabling localization by contrast. For several medical cases we construct and study pharmaceutical nanoparticles for parenteral and oral applications, which contain soluble or hydrophobic drugs, or nucleic acid drugs such as mRNA. Cell or tumor recognition and uptake of the drug carriers can be obtained by a surface protein ligand head.
mRNA nano-complexes for immune vaccination and cancer therapy work by cellular synthesis of the corresponding protein (not the antigen itself, but the genetic information for it, is supplied). Oral nano-drug application is tested with a simulator device of the gastro-intestinal tract, with SANS-DLS observation of the drug nanoparticles and intermediates.
Speaker: Dr Thomas Nawroth (Gutenberg-University, Pharmaceutical Technology, Staudingerweg 5)
• 5:40 PM
Phase analysis of steel using neutron grating interferometry and Bragg edge imaging 20m
Austenitic steel transforms to martensite under applied strain. An undesired modification of the mechanical properties by this process is typically compensated by annealing to restore the austenitic phase. Recently, it has been proposed to introduce a beneficial residual stress state in the material.
A spatially resolved determination of the phase fractions of martensite is required for the quantification of local residual stress introduced by manufacturing [1,2].
We non-destructively tracked the amount of martensite inside drawn austenitic steel samples using neutron Bragg edge imaging (BEI) and neutron grating interferometry (nGI).
Differentiation between the two phases is possible with BEI due to their different crystal structures, but is complicated by strain-induced texture.
nGI, on the other hand, is sensitive to scattering off ferromagnetic domains composed of martensite inside the material [3].
To verify the results of the two techniques, we compared them to surface micrographs of the samples.
[1] M. Baumann et al., MATEC Web of Conferences 190, 2018
[2] K. A. De et al., Scripta Materialia 50, pp. 1445-1449, 2004
[3] F. Pfeiffer et al., Phys. Rev. Lett. 96, 215505, 2006
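A hypothetical sketch of the BEI phase quantification idea: a measured Bragg edge transmission spectrum is decomposed as a linear combination of pure-phase reference spectra. The step-like spectra below are synthetic stand-ins; the assumed edge positions roughly correspond to the austenite (111) and martensite (110) cutoffs.

```python
import numpy as np

wl = np.linspace(2.0, 6.0, 100)                  # wavelength (Å)
austenite = np.where(wl < 4.17, 0.6, 0.9)        # assumed reference spectra
martensite = np.where(wl < 4.05, 0.55, 0.85)

f_true = 0.3                                     # assumed martensite fraction
measured = f_true * martensite + (1 - f_true) * austenite

# Least-squares decomposition into the two phase fractions.
A = np.column_stack([martensite, austenite])
(f_m, f_a), *_ = np.linalg.lstsq(A, measured, rcond=None)
print(f_m, f_a)
```

A real analysis would use measured or calculated reference spectra and account for texture, which (as noted above) distorts the edge shapes.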
Speaker: Tobias Neuwirth
• 5:40 PM
Phase transition kinetics in a doubly thermo-responsive poly(sulfobetaine)-based block copolymer thin film 20m
Thermo-responsive polymers show a strong change in volume in response to slight changes of the surrounding temperature. While this behavior is well understood for polymers in solution, less is known about the underlying mechanisms in thin film geometry. In our work, we investigate the phase transition kinetics upon increasing temperature in a thermo-responsive block copolymer thin film that shows both upper and lower critical solution temperature (UCST and LCST) behavior. Time-of-flight neutron reflectometry (ToF-NR) is used to follow the phase transition kinetics with high time resolution. At temperatures below the UCST, the polymer film is first swollen in a D2O atmosphere to increase the mobility of the polymer chains. Subsequently, the temperature is increased to an intermediate regime (between UCST and LCST) and a high regime (above the LCST). In addition, ToF grazing-incidence small-angle neutron scattering (GISANS) measurements are performed at the beginning and in between the kinetic processes to gain detailed information about the thin film morphology at different temperatures.
Speaker: Lucas Kreuzer (TU München, Physik Department, E13)
• 5:40 PM
Phonon renormalization in LaCoO$_3$ 20m
LaCoO$_3$ features two broad crossovers observed around $T_1 = 100 \,\rm{K}$ and $T_2 = 200 \,\rm{K}$. These crossovers are typically associated with the temperature-dependent population of excited spin states of the Co$^{3+}$ ion, which evolves upon heating from the low-spin (LS), $S = 0$, to the high-spin (HS), $S = 2$, configuration. Since the CoO$_6$ octahedra expand around the larger HS sites, a static LS-HS order was proposed by Goodenough in the 1960s [1] but was never confirmed experimentally. More recent studies [2,3] propose a dynamic short-range order of alternating LS and HS Co sites. The corresponding dynamic distortion of the crystal lattice closely mimics the Co-O breathing mode. Here, we use inelastic neutron scattering to study the lattice dynamics of LaCoO$_3$ over a wide temperature range, $5\,\rm{K} \leq T \leq 700\,\rm{K}$. We find strong phonon renormalization of low- as well as high-energy phonon modes with periodicities corresponding to the proposed superlattice.
[1] P. M. Raccah and J. B. Goodenough, Physical Review 155, 932 (1967).
[2] J. Kuneš and V. Křápek, Physical Review Letters 106, 256401 (2011).
[3] V. Křápek et al., Physical Review B 86, 195104 (2012).
Speaker: Frank Weber (Karlsruhe Institute of Technology)
• 5:40 PM
PMMA-b-PNIPAM thin films display cononsolvency driven response in mixed water-methanol atmospheres 20m
The diblock copolymer PMMA-b-PNIPAM forms micelles in solution that feature a permanently hydrophobic core and a thermo-responsive shell. While a typical shell collapse transition can be induced via a temperature stimulus at the LCST, the PNIPAM block is also sensitive to the composition of the surrounding solvent. Although water and organic cosolvents individually act as good solvents for the PNIPAM chain, mixtures of both act as a bad solvent. As a consequence, the transition temperature shifts as a function of the molar fraction of the cosolvent. For PNIPAM, well-known examples of cosolvents include simple alcohols such as methanol or ethanol, as well as acetone. We demonstrate that the cononsolvency effect is transferable from solution to thin film systems. PMMA-b-PNIPAM films swollen in saturated water vapor show swelling and collapse upon exchange of the surrounding atmosphere to a mixed vapor of water and cosolvent. The film kinetics are investigated with a focus on time-of-flight neutron reflectometry (TOF-NR) and spectral reflectance techniques. In order to differentiate between the water and cosolvent distributions along the vertical direction of the films, sequential experiments with deuterated and non-deuterated water and cosolvent are performed. Complementary FTIR measurements reveal the hydration and cosolvent exchange processes at the PNIPAM amide and alkyl functional groups.
Speaker: Christina Geiger (Technical University of Munich, Chair of Functional Materials)
• 5:40 PM
POWTEX – Angular- and Wavelength Dispersive, High-Intensity Neutron TOF Diffractometer 20m
POWTEX is a TOF neutron powder diffractometer under construction at MLZ. Funded by Germany’s Federal Ministry of Education and Research (BMBF), it is built by RWTH Aachen University and FZ Jülich, with contributions for dedicated texture sample environments from the Geo Science Centre of Göttingen University.
An instrument overview and the advances made in neutron instrumentation will be presented. Several new concepts were developed, including a novel 10B detector and a double-elliptic neutron-guide system sharing focal points at the positions of the pulse chopper and the sample. The common focal point is an "eye of a needle" in time and space, optimizing time resolution and reducing the source background. The guide features an octagonal cross section with graded supermirror coating, which results in Gaussian intensity and divergence distributions. The innovative jalousie detector based on solid 10B is a development for POWTEX that achieves high efficiency over a remarkably large coverage of nine steradians with almost no blind spots.
POWTEX aims for short measurement times and gives access to in situ chemical experiments, e.g., phase transitions as a function of T, p and B0. For texture analysis (in situ deformation, annealing, simultaneous stress, etc.), the large angular coverage drastically reduces the need for sample tilting/rotation. We developed new algorithms for refining angular- and wavelength-dispersive data sets (intensity as a function of 2θ and λ).
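The refinement of angular- and wavelength-dispersive data rests on the fact that every event (2θ, λ) maps onto a d-spacing via Bragg's law, λ = 2 d sin θ; a minimal sketch with synthetic events from a single assumed reflection:

```python
import numpy as np

# Synthetic diffraction events for one hypothetical reflection at d = 2.0 Å,
# detected over a wide angular coverage as on an angular- and
# wavelength-dispersive TOF instrument.
rng = np.random.default_rng(0)
d_true = 2.0
two_theta = rng.uniform(30.0, 150.0, 10000)                  # degrees
lam = 2.0 * d_true * np.sin(np.radians(two_theta) / 2.0)     # Bragg condition

# Reduction: every (2θ, λ) pair collapses onto the same d-spacing.
d = lam / (2.0 * np.sin(np.radians(two_theta) / 2.0))
print(d.min(), d.max())
```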
Speaker: Yannick Meinerzhagen
• 5:40 PM
Precursor engineering of two-step slot-die coated perovskite layers by TBP, MAI and DMSO addition 20m
The progress of hybrid perovskite materials has amazed the scientific community in the photovoltaic field, demonstrating rapid performance gains within the last 10 years and reaching above 25% power conversion efficiency. Now, ways to move from lab methods (e.g. spin-coating) to large-scale production must be investigated. Such methods include, e.g., roll-to-roll deposition, spray coating and sputtering. One roll-to-roll compatible method is slot-die coating, which has several advantages: low waste of material, higher production speed and the possibility to print on flexible substrates.
To reach highly homogeneous, defect-free, uniform films, a two-step methylammonium lead iodide (MAPI) deposition is implemented. 4-tert-butylpyridine-assisted and methylammonium iodide-seeded solutions of lead iodide in dimethylformamide/dimethyl sulfoxide with different ratios, as well as their combination, are deposited by slot-die coating with a home-built printer. The surface morphology is altered by the addition of these solvents, and those changes are investigated by SEM and XRD. Preferential orientation is studied by GIWAXS. Conversion to MAPI is tested and analyzed by XRD.
The results of this work can improve the quality of deposited PbI2 films in the two-step perovskite deposition method, leading to full conversion of the perovskite and better quality of the final layer.
Speaker: Oleg Shindelov
• 5:40 PM
Printed block copolymer templated ZnO photoanodes for photovoltaic applications 20m
ZnO has received much attention over the past years because of its wide range of properties, including high transparency, piezoelectricity, wide-bandgap semiconductivity, high electron mobility and low crystallization temperature. To improve the photovoltaic performance of ZnO-based hybrid solar cells, an interconnected mesoporous inorganic nanostructure is favorable, as it provides a high surface-to-volume ratio for exciton separation within the exciton lifetime and a good pathway for charge carrier transport. To fabricate mesoporous ZnO semiconductors, various methods can be employed, such as chemical vapor deposition, wet chemical methods and hydrothermal synthesis. Among these, the diblock copolymer assisted sol-gel synthesis approach has been corroborated by countless reports to be powerful in its morphology tunability.
In the present work, an amphiphilic diblock copolymer is used as the template, and suitable printing parameters are selected to fabricate mesoporous ZnO films with varied morphologies. Grazing-incidence small-angle X-ray scattering (GISAXS) is used to probe the inner film morphology without interfering with the film formation process or impairing the printed films.
Speaker: Ting Tian
• 5:40 PM
Drug-loaded polymer micelles or nanoparticles are being continuously explored in the fields of drug delivery and nanomedicine. Commonly, a simple core-shell structure is assumed, in which the core incorporates the drug and the corona provides steric shielding and colloidal stability and prevents protein adsorption. Recently, the interactions of the dissolved drug with the micellar corona have received increasing attention. Here, using small-angle neutron scattering, we provide an in-depth study of the differences in polymer micelle morphology for a small selection of structurally closely related polymer micelles at different loadings with the model compound curcumin. This work supports a previous study using solid-state nuclear magnetic resonance spectroscopy, and we confirm that the drug resides predominantly in the core of the micelle at low drug loading. As the drug loading increases, the neutron scattering data suggest that an inner shell is formed, which we interpret as the corona also starting to incorporate the drug, whereas the outer shell mainly contains water and the polymer. The presented data clearly show that the inner morphology and the impact of the hydrophilic block can be important parameters for improved drug loading in polymer micelles, and they provide insights into the structure-property relationship.
Speaker: Benedikt Sochor (University Würzburg)
• 5:40 PM
PUMA: thermal three-axes spectrometer equipped with multi-analyzer and unique polarization option 20m
In addition to the normal three-axes mode, PUMA is equipped with a multi-analyzer and multi-detector setup consisting of 11 arbitrarily configurable analyzer-detector channels, suited for kinetic experiments realizing an entire momentum and energy scan in a single shot. Moreover, the same setup can also be used for neutron polarization experiments to determine the spin-flip and non-spin-flip components simultaneously in the same state of the sample. Here we show the current status of PUMA with the multi-analyzer setup.
Speaker: Jitae Park
• 5:40 PM
Quantum cascade laser-based infrared spectrometer combined with small angle neutron scattering for life science applications 20m
Infrared spectroscopy serves as a local probe reporting on specific vibrations in side chains which are spectrally distant from the complicated infrared spectrum of a protein in solution. But it can also serve as a global probe using the coupling of the amide I or amide II vibrations of the protein backbone. Here, infrared spectroscopy can give information on the fold of the protein and also follow aggregation phenomena. Small-angle neutron scattering also reports on the global structure of proteins in solution and can give information on the shape of growing aggregates or of the folded protein in solution. Both techniques prefer heavy (deuterated) water over normal water. Pioneering work on the combination of SANS and IR spectroscopy using FTIR spectrometers has been performed by Prof. Kaneko and coworkers [1].
In the framework of a BMBF-funded project, we would like to explore the capabilities of quantum cascade lasers for this combination of methods. Their advantage is superior beam characteristics and spectral density compared to the globar infrared light sources of FTIR spectrometers. Their disadvantage is the more complicated mode of operation and the limited spectral width they can cover.
This contribution will focus on showing conceptual design considerations and first characterizations of potential samples, since the project just started recently (May 2020).
[1] Kaneko et al, Development of a Simultaneous SANS/FTIR Measuring System, Chemistry Letters, 2015, 44, 497-499
• 5:40 PM
REFSANS: The horizontal time-of-flight reflectometer with GISANS option at the Heinz Maier-Leibnitz Zentrum 20m
REFSANS is the horizontal ToF reflectometer at the MLZ in Garching. It is designed to carry out specular and off-specular reflectivity, as well as GISANS studies of solid/liquid, solid/air and liquid/air interfaces. Through ToF analysis, REFSANS gives simultaneous access to a range of Q values (with Qmax/Qmin up to ≈ 7), useful to study air-liquid interfaces and kinetic phenomena.
A chopper system composed of six discs allows a tunable wavelength resolution, from 0.2% up to 10%. The neutron optics of REFSANS comprise neutron guide elements with different channels and special apertures to provide, on the one hand, slit-smeared beams for conventional reflectometry and, on the other hand, point-focused beams for GISANS measurements. Furthermore, it is possible to independently control the horizontal and vertical beam divergence, depending on the sample characteristics.
Given the ToF nature of REFSANS, the investigation of kinetic processes is based on the possibility to cover a Q-range with a single instrumental setting. The time resolution can be pushed down to 30 s with data recorded in event mode; this feature makes it possible to perform various time re-binnings after the experiment. Besides the typical sample environment, a compact three-electrode electrochemical cell was recently realized for the investigation of phenomena at electrode surfaces. Currently, a humidity cell is being designed to allow investigations of processes in a controlled atmosphere.
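The event-mode reduction underlying such time re-binning can be sketched as follows: each event's time of flight is converted to a wavelength via the de Broglie relation λ = h t / (m_n L) and, together with the incidence angle, to a momentum transfer Q = 4π sin(θ)/λ. The flight path and angle below are assumed values for illustration, not REFSANS parameters.

```python
import numpy as np

h = 6.626e-34        # Planck constant, J s
m_n = 1.675e-27      # neutron mass, kg
L = 10.0             # m, assumed flight path
theta = np.radians(0.5)   # assumed incidence angle

t = np.array([0.005, 0.010, 0.020])     # s, example event times of flight
lam = h * t / (m_n * L) * 1e10          # wavelength in Å
Q = 4.0 * np.pi * np.sin(theta) / lam   # momentum transfer in Å^-1
print(lam)
print(Q)
```

Because every event carries its own (t, pixel) record, the same event list can be re-binned into arbitrary time slices after the experiment.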
Speaker: Gaetano Mangiapia (German Engineering Materials Science Centre (GEMS) am Heinz Maier-Leibnitz Zentrum (MLZ))
• 5:40 PM
Replacing MultiView and LabView with NICOS 20m
MGML (Materials Growth & Measurement Laboratory) in Prague is an open research infrastructure providing access to an instrument suite dedicated to measurements of a rich spectrum of physical properties of materials in a wide range of temperatures, magnetic and electric fields, and hydrostatic and uniaxial pressures. In total there are 18 furnaces, 3 diffractometers, 5 room-temperature instruments and 8 cryostats.
The NICOS software [1] is used on every instrument as an experimental logbook, providing user authentication against the user office system and uploading measured data to central file storage after the experiment. All cryostats can be controlled with NICOS: a Cryogenics 20 T magnet, a Leiden Cryogenics 9 T magnet with dilution insert and an Oxford Instruments Triton cryo-free dilution refrigerator with a 4 T vector magnet. All Quantum Design instruments (3x PPMS, 2x MPMS) can also be controlled with NICOS via SECoP [2].
We will present how to use NICOS for resistivity measurements (Keithley K6221/K6220 + K2182A, DC using the delta method, up to a few GΩ; or lock-in amplifiers SR830 (+K6221) and SR865, up to 4 MHz; and also Keithley K6517B, DC, up to and beyond TΩ). A special setup also exists for the measurement of thermal conductivity and Seebeck coefficients.
Our users appreciate the user-friendly instrument software and, in addition, are already familiar with NICOS when they come to a neutron facility for the first time.
[1] https://nicos-controls.org/
[2] https://github.com/SampleEnvironment/SECoP
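The delta method mentioned above for DC resistance measurements cancels constant thermoelectric offsets by reversing the current polarity; a minimal numerical sketch with illustrative values (not instrument readings):

```python
I = 1.0e-3            # A, source current
R_true = 123.4        # Ω, assumed sample resistance
V_offset = 5.0e-4     # V, constant thermal EMF that would bias a plain DC reading

V_plus = R_true * I + V_offset       # voltmeter reading at +I
V_minus = -R_true * I + V_offset     # voltmeter reading at -I

# Differencing the two readings cancels the offset exactly.
R = (V_plus - V_minus) / (2.0 * I)
print(R)
```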
Speaker: Petr Čermák (MGML, Charles University)
• 5:40 PM
Rotational and long range diffusion in a lithium amide–lithium borohydride mixture 20m
On-board hydrogen storage is still a challenge for fuel cell vehicles and other mobile applications. Complex hydrides, which contain ions such as BH4- and NH2-, combine a high hydrogen capacity with a low weight of the storage material. For example, Li4BH4(NH2)3 contains 11.1 wt.% hydrogen and desorbs more than 10 wt.% at 573-673 K. In previous studies the high desorption temperature was reduced with additives. To understand the chemical behaviour and atomic motions of Li4BH4(NH2)3, we present an in situ phase analysis and quasielastic neutron scattering (QENS) during heating.
In situ X-ray diffraction was measured up to 573 K at P02 (DESY), and QENS was taken at TOFTOF (MLZ) in the temperature range 300-514 K. Li4BH4(NH2)3 melts at 494 K; during heating, crystallization of a second phase was detected and identified as LiNH2, which remained as a crystalline residue in the melted material. From the quasielastic signal, rotational and long-range motions were analysed and assigned to the BH4- and NH2- ions of Li4BH4(NH2)3 and of the crystallized LiNH2 phase.
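For long-range (continuous) diffusion, the quasielastic Lorentzian half-width follows $\Gamma(q) = \hbar D q^2$, so the diffusion coefficient can be extracted from a linear fit of $\Gamma$ versus $q^2$; a sketch with an assumed diffusion coefficient and synthetic widths:

```python
import numpy as np

hbar_meV_s = 6.582e-13               # ħ in meV·s
D_true = 1.0e11                      # assumed diffusion coefficient, Å²/s (~1e-5 cm²/s)
q = np.linspace(0.3, 1.8, 10)        # Å^-1
gamma = hbar_meV_s * D_true * q**2   # synthetic Lorentzian HWHM in meV

# The slope of Γ versus q² is ħD.
slope = np.polyfit(q**2, gamma, 1)[0]
D_fit = slope / hbar_meV_s
print(D_fit)
```

A localized rotational motion would instead show a q-independent width, which is how the two contributions can be separated in the analysis.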
Speaker: Neslihan Aslan (HZG, GEMS at MLZ)
• 5:40 PM
Sample Environment at MLZ 20m
We report on the newest developments in sample environment at the MLZ.
Speaker: Dr Alexander Weber (Forschungszentrum Jülich GmbH)
• 5:40 PM
Separation of the Formation Mechanisms of Residual Stresses in LPBF 316L 20m
Rapid cooling rates and steep temperature gradients are characteristic of additively manufactured parts and important factors for the residual stress (RS) formation.
This study examined the influence of heat accumulation on the distribution of RS in two prisms produced by Laser Powder Bed Fusion (LPBF) of austenitic stainless steel 316L.
The layers of the prisms were exposed using two different border fill scan strategies: one scanned from the centre to the perimeter and the other vice versa. The goal was to reveal the effect of different heat inputs on samples featuring the same solidification shrinkage. RS were characterised in one plane perpendicular to the building direction at the mid height using Neutron and Lab X-ray diffraction. Thermography data obtained during the build process were analysed to correlate cooling rates and apparent surface temperatures with the residual stress results. Optical microscopy and micro computed tomography were used to correlate defect populations with the residual stress distribution.
The two scanning strategies led to RS distributions typical for additively manufactured components: compressive stresses in the bulk and tensile stresses at the surface. However, due to the different heat accumulation, maximum RS levels differed.
We concluded that solidification shrinkage plays the major role in determining the shape of the RS distribution and the temperature gradient mechanism appears to determine the magnitude of peak RS.
Speaker: Alexander Ulbricht
• 5:40 PM
Short-Time Self-Diffusion of Salt- and Temperature-Dependent Protein Clusters 20m
Salt-induced charges in aqueous suspensions of proteins can give rise to complex phase diagrams including homogeneous solutions, large aggregates, and reentrant dissolution regimes. Moreover, depending on the temperature, a liquid-liquid phase separation may occur within the aggregation regime. Here, we systematically explore the phase diagram of the globular protein BSA via its dynamics as a function of temperature $T$ and protein concentration $c_p$ as well as of the concentrations $c_s$ of trivalent salts YCl$_3$ and LaCl$_3$. By employing incoherent neutron backscattering spectroscopy at BASIS (SNS) with energy transfers up to 100 $\mu$eV, we unambiguously access the global and internal short-time self-diffusion of the protein clusters depending on $c_p,c_s$ and $T$. We determine the cluster size in terms of effective hydrodynamic radii as manifested by the cluster center-of-mass diffusion coefficients $D$. For both salts, we find a simple functional form $D(c_p,c_s,T)$ in the parameter range explored. The master-curve observed previously [1] can be confirmed also for different temperatures and different salts. The salt-specific calculated binding probabilities and inter-particle attraction strengths, based on the short-time microscopic diffusive properties, increase with salt concentration and temperature in the regimes investigated and can be linked to the macroscopic behavior and to microscopy data.
[1] M. Grimaldo et al. J. Phys. Chem. Lett. 6 (2015)
Speaker: Tilo Seydel (Institut Max von Laue - Paul Langevin)
• 5:40 PM
Spray deposited anisotropic magnetic hybrid thin films containing PS-b-PMMA and strontium hexaferrite magnetic nanoplates 20m
Spray deposition is employed to fabricate anisotropic ferromagnetic thin films composed of the ultrahigh molecular weight diblock copolymer (DBC) polystyrene-block-poly(methyl methacrylate) and strontium hexaferrite nanoplates functionalized with hydrophilic groups. During spray deposition, the kinetics of structure evolution of the hybrid films is monitored in situ with grazing incidence small angle X-ray scattering. A pure polymer film is also deposited under the same conditions as a reference. The obtained final hybrid film is then solvent annealed to increase the domain size of the DBC for the incorporation of more nanoplates. Due to the rearrangement of the nanoplates inside the DBC during solvent annealing, an obvious change in the magnetic behavior of the hybrid film is observed via superconducting quantum interference device investigation. While the hybrid film exhibits clear magnetic anisotropy before solvent annealing, afterwards it shows only extremely weak magnetic anisotropy.
Speaker: Mr Wei Cao (TU München, Physik-Department)
• 5:40 PM
Sputter Deposition of Silver on Nanostructured PMMA-b-P3HT Copolymer Thin Films 20m
Nanostructured polymer-metal-composite films demonstrate great perspectives for optoelectronic applications, e.g. as sensors [1] or organic photovoltaics (OPV) [2]. To enhance the properties of such devices, the metal cluster self-assembly process needs to be understood [3, 4]. We correlate the emerging nanoscale morphologies with electronic properties and quantify the difference in silver growth, comparing the diblock copolymer template with its corresponding homopolymer thin film counterparts [5]. Hence, we are able to determine the influence of the respective polymer blocks and to observe substrate effects on the silver cluster percolation threshold [5], which affects the insulator-to-metal transition (IMT). In this contribution, we investigate the silver cluster morphology during growth on a PMMA-b-P3HT block copolymer template. Such block copolymer templates are used to install tailored nanostructures in OPV, as they contain one p-type organic semiconductor (P3HT) [2]. We applied grazing incidence small-angle X-ray scattering (GISAXS) to observe the cluster formation, as well as grazing incidence wide-angle X-ray scattering (GIWAXS) to follow the crystallinity of the metal film, both in situ during sputter deposition. Our study reveals the selective wetting of silver on one of the polymer blocks and the influence of the template on the percolation behavior of the silver layer, which is measured with resistivity measurements during the sputter deposition.
Speaker: Marc Gensch (DESY/TUM)
• 5:40 PM
Stimuli-Responsive Micelles from Amphiphilic Diblock Copolymers 20m
Stimuli-responsive block copolymers self-assemble in aqueous solution and respond to changes of their environment, rendering them useful as smart nanocarriers for drug delivery and gene therapy. In the present project, we investigate responsive micelles formed by PDMAEMA-b-PLMA or PDMAEMA-b-PLMA-b-POEGMA [1,2]. PDMAEMA is a weak cationic polyelectrolyte and responsive to pH, ionic strength and temperature, whereas PLMA is strongly hydrophobic, enabling the delivery of hydrophobic drugs. POEGMA is permanently water-soluble and improves biocompatibility. Dynamic light scattering on PDMAEMA70-b-PLMA39 revealed that, at pD 2.8, self-assembled structures form, whose relatively large size points to vesicle formation. At pD 7.8 and 10.4, additional large aggregates are present up to a certain temperature. Detailed structural information is obtained from small-angle neutron scattering (SANS) at KWS-2 at MLZ, confirming the differences of the micellar structures in acid or alkaline solution.
Speaker: Yanan Li (Technische Universität München, Physik-Department, Fachgebiet Physik weicher Materie)
• 5:40 PM
Structural Properties of Micelles formed by Telechelic Pentablock Quaterpolymers with pH-responsive Midblocks and Thermoresponsive End Blocks in Aqueous Solution 20m
Stimuli-responsive polymers are of interest for applications in drug delivery or tissue engineering. Telechelic block copolymers, where a pH-responsive midblock is end-capped by thermo-responsive end blocks, have great potential due to their ability to form highly tunable micelles or hydrogels.
In the present work, micelles formed by the telechelic pentablock quaterpolymer P(nBuMA8-co-TEGMA8)-b-PDMAEMA50-b-PEG46-b-PDMAEMA50-b-P(nBuMA8-co-TEGMA8) in dilute aqueous solution are investigated as a function of temperature and pH. The endblocks are statistical copolymers of the thermo-responsive TEGMA (triethylene glycol methyl ether methacrylate) and the hydrophobic nBuMA (n-butyl methacrylate). The intermediate PDMAEMA poly(2-(dimethylamino)ethyl methacrylate) block is a weak cationic polyelectrolyte. The hydrophilic poly(ethylene glycol) (PEG) block ensures water-solubility. Using small-angle neutron scattering (SANS) at KWS-1, FRM II, we found that the micelles have a spherical core and a strongly swollen corona. Their aggregation number and size depend sensitively on the pH and temperature. At low temperatures, some polymers form dangling ends, especially at low pH values. With increasing temperature, dangling ends transform into loops at high pH values, while the dangling ends are more abundant at low pH values. In summary, the micelles show complex responsive behavior, including crosstalk between the stimuli.
Speaker: Mr Florian A. Jung (Technische Universität München)
• 5:40 PM
Structure and dynamics of polyelectrolytes in water solution 20m
Intrinsically disordered proteins (IDP) challenge the classical structure function paradigm in structural biology as they have specific function without fixed structure. Specifically, the dynamics of flexible chains seems to be of great importance for fast response to environmental conditions. Since proteins and, in particular, IDPs have properties of charged polymer chains, polyelectrolytes (PE) can be used as model system to study response of charged chains to environmental changes.
We explore the structure and dynamics of polystyrene sulfonic acid (PSSH) and its sodium salt (PSSNa), well-known polyelectrolytes, in solutions with low to high ion concentration (H+ and NaCl). The concentration of PE is well below the overlap concentration to examine the single-chain regime. For single PE chains, a transition from coils to globules is observed. Moreover, under some conditions, ion condensation leads to a pearl-necklace conformation.
To elucidate the structure and form factor of PSSH, SAXS and SANS (MLZ) measurements were conducted. Their combined analysis, performed over a large Q-range, allows us to examine the NaCl contribution and determine the details of the intrachain structure.
An NSE experiment (ILL) clearly shows a change of polyelectrolyte dynamics as a function of salt concentration. Analysis of the relaxation dynamics shows a change from rigid-body behavior (collapsed chains) to Zimm-like dynamics, as expected for strongly screened flexible polymer chains. The effect is temperature dependent.
Speaker: Ekaterina Buvalaia
• 5:40 PM
Structure of Composite Materials of pNIPAM Brushes and Magnetic Nanoparticles 20m
Polymer chains grafted to a substrate by one end are usually referred to as polymer brushes (PBs). They are extensively used as thin surface coatings, enabling a tuneable film thickness as well as high chemical and mechanical stability. Further, a high versatility arises from the various monomers that can be utilized, which may introduce sensitivity to external stimuli, e.g. temperature or ionic strength [1].
Since their first description and experimental realization, the range of possible shapes and applications has been growing fast and is still expanding [2]. Apart from the interesting intrinsic properties, PBs are a suitable matrix for the attachment of additional components like nanoparticles or other functional materials. In the last few years, the focus shifted more towards the interplay between PBs and other materials in order to generate specific features, like on-demand drug delivery or sensing [3].
In this work, the adsorption behaviour of citric acid capped magnetic nanoparticles (MNPs) at poly(N-isopropyl acrylamide) brushes is investigated. The MNP concentration as well as the pH value during the adsorption are varied to control the loading of the PBs with MNPs. In order to localize the MNPs at the PB, the structure of the composite material is characterized with neutron reflectometry.
[1] S. Christau et al. Macromolecules, 2017, 50, pp. 7333-7343
[2] W. Chen et al. Macromolecules 2017, 50, pp. 4089−4113
[3] W. Górka et al. Nanomaterials 2019, 9(3), 456
Speaker: Philipp Ritzert (Technische Universität Darmstadt, Institut für Festkörperphysik)
• 5:40 PM
Structured graphite anodes for Li-ion batteries 20m
Laser structured electrodes for Li-ion batteries have been reported as a promising approach for improvement of battery performance. The contact area between the electrolyte and active material in the electrode can be modified as a result of the three-dimensional structured electrode surface. The effective Li-ion diffusion pathways are shortened during the charging and discharging of the cell. Surface structuring can potentially reduce cell internal resistance, which has a positive impact on the battery performance at high C-rates. In this work, electrochemical properties of the laser structured and unstructured graphite anodes in fresh and aged NMC/C cells were studied. The aim was to examine cell performance at high C-rates. NMC/C pouch cells were studied via in-situ neutron diffraction, an important method for characterizing the structural changes before/after the intercalation of Li into graphite. It has been confirmed that the electrochemical performance of the laser structured electrodes has been improved and that there are no structural changes present in the active material caused by laser irradiation. These results bring many insights for our future work in the area of structured 3D electrodes. One possibility to shorten the ion diffusion paths in the battery cells is by preparing the electrodes using additive manufacturing. This method offers many opportunities for innovative electrode and cell design, thanks to its high precision and diversity.
Speaker: Ivana Pivarníková (Technical University of Munich, Heinz Maier-Leibnitz Zentrum (MLZ))
• 5:40 PM
Studying the dynamics of PTB7:PCBM blend films with quasielastic neutron scattering 20m
In organic photovoltaics, donor-acceptor bulk heterojunctions are often used as the active layer due to their superior performance compared to, e.g., planar structured devices. In this optically active polymer layer, a photon is absorbed and an exciton created. After diffusion to a donor-acceptor interface, the exciton is dissociated and charge carriers can be extracted at the electrodes.
A frequently applied and well-studied system is the combination of P3HT ((C10H14S)n) as electron donor and PCBM (C72H14O2) as electron acceptor. Previous studies have shown that internal dynamics and structural layout of the active layer influence its electronic properties and thus its performance in a device.
A more modern, very promising low-band gap electron donor material is PTB7 ((C41H53FO4S4)n). We investigated films of PTB7, PCBM and a mixture of these two, prepared out of chlorobenzene solutions. On these films we performed first quasielastic neutron scattering experiments at the cold neutron time of flight spectrometer TOFTOF (MLZ, Garching). Hydrogen dynamics of pure compounds as well as blend films are investigated on a pico- to nanosecond timescale in a temperature range from 150 K to 400 K. Results are compared with the established P3HT:PCBM system.
Speaker: Dominik Schwaiger (TUM Physik E13)
• 5:40 PM
Technical design of a levitated dipole for confinement of a low-temperature, long-lived, electron-positron plasma 20m
A low-temperature, long-lived (LTLL) electron-positron pair plasma has never been produced in a laboratory environment. The APEX project aims to do so by accumulating positrons from the NEPOMUC beam at MLZ and injecting them into a magnetic trap formed with a levitated coil in order to study the unique plasma behavior pair plasmas are expected to exhibit. We present technical design plans for this experiment. A closed coil wound with high-temperature REBCO superconducting tape will produce the dipole field. The closed dipole coil (floating coil) will be magnetically levitated by use of a water-cooled copper coil (lifting coil) located above the floating coil. A feedback circuit will vary the lifting coil current in response to input from three laser rangefinders. A cooled radiation shield (RS) insulates the floating coil from room temperature radiation. We estimate a total levitation time on the order of hours. The RS is segmented into eleven electrodes. ExB drift is utilized to move incoming positrons onto closed field lines. The floating coil is mechanically lifted into place and cooled by retracting into a small sub-chamber, which is then pressurized with helium to provide thermal contact with the cold faces. The superconducting charging coil is integrated into this sub-chamber, allowing the floating coil to sit on-plane with the charging coil, thus enabling efficient inductive charging. Assembly and first tests with positrons are expected early 2021.
Speaker: Alexander Card (Max-Planck-Institut für Plasmaphysik)
• 5:40 PM
The Absolute Direction of the Dzyaloshinskii-Moriya Interaction in Hematite Determined by Polarized Neutron Diffraction 20m
Polarized neutron diffraction (PND) is a powerful method which provides direct access to the scattering contribution from nuclear-magnetic interference and thus reveals the phase difference between the nuclear and magnetic structure. This technique can be utilized to gain a detailed insight in the microscopic spin ordering at the unit cell level even for complex magnetic structures. Since magnetic domains correspond to an overall phase shift between the nuclear and magnetic structure, PND also allows to resolve different magnetic domain configurations providing additional information at the mesoscopic scale. This qualifies PND as a versatile tool to simultaneously address a wide range of scientific issues. By conducting a detailed PND study of the prototypical room-temperature weak ferromagnet α-Fe$_2$O$_3$ (hematite) we could solve the long standing problem of inconsistent asymmetry signs observed within Friedel pairs in hematite. Moreover, using a detailed symmetry analysis the absolute direction of the Dzyaloshinskii-Moriya interaction (DMI) vector in α-Fe$_2$O$_3$ could be determined for the first time. This study is supported by a detailed refinement of the slightly canted magnetic structure and by numerical calculations. It can be used as a reference for further DMI sign determinations, reducing the experimental efforts to the measurement of one representative reflection, making it well suited for highly topical materials often requiring extreme sample environment.
Speaker: Mr Henrik Thoma (Jülich Centre for Neutron Science JCNS at MLZ)
• 5:40 PM
The Coincidence Doppler-Broadening Spectrometer at NEPOMUC 20m
Doppler-broadening spectroscopy (DBS) of the $511\,\textrm{keV}$ gamma line generated by positron-electron annihilation provides information on lattice defects. It is sensitive to concentrations as low as $10^{-7}$ vacancies per atom. In addition, the chemical surroundings of defects can be analyzed by coincidence DBS (CDBS). The current status and recent improvements of the CDB spectrometer at the Neutron-induced Positron Source Munich (NEPOMUC) are presented.
The maximum probing depth of the positron beam is material dependent and varies from hundreds of nm for heavy metals to a few micrometers in, e.g., Si. Two beam modes are available: standard measurements with a $\approx 300\ \mu\textrm{m}$ (FWHM) beam spot and high resolution measurements with a micro beam with a spatial resolution of $33\ \mu$m (FWHM). Measurements may either be conducted as DBS, where the signal at each detector is treated separately, or as CDBS, where the detectors are run as coincidence pairs, greatly improving the signal-to-noise ratio. Currently, three different sample holders are available: i) a piezo x-y stage for precision 2D scanning and hence 3D defect imaging, ii) a heatable sample holder with T$_{max}=1100\ \textrm{K}$ for temperature dependent defect spectroscopy, iii) a cryostat with T$_{min}=40\ \textrm{K}$.
The improvements comprise an automated beam optimization system and the increase in the number of detectors combined with an upgrade of the readout electronics.
• 5:40 PM
The cold neutron imaging beam line ANTARES 20m
The cold neutron imaging beam line ANTARES at FRM II is a state-of-the-art facility which combines excellent beam properties with highly flexible experimental conditions. User experiments can be performed with complex sample environments like cryostats, furnaces or tensile rigs.
In this poster we give an overview of the beam line layout and the available options. Moreover, we show examples of selected experiments performed at ANTARES to demonstrate the potential of the beam line.
Speaker: Michael Schulz
• 5:40 PM
The effect of CsBr doping on the crystallization kinetics of perovskite films 20m
In recent years, organic-inorganic hybrid perovskite solar cells (PSCs) have made great progress due to their superior optoelectronic properties, including a high absorption coefficient, high defect tolerance, and long charge carrier diffusion lengths. Benefiting from these excellent properties, the power conversion efficiency (PCE) of PSCs has improved from 3.9% to a certified 25.2%, with great development prospects. In this work, we demonstrate that doping a small amount of CsBr into the perovskite component can tune the crystallization behavior and bandgap, promote energy level alignment between the perovskite active layer and the electron transport layer, and accelerate carrier transport and extraction. In addition, grazing incidence wide angle X-ray scattering (GIWAXS) is used to study the crystal structure and crystal orientation. As a result, we obtain high performance devices with a PCE of 19.24%.
Speaker: Yuqin Zou
• 5:40 PM
The Fierz interference term and recent PERKEO III measurements 20m
Neutron beta decay is an excellent system to test the Standard Model theory of the weak interaction and the structure of the charged weak interaction. The Fierz interference term is one of the parameters to study, as it is sensitive to hypothetical scalar and tensor interactions. These interactions are currently most strongly constrained by combining measurements of λ, τ and super-allowed nuclear decays.
In the past, experiments at the ILL have determined the ratio of axial-vector and vector coupling constants λ = g_A/g_V and the CKM matrix element V_ud in the decay of free neutrons with measurements of the β-asymmetry parameter A and of the neutron lifetime τ. The aim of the current measurement by PERKEO III presented in this poster, which was conducted until recently, is the determination of the Fierz interference term b with a precision of 5×10⁻³ directly from the spectrum of the electrons.
The signature of a hypothetical non-zero Fierz term in neutron beta decay is an extra energy-dependent phase-space contribution. Major systematic effects are hence related to the detector response: calibration, temporal stability, spatial uniformity and non-linearity. With the latest measurement at ILL, we aim to obtain for the first time competitive neutron data with an existing and proven instrument, improving on the only previous result by UCNA (Los Alamos) by a factor of 20, and also establishing the necessary understanding of systematics for future measurements with PERC at the FRM II.
Speaker: Max Lamparth (TUM)
• 5:40 PM
The high resolution neutron backscattering spectrometer SPHERES 20m
The neutron backscattering spectrometer SPHERES (SPectrometer for High Energy RESolution) at MLZ is a third generation backscattering spectrometer with focusing optics and phase-space transform (PST) chopper. It covers a dynamic range of ±31 μeV with a high resolution of about 0.66 μeV and a good signal-to-noise ratio. The instrument performance has been improved over the recent years by different measures. The intensity has been more than doubled by the upgrade of the PST chopper and the focusing guide. The signal-to-noise ratio can be significantly improved by employing the new background chopper.
SPHERES enables investigations on a broad range of scientific topics from the classical applications of backscattering like hyperfine splitting or rotational tunneling to investigations on new materials like high temperature polymer electrolyte fuel cells or novel nano-composites. It is in particular sensitive to the incoherent scattering from hydrogen and allows to access dynamic processes up to a timescale of a few ns. It is hence well suited to study the dynamics in soft-matter materials like polymers or proteins, or to observe the motion of water in confined geometry. Other typical applications include relaxation in viscous liquids or diffusion processes in various systems.
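The quoted energy resolution and the accessible timescale of a few ns are linked by the uncertainty relation t ≈ ħ/ΔE; a quick order-of-magnitude sketch (not an instrument specification):

```python
# Convert a ~0.66 ueV energy resolution of a backscattering spectrometer
# into the corresponding observable timescale via t ~ hbar / dE.
HBAR = 1.054571817e-34  # J*s
EV = 1.602176634e-19    # J per eV

def timescale_from_resolution(dE_ueV):
    """Timescale (s) corresponding to an energy resolution given in micro-eV."""
    return HBAR / (dE_ueV * 1e-6 * EV)

t = timescale_from_resolution(0.66)
print(f"t ~ {t * 1e9:.1f} ns")  # on the order of a nanosecond
```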
Speaker: Michaela Zamponi (Forschungszentrum Jülich GmbH, Jülich Centre for Neutron Science at Heinz Maier-Leibnitz Zentrum)
• 5:40 PM
The Myelin Basic Protein and its Phase Behaviour 20m
The Myelin Basic Protein (MBP) is an essential part of the myelin sheath in almost all vertebrates and, thus, contributes significantly to flawless signal conduction. Here, one of its key properties is the ability to perform a Liquid-Liquid Phase Separation (LLPS), the coexistence of highly concentrated protein phases within a diluted solution.
Microscopy experiments indicated that a LLPS would occur upon the addition of polyethylene glycol (PEG). By using contrast-matched PEG in 100% D2O, USANS experiments at KWS-3 at MLZ confirmed that the optically observed droplets originated from MBP condensates. The droplet size was determined to be in the low μm range, which is in good accordance with DLS measurements. Kinetic studies on the droplet growth have pointed out that an equilibrium size was reached after only a few minutes. Furthermore, the investigations have shown that both coalescence and Ostwald ripening contribute to droplet expansion.
Neutron scattering experiments at KWS-2 revealed unfolding of the proteins as well as increasing size of single MBP molecules upon the addition of PEG. As a complementary technique, CD spectroscopy was used which supported the previous finding.
It is concluded that variations of protein structure and the occurrence of a LLPS are related phenomena which affect each other. Hence, future examinations will cover this effect in detail, as well as droplet growth kinetics of the earliest stages of a LLPS with improved temporal resolution.
Speaker: Igor Graf von Westarp
• 5:40 PM
The new Total Reflection High-Energy Positron Diffractometer at NEPOMUC 20m
It has been shown that Total Reflection High-Energy Positron Diffraction (TRHEPD) is an ideal technique to precisely determine the crystalline structure of the topmost and immediate subsurface layers. Novel materials such as topological insulators or 2D materials can be investigated to determine not only the surface structure, but also the substrate spacing and potential buckling.
We developed a novel TRHEPD apparatus, which is now connected to the high-intensity positron source NEPOMUC at the FRM II. During the first beamtime in spring 2020, it was possible to magnetically guide the positron beam to the experiment, test the electrostatic acceleration up to 15 keV and map the direct beam using a micro channel plate (MCP) assembly. We obtained a parallel beam suitable for diffraction with a diameter of less than 4 mm. We also tested the optional twofold remoderation device in front of the TRHEPD setup that reduces the beam diameter to about 1 mm. These values are in excellent agreement with our simulations. For the next beamtime, we plan to record the first diffraction pattern of a Si(111)-(1x1) hydrogen-terminated surface to benchmark the setup. Recent experimental results will be presented at the meeting.
Speaker: Mr Matthias Dodenhöft (Technische Universität München (TUM) Physik Department E21 und Heinz Maier-Leibnitz Zentrum (MLZ) Lichtenbergstr. 1 85748 Garching, Germany)
• 5:40 PM
The relevance of protein dynamics for protein folding: The case of apomyoglobin 20m
Dynamics of different folding intermediates and denatured states might have implications in understanding protein folding. Apomyoglobin (apoMb) has been investigated using neutron spin-echo spectroscopy (NSE) and SANS [1] and quasielastic neutron scattering (QENS) [2,3] in different states: native-like, partially folded and completely unfolded. Mean square displacements obtained by QENS showed a correlation with the secondary structure content of apoMb [2,3]. However, recent NSE & SANS data offered a detailed picture on the physical nature of slow collective dynamics and different dynamics behavior was observed [1]. While the internal dynamics of the native-like state can be understood using normal mode analysis based on high resolution structural information of myoglobin, for the unfolded and even for the molten globule states, models from polymer science are employed. The Zimm model accurately describes the slowly-relaxing, expanded GdmCl-denaturated state. Dynamics of the acid unfolded and molten globule state are similar in the framework of the Zimm model with internal friction. Transient formation of secondary structure elements in the acid unfolded and presence of α-helices in the molten globule state lead to internal friction to a similar extent, which demonstrates the importance of secondary structure elements as source of internal friction in partially folded proteins.
1. Balacescu et al., Sci. Rep., 2020
2. Stadler et al., J. Phys. Chem. B, 2015
3. Stadler et al., Phys. Chem. Chem. Phys., 2016
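The Zimm model invoked above couples chain segments hydrodynamically, so relaxation is set by solvent viscosity and coil size; a rough sketch of the leading-order scaling (prefactors of order unity omitted, values purely illustrative):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def zimm_time(eta_s, R, T):
    """Order-of-magnitude Zimm relaxation time, tau ~ eta_s * R^3 / (k_B * T),
    for a coil of size R (m) in a solvent of viscosity eta_s (Pa*s) at T (K)."""
    return eta_s * R**3 / (K_B * T)

# Illustrative: a ~3 nm unfolded coil in water (eta ~ 1e-3 Pa*s) at 300 K
tau = zimm_time(1e-3, 3e-9, 300.0)
print(f"tau ~ {tau * 1e9:.1f} ns")  # ns range, matching NSE timescales
```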
• 5:40 PM
The resonant neutron spin echo spectrometer RESEDA 20m
The MIEZE (Modulation of Intensity with Zero Effort) technique is in essence a high-resolution spin echo time-of-flight technique. In contrast to classical neutron spin echo, all beam preparation and therefore all spin manipulation is done BEFORE the sample, opening up the possibility of introducing depolarizing conditions at the sample position. Therefore, magnetic or strongly incoherently scattering samples can easily be measured without loss of signal. Furthermore, it is possible to apply large magnetic fields at the sample position, making MIEZE an excellent tool for studying fluctuations at quantum phase transitions as well as other dynamic magnetic phenomena, such as the melting of superconducting vortex lattices. Several highlights of recent results utilizing measurements from RESEDA using the MIEZE technique will be presented.
Speaker: Johanna K. Jochum
• 5:40 PM
The Robot Positioning System at the Materials Science Diffractometer STRESS-SPEC 20m
The diffractometer STRESS-SPEC is optimised for fast strain mapping and pole figure measurements. Our group pioneered the use of industrial robots for sample handling at neutron diffractometers. However, the current robot is limited in its use due to an insufficient absolute positioning accuracy of up to ±0.5 mm. Usually, an absolute positioning accuracy of 10% of the smallest gauge volume size (which in the case of modern neutron diffractometers is in the order of 1×1×1 mm³) is necessary to allow accurate strain tensor determination and correct centering of local texture measurements. Therefore, the original robot setup at the neutron diffractometer STRESS-SPEC is currently being upgraded to a high accuracy positioning/metrology system. We will present the complete measurement process chain for the new robot environment. To achieve a spatial accuracy of 50 µm or better during strain measurements, the sample position will be tracked by an optical metrology system and actively corrected. The additional use of radial collimators creates more space in the sample environment and enhances the residual stress analysis capabilities for large complex parts. Finally, a newly designed laser furnace can be mounted at the robot flange to conduct texture measurements at elevated temperatures of up to 1300 °C. A brief overview of the STRESS-SPEC instrument and its capabilities as well as first commissioning experiments using the new setup will be given.
Speaker: Martin Landesberger (TUM)
• 5:40 PM
The small-angle scattering instrument SANS-1 at MLZ 20m
We present the features of the instrument SANS-1, a joint project of TUM and HZG [1]. SANS-1 features two velocity selectors with 10% and 6% Δλ/λ and a fast TISANE 14-window double chopper, allowing efficiently tuning flux, resolution, duty cycle and frame overlap, including time resolved measurements with repetition rates up to 10 kHz. The polarization analysis option combines a compensated MEOP and an integrated RF-flipper.
A second key feature is the large accessible Q-range facilitated by the sideways movement of the primary 1m$^2$ detector. Particular attention is hence paid to effects like tube shadowing and anisotropic solid angle corrections that arise at large scattering angles of ~40° on an array of single $^3$He tubes, where a standard cos$^3$ solid angle correction is no longer valid. SANS-1 features a flexible, spacious sample stage equipped with a heavy-duty goniometer, which can host a wide range of sample environments such as a set of sample changers, magnets, ovens, a bespoke dilatometer for in-situ rapid quenching/heating [2] and a dedicated HF-coil system for nanomagnetism/hyperthermia [3].
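Why the standard cos$^3$ correction breaks down at such angles can be illustrated with a minimal numerical sketch. It assumes a flat detector panel and illustrative pixel size and distance (the actual $^3$He tube geometry at SANS-1 requires a more detailed treatment, as the abstract notes):

```python
import numpy as np

def solid_angle_cos3(pixel_area, L, two_theta):
    """Standard flat-panel solid-angle correction: a pixel at scattering
    angle 2theta sits at distance r = L/cos(2theta) and is tilted by
    2theta relative to the line of sight, so
    dOmega = A*cos(2theta)/r**2 = A*cos(2theta)**3 / L**2."""
    return pixel_area * np.cos(two_theta) ** 3 / L**2

def solid_angle_facing(pixel_area, L):
    """Reference case: a pixel that always faces the sample
    (as if mounted on a sphere of radius L)."""
    return pixel_area / L**2

A, L = 8e-3 * 8e-3, 4.0  # 8 mm pixel, 4 m sample-detector distance (illustrative)
for deg in (0, 20, 40):
    t = np.radians(deg)
    ratio = solid_angle_cos3(A, L, t) / solid_angle_facing(A, L)
    print(f"2theta = {deg:2d} deg: cos^3 factor = {ratio:.3f}")
```

At 40° the purely geometric cos$^3$ factor already suppresses the intensity by more than half; on cylindrical tubes the actual projected area deviates from even this form, which is why a dedicated correction is needed.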
We show selected highlights and present our current developments, e.g. a high temperature furnace that works as an insert for the 5T magnet and a future high magnetic field project.
[1] S. Mühlbauer et al., NIMA 832, 297-305, (2016)
[2] TA Instruments, DIL805A/D/T Quenching dilatometer
[3] NB Nanoscale, D5 HF-Generator for Magnetic Hyperthermia
Speaker: Sebastian Muehlbauer
• 5:40 PM
The SoNDe high-flux neutron detector 20m
New high-flux and high-brilliance neutron sources demand a higher count-rate capability in neutron detectors. To achieve that goal, the Solid-State Neutron Detector (SoNDe) project developed a scintillation-based neutron detector. It is capable of fully exploiting the available flux at current and upcoming neutron facilities, such as the European Spallation Source (ESS). [1] In addition to enabling high count rates, one of the design goals was to develop a modular and scalable solution that can also be used in other instruments or different contexts, such as laboratory setups. [2]
Since higher-brilliance and higher-flux sources call for detectors that can handle high flux, especially at pulsed sources with high peak flux, SoNDe provides:
• The capability to handle a flux of more than 50 MHz on a 1x1 m$^2$ detector area
• Pixel resolution down to 3x3 mm$^2$
• Neutron detection efficiency higher than 80%, good gamma-discrimination
• µs time resolution
Count rates of 250 kHz per module (5 cm x 5 cm) were measured under primary-beam conditions at neutron scattering experiments. Combined with the high area coverage of the square modules and the high efficiency of the scintillator, this allows high-flux neutron sources to be used to capacity.
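A back-of-the-envelope check shows that the measured per-module rate is consistent with the quoted full-detector figure, assuming uniform illumination across the detector area:

```python
# Consistency check of the quoted SoNDe rates (illustrative; assumes
# uniform illumination across the full detector area).
module_side_m = 0.05        # 5 cm x 5 cm module
detector_area_m2 = 1.0      # 1 x 1 m^2 detector
rate_per_module_hz = 250e3  # measured under primary-beam conditions

n_modules = detector_area_m2 / module_side_m**2
total_rate_hz = n_modules * rate_per_module_hz
print(f"{n_modules:.0f} modules -> {total_rate_hz / 1e6:.0f} MHz aggregate")
assert total_rate_hz >= 50e6  # exceeds the 50 MHz design figure
```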
[1] S. Jaksch et al., Proceedings of the International Conference on Neutron Optics (NOP2017), 2018, 011019.
[2] S. Jaksch et al., Cumulative Reports of the SoNDe Project July 2017, arXiv:1707.08679, 2017.
Speaker: Sebastian Jaksch (Physicist)
• 5:40 PM
The strengths of small-angle neutron scattering for magnetic nanoparticle characterization 20m
In this talk, we will present our recent advances in applying magnetic small-angle neutron scattering (SANS) for the in-depth characterization of magnetic nanoparticles.
In the first part, we will discuss the benefits of a Bayesian analysis as the new standard for fitting magnetic SANS data of nanoparticle samples [1]. Such a standardized protocol for the refinement of magnetic SANS data is especially useful with biomedical applications of magnetic nanoparticles in mind, as regulatory work regarding prior particle characterization is required to guarantee a safe and effective administration of the particles into the human body.
In the second part, we will demonstrate the unique ability of magnetic SANS to detect very small deviations of the magnetization configuration from the homogeneously magnetized state within nanoparticle (NP) systems [2,3]. The SANS technique has already been used in several other studies to investigate the intra- and interparticle magnetization profile in various NP systems. However, in contrast to the previous works, our analysis is focused on model-independent approaches. Moreover, we employ large-scale micromagnetic continuum simulations to support our findings and to disclose the delicate interplay between the particle size and the magnetization profile within NPs.
[1] Bersweiler et al., Nanotechnology, in press (2020)
[2] Bersweiler et al. PRB 100, 144434 (2019)
[3] Vivas et al. arXiv:2003.08694 (2020)
Speaker: Mathias Bersweiler (University of Luxembourg)
• 5:40 PM
The zero step for degrading perovskite solar cells: What atmosphere should we choose? 20m
The power conversion efficiency (PCE) of perovskite solar cells (PSCs) has reached a champion value of 25.2 %, making this technology competitive with commercial silicon solar cells. Despite such advantages, the application of PSCs is currently limited by the difficulty of combining high performance with operational stability, because the PCE of PSCs can degrade under exposure to heat, light, humidity, and oxygen. So far, degradation research on PSCs has been carried out without an established standard protocol. Therefore, it is necessary to establish a standard protocol for the long-term degradation of PSCs. In this respect, we investigate degradation processes of PSCs under AM 1.5G illumination and different atmospheric conditions with in-situ grazing incidence wide-angle X-ray scattering (GIWAXS) and grazing incidence small-angle X-ray scattering (GISAXS). With these approaches, we can follow the evolution of characteristic structures and of the inner morphology under the respective operational conditions. After understanding the degradation mechanisms in the different atmospheres (nitrogen and vacuum), we can suggest a reasonable atmosphere, which enters the protocol for a standard aging routine to guide future industrial development.
Speaker: Renjun Guo (Physics E13, Technical University in Munich)
• 5:40 PM
Thermal effects on nanoscale morphologies and chemical group vibrations of thermoresponsive double hydrophilic block copolymers in aqueous solutions 20m
Thermoresponsive double hydrophilic block copolymers attract great interest as model scaffolds for pharmaceutical applications due to their potential for controlled drug encapsulation and release. A thorough elucidation of the nanostructure of the formed self-assemblies and its evolution at different temperatures is mandatory to provide tailored design guidelines for targeted therapeutics. We present a summary of the investigation of the internal morphology of aqueous self-assembled nanostructures of novel double thermoresponsive PNIPAM-b-poly (oligo ethylene glycol methyl ether acrylate) (PNIPAM-b-POEGA) block copolymers by small angle neutron scattering (SANS). Our findings suggest a distinct impact of chain-end groups on the self-assembled morphologies, as well as on the interchain/intrachain interactions. The lower critical solution temperature (LCST) of these block copolymer solutions defines a transition crossover from hierarchical morphologies to well-defined nanoscale morphologies at temperatures above the LCST. Our scattering results are complemented by Fourier-Transform Infrared (FTIR) Spectroscopy. The combined FTIR and SANS data reveal that temperature-dependent vibrations of chemical moieties do not necessarily correlate with the analogous structural transitions at the nanoscale. Thereby, our study provides important insights into the morphology of these thermoresponsive double hydrophilic block copolymer scaffolds for pharmaceutical applications.
Speaker: Dr Apostolos Vagias (FRM2 / TUM)
• 5:40 PM
Thin film growth by Molecular Beam Epitaxy for MLZ users 20m
Molecular Beam Epitaxy (MBE) is a versatile tool to fabricate high quality and high purity epitaxial thin films. At MLZ, the Jülich Centre for Neutron Science (JCNS) runs an MBE system to provide samples for users who do not have the expertise and/or the equipment to prepare thin film samples for their neutron experiments.
In other words: If you need thin film samples for your neutron experiments, let's discuss how we can prepare your samples!
The MBE system is equipped with effusion cells, electron guns for electron beam evaporation and a plasma source for use with oxygen or nitrogen. A large variety of deposition materials can be used. Compounds are produced either by codeposition or by shutter-modulated growth of individual layers. For in-situ surface structure analysis, reflection high-energy and low-energy electron diffraction are utilized, while Auger electron spectroscopy is applied for in-situ chemical surface analysis.
Thin film samples which are sensitive to ambient conditions are first fabricated in the MBE system and then measured at the neutron reflectometer MARIA of JCNS, utilizing a versatile small ultra-high vacuum chamber [1].
We will present examples of high-quality thin films such as FeN, Fe$_4$N, SrCoO$_3$ or Nb/Al$_2$O$_3$(1-102) and link them to neutron experiments.
[1] A. Syed Mohd, S. Pütter, S. Mattauch, A. Koutsioubas, H. Schneider, A. Weber, and T. Brückel, Rev. Sci. Instrum., vol. 87, pp. 123909, 2016
Speaker: Sabine Pütter (Jülich Centre for Neutron Science JCNS, Outstation at MLZ, Forschungszentrum Jülich GmbH)
• 5:40 PM
TOFTOF – cold neutron time-of-flight spectrometer 20m
TOFTOF is a direct geometry disc-chopper time-of-flight spectrometer located in the Neutron Guide Hall West. It is suitable for both inelastic and quasielastic neutron scattering, and the scientific questions addressed range from the dynamics in disordered materials in hard and soft condensed matter systems (such as polymer melts, glasses, molecular liquids, or liquid metal alloys) and properties of new hydrogen storage materials to low-energy magnetic excitations in multiferroic compounds and molecular magnets.
A cascade of seven fast rotating disc choppers, housed in four chopper vessels, is used to prepare a monochromatic pulsed beam which is focussed onto the sample by a converging supermirror section. The scattered neutrons are detected by 1000 $^3$He detector tubes with a time resolution up to 50 ns. The detectors are mounted at a distance of 4 m and cover 12 m$^2$ (or 0.75 sr). The high rotation speed of the chopper system (up to 22,000 rpm) together with a high neutron flux in the wavelength range of 1.4 – 14 Å allows free tuning of the energy resolution between 3 meV and 2 μeV.
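The incident energies corresponding to this wavelength band follow from the standard neutron wavelength-energy relation, a textbook conversion rather than anything instrument-specific:

```python
# Incident neutron energy for the TOFTOF wavelength band, using the
# standard relation E [meV] = 81.81 / lambda[A]^2  (i.e. h^2 / (2 m_n)
# expressed in these units).
def neutron_energy_mev(wavelength_angstrom: float) -> float:
    return 81.81 / wavelength_angstrom**2

for lam in (1.4, 14.0):
    print(f"lambda = {lam:5.1f} A -> E_i = {neutron_energy_mev(lam):8.3f} meV")
```

The factor of 100 in energy between the two wavelength limits is what makes the wide tuning range of the energy resolution possible.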
Speaker: Marcell Wolf (TUM)
• 5:40 PM
Towards Polarization Analysis for TOPAS 20m
The thermal time-of-flight spectrometer TOPAS has been constructed by the Jülich Centre for Neutron Science (JCNS) and is now awaiting neutrons in the neutron guide hall east at the Heinz Maier-Leibnitz Zentrum (MLZ). The instrument design provides wide-angle polarization analysis (PA) for the thermal energy range. While other thermal time-of-flight spectrometers with PA have recently been taken into operation, TOPAS relies on novel and unique polarization devices. To gain experience, individual components went through thorough testing on other instruments. We will present tests of the TOPAS polarizer on POLI, which features continuous spin-exchange optical pumping (SEOP). Already in this test, the performance for neutron energies up to and beyond 100 meV was excellent and led to the development of dedicated equipment for POLI. Tests of the wide-angle XYZ analysis were performed during the last year of operation of the NEAT spectrometer at the HZB. Finally, we report on the progress of the installation of the instrument in the guide hall east.
Speaker: Christian Franz
• 5:40 PM
Tracking the formation of MAPbI3 by in situ GIWAXS 20m
Elucidating structure-function relationships in perovskite based materials for photovoltaic and LED application is important to push this material class towards commercialization. Focusing on scaling up methods and working out differences to well established deposition methods, e.g. spin casting, might open up unexpected possibilities for low-cost fabrication.
Slot-die coating is one very promising deposition method for high-output production. In this work we investigate the conversion of printed PbI2 on ITO with printed methylammonium iodide (MAI) towards methylammonium lead iodide (MAPbI3) by in situ grazing incidence wide angle X-ray scattering (GIWAXS). Using synchrotron radiation, a time resolution of less than 1 s was achieved and the kinetics of the reaction become visible. Time-resolved texture evolution during the formation of MAPbI3 shows the connection between the preferential orientation of the “precursor” PbI2 thin film and the final perovskite film, which shows face-on and corner-on orientation (cubic indexing). In contrast, spin-cast MAPbI3 prepared from the same solution and converted with identical parameters shows edge-on orientation. The time-resolved deterioration of initially existing solvent–PbI2 complexes is also shown.
The fabrication method and precursor systems have a significant influence on the resulting film morphology, which is highly relevant for optimizing perovskite absorber layers for PV or LED applications.
Speaker: Manuel Scheel (TUM E13)
• 5:40 PM
Translocation of non-ionic synthetic polymers through lipid membranes 20m
Polymers with balanced hydrophilicity can passively translocate through biological membranes without damaging them. For synthetic polymers there are only a few reports of translocation, all using charged polymers. For non-charged polymers, translocation phenomena were predicted theoretically but not verified experimentally. These polymers in particular are expected to show weak interactions with biomembranes and are interesting candidates for drug delivery applications.
We have synthesized such balanced amphiphilic polymers which contain alternating low MW hydrophobic and hydrophilic units. We studied translocation properties of the polymers using Pulsed Field Gradient (PFG) NMR and their interactions with lipid membranes using Neutron Reflectometry (NR) and Small Angle Neutron Scattering (SANS). The PFG NMR results show a strong dependence of the translocation rate on polymer molecular weight and hydrophobic block length. The first NR and SANS measurements show that the polymers are partially solubilized in the hydrocarbon part of the bilayer, and the effect is more prominent for less hydrophilic but still water soluble polymers.
Speaker: Ekaterina Kostyurina
• 5:40 PM
URANOS - a voxel engine Neutron Transport Monte Carlo Simulation 20m
URANOS (Ultra RApid Neutron-Only Simulation) is a newly developed 3D neutron transport Monte Carlo simulation covering the thermal to fast energy domains. Emerging from a problem solver for the CASCADE detector development in collaboration with environmental physics, the project aims to provide a fast computational workflow and an intuitive graphical user interface (GUI) for small- to medium-sized projects. It features a ray-casting algorithm based on a voxel engine. The simulation domain is defined layerwise, whereby the geometry is extruded from a pixel matrix of materials identified by specific numbers. Input files are therefore simply a stack of pictures; all other settings, including the configuration of predefined sources, can be adjusted via the GUI.
The scattering kernel features the treatment of elastic and inelastic collisions, absorption and absorption-like processes such as evaporation. Cross sections and distributions are taken from the databases ENDF/B-VII.1 and JENDL/HE-2007. In order to simulate multi-layer boron detectors, it also models the charged-particle transport following the conversion by computing the energy loss in the boron and its consecutive layer.
URANOS is freely available and can be used to simulate the response function of boron-lined or epithermal neutron detectors, small-scale laboratory setups and especially transport studies of cosmic-ray induced environmental neutrons.
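The layerwise, pixel-matrix geometry description can be illustrated with a toy material lookup. This is a hypothetical sketch of the idea, not the actual URANOS data structures:

```python
import numpy as np

# Toy version of a layerwise voxel geometry: each layer is a 2D matrix of
# material IDs, extruded over a height interval (z_min, z_max).
# (Illustrative only -- not the actual URANOS code; IDs are arbitrary.)
layers = [
    (0.0, 1.0, np.array([[1, 1], [1, 2]])),   # e.g. soil with an inclusion
    (1.0, 2.0, np.zeros((2, 2), dtype=int)),  # e.g. air
]
cell_size = 0.5  # edge length of one pixel of the matrix

def material_at(x, y, z):
    """Return the material ID of the voxel containing point (x, y, z)."""
    for z_min, z_max, matrix in layers:
        if z_min <= z < z_max:
            i = int(y // cell_size)  # row index from the y coordinate
            j = int(x // cell_size)  # column index from the x coordinate
            return int(matrix[i, j])
    raise ValueError("point outside simulation domain")

print(material_at(0.75, 0.75, 0.5))  # the inclusion: 2
print(material_at(0.25, 0.25, 1.5))  # air: 0
```

Because each layer is just an image of material IDs, a ray-casting step only needs integer index arithmetic to find the material at any point, which is what makes the voxel-engine approach fast.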
Speaker: Dr Markus Köhli (Heidelberg University)
• 5:40 PM
Utilizing very low flux nuclear reactors for neutron imaging 20m
In order to provide a basic platform for training and first-contact research in the field of neutron science, very low flux facilities represent a sufficient starting point. The training and research reactor AKR-2, with a maximum continuous power of two watts, can be categorized as such a facility. Over the course of the last two years, the experimental field of the AKR-2 has been extended by a thermal neutron radiography imaging system (TRAPY). Currently, this setup utilizes thermal neutrons with a LiF/ZnS(Ag) scintillator, with the prospect of being able to switch to the fast neutron spectrum in a later setup.
Split in two parts, we first introduce the AKR-2 and the boundary conditions it provides, and then continue with first achievements in building and characterizing the imaging setup. So far, the characterization has been made through an L/D study. This study builds upon a previous investigation with a less advanced imaging system (DELCam) and is intended to demonstrate the limits of neutron imaging at AKR-2. A two-way cadmium knife-edge with integrated reproduction scale has been used for the slanted edge method in order to estimate the edge response sufficiently. Additionally, first measurement examples are introduced. We therefore propose that experiments not ranked highly enough for the limited beam time at high flux facilities, but whose experimental needs are fulfilled by the AKR-2, can be conducted at our facility.
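The core of the slanted-edge analysis can be sketched in a few lines: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then take its Fourier magnitude as the MTF. This is a simplified sketch on a synthetic Gaussian-blurred edge (real analyses first project the 2D slanted edge into an oversampled 1D ESF; the blur value below is illustrative, not a measured AKR-2 figure):

```python
import numpy as np
from math import erf

# Synthetic edge spread function: an ideal edge blurred by a Gaussian,
# standing in for the geometric (L/D) blur of the imaging system.
x = np.linspace(-5, 5, 512)  # position across the edge in mm
sigma = 0.4                  # illustrative blur width in mm
esf = np.array([0.5 * (1 + erf(xi / (sigma * np.sqrt(2)))) for xi in x])

lsf = np.gradient(esf, x)               # LSF = d(ESF)/dx
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                           # normalise so that MTF(0) = 1
freqs = np.fft.rfftfreq(len(x), d=x[1] - x[0])  # spatial freq. in cycles/mm

# frequency where the MTF drops below 10% -- a common resolution metric
f10 = freqs[np.argmax(mtf < 0.1)]
print(f"MTF10 = {f10:.2f} cycles/mm")
```

The sharper the measured edge response, the further out the 10% point lies; comparing such curves for different L/D settings is one way to quantify the limits of the imaging setup.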
Speaker: Rico Hübscher (TU Dresden)
• 5:40 PM
Validating Molecular Dynamics Computer Simulations with Neutron Scattering Data 20m
Neutron scattering data are usually evaluated with analytical models. Computer simulations, for example using the Molecular Dynamics (MD) technique, can give a description of the sample’s structure and dynamics on the atomic scale. Using this information, neutron (and X-ray) diffraction and spectroscopy curves can be computed. The scattering data can then be used to validate the simulations, and vice versa the simulations can be used to evaluate the scattering data. These evaluations are able to capture complicated structures (e.g., amorphous) and motions (e.g., non-Fickian diffusion).
In this contribution, we focus on the comparison of different exemplary simulations of solids and liquids to scattering data.
For the solids, the influence of different parameters of the simulation such as the size of the simulated box on diffraction patterns is evaluated and the accuracy of the computation results using the program SASSENA is discussed for neutron and x-ray diffraction.
In the case of liquids, the simulated structure and dynamics of water as a prototypical liquid is compared to scattering data and other literature values like the diffusion coefficient. Different force fields are investigated and the influence of their base parameters is studied.
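The step from simulated atomic coordinates to a powder diffraction pattern can be sketched with the Debye scattering equation. This is the generic textbook route, not the algorithm used by SASSENA, which employs more efficient schemes for large trajectories:

```python
import numpy as np

# Debye scattering equation for an isotropic (powder-averaged) sample:
#   I(Q) = sum_i sum_j b_i b_j sin(Q r_ij) / (Q r_ij)
def debye_intensity(positions, b, q_values):
    """positions: (N, 3) coordinates; b: (N,) scattering lengths."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)  # pair distance matrix r_ij
    bb = np.outer(b, b)
    intensities = []
    for q in q_values:
        qr = q * r
        sinc = np.ones_like(qr)        # sin(x)/x -> 1 for the self terms
        mask = qr > 0
        sinc[mask] = np.sin(qr[mask]) / qr[mask]
        intensities.append(np.sum(bb * sinc))
    return np.array(intensities)

# toy example: two point scatterers 1 A apart with unit scattering lengths
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([1.0, 1.0])
q = np.array([0.001, np.pi])
print(debye_intensity(pos, b, q))  # ~[4, 2]: coherent limit vs. self terms only
```

Applying this to every frame of a trajectory and averaging yields the simulated diffraction pattern that can then be compared, for instance, with the measured structure factor of water.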
Speaker: Veronika Reich
• 5:40 PM
Water dynamics in a concentrated aqueous solution of perdeuterated poly(N-isopropylacrylamide) across the cloud point 20m
In aqueous solutions of the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), the interaction between water and the polymer changes strongly at the demixing transition. Cooperative dehydration causes the polymer chains to collapse and aggregate. Recent quasi-elastic neutron scattering experiments have shown that the susceptibility spectra of hydration water occur at lower frequencies than those of bulk water and that their relative population decreases abruptly at the cloud point [1,2].
In the present study, we investigate the low frequency water dynamics on a perdeuterated PNIPAM sample in H$_2$O using the backscattering spectrometer SPHERES at FRM II with an increased energy resolution near the elastic line of ~0.65 $\mu$eV (FWHM). Deuteration suppresses incoherent scattering from the polymer. We find that, below the cloud point, the previously observed frequency dependence of the relaxation of the hydration water extends to lower frequencies. Below, but even more strongly above the cloud point, an additional slow process is detected and is tentatively attributed to strongly bound water.
1. M. Philipp, C. M. Papadakis et al., J. Phys. Chem. B 2014, 118, 4253
2. B.-J. Niebuur, C. M. Papadakis et al., Macromolecules 2019, 52, 1942
Speaker: Ms Bahar Yazdanshenas (Technische Universität München, Physik-Department, Fachgebiet Physik weicher Materie, Garching, Germany)
• 5:40 PM
Wearable smart skin based on triboelectric nanogenerator and CdSe/CdS quantum rods for pressure and tensile sensing 20m
Over the past few years, wearable smart skin has been one of the hottest research topics, attracting worldwide attention. Since the birth of the triboelectric nanogenerator (TENG), which originates from Maxwell’s displacement current, a vertical pressure sensor function can be achieved easily without any external power supply. However, to mimic human skin better, more functions need to be added to one simple device, especially the basic lateral tensile sensing ability.
In this work, we fabricate a new type of wearable smart skin based on TENG and luminescent effect for both vertical pressure and lateral tensile sensing. Polydimethylsiloxane (PDMS) based single-electrode mode TENG part gives the vertical pressure sensing ability to the whole device. In addition, CdSe/CdS quantum rods (QRs) are introduced into the device as a luminescent layer for lateral tensile sensing. Small angle neutron scattering (SANS) is used to investigate the CdSe/CdS QRs alignment in PDMS thick film under different tensile degrees (from 0% to 100%).
Speaker: Mr Tianxiao Xiao (TUM)
• 3:00 PM 6:00 PM
MLZ Users 2020 - Webinar NICOS: Tips and Tricks
Convener: Jens Krueger
• 3:00 PM
NICOS - Tips and tricks 3h
NICOS is the standard user interface for instrument control at the MLZ instruments. Because of its success and user acceptance, the ESS decided to use it for their instrument control as well. SINQ at PSI was also looking for new instrument control software and chose NICOS.
The webinar is split into two sections: a presentation about NICOS with the following topics:
• Introduction into NICOS
• Overview about the newest developments since the user meeting in 2019
• NICOS at SINQ
• New graphical user interface at ESS
and an interactive presentation of NICOS showing some tips and tricks for users
Speaker: Jens Krüger (Heinz Maier - Leibnitz Zentrum)
• Thursday, December 10
• 9:00 AM 12:30 PM
DN2020: Plenary talks: DN2020: Plenary Talks
Conveners: Christine Papadakis (Technische Universität München, Physik-Department, Fachgebiet Physik weicher Materie), Rainer Niewa (Universität Stuttgart)
• 9:00 AM
Neutron Instrumentation - Tools from Scientists for Scientists 30m
Neutron instruments are key to productive and successful experiments. Their continued development and the implementation of new ideas are the basis for enabling excellent science, increasing instrument performance, and opening new fields. I will give a brief cross-section of ongoing instrument developments in the Endurance program at the Institut Laue-Langevin (ILL).
In particular, recent and future advances in neutron backscattering will be highlighted. The BATS option on IN16B increases the dynamic range of the instrument by an order of magnitude and enabled the development of a new generation of high speed choppers. Currently, a 10m long neutron guide section with variable beam focusing is being implemented to enable an adaptive beam compression around the pulse chopper system.
The BATS project and several others are only possible thanks to the 'Verbundforschung' funding scheme of the German Ministry of Education and Research, which fosters strong collaboration between scientists at research centers and scientists at universities - a prerequisite for a sustainable balance between excellent instrumentation and an excellent user community.
Speaker: Markus Appel (Institut Laue-Langevin)
• 9:30 AM
Advances in the study of stimuli-responsive core-shell microgel particles 30m
The synthesis of complex microgel architectures triggers the necessity of structural characterization. Small angle neutron scattering (SANS) is well suited for this purpose: it is capable of simultaneously measuring average particle sizes and polydispersity as well as the local structure of colloidal gels. SANS combined with isotopic substitution may reveal the distribution of monomers within particles, for example in core-shell microgels with one monomer deuterated. The ultimate goal of our approach is to extract radial density profiles of the microgel network. The talk will discuss modelling approaches to analyse SANS data from such systems. Contrast matching through the solvent and isotopic substitution of either shell or core monomers enable studying the monomers selectively in SANS. One should note that deuteration causes differences in swelling, as published by us for p(NIPMAM) [1], but its impact remains negligible far from the volume phase transition temperature. Although SANS is not the only technique to observe microgels [2], it is indeed very powerful, since deuteration allows differentiating core and shell polymer in bulk suspensions [3].
[1] Cors M, Wiehemeier L, Oberdisse J, Hellweg T (2019). Polymers 11(4):620
[2] Bergmann S, Wrede O, Huser T, Hellweg T (2018). Phys Chem Chem Phys 20:5074
[3] Cors M, Wrede O, Wiehemeier L, Feoktystov A, Cousin F, Hellweg T, Oberdisse J (2019). Sci Rep 9(1).
Speaker: Prof. Thomas Hellweg (Universität Bielefeld, Physikalische u. Biophysikalische Chemie)
• 10:00 AM
Tests of the Standard Model of Particle Physics in Neutron Beta Decay 30m
Particle physics with neutrons addresses a number of basic and often unique questions of particle physics and cosmology at very low energies. Within the Standard Model of particle physics, neutron beta decay data serve as important input, e.g. to investigate the cosmological abundance of light elements and the energy production in the Sun. In the search for new physics, precision measurements allow testing the unitarity of the CKM quark-mixing matrix and constraining new effective couplings, as well as exotic decay modes.
In this talk, I will present recent results obtained by the PERKEO group from experiments at the ILL and discuss their implications. The follow-up instrument PERC is currently under construction at the MLZ. Its key component is a 12 m long superconducting magnet system, which contains an 8 m long decay volume in a novel polarisation-preserving neutron guide. PERC aims to improve measurements of several decay correlations by an order of magnitude. I will discuss its design and status.
Finally, I will present the new neutron depth profiling instrument N4DP at the PGAA beam station of the MLZ, which is an excellent example of multi-disciplinary instrument development. Nuclear reactions allow probing depth profiles of certain nuclides like $^6\mathrm{Li}$. The intense neutron beam, low backgrounds, and excellent energy resolution enable time-resolved in-operando studies of Li-ion batteries.
Speaker: Bastian Märkisch (Physik-Department, TUM)
• 10:30 AM
Break 30m
• 11:00 AM
Novel luminescence materials 30m
While the cation chemistry of many materials has been extensively studied, much remains to be discovered in the field of mixed-anion compounds. Mixed-anionic hydrides in particular are receiving a lot of attention at the moment, because partial substitution can significantly change the physical and chemical properties [1]. Due to the low scattering power of hydride, the combined use of neutron and X-ray diffraction is essential for a complete structural characterization.
Besides these tools, the use of local probes can be helpful. For instance, rare earth metal ions showing 5d-4f transitions can very sensitively probe differences in the polarizabilities of the local environment. Recently, a number of new mixed-anionic hydrides and complex hydrides have been discovered. Here, mixed hydride halides and borohydrides are studied. Furthermore, the first representative of a novel class of materials, the borate hydride Sr5(BO3)3H:Eu2+, is presented. The successful incorporation of hydride in the latter compound was shown using a number of independent methods, including neutron powder diffraction, 1H solid-state MAS NMR, vibrational spectroscopy and quantum chemical calculations.
Beamtime at SPODI, Research Neutron Source Heinz Maier-Leibnitz (FRM II), and D2B, Institut Laue-Langevin, is gratefully acknowledged.
[1] Y. Kobayashi, T. Yoshihiro, H. Kageyama, Ann. Rev. Mater. Res. 2018, 48, 11.1.
Speaker: Nathalie Kunkel
• 11:30 AM
Quantum vs. structural disorder in triangular antiferromagnets 30m
Quantum disordered states in frustrated magnets are model cases of quantum entanglement and potential hosts for unconventional, fractionalized excitations. The formation of these states is typically associated with competing exchange couplings, although structural disorder can lead to a somewhat similar phenomenology, including the absence of long-range magnetic order and the presence of excitation continua.
In this talk, I will discuss the interplay of quantum and structural disorder in Yb-based triangular antiferromagnets that were recently proposed as spin-liquid candidates. Thermodynamic measurements and neutron scattering results will be used to analyze the coupling regime and the nature of the magnetic ground state in these structurally simple but microscopically very complex materials. The presence of putative fractionalized excitations will be challenged by thermodynamic measurements in the milli-K temperature range, and prospects of reaching a genuine quantum spin liquid state in this family of compounds will be discussed.
Speaker: Alexander Tsirlin (University of Augsburg)
• 12:00 PM
Studying the structural dynamics of proteins by neutron and X-ray scattering 30m
Proteins are the molecular engines of life. Their broad range of biological tasks and functions is reflected in the large diversity of specific structural and dynamical characteristics they display on broad length and time scales. A large number of experimental techniques exist, each of which opens a specific window onto equilibrium and non-equilibrium protein dynamics. We will illustrate how, among those, both neutron spectroscopy [1] and crystallography at XFELs and synchrotrons [2] can be carried out in a time-resolved manner to study non-equilibrium dynamics, although on very different time scales. As to equilibrium dynamics, the combination of selective deuteration and neutron spectroscopy is particularly powerful, as will be exemplified by solvent-free protein-polymer hybrids [3] that represent one of the many interesting subjects at the interface of life sciences and soft matter.
[1] Pounot, Chaaban, Fodera, Schiro, Weik, Seydel (2020) Tracking internal and global diffusive dynamics during protein aggregation by high-resolution neutron spectroscopy. J Phys Chem Lett 11: 6299
[2] Woodhouse, Nass Kovacs et al. (2020) Photoswitching mechanism of a fluorescent protein revealed by time-resolved crystallography and transient absorption spectroscopy. Nature Communications 11: 741
[3] Schirò, Fichou, Brogan, Sessions, Lohstroh, Zamponi, Schneider, Gallat, Paciaroni, Tobias, Perriman, Weik. Diffusive-like motions in a solvent free protein-polymer hybrid, under revision
Speaker: Martin Weik
• 12:30 PM 1:30 PM
Break 1h
• 1:30 PM 3:00 PM
DN2020. Soft Matter: Part 1/2
Convener: Walter Richtering (RWTH Aachen)
• 1:30 PM
Self-similar structure and dynamics of polymer rings 15m
Small Angle Neutron Scattering (SANS) and Neutron Spin Echo (NSE) results on very large poly(ethylene oxide) (PEO) rings in the melt are presented [1,2]. The ring conformation shows a clear signature of the theoretically predicted elementary loops. Their size is in the range of an entanglement strand of linear PEO melts, and they are characterized by Gaussian statistics. The dependence of the radius of gyration Rg on ring chain length follows rather closely the prediction of the decorated ring model [3]. In contrast to numerous simulations that are interpreted in terms of a crossover to a mass fractal, such a crossover was not observed by SANS. We could also clarify the unique topology-driven self-similar ring dynamics and distinguish between different scaling theories. While the dynamics of linear and branched polymers is dominated by the celebrated reptation mechanism, where a polymer creeps out of its topological confinement via its ends, polymer rings feature no ends, cannot undergo reptation, and are instead supposed to perform self-similar dynamics. We present NSE experiments on the initial anomalous center-of-mass diffusion and the internal dynamics of large polymer rings – a field of broad interest that so far has exclusively been accessed by theory and simulations.
[1] M. Kruteva et al. ACS Macro Lett. 9 (2020) 507–511
[2] M. Kruteva et al. Phys. Rev. Lett., submitted
[3] S. Obukhov et al. EPL 105 (4) (2014) 48005
Speaker: Margarita Kruteva (JCNS)
• 1:45 PM
Influence of salt on oppositely charged Polyelectrolyte/Surfactant mixtures: A comparing neutron reflectometry and surface tension study 15m
The surface properties of oppositely charged polyelectrolyte/surfactant mixtures play an important role in colloidal dispersions (foams, emulsions) e.g. for cosmetics, cleaning products and in food technology.
Extensive research on such mixtures has already been performed, with a focus on different polyelectrolytes as well as surfactants. However, the influence of the ionic strength is still unclear.
This work focuses on the influence of added salt (NaBr or LiBr depending on the polyelectrolyte counterion) on the adsorption behaviour of mixtures of the anionic polyelectrolyte NaPSS or sPSO$_2$-220 with the cationic C$_{14}$TAB. Therefore, surface tension and neutron reflectometry (NR) measurements were performed with a fixed C$_{14}$TAB concentration and a variable polyelectrolyte concentration at different salt concentrations (10$^{-4}$, 10$^{-3}$, 10$^{-2}$ M).
For NaPSS, NaBr reduces the surface tension over the whole studied polyelectrolyte concentration range (10$^{-5}$ – 10$^{-3}$ monoM) and broadens the observed increase of the surface tension at the bulk stoichiometric mixing point (BSMP). The surface excess of both components – detected by neutron reflectometry – correlates quite well with these findings. In contrast, LiBr reduces the surface tension of sPSO$_2$-220 only above the BSMP. Here, the NR findings do not match the surface tension results. Possible reasons, such as structural differences of the polyelectrolytes or the sensitivity of the measurements, will be discussed.
Speaker: Larissa Braun
• 2:00 PM
Ion selectivity at the origin of block copolyelectrolyte micelle formation 15m
Novel block copolymers consisting of two anionic polyelectrolyte blocks [NaPA and NaPSS] have been synthesized. In the presence of certain amounts of divalent cations such as Calcium, Strontium and Barium, supramolecular structures are formed. The overall size and molecular weight of these structures have been obtained by combined static and dynamic light scattering (SLS & DLS). Via small-angle neutron scattering (SANS) the existence of core-shell micellar structures could be proven [1, 2]. Additional experiments using a deuterated polyacrylic acid block enabled us to elucidate the micellar composition. Core-shell structures are formed because the two charged polymer blocks possess different complexation affinities with respect to oppositely charged cations.
In a next step, solutions that are still in the single-chain region of the phase diagram, but close to the phase boundary, were prepared. By varying the temperature, we succeeded in triggering micelle formation [3]. Decreasing the temperature induced micelle formation with Barium and Strontium, but not with Calcium. Surprisingly, the micellar structures were inverted in composition. Heating these solutions up to ambient temperature led to a dissolution of the micellar aggregates; heating even further, to 65°C, again led to the formation of micelles.
References
[1] N. Carl et al. Soft Matter (2019) 15 8266
[2] N. Carl et al. Colloid Polym. Sci (2020) 298 663
[3] N. Carl et al. Macromolecules (2019) 52 8759
Speaker: Ralf SCHWEINS (Institut Laue - Langevin)
• 2:15 PM
Structural characterization and rheology of bio-compatible wormlike micelles 15m
Wormlike micelles exhibit a unique viscoelastic behavior, which has been investigated intensely in the past decades, experimentally as well as by theoretical calculations [1,2]. Within our studies we explore the self-assembled structure and the flow behavior of wormlike micelles formed by mixing a short-chained C${}_8$ cationic surfactant and the sodium salts of omega-9 fatty acids [3]. Within the class of the latter, the alkyl chain length is varied from C${}_{18}$ to C${}_{22}$, yielding an increase of the micellar cross section. The structure of the micelles is characterized by neutron scattering experiments. Besides the thickness of the micelles, the persistence length is an important key quantity, which strongly influences the flow properties and depends on the mixing ratio of both surfactants. Further, it is observed that the dynamical response, i.e., time scales such as the relaxation or breakage time of the micelles, is influenced by the molecular architecture. Combining the results of rheological measurements with the neutron scattering experiments allows us to get a detailed insight into the micellar structure and dynamics.
[1] C. Dreiss, Soft Matter 3, 956, (2007)
[2] P. D. Olmsted, Rheo. Acta 47, 283, (2008)
[3] Raghavan et al., Langmuir 18, 3797 (2002)
Speaker: Benjamin von Lospichl (Technische Universität Berlin)
• 2:30 PM
Dynamics during an arrested phase transition in a protein system 15m
Phase separation in biological systems is a path for the formation of cellular organelles[1]. To understand the dynamic properties of these compounds we studied a protein solution model system during the phase separation with neutron spin-echo and back scattering spectroscopy as well as with small angle scattering. The phase separation for the investigated sample occurs for temperatures below $T_p$= 21$^\circ$C and is due to the short-range depletion interaction induced by a polymer (polyethylene glycol). The evolution of the phase separation after a quench to temperatures below $T_p$ was monitored through the scattered intensity measured with small angle scattering[2]. For quenches to temperatures of 6$^\circ$C and below, the phase separation arrests. This phenomenon was previously linked to the dynamical arrest at the molecular length scale due to gelation[3]. The collective diffusion agrees with anomalous diffusion, with relaxation times that are two orders of magnitude smaller in the arrested state. Using back scattering we can monitor the polymer and protein self-diffusion, which shows Brownian motion on the diffusive short-time scale even in the arrested state, and a significant transition of the protein diffusion when crossing the phase separation temperature.
[1] Berry, J., Brangwynne, P. C. & Haataja, M. , Reports on Progress in Physics 81, 046601 (2018).
[2] Da Vela, S. et al., Soft Matter 13, 8756-8765 (2017).
[3] Zaccarelli, E., J. Phys.: Cond. Matter 19, 323101 (2007).
Speaker: Anita Girelli
• 2:45 PM
Pretreatment of wood using ionic liquids 15m
In the pretreatment of wood it is essential to apply mild conditions for extracting oligomeric cellulose and possibly lignin that can be used for a variety of environmentally friendly products such as polymers. For this, ionic liquids are ideal due to their mixture of polar and non-polar character, which makes them swell the wood until it bursts into a rather fluffy material. Using small angle neutron scattering in operando studies, the different stages of the pretreatment are identified. After impregnation with the liquid, the cellulose is restructured and forms nano-scale voids. At late stages the cellulose is rather amorphous and quite dilute. This opens possibilities for enzymatic chain scission in a second step of treatment. The findings are complemented by other techniques, which allows for an optimization of the pretreatment process.
Speaker: Henrich Frielinghaus (JCNS)
• 1:30 PM 3:00 PM
DN2020: Instrumentation: Part 1/2
Convener: Wiebke Lohstroh
• 1:30 PM
SKADI: Small-Angle Neutron Scattering at ESS 15m
The Small-K Advanced DIffractometer (SKADI) is a joint in-kind project of French and German partners to deliver a SANS instrument to the ESS. [1] This contribution will detail the current construction status of SKADI. In addition, further practical requirements on components such as the sample area will be considered. SKADI is designed to deliver
- Flexibility (sample area is approx. 3x3 m$^2$, and versatile collimation)
- Very small Q accessible through VSANS
- Polarization for magnetic samples and incoherent background subtraction
- Good wavelength resolution, being the longest SANS instrument at ESS
- High dynamic Q-range over three orders of magnitude.
This will be combined with a neutron flux of 8$\cdot10^8$ n/(s cm$^2$) at the sample position, which will make SKADI the world-leading SANS instrument in terms of brilliance.
In addition to complex sample environments SKADI will also feature a newly developed detector system, SoNDe, developed within the EU Horizon2020 framework. [2]
SKADI caters for a wide range of scientific areas, such as smart materials, biological and medical research, magnetic materials, as well as experiments on nanomaterials and nanocomposites or colloidal systems. Finally, SKADI is designed to accommodate in-situ measurements with custom made sample environments to provide "real-world" conditions.
[1] S. Jaksch et al., NIM A 762 (2014) 22-30.
[2] S. Jaksch et al., Proceedings of the International Conference on Neutron Optics (NOP2017) (2018) 011019.
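The quoted dynamic Q-range of three orders of magnitude can be made plausible from the standard SANS relation Q = 4π sin(θ)/λ. The following sketch uses purely illustrative geometry and wavelength limits (they are assumptions, not SKADI's published parameters):

```python
import math

def q_value(two_theta_deg: float, wavelength_A: float) -> float:
    """Momentum transfer Q = 4*pi*sin(theta)/lambda in 1/Angstrom,
    where two_theta_deg is the full scattering angle."""
    theta = math.radians(two_theta_deg) / 2.0
    return 4.0 * math.pi * math.sin(theta) / wavelength_A

# Illustrative limits: long wavelength at very small angle (VSANS-like)
# versus short wavelength at a wide-angle detector position.
q_min = q_value(0.2, 20.0)
q_max = q_value(40.0, 3.0)

print(f"Q_min = {q_min:.4f} 1/A, Q_max = {q_max:.3f} 1/A, "
      f"dynamic range = {q_max / q_min:.0f}")
```

With these assumed limits the ratio Q_max/Q_min exceeds 10^3, i.e., three decades in Q from a single instrument concept.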
Speaker: Sebastian Jaksch (Physicist)
• 1:45 PM
High resolution neutron spectroscopy with the J-NSE "PHOENIX" 15m
Neutron spin echo (NSE) spectroscopy provides the ultimate energy resolution in quasi-elastic thermal and cold neutron scattering spectroscopy. In terms of Fourier time (τ), high resolution means extending τ into the regime of μs (corresponding to an energy resolution of ~neV). The J-NSE "PHOENIX" with its unique fringe-field compensated, superconducting magnets provides the state of the art in NSE instrument design. One of the most innovative characteristics of the coils is their optimized geometry, which maximizes the intrinsic field-integral homogeneity along the flight path of the neutrons and enhances the resolution by a factor of 2.5 compared to the previous normal-conducting setup. The increased resolution may be exploited to reach larger Fourier times and/or to benefit from significant intensity gains if shorter neutron wavelengths are used at a given Fourier time. Thus the J-NSE "PHOENIX" meets the needs to look into the microscopic dynamics of soft or biological matter with enhanced and new quality. Here we present the results on the performance of the spectrometer in its current configuration and some selected examples from the realm of soft matter dynamics that exploit the unique properties of the new J-NSE.
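The quoted correspondence between μs Fourier times and neV energy resolution follows from the uncertainty-type estimate τ ≈ ħ/ΔE (a standard back-of-the-envelope relation, not taken from the abstract):

```latex
\Delta E \simeq \frac{\hbar}{\tau},\qquad
\hbar \approx 6.58\times 10^{-7}\ \mathrm{neV\,s}
\quad\Longrightarrow\quad
\tau = 1\ \mu\mathrm{s}\ \leftrightarrow\ \Delta E \approx 0.66\ \mathrm{neV}.
```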
Speaker: Olaf Holderer
• 2:00 PM
New primary optics for the ‘Energy research with Neutrons’ option at MLZ 15m
The Energy research with Neutrons (ErwiN) instrument is meant to be used for the investigation of energy storage materials, also integrated in complete components and under real operating conditions. Thus, it is possible to scan a large parameter space (e.g. temperature, state of charge, charge rate, fatigue degree) for the investigation of modern functional materials in kinetic and time-resolved experiments. Diffraction data will be obtained from the entire sample volume or in a spatially resolved mode from individual parts of the sample.
The future development of the ErwiN instrument is presented here: Firstly, the plans for replacing the primary beam optics will be presented, bringing this diffractometer to the same level as the high-flux, high-resolution instrument D20 at the ILL. The upgraded ErwiN is designed for different scenarios: for very fast measurements at medium resolution, for medium-fast measurements at higher resolution, and also for very high resolution still at reasonable speed. The commissioning and integration of ErwiN will enhance the attractiveness for a wider community in energy research as well as materials science, while novel methods for the neutron science community will be developed.
Speaker: Michael Heere
• 2:15 PM
Epsilon - german time-of-flight high resolution neutron diffractometer at the high flux pulsed reactor IBR-2: current status and scientific applications 15m
The TOF diffractometer Epsilon at the beamline 7a of the IBR-2 reactor is dedicated to the high resolution measurements of applied and residual strains of geological samples and functional materials.
A four-axis goniometer permits rotation around one axis and translation in three mutually perpendicular directions. It allows us to measure a strain profile of the six independent components of the strain tensor. In recent years Epsilon has been equipped with a variety of dedicated sample environments:
- uniaxial pressure device with possibility of sample rotation under external load with maximal pressure up to 150 MPa for operando measurements;
- an acoustic emission system;
- a laser extensometer for macroscopic deformation measurements of the sample with a resolution of 0.5 μm,
- a triaxial pressure device for operando stress measurements, which allows us in situ determination of Poisson ratio, the bulk modulus and Biot-Willis coefficient.
Epsilon is perfectly suited to geological applications and materials science; its sample environment is unique and has no analogue among the neutron spectrometers in the world.
Speaker: Dr Birgit Müller (Karlsruhe Institute of Technology)
• 2:30 PM
New polarized neutron diffraction setup with 8 T magnet on POLI 15m
The polarized single-crystal diffractometer POLI offers two types of polarized neutron diffraction experiments: spherical neutron polarimetry (SNP), also known as full three-dimensional polarization analysis in zero magnetic field, and classical polarized neutron diffraction, also called flipping-ratio (FR) method, in high applied magnetic fields. Recently, the available sample environment of POLI has been extended by an asymmetric field magnet of 8 T. Although this new magnet is actively shielded, its stray fields are still too large to be used with the sensitive $^3$He polarizer of the original SNP setup. To overcome this issue, a new, large-beam-cross-section solid-state supermirror (SM) bender polarizer has been developed for POLI. An existing shielded Mezei-type flipper is used between the magnet and SM polarizer. A dedicated guide field construction was numerically simulated, optimized and built to link the magnetic field of the polarizer to the flipper and to the stray field of the magnet. An almost loss-free spin transport within the instrument in the complete field range of the new magnet was achieved. The new setup was successfully implemented and tested. A high polarization efficiency of above 99% for short wavelength neutrons could be experimentally reached with the new solid-state bender. The new high–field FR setup is now available for POLI’s user community.
Speaker: Mr Henrik Thoma (Jülich Centre for Neutron Science JCNS at MLZ)
• 2:45 PM
Design study of a 1 m$^2$ Position Sensitive Neutron Detector (PSND) 15m
Modern Multi-Wire Proportional Chambers (MWPC) operating with $^{10}$B$_4$C films as solid-state converter can surpass the performance of those based on $^3$He in terms of position resolution and count rate capability at similar detection efficiency [1, 2]. The use of large-area coated converters on thin foils forces one to develop a mechanical concept that avoids deformations of the neutron-sensitive surface due to their own weight and due to the electrostatic forces resulting from the high voltage in operation. This concept must allow a parallel stacking of the converter elements at mm distance in order to accumulate conversion efficiency, as needed for the perpendicular neutron incidence geometry. HZG has introduced [1] and investigated, as a contribution to the ESS, the idea of stabilizing the converter elements by a gas pressure gradient between both sides of the converter to counteract the forces resulting from operation. This concept is applied to the design study of a 1 m$^2$ PSND with a position resolution of 2 mm. The MWPC consists of up to 24 $^{10}$B$_4$C-coated, 0.3 mm thick aluminum parallel-stacked converters with a detection depth < 12 mm each. The deposition method for $^{10}$B$_4$C coatings with thicknesses up to 10 µm on pretreated Al substrates was elaborated [2, 3]. The delay-line read-out of the detector copes with up to 170 kcps per detector plane.
[1] European Patent: EP 17184906.0 (filed at 04.08.2017)
[2] European Patent Application 2 997 174 (14.07.2014)
[3] G. Nowak, et al. J. Appl. Phys. 117, 034901 (2015)
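The "accumulation of conversion efficiency" by stacking converter layers can be sketched with a simple independent-layer model; the per-layer efficiency used below is an assumed, illustrative number, not a measured value for this detector:

```python
def stacked_efficiency(eta_layer: float, n_layers: int) -> float:
    """Total detection efficiency of n stacked converter layers,
    each absorbing the fraction eta_layer of the neutrons that
    survive all preceding layers."""
    return 1.0 - (1.0 - eta_layer) ** n_layers

# Assumed ~4% efficiency per 10B4C-coated layer; 24 layers as in the design.
eta_total = stacked_efficiency(0.04, 24)
print(f"total efficiency = {eta_total:.2f}")
```

The model shows why many thin layers are needed at perpendicular incidence: a single thin coating absorbs only a few percent, while a stack of 24 such layers reaches an appreciable total efficiency.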
Speaker: Dr Gregor Nowak (Helmholtz-Zentrum Geesthacht)
• 1:30 PM 3:00 PM
DN2020: Magnetism: Part 1/1
Convener: A. Schneidewind (JCNS, Jülich, Germany)
• 1:30 PM
Signature of defect-induced symmetry breaking in magnetic neutron scattering 15m
The antisymmetric Dzyaloshinskii-Moriya interaction (DMI) plays a decisive role for the stabilization and control of chirality of skyrmion textures in various magnetic systems exhibiting a noncentrosymmetric crystal structure. A less studied aspect of the DMI is that this interaction is believed to be operative in the vicinity of lattice imperfections in crystalline magnetic materials, due to the local structural inversion symmetry breaking. If this scenario leads to an effect of sizable magnitude, it implies that the DMI introduces chirality into a very large class of magnetic materials---defect-rich systems such as polycrystalline magnets. Here, we show experimentally that the microstructural-defect-induced DMI gives rise to a polarization-dependent asymmetric term in the small-angle neutron scattering (SANS) cross section of polycrystalline ferromagnets. The results are supported by theoretical predictions using the continuum theory of micromagnetics. This effect, conjectured already by Arrott in 1963, is demonstrated for nanocrystalline terbium and holmium (with a large grain-boundary density), and for mechanically-deformed microcrystalline cobalt (with a large dislocation density). Analysis of the scattering asymmetry allows one to determine the defect-induced DMI constant, $D = 0.45 \pm 0.07 \, \mathrm{mJ/m^2}$ for Tb at $100 \, \mathrm{K}$. Our study proves the generic relevance of the DMI for the magnetic microstructure of defect-rich ferromagnets.
Speaker: Prof. Andreas Michels (University of Luxembourg)
• 1:45 PM
Single crystal investigations on the new multiferroic material LiFe(WO$_4$)$_2$ 15m
Multiferroic materials have attracted much interest during the last decades, as the coupling of electric and magnetic ordering offers application potential for future memory devices or new types of sensors. The most prominent mechanism for multiferroicity is given by the inverse Dzyaloshinskii-Moriya interaction, where a spiral magnetic structure induces a shift of non-magnetic ligand ions and hence a ferroelectric polarization, which can be controlled by the conjugate field of both ferroic ordering parameters. Recently, experiments on a powdered sample of LiFe(WO$_4$)$_2$ revealed two subsequent magnetic phases, of which the lower one exhibits multiferroic behavior [1]. Besides MnWO$_4$, LiFe(WO$_4$)$_2$ is thus the second multiferroic system in this family. Here we report on our single crystal studies on LiFe(WO$_4$)$_2$ and on the respective structural and magnetic refinements. Neutron diffraction experiments revealed the magnetic structure of both magnetic phases, where first a spin-density wave and subsequently a chiral magnetic structure evolves. Moreover, polarization analysis on the cold three-axes spectrometer KOMPASS unambiguously proves the chiral magnetic phase and shows that even without an externally applied electric field a preferred handedness occurs.
[1] Liu et al. Phys. Rev. B 95, 195134 (2017)
Speaker: Sebastian Biesenkamp (II. Physikalisches Institut, Universität zu Köln)
• 2:00 PM
Critical magnetic fluctuations in Ca2RuO4 studied by neutron spin-echo and triple-axis spectroscopy 15m
We report on comprehensive high-resolution linewidth measurements of critical antiferromagnetic fluctuations in Ca$_2$RuO$_4$ (CRO214) performed at the neutron resonance spin-echo spectrometer TRISP at FRM II and the cold triple-axis spectrometer FLEXX at BER II. CRO214 is structurally related to the unconventional superconductor Sr$_2$RuO$_4$ [1] and hosts a complex interplay between magnetic and electronic correlations leading to a novel type of soft-magnetism with strong single-ion anisotropy, and ‘Higgs’ amplitude fluctuations in the spin-wave spectrum, as revealed by recent neutron experiments [2].
In contrast to conventional magnetic phase transitions, the magnetic ordering in CRO214 below T$_N$ ~ 110 K emerges from exciton condensation [3]. Since the magnetic fluctuations in proximity to T$_N$ are fundamentally related to the nature of the magnetic correlations in the system, our study can shed new light on the exceptional ‘excitonic’ magnetism in CRO214.
[1] Nat. 372, 532, (1994).
[2] Nat. Phys. 13, 633, (2017).
[3] Phys. Rev. Lett. 111, 197201, (2013).
Speaker: Heiko Trepka (MPI for Solid State Research, Stuttgart)
• 2:15 PM
Magnetic structure of the frustrated fcc iridate (NH4)2IrCl6: A candidate J_eff=1/2 Mott insulator 15m
Magnetic materials containing octahedrally coordinated Ir$^{4+}$ ions can give rise to novel J$_{eff}$ =$\frac{1}{2}$ magnetic moments due to the interplay of strong spin-orbit coupling, onsite Coulomb repulsion and the crystalline electric field. The exchange interaction between such moments depends on the geometry of the exchange paths between the magnetic ions and can be highly anisotropic, such as the Kitaev exchange in the 2D honeycomb lattice. This can lead to a rich variety of magnetic ground states with exotic excitations, as has been proposed theoretically and also observed experimentally in several real materials. (NH$_4$)$_2$IrCl$_6$ retains its cubic symmetry (fcc) down to very low temperatures and offers the best possible conditions for the cubic crystalline electric field to realize a genuine J$_{eff}$ =$\frac{1}{2}$ state. The crystal and magnetic structures of the (NH$_4$)$_2$IrCl$_6$ single crystal have been studied using neutron diffraction, synchrotron X-ray diffraction and resonant inelastic X-ray scattering techniques. The study shows that the interplay of geometrical frustration and the bond-dependent exchange frustration stabilizes a type-III collinear AFM ordering at $T_{\rm N}$=2.1 K with propagation vector (1 $\frac{1}{2}$ 0). Thus the bond-dependent Kitaev interaction in the fcc lattice may oppose the magnetic frustration, which is in sharp contrast to the Kitaev interaction in honeycomb lattices promoting quantum spin-liquid ground states.
Speaker: Dr Nazir Khan (Institute for Quantum Materials and Technologies)
• 2:30 PM
Vortex Matter of Intertype Superconductors studied by Neutron Methods and Molecular Dynamics Simulations 15m
In the intermediate mixed state (IMS) in superconducting niobium, the mixed attractive/repulsive vortex interaction leads to the clustering of vortices into domains. Not fitting into the conventional type-I and type-II categories, this regime is denoted intertype superconductivity [1].
Using a combination of neutron techniques, we have studied the hierarchical properties of the IMS in bulk niobium on the length scales of the vortex lattice (~100 nm, SANS), the domain structure (~10 µm, VSANS/USANS) and the sample size (~10 mm, NGI). The results give detailed insight into the properties of the IMS, focusing on the domain formation as a function of temperature, magnetic field and sample quality [2,3,4].
However, the knowledge of the nanoscale vortex arrangement is still incomplete, including the domain structure and the impact of disordered vortices. In order to complement the experiments, we have used molecular dynamics simulations. In a novel approach, the vortex interactions are based on an extended Ginzburg-Landau formalism [1]. The focus of the simulations was on the influence of pinning and the external field on the IMS. Our combination of neutron techniques with molecular dynamics simulations paves the way to a quantitative analysis of the vortex matter of intertype superconductors.
[1] A.Vagov, et al, Phys.Rev.B 93, 174503, 2016
[2] T.Reimann, et al, Nat.Commun. 6, 8813, 2015
[3] T.Reimann, et al, Phys.Rev.B 96, 144506, 2017
[4] A.Backs, et al, Phys.Rev.B 100, 064503, 2019
Speaker: Alexander Backs (Heinz Maier Leibnitz Zentrum (MLZ))
• 2:45 PM
Discussions 15m
• 3:00 PM 3:30 PM
Break 30m
• 3:30 PM 4:30 PM
DN2020: Soft Matter: Part 2/2
Convener: Regine von Klitzing (TU Darmstadt)
• 3:30 PM
Adaptive microgels: to squeeze or not to squeeze? 15m
Microgels are macromolecular networks swollen by the solvent they are dissolved in. They are unique systems that are distinctly different from common colloids, such as, e.g., rigid nanoparticles, flexible macromolecules, micelles or vesicles. When swollen, they are soft and have a fuzzy surface with dangling chains and the presence of cross-links provides structural integrity. They find applications e.g., in biocatalysis and as sensors.
At high packing density, microgels can deswell, interpenetrate and deform; thus they can behave like particles and/or macromolecules. Due to their properties, microgels can be used to tune the particle-to-polymer transition.
We will discuss properties of microgels of different architectures both in aqueous solution and at interfaces. In particular we will address ultra-low cross-linked microgels, hollow and anisotropic microgels which are sensitive to stimuli as, e.g. temperature and pH.
The structure of microgels is investigated by means of scattering methods, especially exploiting the technique of contrast variation in small angle neutron scattering. The results will be compared to data obtained from super resolved fluorescence microscopy, scanning force microscopy and computer simulations.
Scotti, A. et al. Nat Commun 2019, 10, 1418.
Nickel, A. et al. Nano Lett. 2019, 19, 8161.
Keidel, R. et al. Science Advances 2018, 4, eaao7086
Switacz, V. K. et al Biomacromolecules 2020
Brugnoni, M.; et al Polym. Chem. 2019, 10, 2397.
Speaker: Walter Richtering (RWTH Aachen)
• 3:45 PM
Cononsolvency-induced collapse transitions in thermo-responsive block copolymer films 15m
The diblock copolymer PMMA-b-PNIPAM forms micelles in solution that feature a permanently hydrophobic core and a thermo-responsive shell. While a typical shell collapse transition can be induced via a temperature stimulus at the LCST, the PNIPAM block is also sensitive to the composition of the surrounding solvent. Although water and organic cosolvents individually act as good solvents to the PNIPAM chain, mixtures of both act as a bad solvent. As a consequence, the transition temperature shifts as a function of the molar fraction of the cosolvent. For PNIPAM, well-known examples of cosolvents include simple alcohols such as methanol or ethanol as well as acetone. We demonstrate that the cononsolvency effect is transferable from solution to thin film systems. PMMA-b-PNIPAM films swollen in saturated water vapor show swelling and collapse upon exchange of the surrounding atmosphere to a mixed vapor of water and cosolvent. The film kinetics are investigated with a focus on time-of-flight neutron reflectometry (TOF-NR) and spectral reflectance techniques. In order to differentiate between water and cosolvent distributions along the films’ vertical, sequential experiments with deuterated and non-deuterated water and cosolvent are performed. Complementary FTIR measurements reveal the hydration and cosolvent exchange process at the PNIPAM amide and alkyl functional groups.
Speaker: Christina Geiger (Technical University of Munich, Chair of Functional Materials)
• 4:00 PM
Kinetics of Mesoglobule Disintegration in Aqueous Poly(N-isopropylacrylamide) Solutions Following Pressure Jumps 15m
Stimuli-responsive polymers in aqueous solution form mesoglobules in the two-phase region of the temperature-pressure phase diagram. While the formation of mesoglobules has been amply studied [1], their dissolution and the associated structural changes are hardly explored. To elucidate the kinetics of chain swelling and mesoglobule disintegration in a semi-dilute aqueous solution of the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), we use in situ, real-time (50 ms – 1500 s) small-angle neutron scattering (SANS) at instrument D11, ILL. The coexistence line is crossed by applying a fast pressure jump from the two-phase to the one-phase state, and the target pressure is varied. Two limiting mechanisms are identified: 1) the release of single polymers from the surface of the mesoglobules, leading to a semi-dilute solution; 2) continuous swelling of the mesoglobules due to the uptake of water until the entire system is spanned, resulting in a semi-dilute solution. The first mechanism is dominant when the pressure jumps are carried out in the low-pressure regime and when the jumps are shallow. The second mechanism is encountered for deep jumps in the low-pressure regime and for all target pressures in the high-pressure regime.
Speaker: Alfons Schulte (University of Central Florida, Department of Physics and College of Optics and Photonics)
• 4:15 PM
Conducting polymer infiltration in porous cellulose thin films 15m
Cellulose nanofibrils (CNF) have proven their strengths in conductive and transparent films. A promising route for fabricating porous CNF films on a large scale is spray deposition using water-based technologies; the resulting porous CNF templates are excellent candidates for infiltration with conductive polymers for functionalization. We used poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS), widely applied in organic photovoltaics and electronics, to functionalize the CNF template. We studied the infiltration, the resulting structural rearrangement within the thin CNF film of 400 nm thickness, and their behavior under cyclic humidity changes by grazing incidence small-angle neutron scattering. We resolve in situ reversible morphological rearrangements within pristine CNF thin films under cyclic humidification, which might be attributed to voids or coiling. When infiltrating PEDOT:PSS, morphological changes within the film are inhibited because the polymer completely fills any porous structure within the thin film during cyclic humidification. The CNF/PEDOT:PSS composite obtained by infiltration rather shows a swelling process of the PEDOT:PSS component in the film. This behavior is reversible over at least two humidification cycles. As humidity is present in many device applications and during the processing and fabrication of conductive CNF composites, our results help to understand the humidity’s nanoscale impact on the meso- and macroscale in a device application.
Speaker: Stephan Roth (DESY / KTH)
• 3:30 PM 4:00 PM
DN2020: Instrumentation: Part 2/2
Convener: Frank Schreiber (Uni Tübingen, Angewandte Physik)
• 3:30 PM
Perspectives for accelerator based neutron sources - The HBS project 15m
Neutrons can be produced by fission in nuclear reactors, spallation using high-power proton accelerators, and nuclear capture reactions with low-energy proton accelerators. While the first two techniques are used very successfully in Europe, the latter option has only recently gained greater interest. Using high-current, low-energy proton beams bombarding a metal target, a neutron flux comparable with current neutron sources is accessible.
In the HBS project a scalable accelerator-driven neutron source optimized for scattering and neutron analytics is being developed. The whole chain ranging from the accelerator to the target / moderator / shielding assembly and the neutron optics is optimized to the needs of the neutron experiments. This approach makes the HBS very efficient, enabling competitive neutron fluxes at the sample position, equivalent to or better than those of existing sources. Due to the scalability in accelerator power, the source can vary from a low-power pulsed neutron source with an average power at the target of a few kW to a high-performance neutron source with ~100 kW average power serving as a full-fledged national neutron source.
The baseline specification of the HBS is a high current low energy proton accelerator to drive a 100 kW neutron source serving up to 3 independent target stations with up to 6 individual instruments at each station for experiments. We will describe the current status of the project and its perspective within the European landscape of neutron sources.
Speaker: Thomas Gutberlet (Forschungszentrum Jülich GmbH)
• 3:45 PM
Instrumentation at a compact accelerator-based neutron source 15m
Compact accelerator-based neutron sources (CANS) produce neutrons by low-energy nuclear reactions well below the spallation threshold, making them a cost-efficient and effective alternative to spallation- and reactor-based neutron sources. Such low-energy (p, n) reactions produce fewer and lower-energy byproducts, thus significantly reducing the radiation level. This allows the construction of a very compact target / moderator / reflector (TMR) unit with a thermal and cryogenic moderator placed close to the target, providing a high phase space density or, in other words, a high-brilliance neutron source. The compact design and the low radiation level allow the placement of optical elements close to the moderator surface, e.g. neutron guides or choppers, thus allowing the extraction of large phase space volumes with a large brilliance transfer to the sample. The instrumentation at a high-power CANS with a proton beam power in the range of 100 kW, which we investigate in the framework of the HBS project, can be competitive with instruments at spallation sources of comparable beam power and with currently operated research reactors.
At the DN2020, I will present the potential a high-power CANS offers for the design of different instruments, e.g. reflectometers, SANS instruments, or spectrometers.
Speaker: Paul Zakalek (JCNS, Forschungszentrum Jülich GmbH)
• 3:30 PM 5:00 PM
DN2020: Materials: Part 1/1
Convener: Andreas Meyer (German Aerospace Center)
• 3:30 PM
Structure-dynamics relation in Zr-Ti melts 15m
The early transition metals Zirconium and Titanium show very similar chemical and structural properties. The binary Zr-Ti alloys compose a completely miscible system, which is also a boundary system for many bulk metallic glasses (BMGs) and stable quasicrystals. However, the detailed formation mechanisms of these special structures remain largely unknown and are often speculative, since for these chemically reactive, high melting temperature alloys accurate knowledge of melt properties is largely missing. Using containerless levitation techniques, we successfully investigated the microscopic structure and dynamics of the Zr-Ti melts over a large temperature range. Neutron and synchrotron diffraction experiments reveal a melt structure exhibiting barely any chemical short-range order. On the Zr-rich side, the Ti diffusivity obtained by quasi-elastic neutron scattering decreases with increasing Ti content. Such a concentration dependent atomic dynamics can be fully understood according to the prediction of the Mode-Coupling Theory (MCT) on a binary hard-sphere mixture with a small size disparity. Our results indicate the dominant impact of the topological structure on the atomic motion in the Zr-Ti melts.
Speaker: Fan Yang (Deutsches Zentrum für Luft- und Raumfahrt)
• 3:45 PM
Dynamics of porous and amorphous magnesium borohydride to understand solid state Mg-ion-conductors 15m
Rechargeable solid-state magnesium batteries are considered for high energy density storage and usage in mobile applications as well as to store energy from intermittent energy sources. Recently, magnesium borohydride, Mg(BH$_4$)$_2$, was found to be an effective precursor for solid-state Mg-ion conductors. The mechanochemical synthesis tends to form amorphous Mg(BH$_4$)$_2$ and it has been postulated that amorphous Mg(BH$_4$)$_2$ is increasing the conductivity in the Mg-ion conductors. Quasi-elastic neutron scattering (QENS) studies were employed to investigate the dynamics of porous and amorphous Mg(BH$_4$)$_2$. In general, QENS is needed to understand the local structure and dynamics in the precursor at different temperatures as well as at different energy- and momentum transfers. The results show that the low energy excitation spectrum in Mg(BH$_4$)$_2$ is strongly dependent on the local structure as can be seen by the comparison of as-received γ-Mg(BH$_4$)$_2$ and ball milled, amorphous compound. While as-received γ-Mg(BH$_4$)$_2$ shows almost no quasi-elastic scattering at 310 K, the ball milled version displays a significantly different low energy excitation spectrum and a higher rotational mobility of the [BH$_4$] units. A high rotational mobility is proposed to be a fundamental necessity for high Mg-ion conductivity. This is supported by an almost two orders of magnitude higher conductivity in the ball milled sample compared to the as-received γ-Mg(BH$_4$)$_2$ at 353 K.
Speaker: Wiebke Lohstroh (Heinz Maier-Leibnitz Zentrum (MLZ), Technische Universität München)
• 4:00 PM
Phonon renormalization explained by electron-momentum dependent electron-phonon coupling 15m
Electron-phonon coupling, i.e., the scattering of lattice vibrations by electrons and vice versa, is a common phenomenon in solids and can lead to emergent ground states such as superconductivity and charge-density wave order. Signatures of strong electron-phonon coupling, e.g. softening and broadening of phonons on cooling, are typically assigned to the presence of nested parts of the Fermi surface or lattice anharmonicity. Here, we unravel a third scenario in the seminal strong-coupling material YNi$_2$B$_2$C. The three-dimensional Fermi surface features a large value of the electronic joint density-of-states but only for a particular value of the electron out-of-plane momentum $k_z$. Using a combination of inelastic neutron scattering and angle-resolved photoemission spectroscopy analyzed based on ab-initio lattice dynamical calculations, we show that this peak of the electronic joint density-of-states as a function of $k_z$ is likely the origin of the spectacular phonon renormalization in YNi$_2$B$_2$C. Thus, our study rationalizes strong phonon anomalies in the absence of both classic, i.e. phonon-momentum dependent, nesting and anharmonicity.
Speaker: Frank Weber (Karlsruhe Institute of Technology)
• 4:15 PM
Targeted use of residual stresses in electric sheet to increase energy efficiency 15m
Electrical steel sheets are used in electric drives to guide the magnetic field and their efficiency strongly depends on energy losses during the reversal of magnetization. The energy loss is coupled to the mobility of the magnetic domains, which is negatively affected by stress caused during the manufacturing process [1, 2].
Neutron grating interferometry (nGI) makes it possible to probe bulk local magnetic properties in samples of technically relevant dimensions, which is not possible with most other techniques, by tracking the amount of ultra-small-angle neutron scattering inside a sample [3]. The dark-field image (DFI) is related to the distribution and size of the magnetic domains inside a sample, which serve as possible scattering centers, and allows tracking the degradation of magnetic domain wall mobility caused by stress.
In this project we use the degradation of the magnetic domains by targeted stress to actively guide the magnetic field, allowing us to build more efficient electrical drives. We will give an overview of the results achieved in flux guidance using various embossing strategies. Moreover, we present an outlook on future experiments.
This project is a collaboration with the utg (TUM) and IEM (RWTH Aachen) as part of the DFG priority program SPP2013
[1] H. Weiss et al., J. Magn. Magn. Mater. 474, 643 – 653 (2018)
[2] A. Moses, IEEE Trans. Magn, Vol. 15, 1575-1579 (1979)
[3] C. Grünzweig, PhD thesis (2009)
Speaker: Tobias Neuwirth
• 4:30 PM
Revealing Anion Order in Holmium Hydride Oxide HoHO by Neutron Diffraction 15m
Heteroanionic hydrides are an emerging class of compounds with representatives showing ionic conductivity [1] or catalytic activity [2]. For holmium hydride oxide HoHO, the disordered CaF$_2$ type structure was assigned and confirmed by powder neutron diffraction [2]. However, the analysis showed a deviation from the 1:1:1 composition: REH$_{2+x}$O$_{1-x}$. This demands the occupation of either octahedral interstices (H-rich) or the formation of defects (O-rich compound), and the differentiation and quantification of these species (H, O, voids) requires the combination of different methods. As the anionic ordering influences the ionic conductivity substantially [3], we conducted neutron diffraction measurements on both HoHO and the deuteride HoDO.
In contrast to the previous reports, both compounds crystallize in an ordered CaF$_2$ substructure with space group F-43m (Heusler-LiAlSi type; a(HoHO) = 5.27550(13) Å, a(HoDO) = 5.27394(8) Å) with no significant underoccupation, mixing of sites, or occupation of the octahedral interstice. They are the first ionic substances to crystallize in this structure type, which is usually observed for metallic half-Heusler phases. Furthermore, HoHO shows an unusual resistance towards air, as it decomposes only above 600 K, independent of O$_2$ in the atmosphere.
1 K. Fukui et al., Nat. Commun. 2019, 10, 2578.
2 H. Yamashita et al., J. Am. Chem. Soc. 2018, 140, 11170.
3 H. Ubukata et al., Chem. Mat. 2019, 31, 7360.
Speaker: Nicolas Zapp (Universität Leipzig)
• 4:45 PM
Polymorphic phase transition in liquid and supercritical carbon dioxide 15m
Thermal density fluctuations of supercritical (SC) CO$_2$ were explored using small-angle neutron scattering (SANS), whose amplitude (susceptibility) and correlation length show the expected maximum at the Widom line. The susceptibility is in excellent agreement with the values evaluated on the basis of mass density measurements. A surprising observation is droplet formation above the gas-liquid line and between 20 and 60 bar above the Widom line, the corresponding borderline identified as the Frenkel line. The droplets start to form spheres of constant radius of about 45 Å and transform into rods and globules at higher pressure. The droplet formation represents a liquid-liquid (polymorphic) phase transition of same composition but different number density, whose difference defines its order parameter. Polymorphism in CO$_2$ is a new phenomenon; it characterizes the gas-like to liquid-like transition in SC fluids and might be of particular interest for better understanding polymorphism, since CO$_2$ represents a “simple” van der Waals liquid in contrast to water, which is the most widely studied liquid showing polymorphism in its supercooled state.
This work has been published in:
Phys. Rev. Lett. 120 (2018) 145701.
Scientific Reports (2020) 10:11861. doi.org/10.1038/s41598-020-68451-y.
Speaker: Dr Vitaliy Pipich (Forschungszentrum Jülich GmbH, Jülich Centre for Neutron Science (JCNS) at Heinz Maier- Leibnitz Zentrum (MLZ))
• 4:00 PM 5:00 PM
DN2020: Life Science/ Biology: Part 1/1
Convener: Frank Schreiber (Uni Tübingen, Angewandte Physik)
• 4:00 PM
Failure of the Zimm model: Thermal unfolding of Ribonuclease A 15m
Disordered regions as found in intrinsically disordered proteins (IDP) or during protein folding define response times to stimuli and protein folding times. Neutron spin-echo spectroscopy is a powerful tool to directly access collective motions of the unfolded chain and to observe conformational relaxations. During thermal unfolding of native Ribonuclease A we examine the structure and dynamics of the disordered state within a two-state transition model using polymer models including internal friction. The presence of 4 disulfide bonds alters the disordered configuration to a more compact configuration, defined by the additional links, compared to a Gaussian chain. The dynamics of the disordered chain is described by Zimm dynamics with internal friction between neighboring amino acids. The mode structure is not changed by the additional links, but relaxation times are dominated by mode-independent internal friction. Internal friction relaxation times show an Arrhenius-like behavior. The dominating internal friction suppresses the characteristics of the Zimm dynamics and suggests that the characteristic motions correspond to elastic overdamped modes similar to motions observed for folded proteins.
Speaker: Ralf Biehl (Forschungszentrum Jülich)
• 4:15 PM
Protein Short-Time Diffusion in a Naturally Crowded Environment 15m
Macromolecular crowding, i.e. the presence of macromolecules at high volume fractions, affects reaction rates and transport processes in the cell. For reliable quantitative models of cellular pathways, the mobility of individual proteins is thus a key information. Often, the protein mobility is modeled by the self-diffusion of colloidal systems. The underlying assumption that neither the shape and size of proteins nor the polydisperse nature of the cytosol matters, has not been checked experimentally so far.
Here, we present a combined experimental-simulational study on the mobility of tracer proteins in cellular lysate [1]. Using quasi-elastic neutron backscattering, we study the mobility of immunoglobulin in deuterated cellular lysate from E. coli. Varying the mixing ratio and volume fraction of protein and lysate, we observe that the immunoglobulin mobility depends on the total volume fraction only. Using Stokesian dynamics simulations, we calculate the mobility of tracers in a model system for the lysate. In the polydisperse lysate, proteins of average size are indeed slowed down similarly to a monodisperse solution of the same volume fraction, whereas larger/smaller proteins diffuse slower/faster, respectively. As immunoglobulin is close to the average size, we obtain a consistent picture of the protein mobility in a polydisperse cell-like environment, which is promising for a future quantitative understanding of reaction pathways.
[1] Grimaldo et al. JPCL 2019, 10, 1709
Speaker: Felix Roosen-Runge (Faculty of Health and Society, Malmö University, Sweden)
• 4:30 PM
Membrane stiffness and interaction of lipids with myelin basic protein in native and multiple sclerosis diseased myelin mimetic 15m
A major component of the saltatory nerve signal conduction is the multilamellar myelin membrane around axons. In demyelinating diseases like multiple sclerosis, this membrane is damaged which leads to severe problems in nerve conduction. In literature different values for the lipid composition of healthy myelin sheath and myelin in the condition of experimental autoimmune encephalomyelitis - the standard animal model for multiple sclerosis - have been found. In this work we try to elucidate the interaction mechanism of myelin basic protein - the structural protein responsible for the cohesion of the cytoplasmic leaflets of the myelin sheath - with membranes mimicking both compositions. As samples we use unilamellar vesicles and supported bilayer systems. With neutron and x-ray small angle scattering methods combined with cryo-TEM we can follow the rapid aggregation which leads to a slow process in which different structures are formed depending on the lipid composition. This structural information can be associated with the bending rigidity of the respective membrane measured with Neutron Spin Echo. Neutron reflectometry gives insights on how the interaction mechanism between membrane and protein functions and reveals how modified membranes are destabilized by the protein.
Speaker: Benjamin Krugmann (Forschungszentrum Jülich - JCNS 1)
• 4:45 PM
Following the diffusive processes during a non-classical protein crystallization via neutron spectroscopy 15m
Following dynamics in kinetically changing samples is a major challenge. With recent developments of analysis frameworks, accessing the short-time self-diffusive properties of protein solutions by measuring specific energy transfers (fixed window scans, FWS) via neutron backscattering, kinetically changing samples can be investigated. More detailed information (internal dynamics and the immobile fraction of the proteins) can be extracted from full QENS spectra obtained with a floating average at a lower kinetic time resolution. The immobile fraction, determined by multi-dimensional fits, can be assigned to proteins in a gel-like state or in crystals [1].
Here, we discuss the results of a study performed during crystallization. CdCl$_2$ induces a non-classical crystallization process [2,3] of $\beta$-lactoglobulin (BLG) with a metastable intermediate phase. We investigated the short-time collective and self-diffusion of BLG by neutron spin-echo (IN11), FWS and QENS (IN16b), respectively, of the crystallization process for different sample conditions. Combining the different results, a consistent picture of the process can be drawn, which differs significantly from classical BLG crystallization induced by ZnCl$_2$ [1]. This implies a strong influence of seemingly subtle cation-specific effects on protein crystallization.
[1] C. Beck, et al., Cryst. Growth Des. 2019
[2] A. Sauter, et al., J. Am. Chem. Soc. 2015
[3] A. Sauter, et.al., Faraday Discuss. 2015
Speaker: Christian Beck (Institut Laue Langevin)
• 4:30 PM 5:00 PM
DN2020: Digitalization and Machine Learning: Part 1/1
Convener: Regine von Klitzing (TU Darmstadt)
• 4:30 PM
Towards Reflectivity profile inversion through Artificial Neural Networks 15m
The goal of Specular Neutron and X-ray Reflectometry is to infer a material's Scattering Length Density (SLD) profile from experimental reflectivity curves. This talk will focus on describing an original approach to this ill-posed, non-invertible problem which involves the use of Artificial Neural Networks (ANN). In particular, the numerical experiments to be described deal with large data sets of simulated reflectivity curves and SLD profiles, whose aim is to assess the applicability of Data Science and Machine Learning technology to the analysis of data generated at large scale facilities. In fact, under certain circumstances, properly trained Deep Neural Networks are capable of correctly recovering plausible SLD profiles when presented with never-seen-before simulated reflectivity curves. A proper inclusion of such an approach within current data workflows would offer two main advantages over traditional fitting methods when dealing with real experiments, namely, 1. no prior assumptions about the sample physical model are required and 2. the times-to-solution could be shrunk by orders of magnitude, enabling faster batch analyses for large datasets.
Speaker: Juan Manuel Carmona Loaiza (Scientific Computing Group; JCNS am MLZ; FZ-Jülich)
• 4:45 PM
How to properly #opendata? 15m
Within the last decade, neutron instrumentation has improved in many ways. Owing to the drastic increase in the amount of data taken, we no longer present original datasets. Published figures depend on many independent parameters, such as binning size, data reduction algorithms, and instrumental corrections. It is clear that in order to keep our data useful we need to change the way we treat it.
I will present the detailed process of opening neutron data, touching on several topics:
1. Publishing raw data: open access repositories (like figshare) vs. facility repositories (data.ill.eu or SciCatProject)
2. Publishing evaluation scripts: keeping scripts accessible for hundreds of years, Docker images for easy data evaluation.
3. Versioning and citability of your code: how to cite GitHub projects, sharing code within the community, and disseminating your research with an open-code approach.
4. Transformation of datasets to open education resources.
5. Data journals: future of data sharing.
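As a concrete illustration of point 3, a minimal `CITATION.cff` file placed in a repository makes the code citable in a machine-readable way. This is a hypothetical sketch, not material from the talk; the title, author, version, and DOI are placeholders:

```yaml
# CITATION.cff — machine-readable citation metadata (Citation File Format)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "neutron-data-evaluation"   # placeholder project name
authors:
  - family-names: "Doe"            # placeholder author
    given-names: "Jane"
version: "1.0.0"
doi: "10.5281/zenodo.0000000"      # placeholder DOI
date-released: "2020-09-01"
```

GitHub can surface such a file as a citation hint for the repository, and archiving a tagged release on a service like Zenodo is one common way to mint the DOI referenced above.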
Opening my research helped me to stay organized and expanded my collaboration network. I hope to convince you of the value of #opendata as well.
1: McKiernan, Erin (2020): Connections between open scholarship practices. figshare. Figure. https://doi.org/10.6084/m9.figshare.12592283.v2
Speaker: Petr Čermák (MGML, Charles University)
• 5:00 PM 5:30 PM
DN2020: Closing, Announcement of new KFN
Convener: Christine Papadakis (Technische Universität München, Physik-Department, Fachgebiet Physik weicher Materie)
|
|
# Dissecting a tetrahedron into orthoschemes
Hey,
Is there a way to dissect any tetrahedron into a finite number of orthoschemes?
I know that for a tetrahedron which only has acute angles, one can take the center of the inscribed sphere, project it onto all the faces and edges, and connect it with the vertices to get the orthoschemes. This however does not work when the tetrahedron is allowed to have obtuse angles, since the projection of the center of the inscribed sphere onto the plane containing a face, for instance, may fall outside of the tetrahedron.
Thanks
-
Yes, this is known (12 is always enough). Interestingly, in higher dimension it is open whether every simplex in $\Bbb R^d$ can be dissected into finitely many orthoschemes (also called path-simplices). This is called Hadwiger's conjecture. See this survey for results and refs to proofs of the conjecture for $d \le 5$.
P.S. In 1993, Tschirpke showed that 12,598,800 orthoschemes suffice in $\Bbb R^5$.
-
Yet another "Hadwiger conjecture" to add to the ones about clique minors and covering convex bodies with 2^n smaller homothetic bodies? We need more distinctive nomenclature. – David Eppstein May 12 '10 at 23:34
There is yet another "Hadwiger's conjecture" saying that every two polytopes with the same volume and Hadwiger invariants are scissor congruent (generalizing Sydler's thm in $\Bbb R^3$ and $\Bbb R^4$ on the volume and Dehn invariant). Still, it is nothing compared to en.wikipedia.org/wiki/Erdos_conjecture – Igor Pak May 12 '10 at 23:39
Hey, Thanks for the response. I read through the paper and unfortunately the references containing the proof that a tetrahedra can be dissected into 12 orthoschemes are in German. Do you know of an English reference that has this material? Thanks – Opt Jun 3 '10 at 15:33
@Sid. I don't know. I would read Tschirpke's paper springerlink.com/content/l213l5l11jt82187 if you want to get the general idea. If you don't care about 12, but say 100 will work for you, this is a relatively easy exercise which I did some time ago. Hint: start with a "barycentric subdivision" from the inscribed sphere. That gives 24 orthoschemes, and works in many (but not all!) cases. Figure out what goes wrong, cut your simplex into two. Repeat. P.S. You can also use Google Translate which can handle .pdf files. – Igor Pak Jun 4 '10 at 4:22
|
|
# Introduction to the Physics of Massive and Mixed Neutrinos Paperback / softback
## Part of the Lecture Notes in Physics series
#### Description
Small neutrino masses are the first signs of new physics beyond the Standard Model of particle physics.
Since the first edition of this textbook appeared in 2010, the Nobel Prize has been awarded "for the discovery of neutrino oscillations, which shows that neutrinos have mass". The measurement of the small neutrino mixing angle $\theta_{13}$ in 2012 launched the precision stage of the investigation of neutrino oscillations.
This measurement now allows such fundamental problems as the three-neutrino mass spectrum - is it normal or inverted? - and the $CP$ violation in the lepton sector to be tackled. In order to understand the origin of small neutrino masses, it remains crucial to reveal the nature of neutrinos with definite masses: are they Dirac neutrinos possessing a conserved lepton number, which distinguishes neutrinos and antineutrinos, or are they Majorana neutrinos with identical neutrinos and antineutrinos?
Experiments searching for the neutrinoless double beta decay are presently under way to answer this fundamental question. The second edition of this book comprehensively discusses all these important recent developments.
Based on numerous lectures given by the author, a pioneer of modern neutrino physics (recipient of the Bruno Pontecorvo Prize 2002), at different institutions and schools, it offers a gentle yet detailed introduction to the physics of massive and mixed neutrinos that prepares graduate students and young researchers entering the field for the exciting years ahead in neutrino physics.
## Also by Samoil Bilenky
|
|
# Counting trails in a triangular grid
A triangular grid has $N$ vertices, labeled from 1 to $N$. Two vertices $i$ and $j$ are adjacent if and only if $|i-j|=1$ or $|i-j|=2$. See the figure below for the case $N = 7$.
How many trails are there from $1$ to $N$ in this graph? A trail is allowed to visit a vertex more than once, but it cannot travel along the same edge twice.
I wrote a program to count the trails, and I obtained the following results for $1 \le N \le 17$.
$$1, 1, 2, 4, 9, 23, 62, 174, 497, 1433, 4150, 12044, 34989, 101695, 295642, 859566, 2499277$$
This sequence is not in the OEIS, but Superseeker reports that the sequence satisfies the fourth-order linear recurrence
$$2 a(N) + 3 a(N + 1) - a(N + 2) - 3 a(N + 3) + a(N + 4) = 0.$$
Question: Can anyone prove that this equation holds for all $N$?
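For reference, a brute-force count along these lines can be sketched as follows; this is a minimal re-implementation of such a counting program (not the original one), doing a depth-first search over unused edges:

```python
def count_trails(n):
    """Count trails from 1 to n in the graph on vertices 1..n where
    i and j are adjacent iff |i - j| is 1 or 2; a trail may revisit
    vertices but must not reuse an edge."""
    def dfs(v, used):
        total = 1 if v == n else 0  # the current trail counts if it ends at n
        for w in (v - 2, v - 1, v + 1, v + 2):
            if 1 <= w <= n:
                e = (min(v, w), max(v, w))  # undirected edge as sorted pair
                if e not in used:
                    total += dfs(w, used | {e})
        return total
    return dfs(1, frozenset())

print([count_trails(n) for n in range(1, 11)])
# [1, 1, 2, 4, 9, 23, 62, 174, 497, 1433]
```

Each distinct trail is counted exactly once, at the moment the search stands at vertex $n$ with that trail's edge set.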
-
+1: This is an interesting and very well presented question! – Douglas S. Stones May 7 '11 at 4:19
You can partition $a(N)$ into the number $a_1(N)$ of trails that use the edge $(N-1,N)$, the number $a_2(N)$ of trails that visit the vertex $N-1$ but don't use the edge $(N-1,N)$, and the number $a_3(N)$ of trails that don't visit the vertex $N-1$. Then $a_1(N)=a(N-1)$ and $a_3(N)=a(N-2)$, and it remains to be shown that $a_2(N)=2a(N-1)-3a(N-3)-2a(N-4)$. – joriki May 7 '11 at 8:19
Regard the same graph, but add an edge from $n-1$ to $n$ with weight $x$ (that is, a path passing through this edge contributes $x$ instead of 1).
The enumeration is clearly a linear polynomial in $x$, call it $a(n,x)=c_nx+d_n$ (and we are interested in $a(n,0)=d_n$).
By regarding the three possible edges for the last step, we find $a(1,x)=1$, $a(2,x)=1+x$ and
$$a(n,x)=a(n-2,1+2x)+a(n-1,x)+x\,a(n-1,1)$$
(If the last step passes through the ordinary edge from $n-1$ to $n$, you want a trail from 1 to $n-1$, but there is the ordinary edge from $n-2$ to $n-1$ and a parallel connection via $n$ that passes through the $x$ edge and is thus equivalent to a single edge of weight $x$, so we get $a(n-1,x)$.
If the last step passes through the $x$-weighted edge this gives a factor $x$, and you want a trail from $1$ to $n-1$ and now the parallel connection has weight 1 which gives $x\,a(n-1,1)$.
If the last step passes through the edge $n-2$ to $n$, then we search a trail to $n-2$ and now the parallel connection has the ordinary possibility $n-3$ to $n-2$ and two $x$-weighted possibilities $n-3$ to $n-1$ to $n$ to $n-1$ to $n-2$, in total this gives weight $2x+1$ and thus $a(n-2,2x+1)$.)
Now, plug in the linear polynomial and compare coefficients to get two linear recurrences for $c_n$ and $d_n$.
\begin{align} c_n&=2c_{n-2}+2c_{n-1}+d_{n-1}\\ d_n&=c_{n-2}+d_{n-2}+d_{n-1} \end{align}
Express $c_n$ with the second one, eliminate it from the first and you find the recurrence for $d_n$.
(Note that $c_n$ and $a(n,x)$ are solutions of the same recurrence.)
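For completeness, the elimination can be written out. The second recurrence gives $c_{n-2}=d_n-d_{n-1}-d_{n-2}$, hence also $c_{n-1}=d_{n+1}-d_n-d_{n-1}$ and $c_n=d_{n+2}-d_{n+1}-d_n$. Substituting these into the first recurrence yields
$$d_{n+2}-d_{n+1}-d_n = 2\,(d_n-d_{n-1}-d_{n-2}) + 2\,(d_{n+1}-d_n-d_{n-1}) + d_{n-1},$$
which simplifies to
$$d_{n+2} = 3d_{n+1} + d_n - 3d_{n-1} - 2d_{n-2},$$
and after the shift $n\mapsto N+2$ this is exactly the fourth-order recurrence $2a(N)+3a(N+1)-a(N+2)-3a(N+3)+a(N+4)=0$ from the question, with $a(N)=d_N$.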
-
If I do everything correctly, I get $d_n= 3 d_{n-1} + d_{n-2} - 3 d_{n-3} -2 d_{n-4}$ so that looks good. But I have to say, I don't understand what $c_n$ counts (and thereby I also don't understand the equation for $a(n,x)$ you write down). – Fabian May 7 '11 at 12:30
The first sentence describes what $a(n,x)$ counts, paths in a slightly modified weighted triangular grid. A priori, $c_n$ counts nothing, it is just the coefficient of $x$ in $a(n,x)$. – Phira May 7 '11 at 12:33
Same as Fabian: the calculation comes out right, but I don't understand where you get the equation for $a(n,x)$. I've been trying to add up paths for different cases of the final edges, but I never get anything remotely similar. – joriki May 7 '11 at 12:35
@joriki, @user9325: To make my question a bit more concrete: The first sentence indicates that $a(n,x)$ should be the same graph (with $n$ vertices) but the edge from $n-1$ to $n$ with weight $1+x$. So I agree with the results $a(1,x)$ and $a(2,x)$. For $a(3,x)$ I would expect $1+(1+x) = 2+ x$, but you have $2+3x$. – Fabian May 7 '11 at 12:43
@user9325: Very nice indeed! Thanks for the more detailed explanation. The part I was missing was that the weight of the additional edge counts the possibilities of getting to the penultimate vertex using new vertices and edges that weren't in the lower-order graph. That's cool! – joriki May 7 '11 at 13:05
This is not a new answer, just an attempt to slightly demystify user9325's very elegant answer to make it easier to understand and apply to other problems. Of course this is based on what I myself find easier to understand; others may prefer user9325's original formulation.
The crucial insight, in my view, is not the use of a variable weight and a polynomial (which serve as convenient bookkeeping devices), but that the problem becomes more tractable if we generalize it. This becomes apparent when we try a similar approach without this generalization: We might try to decompose $a(n)$ into two contributions corresponding to the two edges from $n-2$ and $n-1$ by which we can get to $n$, and in each case account for the new possibilities arising from the new vertices and edges. The contribution from $n-1$ is straightforward, but the contribution from $n-2$ causes a problem: We can now travel between $n-3$ and $n-2$ either directly or via $n-1$, and we can't just add a factor of $2$ to take this into account because there are trails using both of these possibilities. This is where the idea of an edge parallel to the final edge arises: Even though we're only interested in the final result without a parallel edge, the recurrence leads to parallel edges, so we need to include that possibility. We can do this without edge weights or polynomials by just counting the number $b(n)$ of trails that use the parallel edge separately from the number $a(n)$ of trails that don't. (I'm not saying we should; the polynomial, like a generating function, is an elegant and useful way to keep track of things; I'm just trying to emphasize that the polynomial isn't an essential part of the central idea of generalizing the original problem.)
Counting the number $a(n)$ of trails that don't use the parallel edge, we have a contribution $a(n-1)$ from trails ending with the normal edge from $n-1$, and a contribution $a(n-2)+b(n-2)$ from trails ending with the edge from $n-2$, which may ($b$) or may not ($a$) go via $n-1$:
$$a(n)=a(n-1)+a(n-2)+b(n-2)\;.$$
Counting the number $b(n)$ of trails that do use the parallel edge, we have a contribution $a(n-1)+b(n-1)$ from trails ending with the parallel edge, which may ($b$) or may not ($a$) go via $n$, a contribution $b(n-1)$ from trails ending with the normal edge from $n-1$, which have to go via $n$ (hence $b$), and a contribution $2b(n-2)$ from trails ending with the edge from $n-2$, which have to go via $n-1$ (hence $b$) and can use the normal edge from $n-1$ and the parallel edge in either order (hence the factor $2$):
$$b(n)=a(n-1)+b(n-1)+b(n-1)+2b(n-2)\;.$$
This is precisely user9325's result, with $a(n)=d_n$ and $b(n)=c_n$. There was a tad more work in counting the possibilities, but then we didn't have to compare coefficients.
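With the initial values $a(1)=1$, $a(2)=1$, $b(1)=0$, $b(2)=1$ (read off from $a(1,x)=1$ and $a(2,x)=1+x$), iterating the two recurrences is a quick numerical sanity check; a sketch, not part of the original answer:

```python
def trail_sequence(n_max):
    # a[n]: trails not using the parallel edge (the quantity asked for)
    # b[n]: trails that do use the parallel edge
    a = {1: 1, 2: 1}
    b = {1: 0, 2: 1}
    for n in range(3, n_max + 1):
        a[n] = a[n - 1] + a[n - 2] + b[n - 2]
        b[n] = a[n - 1] + 2 * b[n - 1] + 2 * b[n - 2]
    return [a[n] for n in range(1, n_max + 1)]

seq = trail_sequence(17)
print(seq)
# [1, 1, 2, 4, 9, 23, 62, 174, 497, 1433, 4150, 12044,
#  34989, 101695, 295642, 859566, 2499277]

# verify the fourth-order recurrence from the question
assert all(2 * seq[i] + 3 * seq[i + 1] - seq[i + 2]
           - 3 * seq[i + 3] + seq[i + 4] == 0
           for i in range(len(seq) - 4))
```

The output matches all 17 terms computed by the question's brute-force program.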
-
|
|
# [Resource Topic] 2019/1264: Resource-Restricted Cryptography: Revisiting MPC Bounds in the Proof-of-Work Era
Welcome to the resource topic for 2019/1264
Title:
Resource-Restricted Cryptography: Revisiting MPC Bounds in the Proof-of-Work Era
Authors: Juan Garay, Aggelos Kiayias, Rafail Ostrovsky, Giorgos Panagiotakos, Vassilis Zikas
Abstract:
Traditional bounds on synchronous Byzantine agreement (BA) and secure multi-party computation (MPC) establish that in the absence of a private correlated-randomness setup, such as a PKI, protocols can tolerate up to t<n/3 of the parties being malicious. The introduction of "Nakamoto-style" consensus, based on Proof-of-Work (PoW) blockchains, put forth a somewhat different flavor of BA, showing that even a majority of corrupted parties can be tolerated as long as the majority of the computation resources remain in honest hands. This assumption on honest majority of some resource was also extended to other resources such as stake, space, etc., upon which blockchains achieving Nakamoto-style consensus were built that violated the t<n/3 bound in terms of number of party corruptions.

The above state of affairs begs the question of whether the seeming mismatch is due to different goals and models, or whether the resource-restricting paradigm can be generically used to circumvent the n/3 lower bound. In this work we study this question and formally demonstrate how the above paradigm changes the rules of the game in cryptographic definitions. First, we abstract the core properties that the resource-restricting paradigm offers by means of a functionality wrapper, in the UC framework, which when applied to a standard point-to-point network restricts the ability (of the adversary) to send new messages. We show that such a wrapped network can be implemented using the resource-restricting paradigm—concretely, using PoWs and honest majority of computing power—and that the traditional t<n/3 impossibility results fail when the parties have access to such a network. Our construction is in the *fresh* Common Reference String (CRS) model—i.e., it assumes a CRS which becomes available to the parties at the same time as to the adversary.
We then present constructions for BA and MPC, which given access to such a network tolerate t<n/2 corruptions without assuming a private correlated randomness setup. We also show how to remove the freshness assumption from the CRS by leveraging the power of a random oracle. Our MPC protocol achieves the standard notion of MPC security, where parties might have dedicated roles, as is for example the case in Oblivious Transfer protocols. This is in contrast to existing solutions basing MPC on PoWs, which associate roles to pseudonyms but do not link these pseudonyms with the actual parties.
|
|
HAL : in2p3-00338288, version 1
arXiv : 0811.0692
We report on results from a Suzaku observation of the narrow-line Seyfert 1 NGC 4051. During our observation, large-amplitude rapid variability was seen, and the averaged 2-10 keV flux was $8.1\times10^{-12}$ erg s$^{-1}$ cm$^{-2}$, which is several times lower than the historical average. The X-ray spectrum hardens when the source flux becomes lower, confirming the trend of spectral variability known for many Seyfert 1 galaxies. The broad-band averaged spectrum and spectra in high- and low-flux intervals were analyzed. The spectra were first fitted with a model consisting of a power-law component, a reflection continuum originating in cold matter, a blackbody component, two zones of ionized absorber, and several Gaussian emission lines. The amount of reflection is rather large ($R$ $\sim$ 7, where $R$ $=$ 1 corresponds to reflection by an infinite slab), while the equivalent width of the Fe-K line at 6.4 keV is modest (140 eV) for the averaged spectrum. We then modeled the overall spectra by introducing partial covering for the power-law component and reflection continuum independently. The column density for the former is $1\times10^{23}$ cm$^{-2}$, while it is fixed at $1\times10^{24}$ cm$^{-2}$ for the latter. By comparing the spectra in different flux states, we could identify the causes of spectral variability.
|
|
# Prove decidable
L={⟨M⟩: M is a DFA and for each string in L(M) the number of 1s is more than or equal to the number of 0s }
T = "On input ⟨M⟩, where M is an encoded DFA:"
1. Construct another DFA D such that L(D)={x|x has more or equal 1s than 0s}
2. For each input x
a. if (x is accepted by DFA M) {
       if (x is accepted by D) {
           "accepted" and return;
       }
   }
3. "rejected"
Is this proof true?
• How do you construct $D$? Jul 4 at 15:32
No, this proof is not correct. You can't iterate through all inputs $$x\in \Sigma^*$$ since it would take you "infinite time".
The correct way to do this is to construct the complement of $$D$$ (as a pushdown automaton, as @Steven mentioned — note that $$L(D)$$ is not regular, but it is a deterministic context-free language, so its complement is also context-free), which we will call $$D^c$$, then construct the intersection PDA $$D^c\cap M$$ (notice that this can be done since $$M$$ is a DFA), and test whether its language is empty. If it is, you can be sure that $$L(M)\subseteq L(D)$$, hence every word in $$L(M)$$ has at least as many $$1$$'s as $$0$$'s.
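For concreteness, here is a self-contained sketch of a different, graph-based decision procedure (not the PDA construction; the dict encoding of the DFA and the function name are my own choices for illustration). Weight each '0' as +1 and each '1' as −1; then ⟨M⟩ should be rejected exactly when some walk from the start state to an accepting state has positive total weight, which can be decided with a Bellman-Ford-style longest-walk computation with positive-cycle detection:

```python
from collections import deque

def accepts_word_with_more_zeros(dfa):
    """Return True iff some word in L(M) has strictly more 0s than 1s.

    Idea: weight '0' as +1 and '1' as -1; such a word exists iff some walk
    from the start state to an accepting state has positive total weight.
    """
    start, accept, delta = dfa["start"], set(dfa["accept"]), dfa["delta"]
    states = dfa["states"]

    # states reachable from the start state
    reach, dq = {start}, deque([start])
    while dq:
        q = dq.popleft()
        for c in "01":
            r = delta[(q, c)]
            if r not in reach:
                reach.add(r)
                dq.append(r)

    # states from which some accepting state is reachable
    corea, changed = set(accept), True
    while changed:
        changed = False
        for q in states:
            if q not in corea and any(delta[(q, c)] in corea for c in "01"):
                corea.add(q)
                changed = True

    useful = reach & corea
    if start not in useful:
        return False  # L(M) is empty

    # Bellman-Ford-style longest-walk computation restricted to useful states.
    # If relaxations survive len(useful) + 1 full rounds, a positive cycle lies
    # on a start-to-accept walk, so the surplus of 0s is unbounded.
    NEG = float("-inf")
    dist = {q: NEG for q in states}
    dist[start] = 0
    updated = True
    for _ in range(len(useful) + 1):
        updated = False
        for q in useful:
            if dist[q] == NEG:
                continue
            for c, w in (("0", 1), ("1", -1)):
                r = delta[(q, c)]
                if r in useful and dist[q] + w > dist[r]:
                    dist[r] = dist[q] + w
                    updated = True
        if not updated:
            break
    if updated:
        return True  # positive cycle: words with arbitrarily many surplus 0s
    return any(dist[q] > 0 for q in accept)

# demo: for the DFA accepting 1*, every accepted word has at least as many 1s as 0s
dfa_ones = {
    "states": {"s", "d"}, "start": "s", "accept": {"s"},
    "delta": {("s", "1"): "s", ("s", "0"): "d",
              ("d", "0"): "d", ("d", "1"): "d"},
}
assert accepts_word_with_more_zeros(dfa_ones) is False
```

The decider T then accepts ⟨M⟩ exactly when this check returns False — no enumeration of inputs is needed.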
|
|
# Inequalities involving exponentials
I am having trouble solving this inequality in particular.
For $f(x)=6-2^x$ and $g(x)=4^x$
Solve in $\mathbb{R}$: $$(f+g)(x)<6$$
• Please include your own effort. – S.C.B. Feb 4 '17 at 15:37
Hint. Let $t=2^x$ and solve $6-t+t^2<6$. Can you take it from here?
• So, I was doing the variable substitution wrong; your hint helped me solve the rest of the exercise. The boundary case is $2^x = 1 \iff x = 0$, so the answer is $]-\infty, 0[$. Thank you for the help :) – GiuR Feb 4 '17 at 15:53
• @GiuR Well done! Your final result is correct. – Robert Z Feb 4 '17 at 15:55
\begin{align} (f+g)(x) &< 6 \iff \quad \text{using the definitions}\\ 6 - 2^x + 4^x &< 6 \iff \quad \text{subtracting $6$ from both sides} \\ -2^x + 4^x &< 0 \iff \quad \text{adding $2^x$ to both sides} \\ 4^x &< 2^x \iff \quad \text{splitting $4^x=(2^2)^x = 2^{2x} = (2^x)^2$} \\ 2^x 2^x &< 2^x \iff \quad \text{dividing both sides by $2^x \ne 0$} \\ 2^x &< 1 \iff \quad \text{property of exponential functions $b^x$} \\ x &< 0 \end{align}
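As a quick numeric spot-check of the chain above (not a proof — just evaluating both sides at sample points), one can verify that $(f+g)(x) < 6$ holds exactly when $x < 0$:

```python
# Numeric spot-check of the derivation: (f+g)(x) < 6 should hold exactly when x < 0.
def f(x):
    return 6 - 2 ** x

def g(x):
    return 4 ** x

for x in [-3, -1, -0.5, 0, 0.5, 1, 3]:
    assert (f(x) + g(x) < 6) == (x < 0)
```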
|
|
Ramification groups
Let $L/K$ be a Galois extension of number fields with Galois group $G$. Let $O_K$ and $O_L$ be the ring of algebraic integers of $K$ and $L$ respectively. Let $P\subseteq O_K$ be a prime. Let $Q\subseteq O_L$ be a prime lying over $P$.
The $n$-th ramification group is defined as $$E_n(Q|P)=\lbrace \sigma\in G:\sigma(x)\equiv x\text{ mod } Q^{n+1}\text{ for all } x\in O_L\rbrace$$ In particular, $n=0$ gives the inertia group. How to prove the following:
1. $E_n$ is a normal subgroup of $G$.
2. $\cap_nE_n=\lbrace 1\rbrace$
• Your $E_n$ is presented as the kernel of a map from the Galois group to $\text{Aut}(\mathcal{O}_L/Q^{n+1})$, so it's normal. One way to show 2. is to note that your group is obviously contained in the decomposition group, so can be computed locally, and then use the fact that $\mathcal{O}_L$ is complete (so that $\varprojlim \mathcal{O}_L/Q^{n+1}=\mathcal{O}_L$). – Alex Youcis Oct 4 '16 at 12:22
The first part follows by direct computation: if $\sigma \in E_n$ and $\tau \in G$, then $\tau\sigma\tau^{-1}(x) = \tau(\sigma(\tau^{-1}x)) \equiv \tau(\tau^{-1}x) = x \pmod{\tau(Q)^{n+1}}$, so $\tau E_n(Q|P)\tau^{-1} = E_n(\tau Q|P)$; in particular $E_n(Q|P)$ is normal in the decomposition group of $Q$, where $\tau(Q) = Q$. For the second part, let $\sigma \in \cap_n E_n$, and pick an arbitrary element $x \in O_L$. The given condition implies that $x - \sigma(x)$ is divisible by arbitrarily large powers of the prime $Q$; since $\cap_n Q^{n+1} = (0)$ by unique factorization of ideals, this forces $x = \sigma(x)$. Since $x$ was arbitrary, $\sigma$ acts trivially on $O_L$, and thus on the fraction field $L$, i.e. $\sigma = 1$.
|
|
# Including images in an R code chunk of R sweave using for loop
I am new to Sweave and LaTeX. I am basically using an if-else statement that should display a specific image (stored on my laptop) if a certain condition is satisfied. When I run this code in R, the output in the console is satisfactory. It is also fine when I use the "compile pdf" button at the top of an Rnw file.
However, I need to produce different reports for each row of a csv, so I use a separate run file where I loop the Rnw through each row of the csv and produce multiple reports (one for each row). This works fine for text based output. But, this does not display the images if I loop through it. Here's a sample of what I have written for this:
\begin{figure}
<<echo=FALSE, fig.show='asis', fig=T>>=
library(magick)
x <- 3
if (x >=0 && x <=2){
#should display image1
}else if (x>=3 && x<=4){
#should display image2
}
@
\end{figure}
To show the images, I tried using print, paste, knitr::include_graphics but none worked for me. I am not sure whether it is a latex problem or an R problem and hence unable to find an appropriate solution. Any help would be sincerely appreciated. If you require a sample of the run file I mentioned, I shall be glad to share it. Thank you!
This should work with knitr (not with Sweave) with both include_graphics and image_read. Avoid the Sweave chunk options and the figure environment when using knitr. I would use only include_graphics, because image_read cannot work with PDF images, and using magick only to load images is rather unnecessary (moreover, I hate packages that by default print a message that you must hide in dynamic reports).
\documentclass{article}
\begin{document}
<<setup, echo=FALSE, message=FALSE>>=
library(magick)
A <- "/usr/local/texlive/2019/texmf-dist/tex/latex/mwe/example-image-a.png"
B <- "/usr/local/texlive/2019/texmf-dist/tex/latex/mwe/example-image-b.png"
@
<<test1, echo=F, fig.align="center", fig.cap="My test1", out.width="50%">>=
x <- 3
if (x >= 0 && x <= 2) {
  knitr::include_graphics(A)
} else if (x >= 3 && x <= 4) {
  knitr::include_graphics(B)
}
@
\end{document}
• Hey Fran! So this solution worked. include_graphics worked with your chunk options, perhaps I was missing something last time. What also worked was library (OpenImageR). I used function readImage to import the images and imageShow to print the images in the output PDF. Thanks! Jan 5, 2021 at 14:10
|
|
# Local volatility SVI parametrization
In this paper Gatheral presents the following parametrization of the implied total variance $w(k,T) = \sigma_{BS}(k,T)^2T$ for each slice $k \mapsto w(k,T)$:
$$w(k) = a + b\{\rho (k-m) + \sqrt{(k-m)^2 + \sigma^2} \}.$$
As far as I understand it, for each expiry $T$ one will have to calibrate a set of five parameters $\{a,b,m,\rho,\sigma\}$.
On the other hand, I found the following article where, in appendix A, a calibrated volatility surface is presented. In their example there is an explicit dependence on $T$, so they end up with a single "simple" expression for the whole volatility surface.
Another thing I've noticed when reading articles about various parametrizations is that there seem to be some inconsistencies regarding implied total variance. Gatheral defines it as $\sigma_{BS}(k,T)^2T$, but I've seen other articles parametrize $\sigma_{BS}(k,T)^2$ or $\sigma_{BS}(k,T)$ instead.
Summarizing: my question is primarily whether one has to calibrate SVI for each expiry slice, or whether it is possible to parametrize the whole surface in a way such that the total number of parameters does not increase as more expiries are added.
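To make the per-slice structure concrete, here is a minimal stdlib-Python sketch of the raw SVI formula above; the parameter values are invented purely for illustration, and a real calibration would fit an independent 5-parameter set per expiry (e.g. by least squares against market quotes):

```python
import math

def svi_raw_total_variance(k, a, b, rho, m, sigma):
    """Raw SVI slice: w(k) = a + b*(rho*(k - m) + sqrt((k - m)^2 + sigma^2))."""
    return a + b * (rho * (k - m) + math.sqrt((k - m) ** 2 + sigma ** 2))

# one independent 5-parameter set per expiry slice (values are made up)
slice_params = {
    0.25: dict(a=0.02, b=0.40, rho=-0.6, m=0.0, sigma=0.10),
    1.00: dict(a=0.04, b=0.35, rho=-0.5, m=0.1, sigma=0.15),
}

# at k = m the square root collapses, so w(m) = a + b*sigma
w_3m_atm = svi_raw_total_variance(0.0, **slice_params[0.25])
```

Each additional expiry adds five more parameters, which is exactly the growth the question asks about.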
• The link to your "following article" is broken. It is better to write out explicitly the title and authors of any cited paper so that it is independent of any link. – Hans Feb 20 at 2:08
Gatheral and Jacquier discuss this issue in section 4 of the paper. Instead of using the raw parameterization of the SVI, they use the natural parameterization of the total implied variance (p. 61 of the published paper): $$w(k) = \Delta + \frac{\omega}{2} \left\{ 1 + \zeta \rho (k - \mu) + \sqrt{(\zeta (k-\mu) + \rho)^2 + (1-\rho^2)} \right\}$$
In order to fit the entire surface of the total implied variance, they propose the following generalization. To ensure that the fit is free of arbitrage, they define the surface in terms of the log-moneyness and the at-the-money implied total variance $\theta_t := \sigma_{BS}^2(0,t)t$. The Surface SVI then has the form (p. 63 of the published paper): $$w(k,\theta_t) = \frac{\theta_t}{2} \left\{ 1 + \rho \phi(\theta_t) k + \sqrt{(\phi(\theta_t) k + \rho)^2 + (1-\rho^2)} \right\}$$ where $\phi$ is a smooth function from $\mathbb{R}_{+}$ to $\mathbb{R}_{+}$ such that the limit $\lim_{t\rightarrow 0} \theta_t \phi(\theta_t)$ exists in $\mathbb{R}$. The parameters that you need to fit for the entire surface are therefore $\rho$ and whatever is needed to fit $\phi$. In practice, you need some interpolation to get $\theta_t$ because you almost never observe a log-moneyness of exactly 0.
The function $\phi$ and the parameters have to satisfy certain restrictions for the parameterization to be free of arbitrage. The paper discusses these at length.
Gatheral and Jacquier propose two possible $\phi$ functions: $$\phi(\theta) = \frac{1}{\lambda \theta} \left( 1 - \frac{1-e^{-\lambda\theta}}{\lambda \theta} \right)$$ which they call a Heston-like parameterization, and the power law $$\phi(\theta) = \eta \theta^{-\gamma}$$ A while back, I implemented the paper in MATLAB. In the end, I didn't use the codes, so they are not extensively tested. I have uploaded them to the file exchange; maybe they are helpful to you: http://ch.mathworks.com/matlabcentral/fileexchange/49962-gatherals-and-jacquier-s-arbitrage-free-svi-volatility-surfaces
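A minimal stdlib-Python sketch of the SSVI slice and the two $\phi$ choices above (function names are mine; the arbitrage constraints from the paper are not checked here). The point relevant to the question: $\rho$ plus the parameters of $\phi$ are the only fitted parameters, however many expiries the surface has, while $\theta_t$ is read off the market per expiry rather than fitted:

```python
import math

def phi_power_law(theta, eta, gamma):
    """Power-law phi(theta) = eta * theta^(-gamma)."""
    return eta * theta ** (-gamma)

def phi_heston_like(theta, lam):
    """Heston-like phi(theta) = (1/(lam*theta)) * (1 - (1 - exp(-lam*theta)) / (lam*theta))."""
    return (1.0 - (1.0 - math.exp(-lam * theta)) / (lam * theta)) / (lam * theta)

def ssvi_total_variance(k, theta, rho, phi_val):
    """SSVI total variance at log-moneyness k, given ATM total variance theta.

    phi_val is phi(theta); at k = 0 the expression collapses to theta.
    """
    return 0.5 * theta * (1.0 + rho * phi_val * k
                          + math.sqrt((phi_val * k + rho) ** 2 + 1.0 - rho ** 2))

# example: one (rho, eta, gamma) triple serves every slice of the surface
theta, rho = 0.04, -0.6  # theta interpolated from the market for this expiry
w = ssvi_total_variance(0.1, theta, rho, phi_power_law(theta, eta=1.0, gamma=0.5))
```

Adding expiries only adds market-observed $\theta_t$ values, not new fit parameters — the key contrast with the per-slice raw SVI.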
• May I ask you a small question about your code? When you fit the whole surface (SSVI), why do you recalibrate the slices afterwards? According to ( arxiv.org/pdf/1204.0646.pdf ) one could choose $\phi (\theta) = \frac{\eta}{\theta^\gamma(1+\theta)^{1-\gamma}}$, eq 4.5 on page 17. Given the constraints, this should result in a surface completely free of static arbitrage, or am I missing something? – math Aug 15 '15 at 11:44
|
|
# American Institute of Mathematical Sciences
2010, 13(3): 633-646. doi: 10.3934/dcdsb.2010.13.633
## Some new results on explicit traveling wave solutions of $K(m, n)$ equation
1 School of Mathematical Sciences, Peking University, Beijing 100871
Received May 2009 Revised December 2009 Published February 2010
In this paper, we investigate the traveling wave solutions of $K(m, n)$ equation $u_t+a(u^m)_{x}+(u^n)_{x x x}=0$ by using the bifurcation method and numerical simulation approach of dynamical systems. We obtain some new results as follows: (i) For $K(2, 2)$ equation, we extend the expressions of the smooth periodic wave solutions and obtain a new solution, the periodic-cusp wave solution. Further, we demonstrate that the periodic-cusp wave solution may become the peakon wave solution. (ii) For $K(3, 2)$ equation, we extend the expression of the elliptic smooth periodic wave solution and obtain a new solution, the elliptic periodic-blow-up solution. From the limit forms of the two solutions, we get other three types of new solutions, the smooth solitary wave solutions, the hyperbolic 1-blow-up solutions and the trigonometric periodic-blow-up solutions. (iii) For $K(4, 2)$ equation, we construct two new solutions, the 1-blow-up and 2-blow-up solutions.
Citation: Rui Liu. Some new results on explicit traveling wave solutions of $K(m, n)$ equation. Discrete & Continuous Dynamical Systems - B, 2010, 13 (3) : 633-646. doi: 10.3934/dcdsb.2010.13.633
|
|
Volume 17, issue 2 (2017)
Stable functorial decompositions of $F(\mathbb{R}^{n+1},j)^{+}\wedge_{\Sigma_j}X^{(j)}$
Jie Wu and Zihong Yuan
Algebraic & Geometric Topology 17 (2017) 895–915
Abstract
We first construct a functorial homotopy retract of $\Omega^{n+1}\Sigma^{n+1}X$ for each natural coalgebra-split sub-Hopf algebra of the tensor algebra. Then, by computing their homology, we find a collection of stable functorial homotopy retracts of $F(\mathbb{R}^{n+1},j)^{+}\wedge_{\Sigma_j}X^{(j)}$.
Keywords
Snaith splitting, iterated loop suspension, functorial homotopy decomposition, coalgebra-split sub-Hopf algebra
Mathematical Subject Classification 2010
Primary: 55P35
Secondary: 55P48, 55P65
|
|
References of "Goupil, M. J" in complete repository — showing results 1 to 20 of 28

Mixed modes in red giants: a window on stellar evolution
Mosser, B.; Benomar, O.; Belkacem, K. et al., in Astronomy and Astrophysics (2014), 572
Context. The detection of oscillations with a mixed character in subgiants and red giants allows us to probe the physical conditions in their cores.
Aims: With these mixed modes, we aim at determining seismic markers of stellar evolution.
Methods: Kepler asteroseismic data were selected to map various evolutionary stages and stellar masses. Seismic evolutionary tracks were then drawn with the combination of the frequency and period spacings.
Results: We measured the asymptotic period spacing for 1178 stars at various evolutionary stages. This allows us to monitor stellar evolution from the main sequence to the asymptotic giant branch and draw seismic evolutionary tracks. We present clear quantified asteroseismic definitions that characterize the change in the evolutionary stages, in particular the transition from the subgiant stage to the early red giant branch, and the end of the horizontal branch.
Conclusions: The seismic information is so precise that clear conclusions can be drawn independently of evolution models. The quantitative seismic information can now be used for stellar modeling, especially for studying the energy transport in the helium-burning core or for specifying the inner properties of stars entering the red or asymptotic giant branches. Modeling will also allow us to study stars that are identified to be in the helium-subflash stage, high-mass stars either arriving or quitting the secondary clump, or stars that could be in the blue-loop stage. Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/572/L5

VizieR Online Data Catalog: Mixed modes in red giants (Mosser+, 2014)
Mosser, B.; Benomar, O.; Belkacem, K. et al., in VizieR Online Data Catalog (2014), 357
Seismic global parameters of the stars listed in the paper. Each star is identified with its KIC number (Kepler Input Catalog). The asymptotic frequency and period spacing are derived from the fit of the radial and dipole oscillation modes. The stellar mass is derived from the seismic scaling relations. The evolutionary status is derived according to the location of the star in the DPi1 - Dnu diagram (Fig. 1) (1 data file).

Solar-like oscillations in distant stars as seen by CoRoT: the special case of HD 42618, a solar sister
Barban, C.; Deheuvels, S.; Goupil, M. J. et al., in Journal of Physics: Conference Series (2013), 440
We report the observations of a main-sequence star, HD 42618 (T_eff = 5765 K, G3V), by the space telescope CoRoT.
This is the closest star to the Sun ever observed by CoRoT in terms of its fundamental parameters. Using a preliminary version of CoRoT light curves of HD 42618, p modes are detected around 3.2 mHz associated to l = 0, 1 and 2 modes with a large spacing of 142 μHz. Various methods are then used to derive the mass and radius of this star (scaling relations from solar values as well as comparison between theoretical and observational frequencies), giving values in the range of (0.80 - 1.02) M_sun and (0.91 - 1.01) R_sun. A preliminary analysis of l = 0 and 1 modes also allows us to study the amount of penetrative convection at the base of the convective envelope.

Non-perturbative effect of rotation on dipolar mixed modes in red giant stars
Ouazzani, R.-M.; Goupil, M. J.; Dupret, Marc-Antoine et al., in Astronomy and Astrophysics (2013), 554
Context. The space missions CoRoT and Kepler provide high-quality data that allow us to test the transport of angular momentum in stars by the seismic determination of the internal rotation profile.
Aims: Our aim is to test the validity of seismic diagnostics for red giant rotation that are based on a perturbative method and to investigate the oscillation spectra when the validity does not hold.
Methods: We use a non-perturbative approach implemented in the ACOR code that accounts for the effect of rotation on pulsations and solves the pulsation eigenproblem directly for dipolar oscillation modes.
Aims: The G6 giant HR 2582 (HD 50890) was observed by CoRoT for approximately 55 days. We present here the analysis of its light curve and the characterisation of the star using different observables, such as its location in the Hertzsprung-Russell diagram and seismic observables.
Methods: Mode frequencies are extracted from the observed Fourier spectrum of the light curve. Numerical stellar models are then computed to determine the characteristics of the star (mass, age, etc.) from the comparison with observational constraints.
Results: We provide evidence for the presence of solar-like oscillations at low frequency, between 10 and 20 μHz, with a regular spacing of (1.7 ± 0.1) μHz between consecutive radial orders. Only radial modes are clearly visible. From the models compatible with the observational constraints used here, we find that HR 2582 (HD 50890) is a massive star with a mass in the range 3-5 M_sun, clearly above the red clump. It oscillates with rather low radial order (n = 5-12) modes. Its evolutionary stage cannot be determined with precision: the star could be on the ascending red giant branch (hydrogen shell burning) with an age of approximately 155 Myr, or in a later phase (helium burning). In order to obtain a reasonable helium amount, the metallicity of the star must be quite subsolar. Our best models are obtained with a mixing length significantly smaller than that obtained for the Sun with the same physical description (except overshoot). The amount of core overshoot during the main-sequence phase is found to be mild, of the order of 0.1 H_p.
Conclusions: HR 2582 (HD 50890) is an interesting case, as only a few massive stars can be observed due to their rapid evolution compared to less massive red giants. HR 2582 (HD 50890) is also one of the few cases that can be used to validate the scaling relations for massive red giant stars and their sensitivity to the physics of the star. The CoRoT space mission, launched on 2006 December 27, was developed and is operated by the CNES with participation of the Science Programs of ESA; ESA's RSSD, Austria, Belgium, Brazil, Germany and Spain.

Mixed modes in red-giant stars observed with CoRoT
Mosser, B.; Barban, C.; Montalban Iglesias, Josefa et al., in Astronomy and Astrophysics (2011), 532
Context. The CoRoT mission has provided thousands of red-giant light curves. The analysis of their solar-like oscillations allows us to characterize their stellar properties.
Aims: Up to now, the global seismic parameters of the pressure modes have been unable to distinguish red-clump giants from members of the red-giant branch. As recently done with Kepler red giants, we intend to analyze and use the so-called mixed modes to determine the evolutionary status of the red giants observed with CoRoT. We also aim at deriving different seismic characteristics depending on evolution.
Methods: The complete identification of the pressure eigenmodes provided by the red-giant universal oscillation pattern allows us to aim at the mixed modes surrounding the ℓ = 1 expected eigenfrequencies. A dedicated method based on the envelope autocorrelation function is proposed to analyze their period separation.
Results: We have identified the mixed-mode signature thanks to their pattern that is compatible with the asymptotic law of gravity modes. We have shown that, independent of any modeling, the g-mode spacings help to distinguish the evolutionary status of a red-giant star. We then report the different seismic and fundamental properties of the stars, depending on their evolutionary status. In particular, we show that high-mass stars of the secondary clump present very specific seismic properties. We emphasize that stars belonging to the clump were affected by significant mass loss. We also note significant population and/or evolution differences in the different fields observed by CoRoT. The CoRoT space mission, launched 2006 December 27, was developed and is operated by the CNES, with participation of the Science Programs of ESA, ESA's RSSD, Austria, Belgium, Brazil, Germany, and Spain. Appendix A is available in electronic form at http://www.aanda.org

The underlying physical meaning of the νmax - νc relation
Belkacem, K.; Goupil, M. J.; Dupret, Marc-Antoine et al., in Astronomy and Astrophysics (2011), 530
Asteroseismology of stars that exhibit solar-like oscillations is enjoying a growing interest with the wealth of observational results obtained with the CoRoT and Kepler missions. In this framework, scaling laws between asteroseismic quantities and stellar parameters are becoming essential tools to study a rich variety of stars. However, the underlying physical mechanisms of those scaling laws are still poorly known.
Our objective is to provide a theoretical basis for the scaling between the frequency of the maximum in the power spectrum (ν_max) of solar-like oscillations and the cut-off frequency (ν_c). Using the SoHO GOLF observations together with theoretical considerations, we first confirm that the maximum of the height in the oscillation power spectrum is determined by the so-called plateau of the damping rates. The physical origin of the plateau can be traced to the destabilizing effect of the Lagrangian perturbation of entropy in the upper-most layers, which becomes important when the modal period and the local thermal relaxation time-scale are comparable. Based on this analysis, we then find a linear relation between ν_max and ν_c, with a coefficient that depends on the ratio of the Mach number of the exciting turbulence to the third power to the mixing-length parameter.

Effect of stellar rotation on oscillation frequencies
Ouazzani, R. M.; Goupil, M. J.; Dupret, Marc-Antoine et al., in Astrophysics & Space Science (2010), 328
We investigate whether the rotational splittings of β Cephei stars can give some clue about the existence of a differential rotation in latitude, and whether they are contaminated by the cubic-order effects of rotation on oscillation frequencies. We also study some properties of splitting asymmetries and axisymmetric mode frequencies which provide seismic constraints on the distortion of the star. We find that only non-perturbative methods are able to reproduce those two seismic characteristics within 0.01% error bars for stars when they rotate faster than 3.3% Ω_k.
If error bars of 1% are acceptable, the threshold of validity of perturbative methods is extended to 10% Ω_k.

Stochastic excitation of gravity modes in massive main-sequence stars
Samadi, R.; Belkacem, Kevin; Goupil, M. J. et al., in Astrophysics & Space Science (2010), 328
We investigate the possibility that gravity modes can be stochastically excited by turbulent convection in massive main-sequence (MS) stars. We build stellar models of MS stars with masses M = 10 M_⊙, 15 M_⊙, and 20 M_⊙. For each model, we then compute the power supplied to the modes by turbulent eddies in the convective core (CC) and the outer convective zones (OCZ). We found that, for asymptotic gravity modes, the major part of the driving occurs within the outer iron convective zone, while the excitation of low-order (low n) modes mainly occurs within the CC. We compute the mode lifetimes and deduce the expected mode amplitudes. We finally discuss the possibility of detecting such stochastically excited gravity modes with the CoRoT space-based mission.

Survival of a convective core in low-mass solar-like pulsator HD 203608
Deheuvels, S.; Michel, Eric; Goupil, M. J. et al., in Astronomy and Astrophysics (2010), 514
Context. A 5-night asteroseismic observation of the F8V star HD 203608 was conducted in August 2006 with HARPS, followed by an analysis of the data and a preliminary modeling of the star (Mosser et al. 2008).
The stellar parameters were significantly constrained, but the behavior of one of the seismic indexes (the small spacing δν_01) could not be fitted to the observed one, even with the best considered models.
Aims: We study the possibility of improving the agreement between models and observations by changing the physical properties of the inner parts of the star (to which δν_01 is sensitive).
Methods: We show that, in spite of its low mass, it is possible to produce models of HD 203608 with a convective core. No such model was considered in the preliminary modeling. In practice, we obtain these models here by assuming some extra mixing at the edge of the early convective core. We optimized the model parameters using the Levenberg-Marquardt algorithm.
Results: The agreement between the new best model with a convective core and the observations is much better than for the models without one. All the observational parameters are fitted within 1-σ observational error bars. This is the first observational evidence of a convective core in an old and low-mass star such as HD 203608. In standard models of low-mass stars, the core withdraws shortly after the ZAMS. The survival of the core until the present age of HD 203608 provides very strong constraints on the size of the mixed zone associated with the convective core. Using overshooting as a proxy to model the processes of transport at the edge of the core, we find that to reproduce both global and seismic observations, we must have α_ov = 0.17 ± 0.03 H_p for HD 203608. We revisit the process of the extension of the core lifetime due to overshooting in the particular case of HD 203608.

The Asteroseismic Potential of Kepler: First Results for Solar-Type Stars
Chaplin, W. J.; Appourchaux, T.; Elsworth, Y. et al., in Astrophysical Journal Letters (2010), 713
We present preliminary asteroseismic results from Kepler on three G-type stars. The observations, made at one-minute cadence during the first 33.5 days of science operations, reveal high signal-to-noise solar-like oscillation spectra in all three stars: about 20 modes of oscillation may be clearly distinguished in each star. We discuss the appearance of the oscillation spectra, use the frequencies and frequency separations to provide first results on the radii, masses, and ages of the stars, and comment in the light of these results on prospects for inference on other solar-type stars that Kepler will observe.
2D non-perturbative modeling of oscillations in rapidly rotating stars
Ouazzani, Rhita-Maria; Dupret, Marc-Antoine; Goupil, M. J. et al., in Astronomical Notes (2010), 331
We present and discuss results of a recently developed two-dimensional non-perturbative method to compute accurate adiabatic oscillation modes of rapidly rotating stars. The 2D calculations fully take into account the centrifugal distortion of the star, while the non-perturbative method includes the full influence of the Coriolis acceleration. These characteristics allow us to compute oscillation modes of rapid rotators - from high-order p-modes in $\delta$ Scuti stars, to low-order p- and g-modes in $\beta$ Cephei or Be stars.

The CoRoT target HD 49933. II. Comparison of theoretical mode amplitudes with observations
Samadi, R.; Ludwig, H.-G.; Belkacem, Kevin et al., in Astronomy and Astrophysics (2010), 509
Context. The seismic data obtained by CoRoT for the star HD 49933 enable us for the first time to measure directly the amplitudes and linewidths of solar-like oscillations for a star other than the Sun. From those measurements it is possible, as was done for the Sun, to constrain models of the excitation of acoustic modes by turbulent convection.
Aims: We compare a stochastic excitation model described in Paper I with the asteroseismology data for HD 49933, a star that is rather metal poor and significantly hotter than the Sun.
Methods: Using the seismic determinations of the mode linewidths detected by CoRoT for HD 49933 and the theoretical mode excitation rates computed in Paper I for the specific case of HD 49933, we derive the expected surface velocity amplitudes of the acoustic modes detected in HD 49933. Using a calibrated quasi-adiabatic approximation relating the mode amplitudes in intensity to those in velocity, we derive the expected values of the mode amplitude in intensity.
Results: Except at rather high frequency, our amplitude calculations are within 1-σ error bars of the mode surface velocity spectrum derived with the HARPS spectrograph. The same is found with respect to the mode amplitudes in intensity derived for HD 49933 from the CoRoT data. On the other hand, at high frequency (ν ≳ 1.9 mHz), our calculations depart significantly from the CoRoT and HARPS measurements. We show that assuming a solar metal abundance rather than the actual metal abundance of the star would result in a larger discrepancy with the seismic data. Furthermore, the calculations that assume the "new" solar chemical mixture are in better agreement with the seismic data than those that assumed the "old" solar chemical mixture.
Conclusions: These results validate, in the case of a star significantly hotter than the Sun and α Cen A, the main assumptions in the model of stochastic excitation. However, the discrepancies seen at high frequency highlight some deficiencies of the modelling, whose origin remains to be understood. We also show that it is important to take the surface metal abundance of solar-like pulsators into account. The CoRoT space mission, launched on December 27 2006, has been developed and is operated by CNES, with the contribution of Austria, Belgium, Brazil, ESA, Germany and Spain.

The CoRoT target HD 49933. I. Effect of the metal abundance on the mode excitation rates
Samadi, R.; Ludwig, H.-G.; Belkacem, Kevin et al., in Astronomy and Astrophysics (2010), 509
Context. Solar-like oscillations are stochastically excited by turbulent convection at the surface layers of the stars.
Aims: We study the role of the surface metal abundance on the efficiency of the stochastic driving in the case of the CoRoT target HD 49933.
Methods: We compute two 3D hydrodynamical simulations representative - in effective temperature and gravity - of the surface layers of the CoRoT target HD 49933, a star that is rather metal poor and significantly hotter than the Sun. One 3D simulation has a solar metal abundance, and the other has a surface iron-to-hydrogen, [Fe/H], abundance ten times smaller. For each 3D simulation we match an associated global 1D model, and we compute the associated acoustic modes using a theoretical model of stochastic excitation validated in the case of the Sun and α Cen A.
Results: The rate at which energy is supplied per unit time into the acoustic modes associated with the 3D simulation with [Fe/H] = -1 is found to be about three times smaller than that associated with the 3D simulation with [Fe/H] = 0. As shown here, these differences are related to the fact that low metallicity implies surface layers with a higher mean density. In turn, a higher mean density favors smaller convective velocities and hence less efficient driving of the acoustic modes.
Conclusions: Our result shows the importance of taking the surface metal abundance into account in the modeling of the mode driving by turbulent convection. A comparison with observational data is presented in a companion paper using seismic data obtained for the CoRoT target HD 49933. The CoRoT space mission, launched on December 27, 2006, has been developed and is operated by CNES, with the contribution of Austria, Belgium, Brazil, ESA, Germany and Spain.
|
|
$H_{\infty}$ Robust Yaw-Moment Control Based on Brake Switching for the Enhancement of Vehicle Performance and Stability
Title & Authors
$H_{\infty}$ Robust Yaw-Moment Control Based on Brake Switching for the Enhancement of Vehicle Performance and Stability
Ahn, Woo-Sung; Park, Jong-Hyeon;
Abstract
This paper proposes a new $H_{\infty}$ yaw moment control scheme using brake torque switching for improving vehicle performance and stability, especially in high-speed driving. In the scheme, one wheel is selected, depending on the vehicle states, at which a brake torque for control is applied. Steering angles are modeled as a disturbance to the system, and the $H_{\infty}$ controller is designed to minimize the difference between the performance of the vehicle and that of the desired model. Its performance robustness as well as its stability robustness to system parameter variations is assured through $\mu$-analysis. Various simulations with a nonlinear 8-DOF vehicle model show that the proposed controller enhances vehicle performance and stability under disturbances and parameter variations as well as under normal driving conditions.
Keywords
$H_{\infty}$ control; Yaw Moment Control; Yaw Rate; Side Slip Angle; $\mu$-Analysis; Switching Control Scheme; Vehicle Stability;
Language
Korean
Cited by
1.
Robust Vehicle Stability Control Using a Disturbance Observer, 한진오; 이경수; 강수준; 이교일;
Transactions of the Korean Society of Mechanical Engineers A, 2002, vol. 26, no. 12, pp. 2519-2526
References
1.
Nagai M., Hirano Y. and Yamanaka S., 1996, 'Integrated Control Law of Active Rear Wheel Steering and Direct Yaw Moment Control,' Proceedings of AVEC, Vol. 1, pp. 451-469
2.
Abe M., Ohkubo N., and Kano Y., 1996, 'A Direct Yaw Moment Control for Improving Limit Performance of Vehicle Handling -Comparison and Cooperation with 4WS-,' Vehicle System Dynamics, Vol. 25, pp. 3-23
3.
Koibuchi K., Yamamoto M., Fukuda Y., and Inagaki S., 1996, 'Vehicle Stability Control in Limit Cornering by Active Brake,' SAE 960487
4.
Matsumoto S., Yamaguchi H., Inoue H., and Yasuno Y., 1992, 'Improvement of Vehicle Dynamics through Braking Force Distribution Control,' SAE 920645
5.
Alleyne A., 1996, 'A Comparison of Alternative Intervention Strategies for Unintended Roadway Departure (URD) Control,' Proceedings of AVEC, Vol. 1, pp. 485-506
6.
Van Zanten A., Erhardt R., and Pfaff G., 1995, 'VDC, the Vehicle Dynamics Control System of Bosch,' SAE 950759
7.
Dugoff H., Francher P. S., and Segel L., 1970, 'An Analysis of Tire Properties and Their Influence on the Vehicle Dynamic Performance,' SAE 700377
8.
Doyle J. C., Glover K., Khargonekar P. P., and Francis B. A., 1989, 'State-Space Solutions to $H_2$ and $H_{\infty}$ Control Problems,' IEEE Trans. Automatic Control, Vol. 34, No. 8, pp. 831-847
9.
Balas G. and Packard A., 1996, 'The Structured Singular Value ($\mu$) Framework,' in Control Handbook (W. S. Levine, ed.), CRC Press
10.
Zhou K., Doyle J. C., and Glover K., 1996, Robust and Optimal Control, Prentice Hall
11.
Jang J. H. and Han C. S., 1997, 'The Sensitivity Analysis of Yaw Rate for a Front Wheel Steering Vehicle: In the Frequency Domain,' KSME Int. J, Vol. 11, No. 1, pp. 56-66
|
|
# What is the total square on the dual Steenrod algebra?
The dual Steenrod algebra ($$p=2$$) has generators $$\xi_n$$ and these have conjugates that are often labeled $$\zeta_n$$. I am curious about the left and right actions of the Steenrod algebra on its dual, and in particular, what the total square is. I have seen in papers that $$(\xi_n)Sq = \xi_n + \xi_{n-1}$$ and $$Sq(\xi_n) = \xi_n + \xi_{n-1}^2$$ [1]. On the other hand, I have seen that $$(\zeta_n)Sq = \zeta_n + \zeta_{n-1}^2 + \dots + \zeta_1^{2^{n-1}} + 1$$ [2]. I can't find a reference anywhere for the left total square on $$\zeta_n$$. I am not sure how to prove these actions, although it seems to me that it should follow from fairly elementary Kronecker product arithmetic along with duality knowledge.
I am interested in either a reference for the left total square, or a way to prove it.
[1] See, for example, Mahowald -- bo-resolutions, page 369.
[2] Bruner, May, McClure, Steinberger -- $$H_\infty$$ Ring Spectra and their Applications, page 78. (There is a typo: 1 should be $$i$$.)
• I assume you're working at the prime 2? Mar 17 '19 at 7:31
• Yeah, working at p=2.
– Ekie
Mar 17 '19 at 13:52
I don't have a reference for you, but here is a comment on how to prove these formulas using the Kronecker pairing that you alluded to.
The Steenrod operation $$Sq^m$$ is dual to the element $$\xi_1^m$$ in the monomial basis of the dual Steenrod algebra; the left and right actions of the Steenrod algebra on $$\mathcal{A}_*$$ are composites of the coproduct in the dual Steenrod algebra and the action on the right or left side. If the coproduct satisfies $$\Delta x = \sum x' \otimes x''$$, we then get \begin{align*} x \cdot Sq^m &= \sum (\xi_1^m)^*(x') x'',\\ Sq^m \cdot x &= \sum x' (\xi_1^m)^* (x''). \end{align*} (The apparent order reversal is necessary to make this into a left/right action.) We'd like to apply this to the comultiplication formulas $$\Delta \xi_n = \sum_{i+j=n} \xi_i^{2^j} \xi_j$$ and $$\Delta \zeta_n = \sum_{i+j=n} \zeta_i \zeta_j^{2^i}$$. Here by convention $$\xi_0 = \zeta_0 = 1$$.
To apply this to the $$\xi_n$$, we first remark that $$\sum_m (\xi_1^m)^*(\xi_i^{2^j}) = \begin{cases}1 &\text{if }i=0,1,\\0&\text{otherwise.}\end{cases}$$ Therefore: \begin{align*} \xi_n \cdot Sq &= \sum (\xi_1^m)^* (\xi_i^{2^j}) \xi_j = \xi_n + \xi_{n-1}\\ Sq \cdot \xi_n &= \sum \xi_i^{2^j} (\xi_1^m)^* (\xi_j) = \xi_n + \xi_{n-1}^2. \end{align*} To figure out the corresponding result for the $$\zeta_n$$, we have to figure out what the coefficient of $$\xi_1^{2^n-1}$$ is in the formula for $$\zeta_n$$. The $$\zeta_i$$ are defined inductively, for $$n > 0$$, using the formula $$\sum_{i+j=n} \xi_i^{2^j} \zeta_j = 0.$$ If we take the quotient by the ideal generated by $$\xi_2, \xi_3, \dots$$ we find that this formula reduces to $$\zeta_n + \xi_1^{2^{n-1}} \zeta_{n-1} \equiv 0$$ and so inductively $$\zeta_n \equiv \xi_1^{2^n - 1}$$ mod the higher $$\xi_i$$. This means $$\sum_m (\xi_1^m)^*(\zeta_j^{2^i}) = 1$$ for any $$i$$ and $$j$$.
Therefore: \begin{align*} \zeta_n \cdot Sq &= \sum (\xi_1^m)^* (\zeta_i) \zeta_j^{2^i} = \zeta_n + \zeta_{n-1}^2 + \dots + \zeta_1^{2^{n-1}} + 1\\ Sq \cdot \zeta_n &= \sum \zeta_i (\xi_1^m)^* (\zeta_j^{2^i}) = \zeta_n + \zeta_{n-1} + \dots + \zeta_1 + 1. \end{align*}
We bothered to write it down in our paper. Look at pg. 6, where we give some of the references that we know of.
I did not find a formula for the left action of the $$Sq$$ on $$\zeta_i$$s in the literature. But from the formula for left action of $$Sq$$ on $$\xi_i$$ and formulas relating $$\xi_i$$s and $$\zeta_i$$s one can do an extensive combinatorial argument to see that $$Sq(\zeta_i) = \zeta_i + \zeta_{i-1} + \dots + \zeta_1 + 1$$.
(In my experience the combinatorial inductive argument was tedious but straightforward!)
[ For example, let's consider the first nontrivial case, ie calculate $$Sq(\zeta_2)$$. Keep in mind that $$\zeta_2 = \xi_2 + \xi_1^3$$ and $$\zeta_1 = \xi_1$$. Then $$Sq(\zeta_2) = Sq(\xi_2 + \xi_1^3) = (\xi_2 + \xi_1^2) + (\xi_1 +1)^3 = \zeta_2 + \zeta_1 + 1.$$ Keep going inductively to get the formulas for $$Sq(\zeta_i)$$... ]
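The inductive argument above can also be machine-checked in low degrees. The sketch below is my own illustration (not from any of the cited references): it encodes polynomials of $\mathbb{F}_2[\xi_1,\xi_2,\xi_3]$ as sets of exponent tuples (addition mod 2 is symmetric difference), extends the left total square $Sq(\xi_n) = \xi_n + \xi_{n-1}^2$ multiplicatively (the total square is a ring homomorphism), and verifies $Sq(\zeta_2) = \zeta_2 + \zeta_1 + 1$ and $Sq(\zeta_3) = \zeta_3 + \zeta_2 + \zeta_1 + 1$, with $\zeta_3$ computed from the defining relation $\sum_{i+j=n}\xi_i^{2^j}\zeta_j = 0$.

```python
# F_2[xi1, xi2, xi3]: a polynomial is a set of monomials, each monomial an
# exponent tuple (e1, e2, e3).  Addition mod 2 is symmetric difference.

def add(p, q):
    return p ^ q

def mul(p, q):
    out = set()
    for a in p:
        for b in q:
            # toggling implements the mod-2 coefficient arithmetic
            out ^= {(a[0] + b[0], a[1] + b[1], a[2] + b[2])}
    return out

def power(p, n):
    out = {(0, 0, 0)}          # the constant 1
    for _ in range(n):
        out = mul(out, p)
    return out

ONE = {(0, 0, 0)}
XI1, XI2, XI3 = {(1, 0, 0)}, {(0, 1, 0)}, {(0, 0, 1)}

# Left total square on the Milnor generators: Sq(xi_n) = xi_n + xi_{n-1}^2, xi_0 = 1.
SQ = {(1, 0, 0): add(XI1, ONE),
      (0, 1, 0): add(XI2, power(XI1, 2)),
      (0, 0, 1): add(XI3, power(XI2, 2))}

def total_sq(p):
    """Extend Sq multiplicatively: the total square is a ring homomorphism."""
    out = set()
    for (e1, e2, e3) in p:
        term = mul(power(SQ[(1, 0, 0)], e1),
                   mul(power(SQ[(0, 1, 0)], e2), power(SQ[(0, 0, 1)], e3)))
        out = add(out, term)
    return out

# Conjugates from the defining relation sum_{i+j=n} xi_i^{2^j} zeta_j = 0:
ZETA1 = XI1
ZETA2 = add(XI2, power(XI1, 3))
ZETA3 = add(add(XI3, mul(XI1, power(XI2, 2))),
            add(mul(XI2, power(XI1, 4)), power(XI1, 7)))

assert total_sq(ZETA2) == add(add(ZETA2, ZETA1), ONE)
assert total_sq(ZETA3) == add(add(add(ZETA3, ZETA2), ZETA1), ONE)
```

Extending the dictionary `SQ` with $Sq(\xi_4) = \xi_4 + \xi_3^2$ (and a fourth exponent slot) checks the next case the same way.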
|
|
Journal article Open Access
# ANTIDIABETIC ACTIVITY OF AQUEOUS EXTRACT FROM VIGNA RADIATA IN STREPTOZOTOCIN INDUCED DIABETIC MICE.
Kassahun Dires Ayenewu
### Dublin Core Export
<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Kassahun Dires Ayenewu</dc:creator>
<dc:date>2017-02-28</dc:date>
<dc:description>Vigna radiata is an important medicinal plant that belongs to the family Fabaceae, which is widely used in traditional medicine all over the world. Beyond its nutritional acceptance, the grain of this plant is traditionally cooked and consumed for the purpose of lowering the blood glucose level in diabetic patients, especially in rural areas. This study was designed to scientifically validate its antidiabetic activity in diabetic mice. Mice obtained from the Ethiopian Public Health Institute were allowed to adapt to the experimental room for 3 days before the actual experiment. They consumed the standard food (pellet) throughout the experiment. After dissolving streptozotocin with 0.9% normal saline, the fresh solution was injected into all mice through the intraperitoneal route at a dose of 35 mg/kg. The diabetic mice were divided into four groups. The first group was treated with glibenclamide (5 mg/kg), the second group was treated with normal saline (10 ml/kg), and the last two groups were treated with aqueous extract of V. radiata at 200 and 300 mg/kg. Finally, blood glucose levels were measured 1 hr, 2 hr, 3 hr and 4 hr after administration of each treatment. The aqueous crude extract at 200 and 300 mg/kg decreased the blood glucose level as compared to the control group (p&lt;0.05). The antidiabetic activity of the aqueous extract of V. radiata at 200 mg/kg was lower than that of the aqueous extract at 300 mg/kg. In conclusion, the aqueous crude extract of V. radiata possesses significant antidiabetic activity.</dc:description>
<dc:identifier>https://zenodo.org/record/2383533</dc:identifier>
<dc:identifier>10.5281/zenodo.2383533</dc:identifier>
<dc:identifier>oai:zenodo.org:2383533</dc:identifier>
<dc:relation>doi:10.5281/zenodo.2383532</dc:relation>
<dc:relation>url:https://zenodo.org/communities/iajpr</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:title>ANTIDIABETIC ACTIVITY OF AQUEOUS EXTRACT FROM VIGNA RADIATA IN STREPTOZOTOCIN INDUCED DIABETIC MICE.</dc:title>
<dc:type>info:eu-repo/semantics/article</dc:type>
<dc:type>publication-article</dc:type>
</oai_dc:dc>
|
|
# Polynomial Rings - Gauss's Lemma
• Apr 12th 2013, 09:52 PM
Bernhard
Polynomial Rings - Gauss's Lemma
I am trying to understand the proof of Gauss's Lemma as given in Dummit and Foote Section 9.3 pages 303-304 (see attached)
On page 304, part way through the proof, D&F write:
"Assume d is not a unit (in R) and write d as a product of irreducibles in R, say $d = p_1p_2 ... p_n$ . Since $p_1$ is irreducible in R, the ideal $(p_1)$ is prime (cf Proposition 12, Section 8.3 - see attached) so by Proposition 2 above (see attached) the ideal $p_1R[x]$ is prime in R[x] and $(R/p_1R)[x]$ is an integral domain. ..."
My problems with the D&F statement above are as follows:
(1) I cannot see why the ideal $(p_1)$ is a prime ideal. Certainly Proposition 12 states that "In a UFD a non-zero element is prime if and only if it is irreducible" so this means $p_1$ is prime since we were given that it was irreducible. But does that make the principal ideal $(p_1)$ a prime ideal? I am not sure! Can anyone show rigorously that $(p_1)$ is a prime ideal?
(2) Despite reading Proposition 12 in Section 8.3 I cannot see why the ideal $p_1R[x]$ is prime in R[x] and $(R/p_1R)[x]$ is an integral domain. ...". (Indeed, I am unsure that $p_1R[x]$ is an ideal!) Can anyone show explicitly and rigorously why this is true?
I would really appreciate clarification of the above matters.
Peter
• Apr 12th 2013, 11:56 PM
Bernhard
Re: Polynomial Rings - Gauss's Lemma
In trying to answer my problem (1) above - I cannot see why the ideal $(p_1)$ is a prime ideal - I was looking at definitions of prime ideals and trying to reason from there.
I just looked up the definition of a prime element in D&F to find the following on page 284:
The non-zero element $p \in R$ is called prime if the ideal (p) generated by p is a prime ideal!
So the answer to my question seems obvious:
$p_1$ irreducible $\Longrightarrow p_1$ prime $\Longrightarrow (p_1)$ prime ideal
Although this now seems obvious, I would like someone to confirm my reasoning (which as I said now seems blindingly obvious! :-)
Peter
• Apr 13th 2013, 03:01 AM
rushton
Re: Polynomial Rings - Gauss's Lemma
If we have $f \in K[x]$ for some polynomial ring $K[x]$ then the following are equivalent
1) f is irreducible
2) (f) is prime
3) (f) is maximal
Proof: $(3) \Rightarrow (2) \Rightarrow (1) \Rightarrow (3)$.

$(3) \Rightarrow (2)$: immediate, since every maximal ideal in a commutative ring with identity is prime.

$(2) \Rightarrow (1)$: Suppose $f$ is not irreducible. Then $f = g \cdot h$ with $\deg(g), \deg(h) < \deg(f)$. Now $g, h \notin (f)$ but $g \cdot h \in (f)$, contradicting $(f)$ being a prime ideal.

$(1) \Rightarrow (3)$: We show that if $J \supsetneq (f)$ then $J$ contains a unit. Since $K[x]$ is a principal ideal domain, $J$ is generated by a single element, say $J = (g)$. As $f \in (g)$, we have $f = q \cdot g$ for some polynomial $q$. Since $f$ is irreducible, either $q$ or $g$ is a unit. Suppose $q$ is a unit. Then $g = q^{-1} \cdot f$, so $g \in (f)$ and $(g) = (f)$, contradicting $J \supsetneq (f)$. Thus $g$ is a unit.
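The equivalence "f irreducible $\Leftrightarrow$ $(f)$ prime" can be checked by brute force in a small case. The sketch below is my own illustration, using $\mathbb{F}_2$ in place of a general field $K$: it encodes $\mathbb{F}_2[x]$ polynomials as Python ints (bit $k$ holds the coefficient of $x^k$) and verifies that for the irreducible $f = x^2 + x + 1$, whenever $f$ divides a product it divides one of the factors.

```python
# Polynomials over F_2 encoded as ints: bit k is the coefficient of x^k.
def pmul(a, b):
    # carry-less ("XOR") multiplication implements multiplication in F_2[x]
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):
    # polynomial remainder of a modulo m, by repeatedly cancelling the top term
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def is_irreducible(f):
    # reducible iff divisible by some g with 1 <= deg g < deg f
    return all(pmod(f, g) != 0 for g in range(2, 1 << (f.bit_length() - 1)))

f = 0b111          # x^2 + x + 1, irreducible over F_2
assert is_irreducible(f)

# (f) is prime: whenever f | a*b, f divides a or b.  Exhaustive check in low degrees.
for a in range(1, 64):
    for b in range(1, 64):
        if pmod(pmul(a, b), f) == 0:
            assert pmod(a, f) == 0 or pmod(b, f) == 0
```

Replacing `f` with the reducible `0b110` ($x^2 + x = x(x+1)$) makes the inner assertion fail at $a = x$, $b = x + 1$, which is exactly the $(2) \Rightarrow (1)$ step of the proof run backwards.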
• Apr 13th 2013, 03:08 AM
rushton
Re: Polynomial Rings - Gauss's Lemma
You can't say a polynomial itself is prime (at least to my knowledge).
The idea of a prime ideal as far as I know comes from the ideals of the integers generated by a prime number.
But a prime number and an irreducible polynomial are somewhat related, neither can be factored in their given ring/field.
• Apr 13th 2013, 03:18 AM
rushton
Re: Polynomial Rings - Gauss's Lemma
Sorry about the piecewise answers but I am doing one part at a time haha.
$p_{1}R[x]$ is just the ideal $(p_{1})$ of $R[x]$, since for an ideal $I$, if $r \in R$ and $i \in I$ then $r \cdot i \in I$.
• Apr 13th 2013, 03:38 AM
rushton
Re: Polynomial Rings - Gauss's Lemma
$R/I$ is an integral domain $\Leftrightarrow$ $I$ is prime.

Proof.

$\Rightarrow$: Given $a \cdot b \in I$ we need to show $a \in I$ or $b \in I$. In $R/I$ we have $\bar{a} \cdot \bar{b} = 0$, so $\bar{a} = 0$ or $\bar{b} = 0$, i.e. $a \in I$ or $b \in I$.

$\Leftarrow$: Given $\bar{a} \cdot \bar{b} = 0$ we must have $a \cdot b \in I$, as $I$ is the kernel of the canonical homomorphism; since $I$ is prime, $a \in I$ or $b \in I$, i.e. $\bar{a} = 0$ or $\bar{b} = 0$.
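This correspondence is easy to see concretely in $\mathbb{Z}$. A small sketch (my own, not from the thread) checks that $\mathbb{Z}/n\mathbb{Z}$ has no zero divisors exactly when the ideal $(n)$ is prime, i.e. when $n$ is prime:

```python
def is_integral_domain_mod(n):
    # Z/nZ is an integral domain iff the product of two nonzero classes
    # is never zero; for n > 1 this happens exactly when n is prime.
    return all((a * b) % n != 0 for a in range(1, n) for b in range(1, n))

assert is_integral_domain_mod(5)        # (5) is prime in Z
assert not is_integral_domain_mod(6)    # (6) is not: 2 * 3 = 0 in Z/6Z
```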
• Apr 13th 2013, 03:56 AM
Gusbob
Re: Polynomial Rings - Gauss's Lemma
Quote:
Originally Posted by rushton
The idea of a prime ideal as far as I know comes from the ideals of the integers generated by a prime number.
Not quite. For example, $(2)$ and $(3)$ are not actually prime in the ring of integers $\mathbb{Z}[\sqrt{-5}]$
The idea of a prime ideal comes from attempts to extend the fundamental theorem of arithmetic. In the same way we have unique (up to sign) prime factorisations of integers in rationals, we want to have to have some sort of phenomena in the ring of integers in other fields, particularly imaginary quadratic fields. However, in the ring of integers $\mathbb{Z}[\sqrt{-5}]$ of $\mathbb{Q}[\sqrt{-5}]$, we have $2\cdot 3 =6= (1+\sqrt{-5})(1-\sqrt{-5})$, so factorisation is certainly not unique. However, setting $A=(2,1+\sqrt{-5}), \overline{A}=(2,1-\sqrt{-5}),B=(3,1+\sqrt{-5}),\overline{B}=(3,1-\sqrt{-5})$, we have
$(6)=(2)(3)=(\overline{A}A)(\overline{B}B)=( \overline{A} \overline{B})(AB)=(1-\sqrt{-5})(1+\sqrt{-5})$.
So in this case, $(2)$ and $(3)$ are not actually prime. It can be shown that the prime 'factors' of the ideal generated by 6 are $A,\overline{A},B,\overline{B}$
It turns out that there is unique factorisation of prime ideals in the ring of integers in any imaginary quadratic field.
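The non-uniqueness Gusbob describes can be made concrete with the norm $N(a + b\sqrt{-5}) = a^2 + 5b^2$, which is multiplicative. The short script below (my own sketch, not from the thread) shows that no element of $\mathbb{Z}[\sqrt{-5}]$ has norm 2 or 3; since $N(2) = 4$, $N(3) = 9$ and $N(1 \pm \sqrt{-5}) = 6$, any proper factorization of these elements would need a factor of norm 2 or 3, so all four are irreducible even though $2 \cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5})$.

```python
# Norm of a + b*sqrt(-5): N = a^2 + 5*b^2.  It is multiplicative, and an
# element is a unit iff its norm is 1, so a proper factorization of an
# element of norm 4 (resp. 9, 6) requires a factor of norm 2 (resp. 3).
def attainable_norms(limit):
    vals = set()
    b = 0
    while 5 * b * b <= limit:
        a = 0
        while a * a + 5 * b * b <= limit:
            vals.add(a * a + 5 * b * b)
            a += 1
        b += 1
    return vals

norms = attainable_norms(10)
assert 2 not in norms and 3 not in norms   # hence 2, 3, 1 +- sqrt(-5) are irreducible
assert 6 in norms                          # N(1 + sqrt(-5)) = N(1 - sqrt(-5)) = 6
```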
• Apr 13th 2013, 04:06 AM
rushton
Re: Polynomial Rings - Gauss's Lemma
Yeah but technically isn't $\mathbb{Z} [ \sqrt{-5}]$ the ring of polynomials with coefficients in $\mathbb{Z}$ adjoining $\sqrt{-5}$? So it wouldn't actually be the field of integers?
• Apr 13th 2013, 04:37 AM
Gusbob
Re: Polynomial Rings - Gauss's Lemma
Quote:
Originally Posted by rushton
Yeah but technically isn't $\mathbb{Z} [ \sqrt{-5}]$ the ring of polynomials with coefficients in $\mathbb{Z}$ adjoining $\sqrt{-5}$? So it wouldnt actually be the field of integers?
$\mathbb{Z} [ \sqrt{-5}]$ is the ring of algebraic integers in the field $\mathbb{Q} [ \sqrt{-5}]$.
• Apr 13th 2013, 04:45 AM
rushton
Re: Polynomial Rings - Gauss's Lemma
Ah I get what you mean now, yeah, you're totally right.
field of integers .......lol
• Apr 13th 2013, 05:09 PM
Bernhard
Re: Polynomial Rings - Gauss's Lemma
Thanks Rushton
You write:
"You can't say a polynomial itself is prime (at least to my knowledge).
The idea of a prime ideal as far as I know comes from the ideals of the integers generated by a prime number.
But a prime number and an irreducible polynomial are somewhat related, neither can be factored in their given ring/field. "
Dummit and Foote on page 284 give the following definitions of irreducible and prime for integral domains.
-------------------------------------------------------------------------------------------------------------------------------
"Definition Let R be an integral domain.
(1) Suppose $r \in R$ is non-zero and not a unit. Then r is called irreducible in R if whenever r = ab with $a, b \in R$ at least one of a or b must be a unit in R. Otherwise r is said to be reducible.
(2) The non-zero element $p \in R$ is called prime in R if the ideal (p) generated by p is a prime ideal. In other words, a non-zero element p is a prime if it is not a unit and whenever p|ab for any $a,b \in R$, then either p|a or p|b."
--------------------------------------------------------------------------------------------------------------------------------
So where a ring of polynomials is an integral domain we have a definition of prime and irreducible elements (polynomials). Do you agree? What do you think?
Mind you most algebra books I have referenced just talk about irreducible polynomials - so maybe for polynomials (for some reason) irreducible and prime are the same thing? Can someone clarify this point?
Another point is that I am unsure why D&F restrict these definitions to an integral domain thus leaving the terms undefined for general rings that are not integral domains. Can someone clarify?
Yet another problem I have with the above definitions by D&F is the following: D&F write: "In other words, a non-zero element p is a prime if it is not a unit and whenever p|ab for any $a,b \in R$, then either p|a or p|b." - How does this follow from (p) being a prime ideal?
Peter
Note: D&F's definition of prime ideal is on page 255 and is as follows:
Definition: Assume R is commutative. An ideal P is called a prime ideal if $P \ne R$ and whenever the product of two elements $a,b \in R$ is an element of P, then at least one of a and b is an element of P.
|
|
# Filters
Any combination of passive (R, L, and C) and/or active (transistors or operational amplifiers) elements designed to select or reject a band of frequencies is called a filter.
In communication systems, filters are employed to pass those frequencies containing the desired information and to reject the remaining frequencies. In stereo systems, filters can be used to isolate particular bands of frequencies for increased or decreased emphasis by the output acoustical system (amplifier, speaker, etc.). Filters are employed to filter out any unwanted frequencies, commonly called noise, due to the nonlinear characteristics of some electronic devices or signals picked up from the surrounding medium. In general, there are two classifications of filters:
• Passive filters are those filters composed of series or parallel combinations of R, L, and C elements.
• Active filters are filters that employ active devices such as transistors and operational amplifiers in combination with R, L, and C elements.
The subject of filters is a very broad one that continues to receive extensive research support from industry and the government as new communication systems are developed to meet the demands of increased volume and speed. There are courses and texts devoted solely to the analysis and design of filter systems that can become quite complex and sophisticated. In general, however, all filters belong to the four broad categories of low-pass, high-pass, pass-band, and stop-band, as depicted in Fig. 1.
Fig. 1: Defining the four broad categories of filters.
For each form there are critical frequencies that define the regions of pass-bands and stop-bands (often called reject bands). Any frequency in the pass-band will pass through to the next stage with at least 70.7% of the maximum output voltage. Recall the use of the 0.707 level to define the bandwidth of a series or parallel resonant circuit (both with the general shape of the pass-band filter).
For some stop-band filters, the stop-band is defined by conditions other than the 0.707 level. In fact, for many stop-band filters, the condition $V_o = \frac{1}{1000}V_{max}$ (corresponding to $-60\,dB$ in the discussion to follow) is used to define the stop-band region, with the pass-band continuing to be defined by the $0.707\,V_{max}$ level. The resulting frequencies between the two regions are then called the transition frequencies and establish the transition region.
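The 0.707 criterion above is simply the $1/\sqrt{2}$ ($-3\,dB$) point. As a concrete instance, a first-order RC low-pass filter has gain magnitude $|V_o/V_i| = 1/\sqrt{1+(f/f_c)^2}$ with cutoff $f_c = 1/(2\pi RC)$. The sketch below (component values are illustrative only, not from the text) verifies the 0.707 level at $f_c$:

```python
import math

def rc_lowpass_gain(f, R, C):
    """|Vo/Vi| for a series-R, shunt-C first-order low-pass filter."""
    fc = 1.0 / (2.0 * math.pi * R * C)  # cutoff (critical) frequency in Hz
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

R = 1_000      # 1 kilohm (illustrative value)
C = 0.1e-6     # 0.1 microfarad (illustrative value)
fc = 1.0 / (2.0 * math.pi * R * C)

print(f"cutoff frequency fc = {fc:.1f} Hz")
print(f"gain at fc    = {rc_lowpass_gain(fc, R, C):.4f}")       # ~0.7071
print(f"gain at 10*fc = {rc_lowpass_gain(10 * fc, R, C):.4f}")  # deep in the stop-band
```

Any frequency below $f_c$ passes with more than 70.7% of the input amplitude; well above $f_c$ the gain rolls off toward zero, matching the low-pass shape in Fig. 1.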
|
|
# Thread: contour integral, limiting contour theorem with residue
1. ## contour integral, limiting contour theorem with residue
$\displaystyle \int_{0}^{\infty} \frac{x^{-1/3}}{x^2+1} dx$
I did try to take the contour, and noticed that the three "bad points" are $0$, $i$, and $-i$.
I used residue theorem that $\displaystyle\oint_{\Gamma_{R,\epsilon}} \frac{dz}{\sqrt[3]{z}(z^2+1)}=2\pi i\displaystyle \sum_{poles\ in\ the\ plane}Res(f(z), a_j)$.
I can use limiting contour theorem to get one integral is $0$.
However, I'm really having trouble solving this question. I thought my method was right, but I can't get the right answer, which is $\frac{\sqrt{3}\,\pi}{3}$. One friend told me I need to worry about choosing a branch because of that $\sqrt[3]{z}$, but I don't quite understand it or what I'm supposed to do.
Can anyone please show me some precise steps on solving this question? Thanks a lot.
2. Originally Posted by tsang
$\displaystyle \int_{0}^{\infty} \frac{x^{-1/3}}{x^2+1} dx$
I did try to take the contour, and noticed that the three "bad points" are $0$, $i$, and $-i$.
I used residue theorem that $\displaystyle\oint_{\Gamma_{R,\epsilon}} \frac{dz}{\sqrt[3]{z}(z^2+1)}=2\pi i\displaystyle \sum_{poles\ in\ the\ plane}Res(f(z), a_j)$.
I can use limiting contour theorem to get one integral is $0$.
However, I'm really having trouble solving this question. I thought my method was right, but I can't get the right answer, which is $\frac{\sqrt{3}\,\pi}{3}$. One friend told me I need to worry about choosing a branch because of that $\sqrt[3]{z}$, but I don't quite understand it or what I'm supposed to do.
Can anyone please show me some precise steps on solving this question? Thanks a lot.
The function to be integrated has two 'simple poles' in z=i and z=-i and one 'branch point' in z=0. The last type of singularity has to be excluded from the integration path, so perhaps the best integration path is the one illustrated in the figure...
Kind regards
$\chi$ $\sigma$
3. Originally Posted by chisigma
The function to be integrated has two 'simple poles' in z=i and z=-i and one 'branch point' in z=0. The last type of singularity has to be excluded from the integration path, so perhaps the best integration path is the one illustrated in the figure...
Kind regards
$\chi$ $\sigma$
Hi, thank you for your help. Yes, I used the same contour as your graph.
But, could you please give me a bit more details? I'm still confused on what I should do. I can get the whole contour is made by four parts, but I still can't get the right answer. Please help me a bit more. Thanks a lot.
4. Originally Posted by tsang
Hi, thank you for your help. Yes, I used the same contour as your graph.
But, could you please give me a bit more details? I'm still confused on what I should do. I can get the whole contour is made by four parts, but I still can't get the right answer. Please help me a bit more. Thanks a lot.
All right!... may be it is useful to examine the more general case...
$I= \int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx\ ,\ -1 < p < 1$ (1)
As said above the solution is in the computation of the integral...
$\int_{\gamma} f(z)\ dz = \int_{\gamma} \frac{z^{p}}{1+z^{2}}\ dz$ (2)
... along the 'red path' $\gamma$ of the figure...
The procedure is a little long and it is better to divide it into two steps. The first step is the computation of the integral (2), which is done by the Cauchy integral formula...
$\int_{\gamma} f(z)\ dz = 2 \pi\ i\ \sum_{i} r_{i}$ (3)
In our case f(z) inside $\gamma$ has two simple poles: $z=i$ and $z=-i$, and their residues are...
$r_{1}= \lim_{z \rightarrow i} (z-i)\ \frac{z^{p}}{1+z^{2}}= \frac{e^{i\ \frac{\pi\ p}{2}}}{2i}$ (4)
$r_{2}= \lim_{z \rightarrow -i} (z+i)\ \frac{z^{p}}{1+z^{2}}= -\frac{e^{-i\ \frac{\pi\ p}{2}}}{2i}$ (5)
... so that the integral (3) is...
$\int_{\gamma} f(z)\ dz = 2 \pi\ i\ \sum_{i} r_{i}= 2 \pi\ i\ \sin \frac{\pi\ p}{2}$ (6)
... and the first step is done. Are You able to proceed?...
Kind regards
$\chi$ $\sigma$
5. Originally Posted by chisigma
All right!... may be it is useful to examine the more general case...
$I= \int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx\ ,\ -1 < p < 1$ (1)
As said above the solution is in the computation of the integral...
$\int_{\gamma} f(z)\ dz = \int_{\gamma} \frac{z^{p}}{1+z^{2}}\ dz$ (2)
... along the 'red path' $\gamma$ of the figure...
The procedure is a little long and it is better to divide it into two steps. The first step is the computation of the integral (2), which is done by the Cauchy integral formula...
$\int_{\gamma} f(z)\ dz = 2 \pi\ i\ \sum_{i} r_{i}$ (3)
In our case f(z) inside $\gamma$ has two simple poles: $z=i$ and $z=-i$, and their residues are...
$r_{1}= \lim_{z \rightarrow i} (z-i)\ \frac{z^{p}}{1+z^{2}}= \frac{e^{i\ \frac{\pi\ p}{2}}}{2i}$ (4)
$r_{2}= \lim_{z \rightarrow -i} (z+i)\ \frac{z^{p}}{1+z^{2}}= -\frac{e^{-i\ \frac{\pi\ p}{2}}}{2i}$ (5)
... so that the integral (3) is...
$\int_{\gamma} f(z)\ dz = 2 \pi\ i\ \sum_{i} r_{i}= 2 \pi\ i\ \sin \frac{\pi\ p}{2}$ (6)
... and the first step is done...
The second step is the division of integral (6) into four distinct integrals...
$\int_{\gamma} f(z)\ dz = \int_{r}^{R} \frac{x^{p}}{1+x^{2}}\ dx + i\ \int_{0}^{2 \pi} \frac{R^{p+1}\ e^{i\ \theta\ (p+1)}}{1+R^{2}\ e^{2 i \theta}}\ d \theta +$
$+ \int_{R}^{r} \frac{x^{p}\ e^{2\ \pi\ i\ p}}{1+x^{2}\ e^{4\ \pi\ i}}\ dx + i\ \int_{2\ \pi}^{0} \frac{r^{p+1}\ e^{i\ \theta\ (p+1)}}{1+r^{2}\ e^{2 i \theta}}\ d \theta$ (7)
Now if $-1 < p < 1$ the second integral in (7) vanishes as R tends to infinity and the fourth integral in (7) also vanishes as r tends to 0, so that, taking (6) into account, we obtain...
$(1- e^{2\ \pi\ i\ p})\ \int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = 2\ \pi\ i\ \sin \frac{\pi\ p}{2}$ (8)
... and from (8) with simple steps we arrive at the result...
$\int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = (-1)^{1-p}\ \pi\ \frac{\sin \frac{\pi\ p}{2}}{\sin \pi\ p}$ (9)
The result (9) is in some way 'a little embarrassing'... for $p=-\frac{1}{3}$ we have the correct result...
$\int_{0}^{\infty} \frac{x^{-\frac{1}{3}}}{1+x^{2}}\ dx = \frac{\pi}{\sqrt{3}}$ (10)
... as in...
int x^(-1/3)/(1+x^2) dx, x=0..infinity - Wolfram|Alpha
... but for other values of p (9) [for instance $p=-\frac{1}{2}$ ...] that is not true... an interesting problem for the 'experts' [unless I have made some mistake]...
Kind regards
$\chi$ $\sigma$
6. Originally Posted by chisigma
The second step is the division of integral (6) into four distinct integrals...
$\int_{\gamma} f(z)\ dz = \int_{r}^{R} \frac{x^{p}}{1+x^{2}}\ dx + i\ \int_{0}^{2 \pi} \frac{R^{p+1}\ e^{i\ \theta\ (p+1)}}{1+R^{2}\ e^{2 i \theta}}\ d \theta +$
$+ \int_{R}^{r} \frac{x^{p}\ e^{2\ \pi\ i\ p}}{1+x^{2}\ e^{4\ \pi\ i}}\ dx + i\ \int_{2\ \pi}^{0} \frac{r^{p+1}\ e^{i\ \theta\ (p+1)}}{1+r^{2}\ e^{2 i \theta}}\ d \theta$ (7)
Now if $-1 < p < 1$ the second integral in (7) vanishes as R tends to infinity and the fourth integral in (7) also vanishes as r tends to 0, so that, taking (6) into account, we obtain...
$(1- e^{2\ \pi\ i\ p})\ \int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = 2\ \pi\ i\ \sin \frac{\pi\ p}{2}$ (8)
... and from (8) with simple steps we arrive at the result...
$\int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = (-1)^{1-p}\ \pi\ \frac{\sin \frac{\pi\ p}{2}}{\sin \pi\ p}$ (9)
The result (9) is in some way 'a little embarrassing'... for $p=-\frac{1}{3}$ we have the correct result...
$\int_{0}^{\infty} \frac{x^{-\frac{1}{3}}}{1+x^{2}}\ dx = \frac{\pi}{\sqrt{3}}$ (10)
... as in...
int x^(-1/3)/(1+x^2) dx, x=0..infinity - Wolfram|Alpha
... but for other values of p (9) [for instance $p=-\frac{1}{2}$ ...] that is not true... an interesting problem for the 'experts' [unless I have made some mistake]...
Kind regards
$\chi$ $\sigma$
Hi, sorry, I just realised there's something confusing from your (8) to (10). I do understand everything you did up to (8), but I can't see how you get from (8) to (9). Then I used Matlab to double-check: if I substitute $p=-\frac{1}{3}$ in your equation (9), I don't actually get $\frac{\pi}{\sqrt{3}}$; it is a different answer.
7. Originally Posted by tsang
Hi, sorry, I just realised there's something confusing from your (8) to (10). I do understand everything you did up to (8), but I can't see how you get from (8) to (9). Then I used Matlab to double-check: if I substitute $p=-\frac{1}{3}$ in your equation (9), I don't actually get $\frac{\pi}{\sqrt{3}}$; it is a different answer.
In my opinion, for $-1 < p < 1$, 'with great probability' the result is...
$\int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = \pi\ \frac{\sin \frac{\pi\ p}{2}}{\sin \pi\ p} = \frac{\frac{\pi}{2}}{\cos \frac{\pi\ p}{2}}$ (1)
For example $p=\pm \frac{1}{2}$ gives...
$\int_{0}^{\infty} \frac{x^{\pm \frac{1}{2}}}{1+x^{2}}\ dx = \frac{\pi}{\sqrt{2}}$ (2)
... as in...
int x^(1/2)/(1+x^2) dx, x=0..infinity - Wolfram|Alpha
... and in...
int x^(-1/2)/(1+x^2) dx, x=0..infinity - Wolfram|Alpha
My only problem is... I am unable to demonstrate that... if the result I arrived at is correct...
$(1-e^{2 \pi i p})\ \int_{0}^{\infty} \frac{x^{p}}{1+x^{2}}\ dx = 2 \pi i \sin \frac{\pi p}{2}$ (3)
... then I'm in trouble, because...
$1-e^{2 \pi i p} = - e^{\pi i p}\ (e^{\pi i p}-e^{-\pi i p}) = 2 i e^{\pi i (p-1)}\ \sin \pi p$ (4)
... and the troublesome term $e^{\pi i (p-1)}= (-1)^{p-1}$ cannot be eliminated... that's why I'm expecting some 'help' from the 'experts' of the forum...
Kind regards
$\chi$ $\sigma$
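As a numerical sanity check on (10): substituting $x=t^{3}$ turns the integral into $\int_{0}^{\infty} \frac{3t}{1+t^{6}}\,dt$ (removing the branch-point singularity at 0), and folding $[1,\infty)$ back onto $(0,1]$ via $t \to 1/t$ gives the smooth, finite integral $\int_{0}^{1} \frac{3(t+t^{3})}{1+t^{6}}\,dt$. A stdlib-only sketch (the Simpson helper is just for illustration):

```python
import math

def f(t):
    # integrand after x = t**3 and folding [1, inf) onto (0, 1] via t -> 1/t
    return 3.0 * (t + t ** 3) / (1.0 + t ** 6)

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

value = simpson(f, 0.0, 1.0)
print(value)                   # ~1.8138
print(math.pi / math.sqrt(3))  # ~1.8138, i.e. sqrt(3)*pi/3
```

This agrees with $\pi/\sqrt{3} = \sqrt{3}\,\pi/3$, the value quoted in the first post and in (10).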
|
|
#### HACKING.tex
%% -*- mode: text; -*-
%% $QuaggaId: Format:%an, %ai, %h$

\documentclass[oneside]{article}
\usepackage{parskip}
\usepackage[bookmarks,colorlinks=true]{hyperref}
\title{Conventions for working on Quagga}
\begin{document}
\maketitle

This is a living document. Suggestions for updates, via the
\href{http://lists.quagga.net/mailman/listinfo/quagga-dev}{quagga-dev list},
are welcome.

\tableofcontents

\section{GUIDELINES FOR HACKING ON QUAGGA}
\label{sec:guidelines}

GNU coding standards apply. Indentation follows the result of invoking GNU
indent (as of 2.2.8a) with no arguments. Note that this uses tabs instead of
spaces where possible for leading whitespace, and assumes that tabs are every
8 columns.
Do not attempt to redefine the location of tab stops. Note also that some
indentation does not follow GNU style. This is a historical accident, and we
generally only clean up whitespace when code is unmaintainable due to
whitespace issues, to minimise merging conflicts.

For GNU emacs, use indentation style ``gnu''. For Vim, use the following
lines (note that tabs are at 8, and that softtabstop sets the indentation
level):

\begin{verbatim}
set tabstop=8
set softtabstop=2
set shiftwidth=2
set noexpandtab
\end{verbatim}

Be particularly careful not to break platforms/protocols that you cannot
test.

New code should have good comments, which explain why the code is correct.
Changes to existing code should in many cases upgrade the comments when
necessary for a reviewer to conclude that the change has no unintended
consequences.

Each file in the Git repository should have a git format-placeholder (like
an RCS Id keyword), somewhere very near the top, commented out appropriately
for the file type. The placeholder used for Quagga (replacing with \$) is:

\verb|$QuaggaId: Format:%an, %ai, %h$|

See line 2 of HACKING.tex, the source for this document, for an example.
This placeholder string will be expanded out by the `git archive' command,
which is used to generate the tar archives for snapshots and releases.

Please document fully the proper use of a new function in the header file in
which it is declared. And please consult existing headers for documentation
on how to use existing functions. In particular, please consult these header
files:

\begin{description}
\item{lib/log.h} logging levels and usage guidance
\item{[more to be added]}
\end{description}

If changing an exported interface, please try to deprecate the interface in
an orderly manner. If at all possible, try to retain the old deprecated
interface as is, or functionally equivalent.
Make a note of when the interface was deprecated and guard the deprecated
interface definitions in the header file, ie:

\begin{verbatim}
/* Deprecated: 20050406 */
#if !defined(QUAGGA_NO_DEPRECATED_INTERFACES)
#warning "Using deprecated (interface(s)|function(s))"
...
#endif /* QUAGGA_NO_DEPRECATED_INTERFACES */
\end{verbatim}

This is to ensure that the core Quagga sources do not use the deprecated
interfaces (you should update Quagga sources to use new interfaces, if
applicable), while allowing external sources to continue to build.
Deprecated interfaces should be excised in the next unstable cycle.

Note: If you wish, you can test for GCC and use a function marked with the
'deprecated' attribute. However, you must provide the warning for other
compilers.

If changing or removing a command definition, \emph{ensure} that you
properly deprecate it - use the \_DEPRECATED form of the appropriate DEFUN
macro. This is \emph{critical}. Even if the command can no longer function,
you \emph{MUST} still implement it as a do-nothing stub. Failure to follow
this causes grief for systems administrators, as an upgrade may cause
daemons to fail to start because of unrecognised commands.

Deprecated commands should be excised in the next unstable cycle. A list of
deprecated commands should be collated for each release.

See also section~\ref{sec:dll-versioning} below regarding SHARED LIBRARY
VERSIONING.

\section{COMPILE-TIME CONDITIONAL CODE}

Please think very carefully before making code conditional at compile time,
as it increases maintenance burdens and user confusion. In particular,
please avoid gratuitous --enable-\ldots switches to the configure script -
typically code should be good enough to be in Quagga, or it shouldn't be
there at all.

When code must be compile-time conditional, try to have the compiler make it
conditional rather than the C pre-processor - so that it will still be
checked by the compiler, even if disabled. I.e.
this:

\begin{verbatim}
if (SOME_SYMBOL)
      frobnicate();
\end{verbatim}

rather than:

\begin{verbatim}
#ifdef SOME_SYMBOL
frobnicate ();
#endif /* SOME_SYMBOL */
\end{verbatim}

Note that the former approach requires ensuring that SOME\_SYMBOL will be
defined (watch your AC\_DEFINEs).

\section{COMMIT MESSAGES}

The commit message requirements are:

\begin{itemize}

\item The message \emph{MUST} provide a suitable one-line summary followed
by a blank line as the very first line of the message, in the form:

\verb|topic: high-level, one line summary|

Where topic would tend to be the name of a subdirectory, and/or daemon,
unless there's a more suitable topic (e.g. 'build'). This topic is used to
organise change summaries in release announcements.

\item It should have a suitable "body", which tries to address the following
areas, so as to help reviewers and future browsers of the code-base
understand why the change is correct (note also the code comment
requirements):

\begin{itemize}

\item The motivation for the change (does it fix a bug, if so which? add a
feature?)

\item The general approach taken, and trade-offs versus any other
approaches.

\item Any testing undertaken or other information affecting the confidence
that can be had in the change.

\item Information to allow reviewers to be able to tell which specific
changes to the code are intended (and hence be able to spot any accidental
unintended changes).

\end{itemize}
\end{itemize}

The one-line summary must be limited to 54 characters, and all other lines
to 72 characters.

Commit message bodies in the Quagga project have typically taken the
following form:

\begin{itemize}
\item An optional introduction, describing the change generally.
\item A short description of each specific change made, preferably:
\begin{itemize}
\item file by file
\begin{itemize}
\item function by function (use of "ditto", or globs is allowed)
\end{itemize}
\end{itemize}
\end{itemize}

Contributors are strongly encouraged to follow this form.
This itemised commit message form allows reviewers to have confidence that
the author has self-reviewed every line of the patch, as well as providing
reviewers a clear index of which changes are intended, and descriptions for
them (C-to-english descriptions are not desirable - some discretion is
useful). For short patches, a per-function/file break-down may be redundant.
For longer patches, such a break-down may be essential. A contrived example
(where the general discussion is obviously somewhat redundant, given the
one-line summary):

\begin{quote}\begin{verbatim}
zebra: Enhance frob FSM to detect loss of frob

Add a new DOWN state to the frob state machine to allow the barinator to
detect loss of frob.

* frob.h: (struct frob) Add DOWN state flag.
* frob.c: (frob_change) set/clear DOWN appropriately on state change.
* bar.c: (barinate) Check frob for DOWN state.
\end{verbatim}\end{quote}

Please have a look at the git commit logs to get a feel for what the norms
are.

Note that the commit message format follows git norms, so that ``git log
--oneline'' will have useful output.

\section{HACKING THE BUILD SYSTEM}

If you change or add to the build system (configure.ac, any Makefile.am,
etc.), try to check that the following things still work:

\begin{itemize}
\item make dist
\item resulting dist tarball builds
\item out-of-tree builds
\end{itemize}

The quagga.net site relies on make dist to work to generate snapshots. It
must work. Common problems are to forget to have some additional file
included in the dist, or to have a make rule refer to a source file without
using the srcdir variable.

\section{RELEASE PROCEDURE}

\begin{itemize}

\item Tag the appropriate commit with a release tag (follow existing
conventions). [This enables recreating the release, and is just good CM
practice.]
\item Create a fresh tar archive of the quagga.net repository, and do a test
build:

\begin{verbatim}
git-clone git://code.quagga.net/quagga.git quagga
git-archive --remote=git://code.quagga.net/quagga.git \
    --prefix=quagga-release/ master | tar -xf -
cd quagga-release
autoreconf -i
./configure
make
make dist
\end{verbatim}

\end{itemize}

The tarball which ``make dist'' creates is the tarball to be released! The
git-archive step ensures you're working with code corresponding to that in
the official repository, and also carries out keyword expansion. If any
errors occur, move tags as needed and start over from the fresh checkouts.
Do not append to tarballs, as this has produced non-standards-conforming
tarballs in the past.

See also: \url{http://wiki.quagga.net/index.php/Main/Processes}

[TODO: collation of a list of deprecated commands. Possibly can be scripted
to extract from vtysh/vtysh\_cmd.c]

\section{TOOL VERSIONS}

Required versions of support tools are listed in INSTALL.quagga.txt. Changes
to required versions should only be made with due deliberation, as they can
cause environments to no longer be able to compile quagga.

\section{SHARED LIBRARY VERSIONING}
\label{sec:dll-versioning}

[this section is at the moment just gdt's opinion]

Quagga builds several shared libraries (lib/libzebra, ospfd/libospf,
ospfclient/libospfapiclient). These may be used by external programs, e.g. a
new routing protocol that works with the zebra daemon, or ospfapi clients.
The libtool info pages (node Versioning) explain when major and minor
version numbers should be changed. These values are set in Makefile.am near
the definition of the library. If you make a change that requires changing
the shared library version, please update Makefile.am.

libospf exports far more than it should, and is needed by ospfapi clients.
Only bump libospf for changes to functions for which it is reasonable for a
user of ospfapi to call, and please err on the side of not bumping.
There is no support intended for installing part of zebra. The core library
libzebra and the included daemons should always be built and installed
together.

\section{GIT COMMIT SUBMISSION}
\label{sec:git-submission}

The preferred method for submitting changes is to provide git commits via a
publicly-accessible git repository, which the maintainers can easily pull.

The commits should be in a branch based off the Quagga.net master - a
"feature branch". Ideally there should be no commits to this branch other
than those in master, and those intended to be submitted. However, merge
commits to this branch from the Quagga master are permitted, though strongly
discouraged - use another (potentially local and throw-away) branch to test
merge with the latest Quagga master.

Recommended practice is to keep different logical sets of changes on
separate branches - "topic" or "feature" branches. This allows you to still
merge them together to one branch (potentially local and/or "throw-away")
for testing or use, while retaining smaller, independent branches that are
easier to merge.

All content guidelines in section \ref{sec:patch-submission}, PATCH
SUBMISSION, apply.

\section{PATCH SUBMISSION}
\label{sec:patch-submission}

\begin{itemize}

\item For complex changes, contributors are strongly encouraged to first
start a design discussion on the quagga-dev list \emph{before} starting any
coding.

\item Send a clean diff against the 'master' branch of the quagga.git
repository, in unified diff format, preferably with the '-p' argument to
show C function affected by any chunk, and with the -w and -b arguments to
minimise changes. E.g:

\begin{verbatim}
git diff -up mybranch..remotes/quagga.net/master
\end{verbatim}

It is preferable to use git format-patch, and even more preferred to publish
a git repository (see GIT COMMIT SUBMISSION, section
\ref{sec:git-submission}).

If not using git format-patch, include the commit message in the email.
\item After a commit, code should have comments explaining to the reviewer
why it is correct, without reference to history. The commit message should
explain why the change is correct.

\item Include NEWS entries as appropriate.

\item Include only one semantic change or group of changes per patch.

\item Do not make gratuitous changes to whitespace. See the -w and -b
arguments to diff.

\item Changes should be arranged so that the least controversial and most
trivial are first, and the most complex or more controversial are last. This
will maximise how many the Quagga maintainers can merge, even if some other
commits need further work.

\item Providing a unit-test is strongly encouraged. Doing so will make it
much easier for maintainers to have confidence that they will be able to
support your change.

\item New code should be arranged so that it is easy to verify and test.
E.g. stateful logic should be separated out from functional logic as much as
possible: wherever possible, move complex logic out to smaller helper
functions which access no state other than their arguments.

\item State on which platforms and with what daemons the patch has been
tested. Understand that if the set of testing locations is small, and the
patch might have unforeseen or hard to fix consequences, there may be a call
for testers on quagga-dev, and the patch may be blocked until test results
appear.

If there are no users for a platform on quagga-dev who are able and willing
to verify -current occasionally, that platform may be dropped from the
"should be checked" list.

\end{itemize}

\section{PATCH APPLICATION}

\begin{itemize}

\item Only apply patches that meet the submission guidelines.

\item If the patch might break something, issue a call for testing on the
mailinglist.
\item Give an appropriate commit message (see above), and use the --author
argument to git-commit, if required, to ensure proper attribution (you
should still be listed as committer).

\item Immediately after committing, double-check (with git-log and/or gitk).
If there's a small mistake you can easily fix it with ``git commit --amend
..''.

\item When merging a branch, always use an explicit merge commit. Giving
--no-ff ensures a merge commit is created which documents ``this human
decided to merge this branch at this time''.

\end{itemize}

\section{STABLE PLATFORMS AND DAEMONS}

The list of platforms that should be tested follow. This is a list derived
from what quagga is thought to run on and for which maintainers can test or
there are people on quagga-dev who are able and willing to verify that
-current does or does not work correctly.

\begin{itemize}
\item BSD (Free, Net or Open, any platform)
\item GNU/Linux (any distribution, i386)
\item Solaris (strict alignment, any platform)
\item future: NetBSD/sparc64
\end{itemize}

The list of daemons that are thought to be stable and that should be tested
are:

\begin{itemize}
\item zebra
\item bgpd
\item ripd
\item ospfd
\item ripngd
\end{itemize}

Daemons which are in a testing phase are

\begin{itemize}
\item ospf6d
\item isisd
\item watchquagga
\end{itemize}

\section{IMPORT OR UPDATE VENDOR SPECIFIC ROUTING PROTOCOLS}

The source code of Quagga is based on two vendors:

\verb|zebra_org| (\url{http://www.zebra.org/})
\verb|isisd_sf| (\url{http://isisd.sf.net/})

To import code from further sources, e.g. for archival purposes without
necessarily having to review and/or fix some changeset, create a branch from
`master':

\begin{verbatim}
git checkout -b archive/foo master
git commit -a "Joe Bar "
git push quagga archive/foo
\end{verbatim}

presuming `quagga' corresponds to a file in your .git/remotes with
configuration for the appropriate Quagga.net repository.

\end{document}
|
|
# Symmetries versus the spectrum of $J\bar T$-deformed CFTs
### Submission summary
As Contributors: Monica Guica
Arxiv Link: https://arxiv.org/abs/2012.15806v2 (pdf)
Date accepted: 2021-03-02
Date submitted: 2021-01-14 07:08
Submitted by: Guica, Monica
Submitted to: SciPost Physics
Academic field: Physics
Specialties: High-Energy Physics - Theory
Approach: Theoretical
### Abstract
It has been recently shown that classical $J\bar T$ - deformed CFTs possess an infinite-dimensional Witt-Ka\v{c}-Moody symmetry, generated by certain field-dependent coordinate and gauge transformations. On a cylinder, however, the equal spacing of the descendants' energies predicted by such a symmetry algebra is inconsistent with the known finite-size spectrum of $J\bar T$ - deformed CFTs. Also, the associated quantum symmetry generators do not have a proper action on the Hilbert space. In this article, we resolve this tension by finding a new set of (classical) conserved charges, whose action is consistent with semi-classical quantization, and which are related to the previous symmetry generators by a type of energy-dependent spectral flow. The previous inconsistency between the algebra and the spectrum is resolved because the energy operator does not belong to the spectrally flowed sector.
Published as SciPost Phys. 10, 065 (2021)
### Submission & Refereeing History
Submission 2012.15806v2 on 14 January 2021
## Reports on this Submission
### Anonymous Report 2 on 2021-2-28 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2012.15806v2, delivered 2021-02-28, doi: 10.21468/SciPost.Report.2631
### Report
This manuscript studies the apparent tension between the deformed energy level formula for a $J\bar{T}$ deformed CFT on the cylinder and the equally-spaced energies that follow from the symmetries. It is shown that the previously found infinite set of symmetry generators (which are field-dependent) does not act properly on the semiclassical phase space of the theory, but a new (infinite) set of symmetry generators does. These new generators preserve the algebra and have the correct charge and momentum quantization.
The manuscript is well-written, although sometimes a bit formal, and addresses a very important relevant issue in the $J\bar{T}$ literature. The proposed solution is valuable, not only for the $J\bar{T}$ deformation, but potentially also other deformations such as the $T\bar{T}$ deformation.
One confusion I had was with the rather formal expressions 3.40 and 3.49. Is it clear that these sums are convergent? Is the radius of convergence related to the complexification of the energy levels in 1.1?
Typo: It is Kac and not Ka\v{c}
Other than this minor confusion I recommend this manuscript for publication.
• validity: top
• significance: high
• originality: high
• clarity: high
• formatting: excellent
• grammar: excellent
### Anonymous Report 1 on 2021-2-11 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2012.15806v2, delivered 2021-02-11, doi: 10.21468/SciPost.Report.2539
### Report
This work is a direct continuation of previous works of the author on this topic, which deal with subtleties concerning symmetries of solvable irrelevant deformed CFTs, and it addresses new ideas relevant for their resolution.

It is suitable for publication.
• validity: -
• significance: -
• originality: -
• clarity: -
• formatting: -
• grammar: -
|
|
# Test: Determinants (CBSE Level) - 1
## 25 Questions MCQ Test Mathematics (Maths) Class 12 | Test: Determinants (CBSE Level) - 1
Description
This mock test of Test: Determinants (CBSE Level) - 1 for JEE helps you for every JEE entrance exam. This contains 25 Multiple Choice Questions for JEE Test: Determinants (CBSE Level) - 1 (mcq) to study with solutions a complete question bank. The solved questions answers in this Test: Determinants (CBSE Level) - 1 quiz give you a good mix of easy questions and tough questions. JEE students definitely take this Test: Determinants (CBSE Level) - 1 exercise for a better result in the exam. You can find other Test: Determinants (CBSE Level) - 1 extra questions, long questions & short questions for JEE on EduRev as well by searching above.
QUESTION: 1
### Let a = , then Det. A is
Solution:
Apply C2 → C2 + C3,
QUESTION: 2
Solution:
Because the value of the determinant is zero when two of its rows are identical, which is possible only when either x = 3 or x = 4.
QUESTION: 3
Solution:
Apply , R1 → R1+R2+R3,
Apply , C3 → C3 - C1, C2 → C2 - C1,
=(a+b+c)3
QUESTION: 4
If A and B are invertible matrices of order 3 , then det (adj A) =
Solution:
Let A be a non-singular square matrix of order n. Then |adj A| = |A|^(n-1); for order 3 this gives det (adj A) = |A|^2.
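As a quick numerical sanity check (my addition, not part of the original solution), the identity det(adj A) = det(A)^(n-1) can be verified directly with NumPy, computing the adjugate as det(A) · A⁻¹:

```python
# Check det(adj A) = det(A)^(n-1) for a random non-singular 3x3 matrix.
import numpy as np

def det_adjugate(A):
    """Return det(adj A), using adj A = det(A) * A^{-1} for non-singular A."""
    adj = np.linalg.det(A) * np.linalg.inv(A)
    return np.linalg.det(adj)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
lhs = det_adjugate(A)
rhs = np.linalg.det(A) ** (3 - 1)   # |A|^(n-1) with n = 3
print(abs(lhs - rhs) < 1e-8)        # True
```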
QUESTION: 5
If A and B matrices are of same order and A + B = B + A, this law is known as
Solution:
Commutative law, in mathematics, either of two laws relating to number operations of addition and multiplication, stated symbolically: a + b = b + a and ab = ba
QUESTION: 6
If A’ is the transpose of a square matrix A , then
Solution:
The determinant of a matrix A and that of its transpose are always the same.
QUESTION: 7
The value of the determinant is
Solution:
, Apply , C2 → C2 + C3
= 0, (∵ C1 = C2)
QUESTION: 8
The roots of the equation det. are
Solution:
⇒ (1-x)(2-x)(3-x) = 0 ⇒x = 1,2,3
QUESTION: 9
If A is a square matrix of order 2 , then det (adj A) = x
Solution:
Let A be a square matrix of order 2. Then |adj A| = |A|^(2-1) = |A|.
QUESTION: 10
If A is a symmetric matrix, then At =
Solution:
QUESTION: 11
Solution:
Because row 1 and row 3 are identical, the determinant is zero.
QUESTION: 12
is equal to
Solution:
Apply , C1→C1 - C3, C2→C2-C3
= 10 - 12 = -2
QUESTION: 13
If A+B+C = π, then the value of
Solution:
QUESTION: 14
If A is a non singular matrix of order 3 , then |adj(A3)| =
Solution:
If A is a non-singular matrix of order 3, then |adj M| = |M|^2 for any such matrix M; hence |adj(A³)| = |A³|² = |A|⁶.
QUESTION: 15
If A and B are any 2 × 2 matrices , then det. (A+B) = 0 implies
Solution:
Det.(A+B) ≠ Det.A + Det.B.
QUESTION: 16
If A B be two square matrices such that AB = O, then
Solution:
If A and B are two square matrices such that AB = O, then det A · det B = det(AB) = 0, so at least one of A and B must be singular.
QUESTION: 17
Solution:
Apply , C1 → C1 - C2, C2 → C2 - C3,
Because here row 1 and 2 are identical
QUESTION: 18
If , then equals
Solution:
Because , the determinant of a skew symmetric matrix of odd order is always zero and of even order is a non zero perfect square.
QUESTION: 19
If I3 is the identity matrix of order 3 , then I3^(-1) is
Solution:
Because , the inverse of an identity matrix is an identity matrix.
QUESTION: 20
If A and B are square matrices of same order and A’ denotes the transpose of A , then
Solution:
By the property of transpose of a matrix ,(AB)’ = B’A’.
QUESTION: 21
A square matrix A is invertible iff det A is equal to
Solution:
Only non-singular matrices possess inverse.
QUESTION: 22
Solution:
Apply, C1→ C1+ C2+C3+C4,
Apply, R1 →R1 - R2,
Apply, R2 → R2 - R3, R3 → R3 - R4,
=(x+3a) (a -x)3 (1) = (x+3a)(a-x)3
QUESTION: 23
If the entries in a 3 x 3 determinant are either 0 or 1 , then the greatest value of this determinant is :
Solution:
Greatest value = 2
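Since a 3 × 3 matrix with entries in {0, 1} has only 2⁹ = 512 possibilities, the claimed maximum can be confirmed by brute force (an added check, not part of the original solution):

```python
# Enumerate all 512 matrices with entries in {0, 1} and take the largest
# determinant; the maximum for a 3x3 zero-one matrix is 2.
import itertools
import numpy as np

best = max(
    round(np.linalg.det(np.array(bits).reshape(3, 3)))
    for bits in itertools.product([0, 1], repeat=9)
)
print(best)  # 2
```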
QUESTION: 24
The roots of the equation are
Solution:
Operate,
Apply R3 → R3 - R1, R2 → R2 - R1,
⇒ -6(5x2 - 20) +15(2x-4) = 0
⇒ (x- 2)(x+1) = 0⇒x=2, -1
QUESTION: 25
In a third order determinant, each element of the first column consists of sum of two terms, each element of the second column consists of sum of three terms and each element of the third column consists of sum of four terms. Then it can be decomposed into n determinants, where n has value
Solution:
n = 2 × 3 × 4 = 24.
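The count n = 24 follows from the multilinearity of the determinant in its columns. The sketch below (an added illustration using random numeric columns) confirms that a determinant whose columns are sums of 2, 3 and 4 terms decomposes into exactly 2 × 3 × 4 = 24 determinants:

```python
# Multilinearity check: det with columns (u1+u2, v1+v2+v3, w1+...+w4) equals
# the sum of the 24 determinants det([u_i, v_j, w_k]).
import itertools
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal((2, 3))   # two terms for column 1
v = rng.standard_normal((3, 3))   # three terms for column 2
w = rng.standard_normal((4, 3))   # four terms for column 3

full = np.linalg.det(np.column_stack([u.sum(0), v.sum(0), w.sum(0)]))
parts = [np.linalg.det(np.column_stack([ui, vj, wk]))
         for ui, vj, wk in itertools.product(u, v, w)]
print(len(parts))                      # 24
print(abs(full - sum(parts)) < 1e-9)   # True
```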
|
|
# Using binomial theorem, expand each of the following:
Question:
Using binomial theorem, expand each of the following:
$\left(\frac{2 x}{3}-\frac{3}{2 x}\right)^{6}$
Solution:
To find: Expansion of $\left(\frac{2 x}{3}-\frac{3}{2 x}\right)^{6}$
Formula used: (i) ${ }^{n} C_{r}=\frac{n !}{(n-r) !(r) !}$
(ii) $(a+b)^{n}={ }^{n} C_{0} a^{n}+{ }^{n} C_{1} a^{n-1} b+{ }^{n} C_{2} a^{n-2} b^{2}+\ldots \ldots+{ }^{n} C_{n-1} a b^{n-1}+{ }^{n} C_{n} b^{n}$
We have, $\left(\frac{2 x}{3}-\frac{3}{2 x}\right)^{6}$
$\Rightarrow\left[{ }^{6} C_{0}\left(\frac{2 x}{3}\right)^{6-0}\right]+\left[{ }^{6} C_{1}\left(\frac{2 x}{3}\right)^{6-1}\left(-\frac{3}{2 x}\right)^{1}\right]+\left[{ }^{6} C_{2}\left(\frac{2 x}{3}\right)^{6-2}\left(-\frac{3}{2 x}\right)^{2}\right]+$
$\left[{ }^{6} C_{3}\left(\frac{2 x}{3}\right)^{6-3}\left(-\frac{3}{2 x}\right)^{3}\right]+\left[{ }^{6} C_{4}\left(\frac{2 x}{3}\right)^{6-4}\left(-\frac{3}{2 x}\right)^{4}\right]$
$+\left[{ }^{6} C_{5}\left(\frac{2 x}{3}\right)^{6-5}\left(-\frac{3}{2 x}\right)^{5}\right]+\left[{ }^{6} C_{6}\left(-\frac{3}{2 x}\right)^{6}\right]$
$\Rightarrow\left[\frac{6 !}{0 !(6-0) !}\left(\frac{2 x}{3}\right)^{6}\right]-\left[\frac{6 !}{1 !(6-1) !}\left(\frac{2 x}{3}\right)^{5}\left(\frac{3}{2 x}\right)\right]+$
$\left[\frac{6 !}{2 !(6-2) !}\left(\frac{2 x}{3}\right)^{4}\left(\frac{9}{4 x^{2}}\right)\right]-\left[\frac{6 !}{3 !(6-3) !}\left(\frac{2 x}{3}\right)^{3}\left(\frac{27}{8 x^{3}}\right)\right]+$
$\left[\frac{6 !}{4 !(6-4) !}\left(\frac{2 x}{3}\right)^{2}\left(\frac{81}{16 x^{4}}\right)\right]-\left[\frac{6 !}{5 !(6-5) !}\left(\frac{2 x}{3}\right)^{1}\left(\frac{243}{32 x^{5}}\right)\right]$
$+\left[\frac{6 !}{6 !(6-6) !}\left(\frac{729}{64 x^{6}}\right)\right]$
$\Rightarrow\left[1\left(\frac{64 x^{6}}{729}\right)\right]-\left[6\left(\frac{32 x^{5}}{243}\right)\left(\frac{3}{2 x}\right)\right]+\left[15\left(\frac{16 x^{4}}{81}\right)\left(\frac{9}{4 x^{2}}\right)\right]-\left[20\left(\frac{8 x^{3}}{27}\right)\right.$
$\left.\left(\frac{27}{8 x^{3}}\right)\right]+\left[15\left(\frac{4 x^{2}}{9}\right)\left(\frac{81}{16 x^{4}}\right)\right]-\left[6\left(\frac{2 x}{3}\right)\left(\frac{243}{32 x^{5}}\right)\right]+\left[1\left(\frac{729}{64 x^{6}}\right)\right]$
$\Rightarrow \frac{64}{729} x^{6}-\frac{32}{27} x^{4}+\frac{20}{3} x^{2}-20+\frac{135}{4} \frac{1}{x^{2}}-\frac{243}{8} \frac{1}{x^{4}}+\frac{729}{64} \frac{1}{x^{6}}$
Ans) $\frac{64}{729} x^{6}-\frac{32}{27} x^{4}+\frac{20}{3} x^{2}-20+\frac{135}{4} \frac{1}{x^{2}}-\frac{243}{8} \frac{1}{x^{4}}+\frac{729}{64} \frac{1}{x^{6}}$
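The final expansion can be double-checked symbolically; this SymPy snippet (my addition, not part of the original solution) compares the stated answer against a direct expansion:

```python
# Verify the binomial expansion of (2x/3 - 3/(2x))^6 term by term.
import sympy as sp

x = sp.symbols('x')
expr = sp.expand((sp.Rational(2, 3) * x - sp.Rational(3, 2) / x) ** 6)
expected = (sp.Rational(64, 729) * x**6 - sp.Rational(32, 27) * x**4
            + sp.Rational(20, 3) * x**2 - 20
            + sp.Rational(135, 4) / x**2 - sp.Rational(243, 8) / x**4
            + sp.Rational(729, 64) / x**6)
print(sp.simplify(expr - expected) == 0)  # True
```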
|
|
# Tag Info
15
It depends on the car. If it's a big displacement high performance engine, then you may not be able to get the rear wheels to turn unless you're in the highest gear. If it's got an itty bitty engine, then push-starting in 1st may work best. While I don't have personal experience, because weight transfers to the front on deceleration a front wheel drive ...
14
Have you looked at the size of one of those maritime diesel engines? They are larger than your car and need to deliver a lot of power to move and power the ship. That takes a lot of fuel so it's cheaper to burn more cheaper fuel even if it is of inferior quality. The bigger size also lets it use wider fuel lines so the viscosity is less of an issue. You also ...
8
Each cylinder produces a power stroke for every (2-stroke engine) or every other (4-stroke engine) rotation of the crankshaft. A 3-cylinder, 4-stroke engine will produce 3 power strokes for every 2 rotations (720°), or one for every 720°/3 = 240° of rotation.
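The same even-firing arithmetic can be wrapped in a tiny helper (an added sketch; `firing_interval_deg` is my own name, not a standard function):

```python
def firing_interval_deg(cylinders, strokes=4):
    """Crank-angle between power strokes for an even-firing engine.
    A 4-stroke cycle spans 720 deg of crank rotation, a 2-stroke cycle 360."""
    cycle_deg = 360 * strokes // 2
    return cycle_deg / cylinders

print(firing_interval_deg(3))             # 240.0, as derived above
print(firing_interval_deg(4))             # 180.0
print(firing_interval_deg(3, strokes=2))  # 120.0
```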
8
The primary issue (ignoring secondary issues like internal wear) is whether or not the engine can get rid of the waste heat. Assuming that all engines for a given fuel have roughly the same overall thermal efficiency, for a given power output, a certain amount of waste heat needs to be dissipated regardless of the physical size of the engine. The heat is ...
7
A higher gear ratio means that less force is needed to turn over the engine by pushing the car. Aside from the issue of the tires slipping, humans are more likely to be able to maintain the speed of the car long enough to allow the engine to start if it is in a higher gear. If you are tow-starting the car, the same thing applies: in a higher gear, there is ...
7
They are very nearly equal for typical four-stroke non-turbo diesels under load. A turbo diesel under load should have slightly more radiator loss than exhaust loss. At the bottom is a link to the technical spec sheet for a Cat 3412 powered genset. It's a probably a bit bigger than what you had in mind. It is a turbo with aftercooler (A/C in the doc below). ...
7
Yes - for certain applications. Low-energy, low-temperature, small installations, possibly 'accessory' power where providing a bit of weak rotary power to some point of machinery would be difficult but a good heat gradient is at hand and can be utilized - generally things where you have a heat source above your alcohol boiling point but not providing nearly ...
5
Simple economics. Marine engines consume enormous amounts of fuel, so in order to reduce operating costs, they use the cheapest, least desirable sludge that the oil refineries can produce.
5
From the Wikipedia article: All diesel engines can be considered to be lean-burning with respect to the total volume, however the fuel and air is not well mixed before the combustion. Most of the combustion occurs in rich zones around small droplets of fuel. Locally rich combustion like this is a source of NOx and particles.
5
Most (modern) small and large car engines are designed for 100% duty cycle. This means that at 100% rated power(gas pedal all the way down) the engine can run continuously. Heat dissipation is the limiting factor like Dave Tweed stated. Cars that are not designed to continuously dissipate 100% of the heat generated at max power require the driver to watch ...
5
A diesel engine for a car needs a fuel which is liquid even in winter. This fuel should contain a very small amount of sulfur to limit air pollution. The marine bunker oil is not liquid at room temperature, it has to be heated to about 50 °C before pumping out of the tank and to about 130 to 140 °C before injecting it into the cylinders. It contains a lot ...
5
A stroke consists of either one expansion, or one contraction. As such it corresponds to half of a rotation of the shaft. In a three cylinder engine, each cylinder will stroke with 2 strokes per revolution, so that's 6 strokes per revolution. However, in a four stroke engine, it takes four strokes for each cylinder to complete one cycle, and only one of ...
5
Brakes are used. And tyre wedges are also used. Turbo prop has the blades feathered to not produce any thrust. A jet is producing enough mass flow to run and little thrust.
4
That's because when you upshift, you select a lower ratio, so your clutch speed drops. Since you took your foot off the gas, the engine rpm also dropped, and now the clutch and engine speed are close to synced, and hence little force is felt when you engage the clutch. The best way to upshift is to relieve the throttle a little (not fully), so the rpm will ...
4
There is no reason in principle why a custom engine builder couldn't cast a block themselves. Everything done by the level of automation in this process could by done by hand, given enough highly skilled craftsmen. https://www.youtube.com/watch?v=5oUDzkkdkpQ Whether there are customers who would want to pay for a hand-made block which costs as much as the ...
3
Gaskets are selected based upon the mating surface materials, pressure, surface area, surface condition and what they are sealing. As far as I know, harder gaskets are used in harsher locations. But there are many factors involved with modern to do with making them lighter and stronger. A strong steel gasket may be applied to a cylinder head because it adds ...
3
The crosshead bearing slides within a track and connects to the conrod and then to the crankshaft. In a vertically oriented engine the pressure on the bearing is always downwards, resulting in a depleted lubrication film on the lower contact surfaces of the bearing shell. To supply lubricant to the entire bearing, high pressure oil is injected into the ...
3
One possible reason for placing the engine and most of the weight at the back of the car would be to improve rear wheel tire friction. This would be useful if the car has a rear-wheel drive, since the car's ability to accelerate is limited by the amount of torque the wheels can support before slipping. Adding weight to the back of the car would increase ...
3
"Let me say this about that" (extra points to those who recognize the originator of that phrase). First: yeah, they've invented a cool-looking device. But, they appear to be getting nowhere so far as either advanced funding or a fieldable model produced. Second: I have seen at least 5 or 6 "groundbreaking new powerful/efficient/wunderdevice" internal ...
3
The answer depends on factors like the bypass ratio of the engine and the design of the thrust reverser, e.g. bucket doors at the back of the engine or vanes to deflect the bypass airflow from the fan. By the OP's proposed measurement (reverse thrust / forward thrust), the efficiency is also strongly dependent on the engine speed. At maximum thrust the ...
3
I never heard of pushing a car in 1st; have you tried it? In the good old days when cars were not so reliable, I started more than a few in 3rd (of a 3-speed). Occasionally I started one in 2nd by letting out the clutch after it was moving, then quickly pushing the clutch back in and hoping it started instead of sliding the wheels. Maybe with a small ...
3
Having pushed many cars on many occasions, on snow as well as gravel and tarmacadam, I can categorically state: on gravel, 3rd is best, nothing lower than 2nd; on snow, 4th (you can try 3rd and you may be lucky), and that's if you have chains or studs on; on tarmacadam, 2nd gear, no lower. Downhill you could use 1st, but I would still recommend 2nd. The reason is ...
3
For every action there is an equal, but opposite, reaction. Never found a case that this is not true. Torque reaction on the P51 even caused uneven tire wear: https://www.aopa.org/news-and-media/all-news/2007/august/pilot/north-american-aviation-p-51d-mustang So, if you open the bonnet or hood of a car and run the engine with it in neutral, then blip the ...
2
OK, fair warning: I am answering my own question and am not a engines person. So this could be wrong. The real limit in the engine is how hot certain parts can get without breaking. This temperature is related to the gas temperature after combustion via the cooling system and the cylinder design (convective heat transfer between the gas and the cylinder ...
2
Your second attempt is correct, you just had two mistakes: if you re-calculate $V_1$ you'll find that it equals $0.01503$, not $0.0158$ $m^3$; same issue with $P_2$. $$P_2 = \frac{138*10^{3}*0.01503^{1.4}}{(8.3529411*10^{-4})^{1.4}}= 7.889*10^6\ Pa$$ Substituting in the isentropic work equation: $$W = \frac{P_2V_2 - P_1V_1}{k-1}$$ = (7.867*10^6 * 8....
2
Having a master rod means that the bottom bearings of the connecting rods follow a fixed path throughout the cycle. If they are all attached to the crankshaft via a spider bearing, the spider bearing itself has an extra degree of freedom to rotate around the crank bearing; having a master rod constrains this rotation. Because the con rod bottom ...
2
It is possible to calculate this under the assumption that all of the energy from the expansion goes into mechanical energy instead of being lost to heat transfer through the walls. In this case the process is called adiabatic, and the work done is given by $$W = P_0V_0^\gamma\frac{V_f^{1-\gamma}-V_0^{1-\gamma}}{1-\gamma},$$ where $$\gamma=\frac{C_p}{...
2
The torque indicated by the car manufacturer is usually measured at the engine's output, without use of the gearbox. However it is still a good indicator of the car's global performance, but only if you pay attention to which rpm provides the higher torque. Example : 100 Nm @3000 rpm is a better performance than 100 Nm @7000 rpm since the first one is ...
2
One important reason is that diesel fuel had a high molecular weight compared to gasoline this means that is is more difficult to disperse as it forms liquid droplets as opposed to vapour and even more importantly there are many more intermediate reactions involved in complete combustion. For example Hydrogen, H2 burns very easily in oxygen as combustion ...
2
Theoretically a ramjet should start working from Mach 0.5 (this is the speed where compressibility of fluids becomes significant). But it won't be very efficient until it reaches around Mach 3, because a ramjet compresses incoming air by slowing it down. The higher the speed of the incoming air, the more compression can be achieved. Around Mach 3, the diffuser can ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
|
# Find the the locus from the arg of a ratio of two complex numbers
What is the locus given by the following equation?
$$arg\left(\frac{z+1}{z-1}\right)=\frac{\pi}{2}$$
I know that $$arg\left(\frac{z_1}{z_2}\right)=arg(z_1)-arg(z_2)$$
and that the loci must emanate from -1 and +1 on the real axis. The $\pi/2$ also suggests to me that there is a $90^\circ$ angle at work, but I don't know how to proceed from there. So a more substantive explanation would be most appreciated!
It's pure high-school geometry: the lower semi-circle with diameter $[-1,1]$.
Algebraic alternative: first note that $\arg w = \frac{\pi}{2}$ iff $\operatorname{Re} w = 0$ and $\operatorname{Im} w \gt 0\,$. Then:
• $\require{cancel}0 = 2 \, \operatorname{Re} \cfrac{z+1}{z-1} = \cfrac{z+1}{z-1}+\cfrac{\bar z+1}{\bar z-1}=\cfrac{z \bar z - \cancel{z} + \bcancel{\bar z} - 1 + z \bar z - \bcancel{\bar z} + \cancel{z} - 1}{|z-1|^2}=\cfrac{2(|z|^2 - 1)}{|z-1|^2} \implies \bbox[3px, border:1px solid black]{|z|^2 = 1}\,$
• $0 \lt 2 \,\operatorname{Im} \cfrac{z+1}{z-1} = \cfrac{1}{i}\left(\cfrac{z+1}{z-1}-\cfrac{\bar z+1}{\bar z-1}\right)=\cfrac{1}{i} \, \cfrac{-2(z-\bar z)}{|z-1|^2}=- \cfrac{4 \operatorname{Im} z}{|z-1|^2}$ $\implies$ $\bbox[3px, border:1px solid black]{\operatorname{Im} z \lt 0}$
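The two boxed conditions ($|z| = 1$ and $\operatorname{Im} z \lt 0$) can be spot-checked numerically. The following snippet (my addition) evaluates $\arg\frac{z+1}{z-1}$ at a few points of the lower unit semicircle:

```python
# Points z on the unit circle with Im z < 0 should give arg((z+1)/(z-1)) = pi/2.
import cmath
import math

for theta in (-0.3, -1.2, -2.5):   # lower semicircle, avoiding z = +1 and z = -1
    z = cmath.exp(1j * theta)      # |z| = 1, Im z < 0
    w = (z + 1) / (z - 1)
    assert math.isclose(cmath.phase(w), math.pi / 2)
print("all sampled points on the lower semicircle satisfy arg = pi/2")
```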
|
|
# Do *all* non-dividend paying assets have the risk-free instantaneous return rate under the risk-neutral measure?
For simplicity let's consider a 1D BS world. The only source of randomness comes from the Brownian motion dynamics $$dB_t$$. The risk-free rate is $$r$$ (one may assume it as constant for the time being). I know that, by virtue of Girsanov's theorem, the Brownian motion under the risk-neutral measure is defined by $$dB_t^{\Bbb Q} = \lambda dt + dB_t$$ where $$\lambda$$ is the unique market price of risk, or the so-called Sharpe ratio.
Under the risk-neutral measure, any non-dividend paying stock price process $$S_t$$ thus follows $$\frac{dS_t}{S_t} = rdt + \sigma_SdB_t^{\Bbb Q}.$$
However, in Kerry Back's A Course in Derivative Securities page 220, the author claimed without a proof that the instantaneous rate of return for a call option on the stock price $$C_t$$ is also $$r$$, i.e. $$\frac{dC_t}{C_t} = rdt + \sigma_C d B_t^{\Bbb Q}$$ where $$\sigma_C$$ is some stochastic process that we're not interested in. The author makes crucial use of the above formula (i.e. the drift of $$C_t$$ is $$rC_tdt$$) to derive the BS PDE.
Question: is it true that under the risk neutral measure, any non-dividend paying asset price $$X_t$$ must have its instantaneous rate of return equal to $$r$$? If so, what would be a rigorous explanation for this?
Edit: Antoine is spot on. Under the risk neutral measure, any discounted asset price $$Y_t=e^{-rt}X_t$$ must be a martingale or equivalently an Ito integral without drift. Hence $$\frac{dY_t}{Y_t}=\sigma_Y dB_t^{\Bbb Q}.$$ where $$\sigma_Y$$ can be a quite general stochastic process. On the other hand, by the compounding rule of Ito processes, $$\frac{dY_t}{Y_t}=-rdt+\frac{dX_t}{X_t}$$ Therefore it follows $$\frac{dX_t}{X_t}=rdt+\sigma_Y dB_t^{\Bbb Q}.$$
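The martingale property behind this argument is easy to check by simulation. The sketch below (my addition, with arbitrarily chosen parameters) draws terminal prices under $\Bbb Q$ from the lognormal law implied by the geometric Brownian motion above and confirms that the discounted mean returns $S_0$:

```python
# Under Q, E[e^{-rT} S_T] should equal S_0 for a non-dividend-paying asset.
import numpy as np

r, sigma, S0, T, n = 0.05, 0.2, 100.0, 1.0, 200_000
rng = np.random.default_rng(42)
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
disc_mean = np.exp(-r * T) * ST.mean()
print(disc_mean)  # close to S0 = 100, up to Monte Carlo error
```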
|
|
# The cosmic-ray ionisation rate in the pre-stellar core L1544
Preprint
### Abstract
Context. Cosmic rays (CRs) play an important role in the chemistry and dynamics of the interstellar medium. In dense environments, they represent the main ionising agent, driving the rich chemistry of molecular ions and determining the ionisation fraction, which regulates the degree of coupling between the gas and magnetic fields. Estimates of the CR ionisation rate ($$\zeta_2$$) span several orders of magnitude, depending on the targeted sources and on the used method. Aims. Recent theoretical models have characterised the CR attenuation with increasing density. We aim to test these models for the attenuation of CRs in the low-mass pre-stellar core L1544. Methods. We use a state-of-the-art gas-grain chemical model, which accepts the CR ionisation rate profile as input, to predict the abundance profiles of four ions: $$\rm N_2H^+$$, $$\rm N_2D^+$$, $$\rm HC^{18}O^+$$, and $$\rm DCO^+$$. Non-LTE radiative transfer is performed to produce synthetic spectra based on the derived abundances. These are compared with observations obtained with the Institut de Radioastronomie Millim\'etrique (IRAM) 30m telescope. Results. Our results indicate that a model with $$\zeta_2 > 10^{-16} \rm \, s^{-1}$$ is excluded by the observations. Also the model with the standard $$\zeta_2 = 1.3 \times 10^{-17} \rm \, s^{-1}$$ produces a worse agreement with respect to the attenuation model based on Voyager observations, which has an average $$\zeta_2 = 3 \times 10^{-17} \rm \, s^{-1}$$ at the column densities typical of L1544. The single-dish data, however, are not sensitive to the attenuation of the CR profile, which changes only by a factor of two in the range of column densities spanned by the core model. Interferometric observations at higher spatial resolution, combined with observations of transitions with lower critical density are needed to observe a decrease of $$\zeta_2$$ with density.
### Author and article information
###### Journal
16 September 2021
###### Article
2109.08169
|
|
# How to explain the behaviour of TreeForm? [duplicate]
Just try this:
g=1;a=1;TreeForm[Hold[g[a]]]
The weird thing in the outcome is that the node of g displays 1 ,and a is still a. I do not know how to explain the behaviour of TreeForm.
I would call this a bug. Please report it to Wolfram.
IGraph/M's IGExpressionTree can handle this:
IGExpressionTree[Hold[g[a]]]
The documentation has examples that show how to get a more TreeForm-like output. Do keep in mind that the purpose of IGExpressionTree is not a perfect visualization of expressions, but simply converting fairly benign expressions to Graph (or rather to generate tree graphs which are easier to assemble as expressions). It is definitely possible to construct examples where it leaks evaluation, and I am not going to fix these if the fix would affect performance.
If the symbols with OwnValues are all on the context path, then one idea is to write the expression to a string or file, where contexts are not included, and then to block the context and context path so that all symbols are loaded in a temporary context. The symbols in the temporary context will not have OwnValues, so you can use TreeForm without worrying about leaky symbols.
There is one issue to deal with. The new symbols should only be temporary, which means the output should not contain any references to these symbols. The way to fix this is to convert the TreeForm output to boxes, and then remove the symbols before returning the TreeForm box output. I will use Export[..., "Text"] to convert the expression to a string, there may be better options:
sealedTreeForm[expr_] := With[{es = ExportString[expr, "Text"]},
Block[{$Context = "FOO`", $ContextPath = {"System`", "FOO`"}},
Internal`WithLocalSettings[
Null,
Quiet[
RawBoxes @ ToBoxes @ TreeForm @ ImportString[es],
General::shdw
],
Remove["FOO`*"]
]
]
]
Example:
g=1; a=1; sealedTreeForm[Hold[g[a]]]
|
|
# covariance of multinomial distribution
The multinomial distribution is a generalization of the binomial distribution. A multinomial trials process is a sequence of independent, identically distributed random variables $X=(X_1,X_2,\ldots)$, each taking one of $k$ possible values; for $k=2$ (success and failure) this reduces to a Bernoulli trials process. Suppose an experiment with $k$ possible outcomes $O_1, O_2, \ldots, O_k$, having probabilities $p_1, p_2, \ldots, p_k$, is repeated independently $n$ times, and let $X_i$ denote the number of times that outcome $O_i$ occurs in the $n$ repetitions. The expected number of times outcome $i$ is observed over $n$ trials is $n p_i$; more generally, $\operatorname{Var}(X_i) = n p_i (1-p_i)$ and $\operatorname{Cov}(X_i, X_j) = -n p_i p_j$ for $i \neq j$.
As a concrete example, for $n$ draws of red, green and black balls with probabilities $p_1=\frac{r}{r+g+b}$, $p_2=\frac{g}{r+g+b}$ and $p_3=\frac{b}{r+g+b}$, the counts satisfy $P(X=x, Y=y, Z=z)=\frac{n!}{x!\,y!\,z!}p_1^x p_2^y p_3^z$ with $n=x+y+z$.
The multinomial distribution is preserved when the counting variables are combined. More precisely, if $\{A_1, A_2, \ldots, A_m\}$ is a partition of the index set $\{1,2,\ldots,k\}$ into nonempty subsets, then the vector of subset totals is again multinomial. As a final illustration, the most likely distribution for the number of particles that fall into each of $m$ regions is the multinomial distribution with $p_i = 1/m$; this corresponds to each particle, independent of the others, being equally likely to fall in any of the $m$ regions.
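The covariance structure $\operatorname{Cov}(X_i, X_j) = n(p_i\,\delta_{ij} - p_i p_j)$, i.e. variance $n p_i(1-p_i)$ on the diagonal and $-n p_i p_j$ off it, can be checked empirically (an added NumPy illustration):

```python
# Compare the empirical covariance of multinomial samples to n*(diag(p) - p p^T).
import numpy as np

n, p = 50, np.array([0.2, 0.3, 0.5])
rng = np.random.default_rng(7)
samples = rng.multinomial(n, p, size=500_000)

emp_cov = np.cov(samples, rowvar=False)
theory = n * (np.diag(p) - np.outer(p, p))   # Cov(Xi, Xj) = n(pi*1{i=j} - pi*pj)
print(np.allclose(emp_cov, theory, atol=0.2))  # True up to sampling noise
```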
|
|
# Getting the word count of a PDF file in Evince
Is there any way I can get the word count of a PDF file that I'm viewing in Evince, Ubuntu's default PDF viewer? I'm able to convert the file to a text document and get the word count from the terminal, but I would rather be able to access it quickly without having to use the terminal. Is there a plugin that can do this, or is it already built in and I'm just missing it?
P.S. I would prefer not to change my viewer, as Evince is the default PDF viewer in Ubuntu, and I would like to do as much as possible using the default applications, since a lot of them, Evince included, are really nice.
A response from Olaf Leidinger on the Evince mailing list:
I think such a feature is better suited to document editors, as they have more information about the document than a mere viewer, and counting words is non-trivial. Take a PDF file as an example. What you see as text could actually be some sort of vector graphic shape. Even if the text is contained as such in the PDF file, the words you see might be made up of several "draw text at position (y, x)" commands, for example in the case of umlauts or at the end of a line. So a single word might count as several words. Therefore I think it could be hard to implement such a feature reliably. Look at pdftotext to see what I mean.
You can do this using the command line:
pdftotext filename.pdf - | tr -d '.' | wc -w
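For what it's worth, the counting part of that pipeline is easy to reproduce in Python once the text has been extracted. Here `pdf_text` stands in for the output of `pdftotext filename.pdf -` (this is an added sketch, and `word_count` is my own helper name):

```python
def word_count(pdf_text: str) -> int:
    """Mimic `tr -d '.' | wc -w`: drop periods, count whitespace-separated tokens."""
    return len(pdf_text.replace('.', '').split())

pdf_text = "Evince is a document viewer. It renders PDF files."
print(word_count(pdf_text))  # 9
```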
How about a quick bash script requiring zenity and evince? When called without an argument, it'll give you a dialog box so you can pick a file. When called with an argument (or after said dialog box), it'll both open the file in evince and give you a dialog box with a word count.
In other words, copy the following into a text file called evince-word-count.sh or something, save it somewhere in your path (for example, ~/bin/), and make it executable (either through Nautilus's right-click Properties or with chmod +x ~/bin/evince-word-count.sh):
#!/bin/bash
if [ "$#" -gt "0" ]; then
    filename="$1"
else
    filename="$(zenity --file-selection)"
fi
evince "$filename" &
zenity --info --text "This PDF has $(pdftotext "$filename" - | tr -d '.' | wc -w) words"
exit 0
Now, right-click some PDF in Nautilus, pick "Open with ..." and have it open with evince-word-count.sh. From then on, when you open a PDF this way, it'll both open in evince and give you a word count.
2019-12-03 04:43:24
I do not think that is possible (well, it is technically possible, but it hasn't been implemented).
You have to bear in mind that Evince is a document viewer, and a word count is a feature more commonly required in an editor (yes, I know this isn't always the case).
You might like to contact the Evince developers and ask whether they would have any interest in implementing this feature.
2019-12-03 04:32:51
|
|
# Bias corrected calibration curve from scratch
## 2021/03/10
library(ggplot2); theme_set(theme_bw(base_size = 14))
library(rms)
In the last post, we saw how to fit a bias-corrected calibration curve using the rms package. In this post we see how to do the same thing without loading the rms package. Of course, we need a model to start things off.
set.seed(125)
dat <- lme4::InstEval[sample(nrow(lme4::InstEval), 800), ]
fit <- glm(y > 3 ~ lectage + studage + service + dept, binomial, dat)
In trying to reproduce my own version of calibrate, I am a bit disappointed that the documentation for calibrate does not provide any references. That means this must be easy!
First let’s recall that calibration can be internal or external. If a model is internally calibrated, then (say) if the model predicts $$Pr\{\textrm{outcome}|\textrm{covariates}\} = 0.2$$, then the outcome occurred darn close to 20% of the time in the training data given the covariates. Replace 20% with the range of possible predictions that the model could make, and you’ve got calibration over the range of possible outcomes. This makes calibration curves broadly useful for regression modeling.
This is nice, but misleading, because optimal internal calibration means the model is likely overfitted. We have to do some extra work to correct for this easy trap.
External calibration is the solution. If a model is externally calibrated then it is calibrated to new, unseen data. My fascination with (external) calibration is three-fold:
1. Calibration measures model performance without using accuracy, or other improper scoring rules.
2. No new data is actually required to estimate external calibration. We can do so nearly unbiasedly using a bootstrap procedure.
3. Calibration is best diagnosed with graphs, rather than simple summaries and hypothesis tests.
Like the previous post, we can simply plot internal calibration using ggplot2:
pdat <- with(dat, data.frame(y = ifelse(y > 3, 1, 0),
prob = predict(fit, type = "response")))
apparent.cal <- data.frame(with(pdat, lowess(prob, y, iter = 0)))
p <- ggplot(pdat, aes(prob, y)) +
geom_point(shape = 21, size = 2) +
geom_abline(slope = 1, intercept = 0) +
geom_line(data = apparent.cal, aes(x, y), linetype = "dotted") +
scale_x_continuous(breaks = seq(0, 1, 0.1)) +
scale_y_continuous(breaks = seq(0, 1, 0.1)) +
xlab("Estimated Prob.") +
ylab("Data w/ Empirical Prob.") +
ggtitle("Logistic Regression Calibration Plot")
print(p)
To transfer from internal calibration to external calibration, we need to correct the dotted smoother for bias.
Harrell describes the application of this process to the calibrate function in the RMS book:
The calibrate function produces bootstrapped or cross-validated calibration curves for logistic and linear models. The “apparent” calibration accuracy is estimated using a nonparametric smoother relating predicted probabilities to observed binary outcomes. The nonparametric estimate is evaluated at a sequence of predicted probability levels. Then the distances from the 45 degree line are compared with the differences when the current model is evaluated back on the whole sample (or omitted sample for cross-validation). The differences in the differences are estimates of overoptimism. After averaging over many replications, the predicted-value-specific differences are then subtracted from the apparent differences and an adjusted calibration curve is obtained.
I actually think he makes it too complicated: We need not compare distances from the 45 degree line, we can just compare smoother outputs at each prediction. In other words, the complicated differences in differences are simply differences.1
Getting this right is tricky. A somewhat simpler explanation is given on this blog.
Here is the procedure in code. First, we need to collect some useful pre-simulation variables: We can calculate the apparent calibration outside the simulation loop, so we do so, and call it app.cal.pred. We also need to set the number of simulations, nsim, and (for efficiency) create a matrix to store simulation data.
## Range of inputs on which to calculate calibrations
srange <- seq(min(pdat$prob), max(pdat$prob), length.out = 60)
## Discard the smallest and largest probs for robustness, and agreement with rms::calibrate
srange <- srange[5 : (length(srange) - 4)]
## The apparent calibration is determined by this loess curve.
apparent.cal.fit <- with(pdat, lowess(prob, y, iter = 0))
app.cal.fun <- approxfun(apparent.cal.fit$x, apparent.cal.fit$y)
app.cal.pred <- app.cal.fun(srange)
## Number of bootstrap replicates
nsim <- 300
## Storage for bootstrap optimism (one row per bootstrap resample)
opt.out <- matrix(NA, nsim, length(srange))
Here is the real simulation. The steps are as in the link from TheStatsGeek.
### Simulation steps:
1. Fit model to original data, and estimate a statistic, $$C$$, using original data based on fitted model. Denote this as $$C_{app}$$.
2. For $$b=1,\ldots,B$$:
1. Take a bootstrap sample from the original data.
2. Fit the model to the bootstrap data, and estimate $$C$$ using this fitted model and this bootstrap dataset. Denote the estimate by $$C_{b,boot}$$.
3. Estimate $$C$$ by applying the fitted model from the bootstrap dataset to the original dataset. Denote this estimate by $$C_{b, orig}$$.
3. Calculate the estimate of optimism: $$O = B^{-1} \sum_{b=1}^B \{ C_{b,boot} - C_{b,orig } \}$$.
4. Calculate the bias corrected version of $$C$$, $$C_{b.c.} = C_{app} - O$$.
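The recipe above is not specific to calibration curves. Here is a minimal self-contained sketch of steps 1-4 in Python (not the post's R), using the sample mean as the "model" and in-sample mean squared error as the statistic $$C$$, purely for illustration:

```python
import random

def fit(data):                 # "model" = the sample mean
    return sum(data) / len(data)

def stat(model, data):         # C: mean squared error of the model on data
    return sum((x - model) ** 2 for x in data) / len(data)

random.seed(1)
data = [random.gauss(0, 1) for _ in range(50)]
c_app = stat(fit(data), data)  # step 1: apparent (in-sample) performance

B = 200
optimism = 0.0
for _ in range(B):
    boot = [random.choice(data) for _ in data]   # step 2.1: bootstrap resample
    m = fit(boot)                                # step 2.2: refit on resample
    optimism += stat(m, boot) - stat(m, data)    # C_{b,boot} - C_{b,orig}
optimism /= B                                    # step 3: average optimism

c_bc = c_app - optimism        # step 4: bias-corrected estimate
```

Because in-sample MSE is optimistic (too small), the estimated optimism comes out negative and the corrected MSE ends up larger than the apparent one, exactly the direction of correction we want for an overfitted-looking statistic.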
for (i in 1 : nsim) {
## Sample bootstrap data set from original data
dat.boot <- dat[sample(nrow(dat), nrow(dat), TRUE), ]
## Fit logistic model using the bootstrap data
fit.boot <- update(fit, data = dat.boot)
## Make a DF of the bootstrap model and bootstrap predictions
pdat.boot <- data.frame(y = ifelse(dat.boot$y > 3, 1, 0),
prob = predict(fit.boot, dat.boot, type = "response"))
## Fit a calibration curve to the bootstrap data
boot.cal.fit <- with(pdat.boot, lowess(prob, y, iter = 0))
boot.cal.fun <- approxfun(boot.cal.fit$x, boot.cal.fit$y)
## Collect a set of them for comparison
boot.cal.pred <- boot.cal.fun(srange)
## Apply the bootstrap model to the original data
prob.boot.orig <- predict(fit.boot, dat, type = "response")
## Make a DF of the boot model predictions on original data
pdat.boot.orig <- data.frame(y = ifelse(dat$y > 3, 1, 0),
prob = prob.boot.orig)
## Fit a calibration curve to the original data w/ boot model predictions
boot.cal.orig.fit <- with(pdat.boot.orig, lowess(prob, y, iter = 0))
boot.cal.orig.fun <- approxfun(boot.cal.orig.fit$x, boot.cal.orig.fit$y)
## Collect a set of them for comparison
boot.cal.orig.pred <- boot.cal.orig.fun(srange)
## Take the difference for estimate of optimism
opt <- boot.cal.pred - boot.cal.orig.pred
opt.out[i, ] <- opt
}
## The bias corrected calibration curve is the apparent calibration less the average bootstrap optimism
bias.corrected.cal <- app.cal.pred - colMeans(opt.out)
We again draw the calibration plot with ggplot2, no rms required.
ppdat <- data.frame(srange, app.cal.pred, bias.corrected.cal)
ggplot(ppdat, aes(srange, app.cal.pred)) +
geom_line(linetype = "dotted", color = "black") +
geom_line(aes(y = bias.corrected.cal), color = "black") +
geom_abline(slope = 1, intercept = 0, linetype = "dashed", color = "black") +
scale_x_continuous(breaks = seq(0, 1, 0.1)) +
scale_y_continuous(breaks = seq(0, 1, 0.1)) +
xlab("Estimated Prob.") +
ylab("Empirical Prob.") +
ggtitle("Logistic Regression Calibration Plot")
## Warning: Removed 7 row(s) containing missing values (geom_path).
Compare with rms::calibrate:
refit <- lrm(y > 3 ~ lectage + studage + service + dept, dat, x = TRUE, y = TRUE)
cal <- calibrate(refit, B = 300)
plot(cal)
##
## n=800 Mean absolute error=0.037 Mean squared error=0.00177
## 0.9 Quantile of absolute error=0.059
Damn near perfect match :)
Last note: While I think this is close, I don’t actually think I’ve perfectly replicated calibrate. For example, I’m not sure how calibrate is handling missing values from the lowess smoother.2
1. Between the bootstrap model calibration curve on the bootstrap data, and the bootstrap model calibration curve on the original data.
2. These missing values are the source of the warnings in the last ggplot code block.
|
|
Finite Time Blowup of 2D Boussinesq and 3D Euler Equations with $C^{1,\alpha}$ Velocity and Boundary
Inspired by the numerical evidence of a potential 3D Euler singularity by Luo-Hou [30,31] and the recent breakthrough by Elgindi [11] on the singularity formation of the 3D Euler equation without swirl with $C^{1,\alpha}$ initial data for the velocity, we prove the finite time singularity for the 2D Boussinesq and the 3D axisymmetric Euler equations in the presence of boundary with $C^{1,\alpha}$ initial data for the velocity (and density in the case of Boussinesq equations). Our finite time blowup solution for the 3D Euler equations and the singular solution considered in [30,31] share many essential features, including the symmetry properties of the solution, the flow structure, and the sign of the solution in each quadrant, except that we use $C^{1,\alpha}$ initial data for the velocity field. We use a dynamic rescaling formulation and follow the general framework of analysis developed by Elgindi in [11]. We also use some strategy proposed in our recent joint work with Huang in [7] and adopt several methods of analysis in [11] to establish the linear and nonlinear stability of an approximate self-similar profile. The nonlinear stability enables us to prove that the solution of the 3D Euler equations or the 2D Boussinesq equations …
NSF-PAR ID: 10286493
Journal: Communications in Mathematical Physics, Volume 383, Issue 3, Pages 1559-1667 (ISSN 1432-0916)
Sponsoring Org: National Science Foundation
##### More Like this
1. We present a novel method of analysis and prove finite time asymptotically self-similar blowup of the De Gregorio model [13,14] for some smooth initial data on the real line with compact support. We also prove self-similar blowup results for the generalized De Gregorio model [41] for the entire range of parameter on R or $S^1$ for Hölder continuous initial data with compact support. Our strategy is to reformulate the problem of proving finite time asymptotically self-similar singularity into the problem of establishing the nonlinear stability of an approximate self-similar profile with a small residual error using the dynamic rescaling equation. We use the energy method with appropriate singular weight functions to extract the damping effect from the linearized operator around the approximate self-similar profile and take into account cancellation among various nonlocal terms to establish stability analysis. We remark that our analysis does not rule out the possibility that the original De Gregorio model is well posed for smooth initial data on a circle. The method of analysis presented in this paper provides a promising new framework to analyze finite time singularity of nonlinear nonlocal systems of partial differential equations.
2. This work concerns the asymptotic behavior of solutions to a (strictly) subcritical fluid model for a data communication network, where file sizes are generally distributed and the network operates under a fair bandwidth-sharing policy. Here we consider fair bandwidth-sharing policies that are a slight generalization of the α-fair policies introduced by Mo and Walrand [Mo J, Walrand J (2000) Fair end-to-end window-based congestion control. IEEE/ACM Trans. Networks 8(5):556–567.]. Since the year 2000, it has been a standing problem to prove stability of the data communications network model of Massoulié and Roberts [Massoulié L, Roberts J (2000) Bandwidth sharing and admission control for elastic traffic. Telecommunication Systems 15(1):185–201.], with general file sizes and operating under fair bandwidth sharing policies, when the offered load is less than capacity (subcritical conditions). A crucial step in an approach to this problem is to prove stability of subcritical fluid model solutions. In 2012, Paganini et al. [Paganini F, Tang A, Ferragut A, Andrew LLH (2012) Network stability under alpha fair bandwidth allocation with general file size distribution. IEEE Trans. Automatic Control 57(3):579–591.] introduced a Lyapunov function for this purpose and gave an argument, assuming that fluid model solutions are sufficiently smooth in time …
3. Abstract
We present two accurate and efficient algorithms for solving the incompressible, irrotational Euler equations with a free surface in two dimensions with background flow over a periodic, multiply connected fluid domain that includes stationary obstacles and variable bottom topography. One approach is formulated in terms of the surface velocity potential while the other evolves the vortex sheet strength. Both methods employ layer potentials in the form of periodized Cauchy integrals to compute the normal velocity of the free surface, are compatible with arbitrary parameterizations of the free surface and boundaries, and allow for circulation around each obstacle, which leads to multiple-valued velocity potentials but single-valued stream functions. We prove that the resulting second-kind Fredholm integral equations are invertible, possibly after a physically motivated finite-rank correction. In an angle-arclength setting, we show how to avoid curve reconstruction errors that are incompatible with spatial periodicity. We use the proposed methods to study gravity-capillary waves generated by flow around several elliptical obstacles above a flat or variable bottom boundary. In each case, the free surface eventually self-intersects in a splash singularity or collides with a boundary. We also show how to evaluate the velocity and pressure with spectral accuracy throughout the fluid, …
4. Abstract
Our aim is to approximate a reference velocity field solving the two-dimensional Navier–Stokes equations (NSE) in the absence of its initial condition by utilizing spatially discrete measurements of that field, available at a coarse scale, and continuous in time. The approximation is obtained via numerically discretizing a downscaling data assimilation algorithm. Time discretization is based on semi-implicit and fully implicit Euler schemes, while spatial discretization (which can be done at an arbitrary scale regardless of the spatial resolution of the measurements) is based on a spectral Galerkin method. The two fully discrete algorithms are shown to be unconditionally stable, with respect to the size of the time step, the number of time steps and the number of Galerkin modes. Moreover, explicit, uniform-in-time error estimates between the approximation and the reference solution are obtained, in both the $L^2$ and $H^1$ norms. Notably, the two-dimensional NSE, subject to the no-slip Dirichlet or periodic boundary conditions, are used in this work as a paradigm. The complete analysis that is presented here can be extended to other two- and three-dimensional dissipative systems under the assumption of global existence and uniqueness.
|
|
# livvkit.elements package¶
## livvkit.elements.elements module¶
Module containing report generation and display elements
The elements in this module are used by LIVVkit to generate analyses reports. Reports by default will be a portable HTML website, but each of these elements provide some experimental (and therefore undocumented) report formats: JSON-Only and LaTeX.
New elements should derive from, or implement the same interface as, the BaseElement abstract class.
class livvkit.elements.elements.B4BImage(title, description, page_path)[source]
A B4BImage element
A dummy Image that can be used by the BitForBit element indicating a bit-for-bit verification result.
class livvkit.elements.elements.BaseElement[source]
Bases: abc.ABC
An abstract base LIVVkit element
An abstract base LIVVkit element providing the basic element interface expected by LIVVkit. All LIVVkit elements should either derive from this class or implement the same interface.
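As a generic sketch of that contract (note: the method name `as_html` below is hypothetical, not LIVVkit's actual interface), an abstract base class is what forces every concrete element to provide its own rendering:

```python
import abc

class BaseElement(abc.ABC):
    """Toy stand-in for an abstract element interface (not LIVVkit's real one)."""

    @abc.abstractmethod
    def as_html(self):
        """Render this element for the report (hypothetical method name)."""

class Error(BaseElement):
    """Concrete element: renders an error message in the report."""

    def __init__(self, title, message):
        self.title, self.message = title, message

    def as_html(self):
        return "<p class='error'><b>{}</b>: {}</p>".format(self.title, self.message)

# Concrete subclasses work; instantiating the abstract base raises TypeError.
err = Error("Missing output", "velocity field not found")
html = err.as_html()
```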
class livvkit.elements.elements.BitForBit(title, data, imgs)[source]
A LIVVkit BitForBit element
The BitForBit element will produce a table in the analysis report indicating bit-for-bit statuses with a difference image shown in the final column of the table.
class livvkit.elements.elements.CompositeElement(elements)[source]
Bases: livvkit.elements.elements.BaseElement, abc.ABC
An abstract base LIVVkit element that contains other elements
An abstract base LIVVkit element that contains other elements in self.elements and provides the basic element interface expected by LIVVkit. All LIVVkit elements should either be derived from the LIVVkit BaseElement or implement the same interface.
class livvkit.elements.elements.Error(title, message)[source]
A LIVVkit Error element
The Error element will produce an error message in the analysis report.
class livvkit.elements.elements.FileDiff(title, from_file, to_file, context=3)[source]
A LIVVkit FileDiff element
The FileDiff element will compare two text files and produce a git-diff style diff of the files.
diff_files(context=3)[source]
Perform the file diff
Parameters
context – A positive int indicating the number of lines of context to display on either side of each difference found
Returns
Tuple containing:
difference: A str containing either a git-style diff of the files if a difference was found, or the original file in full
diff_status: A boolean indicating whether any differences were found
Return type
(tuple)
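A standalone sketch of such a diff using Python's standard difflib (illustrative; not necessarily LIVVkit's actual implementation):

```python
import difflib

def diff_files_sketch(from_lines, to_lines, context=3):
    # git-style unified diff with `context` lines of context around each change
    diff = list(difflib.unified_diff(from_lines, to_lines,
                                     fromfile="from.txt", tofile="to.txt",
                                     n=context))
    diff_status = bool(diff)  # were any differences found?
    # return the diff if there is one, otherwise the original file in full
    difference = "".join(diff) if diff_status else "".join(from_lines)
    return difference, diff_status

a = ["alpha\n", "beta\n", "gamma\n"]
b = ["alpha\n", "BETA\n", "gamma\n"]
difference, diff_status = diff_files_sketch(a, b)
```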
class livvkit.elements.elements.Gallery(title, elements)[source]
A LIVVkit Gallery element
The Gallery element is a super element intended to group LIVVkit Image elements into a gallery. It also will allow for the generation of other (experimental!) Report types (e.g., LaTeX), where the “Gallery” meaning might be better interpreted as a figure “subsection”.
class livvkit.elements.elements.Image(title, desc, image_file, group=None, height=None, relative_to=None)[source]
A LIVVkit Image element
The Image element produces an image/figure in the report.
class livvkit.elements.elements.NAImage(title, description, page_path)[source]
A NAImage element
A dummy Image that can be used to indicate a missing image
class livvkit.elements.elements.NamedCompositeElement(elements_dict)[source]
Bases: livvkit.elements.elements.BaseElement, abc.ABC
An abstract base LIVVkit element that contains multiple other composite elements
An abstract base LIVVkit element that allows to logically group multiple other composite elements in self.element_dict and provides the basic element interface expected by LIVVkit. All LIVVkit elements should either be derived from the LIVVkit BaseElement or implement the same interface.
class livvkit.elements.elements.Page(title, description, elements, references='')[source]
A LIVVkit Page element
The Page element contains the description of an analysis, the elements that should be displayed for this analysis on the report, as well as any references that should be included in the report. In general usage, this will be used to create an HTML page inside LIVVkit output website. It also will allow for the generation of other (experimental!) Report types (e.g., LaTeX), where the “page” meaning might be better interpreted as a “section”.
For LIVVkit Extensions (LEX), an instance of this class should be returned from the extension’s run() function.
add_references(references)[source]
Add a reference to the internal reference list
Parameters
references – The references to add to this page’s internal reference list. This can be a path to a bibtex file containing the references, or a list/set/tuple of bibtex files containing the references (Note: This will include the default LIVVkit references and ALL references inside the bibtex file(s)!).
class livvkit.elements.elements.RawHTML(html)[source]
A LIVVkit RawHTML element
The RawHTML element will directly display the contained HTML in the analysis report. For an HTML report (default) this will be directly written onto the page so is a potential security hole and should be used with caution. For the experimental report types (e.g., LaTeX) the contained HTML will be written to report in a code display block or as a raw string.
class livvkit.elements.elements.Section(title, elements)[source]
A LIVVkit Section element
The Section element is a super element intended to logically separate elements into titled sections. It also will allow for the generation of other (experimental!) Report types (e.g., LaTeX), where the “section” meaning might be better interpreted as a “subsection”.
class livvkit.elements.elements.Table(title, data, index=False, transpose=False)[source]
A LIVVkit Table element
The Table element will produce a table in the analysis report.
class livvkit.elements.elements.Tabs(tabs)[source]
A LIVVkit Tabs element
The Tabs element is a super element intended to logically separate elements into clickable tabs on the output website. It also will allow for the generation of other (experimental!) Report types (e.g., LaTeX), where the “tabs” meaning might be better interpreted as a “subsection”.
|
|
# I Second order ordinary differential equation to a system of first order
#### LSMOG
I tried to convert a second order ordinary differential equation to a system of first order differential equations and to write it in matrix form. I took it from the book by L. M. Hocking (Optimal Control). What did I do wrong in this attachment, because mine differs from the book? I've attached both the book's solution and mine. Thanks.
#### Attachments
• (image attachment, 25.8 KB)
#### tnich
Homework Helper
I tried to convert a second order ordinary differential equation to a system of first order differential equations and to write it in matrix form. I took it from the book by L. M. Hocking (Optimal Control). What did I do wrong in this attachment (View attachment 226158), because mine differs from the book? I've attached both the book's solution and mine. Thanks.
I don't think there is anything wrong with your way (except for the $\frac 1 k$ you have penciled in front of the matrix in your equation). It still leads to the same solution to the differential equation. Your way does require fiddling with the constants a little more to get to that solution, which may be why your textbook gives the particular form you found there.
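For reference, the reduction can be sketched generically. Since the specific equation from Hocking's book is only in the (unshown) attachment, a generic linear equation $\ddot{y} + a\dot{y} + by = u$ is assumed here:

```latex
% choose state variables
x_1 = y, \qquad x_2 = \dot{y}
% then \dot{x}_1 = x_2 and \dot{x}_2 = \ddot{y} = -b\,x_1 - a\,x_2 + u, i.e.
\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
  = \begin{pmatrix} 0 & 1 \\ -b & -a \end{pmatrix}
    \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
  + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u
```

Any rescaled choice such as $x_2 = \dot{y}/k$ is equally valid and yields a similar matrix with the constants moved around, which is why two correct reductions can look different.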
#### BvU
Homework Helper
You didn't do anything wrong. Just that your definition of $x_2$ is different from that in the book.
When I do this I do it the same way you do, but perhaps the book author has some specific reason for his approach ?
#### LSMOG
Thanks very much
|
|
# Thermodynamics; find the thermal energy
1. Aug 26, 2013
### iScience
Question: Calculate the total thermal energy in a liter of helium at room temperature and atmospheric pressure. Then repeat the calculation for a liter of air.
I'm just confused because I thought thermal energy only depended on the translational kinetic energy of the particles. So why do I need the pressure if the temperature is already given?
The only equation that comes to mind is E(kinetic) = (3/2)kT
and ... maybe the gas law?
Where do I go from here?
2. Aug 26, 2013
### janhaa
Maybe this way:
U(thermal) = (3/2)NkT
and the ideal gas law: pV = NkT
3. Aug 26, 2013
### iScience
So E(thermal) = (3/2)NkT → E(thermal) = (3/2)PV, since NkT = PV? And then I just plug and chug?
4. Aug 26, 2013
### janhaa
Yes, I believe so...
5. Aug 26, 2013
### iScience
Isn't a "PV" term dynamically associated with the pressure with respect to a change in volume, i.e. calculating the work done on a system from a PV diagram (the area under the PV curve)?
6. Aug 26, 2013
### Staff: Mentor
You're missing something here. The correct formula for an ideal gas is
$$U = \frac{f}{2} N k T$$
where $f$ is the number of (quadratic) degrees of freedom. That is why you get a different answer for helium and air.
7. Aug 26, 2013
### Staff: Mentor
Yes, expansion/contraction work done by/on a gas is obtained from
$$W = - \int_{V_i}^{V_f} P \, dV$$
but $PV$ by itself is just the product of the pressure and the volume.
8. Aug 26, 2013
### iScience
What I was getting at is that I thought the quantity PV applied to the case where P is constant (isobaric) but the volume is still changing, i.e. a dynamic case. So basically I don't understand why the quantity PV is used for a static case.
9. Aug 27, 2013
### Staff: Mentor
Pressure doesn't have to be constant. The formula for work is valid even when $P$ varies, although this might make it complicated to calculate the integral (unless $P$ can be expressed as a simple function of $V$).
Equations of state are equations that relate the different macroscopic observables of a system. In the case of a gas, these observables are $P$, $V$, and $T$ (for a fixed quantity of gas). For an ideal gas, the relation is exactly
$$PV = N k T$$
or
$$PV = n R T$$
Such equations of state also exist for more realistic gases: they are slightly more complicated, but again relate $P$, $V$, and $T$, such that if you fix two of them you can know the value of the third.
As an example, if you measure the pressure inside a bicycle tire and know what the temperature is, then you can calculate the volume inside the inner tube. So you see, this has nothing to do with "dynamics."
10. Apr 14, 2015
### Staff: Mentor
This is the first response to this thread in over a year and a half. I am closing this thread.
Chet
|
|
# A cradle is ‘h’ meters above the ground at the lowest position and ‘H’ meters when it is at the highest point…
Q: A cradle is ‘h’ meters above the ground at the lowest position and ‘H’ meters when it is at the highest point. If ‘v’ is the maximum speed of the swing of total mass ‘m’ the relation between ‘h’ and ‘H’ is
(a) $\displaystyle \frac{1}{2}mv^2 + h = H$
(b) $\displaystyle \frac{v^2}{2 g} + h = H$
(c)$\displaystyle \frac{v^2}{ g} + 2h = H$
(d) $\displaystyle \frac{v^2}{2 g} + H = h$
Ans: (b)
$\displaystyle m g H = m g h + \frac{1}{2} m v^2$
$\displaystyle g ( H - h ) = \frac{1}{2} v^2$
$\displaystyle ( H - h ) = \frac{v^2}{2 g}$
$\displaystyle H = \frac{v^2}{2 g} + h$
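A quick numerical sanity check of answer (b), with an assumed g = 9.8 m/s² and made-up numbers:

```python
def highest_point(h, v, g=9.8):
    # energy conservation: m g H = m g h + (1/2) m v^2  =>  H = h + v^2 / (2 g)
    return h + v**2 / (2 * g)

# e.g. a cradle 0.5 m up at its lowest point with maximum speed 2.8 m/s
H = highest_point(h=0.5, v=2.8)  # ~0.9 m
```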
|
|
• ### Mapping the material in the LHCb vertex locator using secondary hadronic interactions(1803.07466)
March 20, 2018 hep-ex, physics.ins-det
Precise knowledge of the location of the material in the LHCb vertex locator (VELO) is essential to reducing background in searches for long-lived exotic particles, and in identifying jets that originate from beauty and charm quarks. Secondary interactions of hadrons produced in beam-gas collisions are used to map the location of material in the VELO. Using this material map, along with properties of a reconstructed secondary vertex and its constituent tracks, a $p$-value can be assigned to the hypothesis that the secondary vertex originates from a material interaction. A validation of this procedure is presented using photon conversions to dimuons.
• ### Charm production nearby threshold in pA-interactions at 70 GeV(1703.05639)
March 16, 2017 nucl-ex
The results of the SERP-E-184 experiment at the U-70 accelerator (IHEP, Protvino) are presented. Interactions of the 70 GeV proton beam with C, Si and Pb targets were studied to detect decays of charmed $D^0$, $\overline D^0$, $D^+$, $D^-$ mesons and $\Lambda _c^+$ baryon near their production threshold. Measurements of lifetimes and masses show good agreement with PDG data. The inclusive cross sections of charm production and their A-dependencies were obtained. The yields of these particles are compared with the theoretical predictions and the data of other experiments. The measured cross section of the total open charm production ($\sigma _{\mathrm {tot}}(c\overline c)$ = 7.1 $\pm$ 2.3(stat) $\pm$1.4(syst) $\mu$b/nucleon) at the collision c.m. energy $\sqrt {s}$ = 11.8 GeV is well above the QCD model predictions. The contributions of different species of charmed particles to the total cross section of the open charm production in proton-nucleus interactions vary with energy.
• ### Soft photon registration at Nuclotron(1510.00517)
Oct. 2, 2015 nucl-ex
First results of a soft photon yield in nucleus-nucleus interactions at 3.5 GeV per nucleon are presented. These photons have been registered at the Nuclotron (LHEP, JINR) by an electromagnetic calorimeter built by the SVD Collaboration. The obtained spectra confirm the excess yield in the energy region below 50 MeV in comparison with theoretical predictions and agree with previous experiments on high-energy interactions.
• ### Performance of the LHCb Vertex Locator(1405.7808)
Sept. 10, 2014 hep-ex, physics.ins-det
The Vertex Locator (VELO) is a silicon microstrip detector that surrounds the proton-proton interaction region in the LHCb experiment. The performance of the detector during the first years of its physics operation is reviewed. The system is operated in vacuum, uses a bi-phase CO2 cooling system, and the sensors are moved to 7 mm from the LHC beam for physics data taking. The performance and stability of these characteristic features of the detector are described, and details of the material budget are given. The calibration of the timing and the data processing algorithms that are implemented in FPGAs are described. The system performance is fully characterised. The sensors have a signal to noise ratio of approximately 20 and a best hit resolution of 4 microns is achieved at the optimal track angle. The typical detector occupancy for minimum bias events in standard operating conditions in 2011 is around 0.5%, and the detector has less than 1% of faulty strips. The proximity of the detector to the beam means that the inner regions of the n+-on-n sensors have undergone space-charge sign inversion due to radiation damage. The VELO performance parameters that drive the experiment's physics sensitivity are also given. The track finding efficiency of the VELO is typically above 98% and the modules have been aligned to a precision of 1 micron for translations in the plane transverse to the beam. A primary vertex resolution of 13 microns in the transverse plane and 71 microns along the beam axis is achieved for vertices with 25 tracks. An impact parameter resolution of less than 35 microns is achieved for particles with transverse momentum greater than 1 GeV/c.
• ### Detection of $D^{\pm}$ meson production in pA interactions at 70 GeV (1311.1960)
Nov. 8, 2013 hep-ex, nucl-ex
Results of the analysis of SERP-E-184 experiment data, obtained by irradiating an active target with carbon, silicon and lead plates with a 70 GeV proton beam, are presented. For three-prong decays of charged charmed mesons, event selection criteria were developed and the detection efficiency was calculated with a detailed simulation using the FRITIOF7.02 and GEANT3.21 programs. Decay signals were found and inclusive charm production cross sections were estimated at near-threshold energy. The lifetimes and the A-dependence of the cross section were measured. Yields of D mesons and their ratios are presented in comparison with data from other experiments and with theoretical predictions.
• During 2011 the LHCb experiment at CERN collected 1.0 fb$^{-1}$ of $\sqrt{s} = 7$ TeV pp collisions. Due to the large heavy quark production cross-sections, these data provide unprecedented samples of heavy flavoured hadrons. The first results from LHCb have made a significant impact on the flavour physics landscape and have definitively proved the concept of a dedicated experiment in the forward region at a hadron collider. This document discusses the implications of these first measurements on classes of extensions to the Standard Model, bearing in mind the interplay with the results of searches for on-shell production of new particles at ATLAS and CMS. The physics potential of an upgrade to the LHCb detector, which would allow an order of magnitude more data to be collected, is emphasised.
• ### Radiation damage in the LHCb Vertex Locator (1302.5259)
Feb. 21, 2013 hep-ex, physics.ins-det
The LHCb Vertex Locator (VELO) is a silicon strip detector designed to reconstruct charged particle trajectories and vertices produced at the LHCb interaction region. During the first two years of data collection, the 84 VELO sensors have been exposed to a range of fluences up to a maximum value of approximately $\rm{45 \times 10^{12}\,1\,MeV}$ neutron equivalent ($\rm{1\,MeV\,n_{eq}}$). At the operational sensor temperature of approximately $-7\,^{\circ}\rm{C}$, the average rate of sensor current increase is $18\,\mu\rm{A}$ per $\rm{fb^{-1}}$, in excellent agreement with predictions. The silicon effective bandgap has been determined using current versus temperature scan data after irradiation, with an average value of $E_{g}=1.16\pm0.03\pm0.04\,\rm{eV}$ obtained. The first observation of n-on-n sensor type inversion at the LHC has been made, occurring at a fluence of around $15 \times 10^{12}$ of $1\,\rm{MeV\,n_{eq}}$. The only n-on-p sensors in use at the LHC have also been studied. With an initial fluence of approximately $\rm{3 \times 10^{12}\,1\,MeV\,n_{eq}}$, a decrease in the Effective Depletion Voltage (EDV) of around 25 V is observed, attributed to oxygen induced removal of boron interstitial sites. Following this initial decrease, the EDV increases at a comparable rate to the type inverted n-on-n type sensors, with rates of $(1.43\pm 0.16) \times 10^{-12}\,\rm{V} / \, 1 \, \rm{MeV\,n_{eq}}$ and $(1.35\pm 0.25) \times 10^{-12}\,\rm{V} / \, 1 \, \rm{MeV\,n_{eq}}$ measured for n-on-p and n-on-n type sensors, respectively. A reduction in the charge collection efficiency due to an unexpected effect involving the second metal layer readout lines is observed.
• ### Registration of neutral charmed mesons production and their decays in pA-interactions at 70 GeV with SVD-2 setup (1004.3676)
April 21, 2010 hep-ex
Results of the data handling for the E-184 experiment, obtained by irradiating an active target with carbon, silicon and lead plates with a 70 GeV proton beam, are presented. Two-prong decays of neutral charmed $D^0$ and $\bar{D}^0$ mesons were selected. The signal-to-background ratio was $(51\pm17)/(38\pm13)$. The registration efficiency for the mesons was determined, and an estimate of the charm production cross section at threshold energy is presented: $\sigma(c\bar{c}) = 7.1 \pm 2.4\,(\rm{stat.}) \pm 1.4\,(\rm{syst.})$ $\mu$b/nucleon.
# Brachistochrone with tandem phases#
Things you’ll learn through this example
• How to run two phases with different ODEs and different grids simultaneously in time.
This is a contrived example but it demonstrates a useful feature of Dymos we call tandem phases. Tandem phases are two phases that occur simultaneously in time (having the same start time and duration) but with different dynamics. In practice, this can be useful when some of your dynamics are quite expensive and you can tolerate evaluating them on fewer nodes. Or perhaps one phase has relatively rapid dynamics compared to the other. For instance, thermal responses in an electric aircraft tend to occur very rapidly compared to changes in the flight dynamics state of the vehicle.
In this example we’ll evaluate the standard brachistochrone problem but limit the arclength of the wire along which the bead travels. The arclength is integrated as a state variable; it could be integrated along with the typical states x, y, and v, but for the purposes of this contrived example we’ll perform this integration in a separate phase that occurs at the same time.
## The first phase to integrate the standard brachistochrone ODE#
• The transcriptions for the two phases are declared up front so that tx1 may be used both as the transcription of the second phase and for outputting the states of the first phase to the control input nodes of the second phase.
This secondary timeseries is the key to making this sort of formulation work.
from dymos.examples.brachistochrone.brachistochrone_ode import BrachistochroneODE
import numpy as np
import matplotlib.pyplot as plt
plt.switch_backend('Agg')
import openmdao.api as om
import dymos as dm
p = om.Problem(model=om.Group())
p.driver = om.pyOptSparseDriver()
p.driver.options['optimizer'] = 'SLSQP'
p.driver.options['print_results'] = False
p.driver.declare_coloring()
# The transcription of the first phase
tx0 = dm.GaussLobatto(num_segments=10, order=3, compressed=False)
# The transcription for the second phase (and the secondary timeseries outputs from the first phase)
tx1 = dm.Radau(num_segments=20, order=9, compressed=False)
#
# First Phase: Integrate the standard brachistochrone ODE
#
phase0 = dm.Phase(ode_class=BrachistochroneODE, transcription=tx0)
phase0.set_time_options(fix_initial=True, duration_bounds=(.5, 10))
phase0.add_state('x', fix_initial=True, fix_final=False)
phase0.add_state('y', fix_initial=True, fix_final=False)
phase0.add_state('v', fix_initial=True, fix_final=False)
phase0.add_control('theta', continuity=True, rate_continuity=True,
units='deg', lower=0.01, upper=179.9)
phase0.add_parameter('g', units='m/s**2', val=9.80665)
# Add alternative timeseries output to provide control inputs for the next phase
phase0.add_timeseries('timeseries2', transcription=tx1, subset='control_input')
## The ODE for integrating the arc-length of the wire#
class BrachistochroneArclengthODE(om.ExplicitComponent):
def initialize(self):
self.options.declare('num_nodes', types=int)
def setup(self):
nn = self.options['num_nodes']
# Inputs
self.add_input('v', val=np.zeros(nn), desc='velocity of the bead', units='m/s')
self.add_input('theta', val=np.zeros(nn), desc='angle of the wire', units='rad')
# Outputs
self.add_output('Sdot', val=np.zeros(nn), desc='rate of change of arclength', units='m/s')
# Setup partials
arange = np.arange(nn)
self.declare_partials(of='Sdot', wrt='v', rows=arange, cols=arange)
self.declare_partials(of='Sdot', wrt='theta', rows=arange, cols=arange)
def compute(self, inputs, outputs):
theta = inputs['theta']
v = inputs['v']
outputs['Sdot'] = np.sqrt(1.0 + (1.0/np.tan(theta))**2) * v * np.sin(theta)
def compute_partials(self, inputs, jacobian):
theta = inputs['theta']
v = inputs['v']
cos_theta = np.cos(theta)
sin_theta = np.sin(theta)
tan_theta = np.tan(theta)
cot_theta = 1.0 / tan_theta
csc_theta = 1.0 / sin_theta
jacobian['Sdot', 'v'] = sin_theta * np.sqrt(1.0 + cot_theta**2)
jacobian['Sdot', 'theta'] = v * (cos_theta * (cot_theta**2 + 1) - cot_theta * csc_theta) / \
(np.sqrt(1 + cot_theta**2))
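As a sanity check (not part of the Dymos example itself), note that for $\theta \in (0, \pi)$ the rate expression simplifies to $\dot{S} = v$, since $\sqrt{1 + \cot^2\theta}\,\sin\theta = 1$ there; consequently the analytic partial with respect to theta is identically zero. A small standalone NumPy snippet confirms both the simplification and the partials against a central finite difference:

```python
import numpy as np

def sdot(v, theta):
    # Same expression as BrachistochroneArclengthODE.compute
    return np.sqrt(1.0 + (1.0 / np.tan(theta))**2) * v * np.sin(theta)

v = np.array([1.0, 5.0, 9.9])
theta = np.deg2rad([10.0, 45.0, 120.0])

# For theta in (0, pi), sqrt(1 + cot^2) * sin(theta) == 1, so Sdot == v.
assert np.allclose(sdot(v, theta), v)

# Check dSdot/dtheta from compute_partials against a central difference.
cot = 1.0 / np.tan(theta)
csc = 1.0 / np.sin(theta)
analytic = v * (np.cos(theta) * (cot**2 + 1) - cot * csc) / np.sqrt(1 + cot**2)
h = 1e-6
fd = (sdot(v, theta + h) - sdot(v, theta - h)) / (2 * h)
assert np.allclose(analytic, fd, atol=1e-5)
print("Sdot identity and partials verified")
```

This is why the arclength integration is well behaved: the integrand never becomes singular as long as theta stays strictly between 0 and 180 degrees, which the control bounds enforce.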
## The second phase to integrate the arclength of the wire.#
• Initial time and duration are input from those of the first phase (note the t_duration connection in the code below).
• theta and v are time-varying but determined by the first phase. Note the setting of opt=False and the connections from the phase0.timeseries2 outputs below.
phase1 = dm.Phase(ode_class=BrachistochroneArclengthODE, transcription=tx1)
phase1.set_time_options(fix_initial=True, input_duration=True)
phase1.add_state('S', fix_initial=True, fix_final=False,
rate_source='Sdot', units='m')
phase1.add_control('theta', opt=False, units='deg', targets=['theta'])
phase1.add_control('v', opt=False, units='m/s', targets=['v'])
#
# Connect the two phases
#
p.model.connect('phase0.t_duration', 'phase1.t_duration')
p.model.connect('phase0.timeseries2.controls:theta', 'phase1.controls:theta')
p.model.connect('phase0.timeseries2.states:v', 'phase1.controls:v')
# Minimize time
phase0.add_objective('time', loc='final', scaler=1)
# Constrain the length of the wire
phase1.add_boundary_constraint('S', loc='final', upper=11.9)
## Setup and run#
p.model.linear_solver = om.DirectSolver()
p.setup()
p['phase0.t_initial'] = 0.0
p['phase0.t_duration'] = 2.0
p.set_val('phase0.states:x', phase0.interp('x', ys=[0, 10]))
p.set_val('phase0.states:y', phase0.interp('y', ys=[10, 5]))
p.set_val('phase0.states:v', phase0.interp('v', ys=[0, 9.9]))
p.set_val('phase0.controls:theta', phase0.interp('theta', ys=[5, 100]))
p['phase0.parameters:g'] = 9.80665
p['phase1.states:S'] = 0.0
res = dm.run_problem(p)
Model viewer data has already been recorded for Driver.
Full total jacobian was computed 3 times, taking 0.437745 seconds.
Total jacobian shape: (223, 287)
Jacobian shape: (223, 287) ( 8.13% nonzero)
FWD solves: 32 REV solves: 0
Total colors vs. total size: 32 vs 287 (88.9% improvement)
Sparsity computed using tolerance: 1e-25
Time to compute sparsity: 0.437745 sec.
Time to compute coloring: 0.380739 sec.
Memory to compute coloring: 3.582031 MB.
## Plots#
The following plots show the trajectory of the x, y, and v states (the top plot) and the trajectory of the arclength state (the bottom plot). Note that these plots are linked but use different grid spacings: the arclength is integrated on a significantly denser grid. This is enabled by the secondary timeseries output timeseries2 in the first phase.
from bokeh.plotting import figure, show, output_notebook, output_file, save
from bokeh.palettes import d3
from bokeh.resources import INLINE
from bokeh.models import Legend
from IPython.display import HTML
c = d3['Category10'][10]
i = np.array(0)
legend_contents = []
sol_case = om.CaseReader('dymos_solution.db').get_case('final')
sol_x = sol_case.get_val('phase0.timeseries.states:x')
sol_y = sol_case.get_val('phase0.timeseries.states:y')
sol_v = sol_case.get_val('phase0.timeseries.states:v')
sol_t0 = sol_case.get_val('phase0.timeseries.time')
sol_t1 = sol_case.get_val('phase1.timeseries.time')
sol_s = sol_case.get_val('phase1.timeseries.states:S')
def add_plot(p, x, y, label, i):
circle = p.circle(x.ravel(), y.ravel(), color=c[i], size=5)
line = p.line(x.ravel(), y.ravel(), color=c[i])
legend_contents.append((label, [circle, line]))
i += 1
p1 = figure(width=800, height=300)
add_plot(p1, sol_t0, sol_x, 'x (m)', i)
add_plot(p1, sol_t0, sol_y, 'y (m)', i)
p1.xaxis.axis_label = 'time (s)'
p1.yaxis.axis_label = 'state value'
p1.legend.location = 'bottom_right'
output_file('plot.html', mode='inline')
plot_file = save(p1)
HTML(filename=plot_file)
(An interactive Bokeh plot of the solution is rendered here.)
# Four-bar linkages#

A four-bar linkage, also called a four-bar, is the simplest movable closed-chain linkage. It consists of four bodies, called bars or links, connected in a loop by four joints. Generally, the joints are configured so the links move in parallel planes, and the assembly is called a planar four-bar linkage; spherical and spatial four-bar linkages also exist and are used in practice. We call the rods:

• Ground link $g$: fixed to anchor pivots $A$ and $B$.
• Input link $a$: driven by input angle $\alpha$.
• Output link $b$: gives output angle $\beta$.
• Floating link $f$: connects the two moving pins $C$ and $D$.

We often think of a four-bar linkage as being driven at the input angle $\alpha$, resulting in the output angle $\beta$. We only need one input, because the system has exactly $N_{\rm DOF} = 1$ degree of freedom. The joints $C$ and $D$ always move in circles or semi-circles.

Whether the links can fully rotate is determined by the link lengths. Let $s$ and $l$ be the shortest and longest side lengths, respectively, and $p$ and $q$ the remaining two side lengths. If the Grashof index $G$ is positive ($G \gt 0$, i.e. $s + l \lt p + q$) then the shortest link is able to fully rotate through $360^\circ$ (the linkage is "Grashof"), while if $G \lt 0$ then the shortest link only reciprocates (the linkage is "non-Grashof"). If the validity index $V \lt 0$ then the linkage is impossible, as the longest link is longer than the total length of the remaining links. When $s + l = p + q$ the linkage is a change-point mechanism; with two pairs of equal-length links, the links can be joined in two ways, giving either a parallelogram linkage or a deltoid linkage.

The type of input and output links (crank, rocker, $0$-rocker, or $\pi$-rocker, the latter two reciprocating about the angles $\alpha = 0^\circ$ and $\beta = 180^\circ = \pi\rm\ rad$) is determined by whether each of three quantities $T_1$, $T_2$, and $T_3$ is positive, zero, or negative. For example, if $T_3$ is negative then $T_3 = f + b - g - a \lt 0$, which means $g + a \gt f + b$, so the input angle $\alpha$ cannot rotate around to $180^\circ$, as $D$ would be too far from $B$. The full table of possibilities is given by the signs of $T_1$, $T_2$, and $T_3$; see M. Muller, A Novel Classification of Planar Four-Bar Linkages and its Application to the Mechanical Analysis of Animal Systems.
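The Grashof classification (comparing $s + l$ with $p + q$ over the sorted link lengths) is easy to check numerically. The helper below is a sketch of my own, not code from any particular linkage library; the link-length argument names follow the $g$, $a$, $b$, $f$ convention used here:

```python
def grashof_class(g, a, b, f):
    """Classify a four-bar linkage from its link lengths.

    g: ground link, a: input link, b: output link, f: floating (coupler) link.
    Applies Grashof's condition: with s/l the shortest/longest links and
    p/q the remaining two, the linkage is Grashof when s + l < p + q.
    """
    lengths = sorted([g, a, b, f])
    s, p, q, l = lengths[0], lengths[1], lengths[2], lengths[3]
    if s + l < p + q:
        return 'Grashof'        # shortest link can fully rotate through 360 degrees
    if s + l > p + q:
        return 'non-Grashof'    # shortest link only reciprocates
    return 'change point'       # s + l == p + q (e.g. parallelogram linkage)

print(grashof_class(g=4.0, a=1.0, b=3.0, f=3.5))  # a crank-rocker candidate
print(grashof_class(g=4.0, a=1.0, b=1.5, f=2.0))  # no link can fully rotate
```

The first example is Grashof (1 + 4 < 3 + 3.5), so its shortest link can act as a crank; the second is non-Grashof (1 + 4 > 1.5 + 2), so every link merely rocks.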
More complex motion can be achieved with a coupler point $P$ attached to the floating link $f$, whose position relative to the two moving pivots can be adjusted; tracing its path gives a coupler curve. Adding a coupler point technically makes this a six-bar linkage (four original links plus $CP$ and $DP$). Four-bar linkages are used for many mechanical purposes and can convert between different types of motion. Some examples:

**Straight-line motion.** Watt's linkage consists of two long near-parallel links and a small floating link between them, a point on which traces a near-linear path; the discovery was first revealed in a letter (see Life of James Watt with Selections from his Correspondence, 1858, p. 294). Rather than discovering linkages that only approximate straight lines, the 8-link Peaucellier-Lipkin linkage, discovered in 1864, produces an exactly straight line. Kempe proved that any plane algebraic curve (the set of zeros of a polynomial) can be traced by some linkage: A. B. Kempe, On a General Method of describing Plane Curves of the nth degree by Linkwork, Proceedings of the London Mathematical Society.

**The human knee.** The human knee joint is a type of biological hinge, which allows movement in only one primary angle. A mechanism is needed to keep the two leg bones attached to each other while still allowing rotation. No joints in the human body have the rotating bone fully enclosed; instead they have a partial socket or cylinder, and the rotating bone is held in place by ligaments. In the simple four-bar knee model, the rigid rods include the femur (the upper leg bone), the tibia (the larger of the two lower leg bones), and the anterior cruciate ligament (ACL) and posterior cruciate ligament (PCL). The ligaments transmit tensile forces (they act as mechanical ropes), while the compressive force is actually provided by the meniscus that separates the femur and tibia. The four-bar model of the knee is only approximate and neglects many important mechanical features; see A. B. Zavatsky and J. J. O'Connor, A model of human knee ligaments in the sagittal plane: Part 1: Response to passive flexion. A rare exception to this arrangement is the European badger (Meles meles), whose jaw rotates on an entirely enclosed pivot; in older badgers the jaw cannot be dislocated or disconnected without breaking the bone.

**Pharyngeal jaws.** Many species of fish have secondary pharyngeal jaws: in addition to the regular set of oral jaws that we can see, a second set of jaws is located at the start of the throat (the pharynx). Parrotfish, such as Bleeker's parrotfish (Chlorurus bleekeri), take their name from their teeth, which are packed tightly together to form a parrot-like beak. To eat the calcium carbonate coral skeleton, parrotfish need a powerful biting motion; the pharyngeal jaws then crush the coral that has been bitten off by the oral jaws, grinding it up to release the algae-filled coral polyps inside. A particularly striking example is the moray eel, which launches its pharyngeal jaws well beyond its body, the basis for the inner jaws of the Alien from the film series (Raptorial jaws in the throat help moray eels swallow large prey, Nature 449, 79-82, 2007. DOI: 10.1038/nature06062).

**Pumpjacks.** In areas where underground oil is not under enough pressure to drive it all the way to the surface, it is necessary for oil wells to actively pump up the oil. One standard method for achieving this is a reciprocating piston that pumps the oil up the shaft. As most motors (electrical or internal combustion) provide a rotating drive shaft, some mechanism is required to translate rotary motion into reciprocating pump motion. A pumpjack (also known as a nodding donkey) does this with a four-bar linkage; the heavy rotating counterweight is arranged so that it is falling while the pump performs the up-stroke, helping to lift the oil against gravity.

**Bicycles.** Bicycles are an efficient means of human-powered transport, and while the current form of the bicycle may seem obvious, it took over 50 years to develop; before then, many different systems for bicycle propulsion were tried. The first bicycle, built by Karl von Drais in Mannheim, Germany in 1818, used direct human propulsion along the ground. Mannheim was an important location for vehicle invention, as it was also the city where Karl Benz invented the modern automobile in 1885. Direct drive attached to the hub of the front wheel (the so-called bone-shaker, leading to the penny-farthing) had gearing problems, as a comfortable ratio of pedal frequency to velocity required a very large wheel, which caused difficulties in pedaling while turning. An alternative approach was treadles, allowing the pedals to be positioned away from the wheel hub. The safety bicycle of 1876, with a rotary crank connected by a chain to the rear wheel, is the modern form we still use today. The pedal drive is a four-bar linkage consisting of the two segments of the rider's leg, the crank, and the bicycle frame. At the pedal and seat only compression is allowed: the pedal joint only has force exerted during the down-stroke on each side, and the two legs are offset by 180° so one leg is always pushing down. An exception is racing bikes, where the feet are clipped to the pedals so each foot can pull up as well as push down.

**Position and velocity analysis.** Position analysis finds the configuration of all links given the link lengths and the input motion. Raven's method writes the vector loop equation in complex exponential form, $r_1 e^{i\theta_1} = r_2 e^{i\theta_2} + r_3 e^{i\theta_3} + r_4 e^{i\theta_4}$, and converts it to trigonometric form to solve for the unknown angles; the two solutions correspond to the open and crossed configurations of the linkage. Velocity analysis forms the heart of kinematics and dynamics of mechanical systems and is usually performed following a position analysis, i.e. with the position and orientation of all the links assumed known; graphically, it can be carried out with the polygon method. Linkage synthesis problems of practical importance include function generation (relating the input and output angles) and motion generation (requiring the coupler to achieve a desired motion); see Arun K. Natesan, "Kinematic analysis and synthesis of four-bar mechanisms for straight line coupler curves" (1994).
Method velocity Analysis of animal systems Stress Analysis of animal systems $c$ and ... Input and output angles are denoted by w and /, respectively by Alexandre Dulaunoy ( CC by 2.0 (... Rare exception being the European Badger ( Meles Meles ) [ 32 ] 33. The speed of rotation BY-SA 3.0 ) ( full-sized image ) image a! Angle $\alpha$ is deltoid linkage human-powered transportation due to their use of rotary wheel.! Skeletal structures, including the human knee joint, showing the main components the... Two bones sit next to each other: the linkage so obtained parallelogram! We have two pairs of equal length 3 open Crossed is held in place 4 bar linkage analysis! Showing the ACL and PCL ligaments exist and are free to rotate about a fixed center and is a...
Polypropylene Carpet Vs Nylon, Bissell Carpet Cleaner Not Spraying, Jamie Oliver Keralan Fish Curry, Leviticus 19:28 In Spanish, Rate My Professor Tmcc, Akbar Travels Agent Customer Care, Logitech Wireless Headset, Superman: Brainiac Attacks Kisscartoon, Fluid Gradient Png, Houses For Sale In France By The Sea,
|
|
# How do scientists perform an artificial transmutation?
Aug 3, 2017
If an element is bombarded by highly accelerated particles inside a particle accelerator, it may be transmuted into another element. The particles used may be neutrons, protons, or $\alpha$-particles.
|
|
## Algebra 1
$b\gt 0.2$
First, get all the $b$'s on one side by adding $1.5b$ to both sides: $2.7 + 3.5b \gt 3.4$. Then subtract $2.7$ to isolate the $b$ term: $3.5b \gt 0.7$. Finally, divide both sides by $3.5$ to solve for $b$: $b\gt 0.2$
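The result can be double-checked numerically. This is a minimal sketch, assuming the original inequality was $2.7 + 2b \gt 3.4 - 1.5b$ (the form implied by the first step of adding $1.5b$ to both sides); exact fractions avoid floating-point trouble right at the boundary $b = 0.2$:

```python
from fractions import Fraction as F

def holds(b):
    # Inferred original inequality: 2.7 + 2b > 3.4 - 1.5b
    b = F(b)
    return F("2.7") + 2 * b > F("3.4") - F("1.5") * b

# Values above the solution b > 0.2 satisfy it; values at or below do not.
assert holds(F("0.21")) and holds(1)
assert not holds(F("0.2")) and not holds(0)
```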
|
|
# Balls and vase $-$ A paradox?
Question
I have infinitely many balls and a large enough vase. I define an action to be "put ten balls into the vase, and take one out". Now, I start at 11:59 and perform one action, 30 seconds later I perform one action again, 15 seconds after that again, then 7.5 seconds, 3.75 seconds...
## What is the number of balls in the vase at 12:00?
My attempt
It seems that it should be infinite (?), but consider the following case:
Number each ball with a positive integer. During the first action, I put balls no. 1-10 in and take ball no. 1 out, and in general during the $n^{\text{th}}$ action I take ball no. $n$ out.
This way, at noon every ball must have been taken out of the vase. So (?) the number of balls in the vase is
Zero???
My first question: if I take the ball randomly, what will be the result at noon? (I think it may need some probability method, which I'm not familiar enough with.)
Second one: is it actually a paradox?
Thanks in advance anyway.
• It is not a paradox, it just does not make sense. In real life, this of course cannot be done. In mathematics, it is not a well-defined process since the series does not converge. – Luke May 23 '17 at 8:31
• It, I think, will be zero in your way. – Paul May 23 '17 at 8:32
• No, Paul, the point is that by different ways of looking at the problem, you can get different numbers of balls. You might also say that at each time you add $10-1=9$ balls, so there must be infinitely many balls in the vase at the end. By the other argument, there must be zero in it, and here you already note that it is just not a well-defined process and one should be careful with infinite sums. – Luke May 23 '17 at 8:35
• Why should it be zero? You seem to be assuming that $\infty - \infty = 0$ – Slug Pue May 23 '17 at 8:35
• This is the Ross-Littlewood Paradox ... Google it. – Bram28 May 24 '17 at 1:18
What you just discovered is that the cardinality of a set (the number of elements) is not a continuous function, that is, for a convergent sequence $S_n$ of sets you may have $$\lim_{n\to\infty}\left|S_n\right|\ne \left|\lim_{n\to\infty} S_n\right|$$ where $\left|S_n\right|$ is the cardinality of $S_n$ (e.g. $\left|\{2,3,4,5\}\right|=4$). In your case, the left hand side diverges (giving you the infinite number of balls), while the right hand side gives $0$ (the cardinality of the empty set).
This is not a paradox, but a warning that you have to be careful with such limits.
This is essentially a pointwise convergence versus uniform convergence situation. For each ball separately, we have convergence to the state where that ball is not in the vase: as you say, every ball gets taken out before $12{:}00$. But if you look at the number of balls in the urn, that tends to infinity. So whether the process converges to the empty vase depends on what metric we use to define convergence.
A similar thing happens in real analysis. Define a sequence of functions $f_n:[0,1]\to\mathbb R$ as follows. $f_n(0)=0$, $f_n(1/2n)=2n$, $f_n(1/n)=0$, with $f_n(x)$ increasing linearly between $0$ and $1/2n$, decreasing linearly between $1/2n$ and $1/n$, and $0$ between $1/n$ and $1$.
Now for any $x$, $f_n(x)\to 0$. This means that $f_n$ converges pointwise to the zero function. But $\int_0^1f_n(x)\mathrm dx=1$ for every $n$. So pointwise convergence doesn't tell us how the integral behaves, and if that's what we care about then we need to define convergence differently.
It is not a well defined problem. You talk about a result after infinite time (or at least an infinite amount of actions), which requires a notion of a limit.
We can compute limits of numbers because we have defined what it means to approach a number. So when you just look at the number of balls and not which balls, then the answer is indeed $\infty$. This is because the sequence of the number of balls $0, 9, 18, 27,...$ is divergent.
If we want to include the identity of the balls, we have to talk about sets instead of just numbers. We can compute the limit of sets (via intersection and union) if the sequence of sets is increasing or decreasing, i.e.
$$A_0\subset A_1\subset A_2\subset\cdots \quad\text{or}\quad A_0\supset A_1\supset A_2\supset \cdots$$
For more general sequences of sets there might be a set-theoretic limit (Thanks to celtschk in the comments). Applied to this problem, it will give indeed the empty set $\varnothing$.
The paradox arises because the problem intentionally leaves open the kind of limit to consider. As soon as the definition is fixed, the ambiguity vanishes.
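A finite simulation makes the two limits concrete (a sketch only — the infinite process itself obviously cannot be run): the ball count grows without bound, while any fixed ball is eventually removed.

```python
def simulate(n):
    """Run the first n actions: step k puts balls 10k-9 .. 10k in
    and takes ball k out."""
    vase = set()
    for k in range(1, n + 1):
        vase.update(range(10 * k - 9, 10 * k + 1))
        vase.remove(k)
    return vase

for n in (10, 100, 1000):
    v = simulate(n)
    assert len(v) == 9 * n                           # the cardinality diverges
    assert all(m not in v for m in range(1, n + 1))  # every fixed ball is gone
```

After $n$ steps the vase holds exactly $\{n+1,\dots,10n\}$: the count $9n$ diverges, yet each individual ball leaves after finitely many steps — the two notions of limit disagree.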
• "But we have no definition for convergence of general sequences of sets" — Yes, we have. Moreover, as can be easily checked with the definitions, the limit of that specific sequence indeed is the empty set. – celtschk May 23 '17 at 8:59
• @celtschk Very interesting. I might include this in my answer. – M. Winter May 23 '17 at 9:00
From a physics point of view, not more than 72 balls (maybe 81).
Otherwise, imagine somebody putting in ten balls and taking one out in less than a second!!
|
|
### Control Flow Basics
Continuing our exploration of Julia basics, let's take a look at control flow in Julia.
As the name suggests control flow operators help us shape the flow of the program. One typical example might be break. break tells Julia to exit the loop or do block it's currently in.
To understand these concepts, we'll attempt another problem. Nothing beats some hands-on experience.
Our challenge is as follows: given two integers a and b, print the smallest (up to five) integers between a and b, skipping numbers divisible by 3.
For example using a=5 and b=23 we should return:
5
7
8
10
11
If a and b are close to each other, we print everything up to b. Here's another example with a=2, b=4:
2
4
If you're a complete newcomer to programming, you might want to check out the fizzbuzz post where I explain the mod operator, etc.
Let's begin by printing all the number between a and b:
julia> function fancy_printer(a,b)
for i in a:b
println(i)
end
end
fancy_printer (generic function with 1 method)
julia> fancy_printer(3,7)
3
4
5
6
7
continue in Julia helps us skip an iteration of the for loop. We'll use this to skip printing numbers divisible by 3.
julia> function fancy_printer(a,b)
for i in a:b
if mod(i,3) == 0
continue
end
println(i)
end
end
fancy_printer (generic function with 1 method)
julia> fancy_printer(3,7)
4
5
7
Alright, one problem down. But what happens if b-a > 5?
julia> fancy_printer(3,12)
4
5
7
8
10
11
That means we have to be careful not just about what we print, but also about how many times we print in total.
To handle this, we'll introduce another variable called printed that we can use to count the number of times we printed. If this value reaches 5 we can just end the for loop with break and be done with it.
julia> function fancy_printer(a,b)
printed = 0
for i in a:b
if printed == 5
break
end
if mod(i, 3) == 0
continue
end
println(i)
printed += 1
end
end
fancy_printer (generic function with 1 method)
julia>
fancy_printer(3,12)
4
5
7
8
10
And job done. We print only 5 numbers and none of them are divisible by 3. You might want to check a few more test cases at this point just to make sure our function indeed does what we want it to do.
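As an aside, the same behavior can be expressed without explicit break and continue by composing iterators from Julia's Base. This is just a sketch (the name fancy_printer2 is mine, not part of the exercise):

```julia
function fancy_printer2(a, b)
    # keep only numbers not divisible by 3, then stop after five of them
    for i in Iterators.take(Iterators.filter(i -> mod(i, 3) != 0, a:b), 5)
        println(i)
    end
end

fancy_printer2(3, 12)  # prints 4, 5, 7, 8, 10 — same as before
```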
If you want to make fancy_printer even fancier, you can do so by using short-circuit evaluation in Julia to make your code more compact. The short-circuit operators && and || (not to be confused with the ternary operator ?:) can compress the if blocks into a single line. Here's how our code would look:
julia> function fancy_printer(a,b)
printed = 0
for i in a:b
printed == 5 && break
mod(i, 3) == 0 && continue
println(i)
printed += 1
end
end
fancy_printer (generic function with 1 method)
julia>
fancy_printer(3,12)
4
5
7
8
10
Isn't this pretty? Julia is just pure magic!
|
|
# Is Safari on iOS 6 caching $.ajax results?
Since the upgrade to iOS 6, we are seeing Safari's web view take the liberty of caching $.ajax calls. This is in the context of a PhoneGap application, so it is using the Safari WebView. Our $.ajax calls are POST methods and we have cache set to false ({cache: false}), but still this is happening. We tried manually adding a TimeStamp to the headers, but it did not help.

We did more research and found that Safari is only returning cached results for web services that have a function signature that is static and does not change from call to call. For instance, imagine a function called something like:

getNewRecordID(intRecordType)

This function receives the same input parameters over and over again, but the data it returns should be different every time.

Must be in Apple's haste to make iOS 6 zip along impressively, they got too happy with the cache settings. Has anyone else seen this behavior on iOS 6? If so, what exactly is causing it?

The workaround that we found was to modify the function signature to be something like this:

getNewRecordID(intRecordType, strTimestamp)

and then always pass in a TimeStamp parameter as well, and just discard that value on the server side. This works around the issue. I hope this helps some other poor soul who spends 15 hours on this issue like I did!
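The cache-busting idea from the workaround can be sketched as a small helper that makes every request URL unique (the helper name addCacheBuster and the parameter name _ts are mine, purely for illustration — the server simply ignores the extra parameter):

```javascript
// Append a throwaway timestamp parameter so no two request URLs look
// identical to the cache.
function addCacheBuster(url) {
  const sep = url.includes("?") ? "&" : "?";
  return url + sep + "_ts=" + Date.now();
}

// jQuery's { cache: false } does this automatically for GET requests;
// for POSTs affected by the iOS 6 bug you can apply it manually, e.g.:
// $.ajax({ url: addCacheBuster("/api/getNewRecordID"), type: "POST", ... });
```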
|
|
# Fixing GCC's Implementation of memory_order_consume
As I explained previously, there are two valid ways for a C++11 compiler to implement memory_order_consume: an efficient strategy and a heavy one. In the heavy strategy, the compiler simply treats memory_order_consume as an alias for memory_order_acquire. The heavy strategy is not what the designers of memory_order_consume had in mind, but technically, it’s still compliant with the C++11 standard.
There’s a somewhat common misconception that all current C++11 compilers use the heavy strategy. I certainly had that impression until recently, and others I spoke to at CppCon 2014 seemed to have that impression as well.
This belief turns out not to be true: GCC does not always use the heavy strategy (yet). GCC 4.9.2 actually has a bug in its implementation of memory_order_consume, as described in this GCC bug report. I was rather surprised to learn that, since it contradicted my own experience with GCC 4.8.3, in which the PowerPC compiler appeared to use the heavy strategy correctly.
I decided to verify the bug on my own, which is why I recently took an interest in building GCC cross-compilers. This post will explain the bug and document the process of patching the compiler.
## An Example That Illustrates the Compiler Bug
Imagine a bunch of threads repeatedly calling the following read function:
#include <atomic>

std::atomic<int> Guard(0);
int Payload[1] = { 0xbadf00d };

int read()
{
    int f = Guard.load(std::memory_order_consume);    // load-consume
    if (f != 0)
        return Payload[f - f];
    return 0;
}
At some point, another thread comes along and calls write:
void write()
{
    Payload[0] = 42;                                  // plain store
    Guard.store(1, std::memory_order_release);        // store-release
}
If the compiler is fully compliant with the current C++11 standard, then there are only two possible return values from read: 0 or 42. The outcome depends on the value seen by the load-consume highlighted above. If the load-consume sees 0, then obviously, read will return 0. If the load-consume sees 1, then according to the rules of the standard, the plain store Payload[0] = 42 must be visible to the plain load Payload[f - f], and read must return 42.
As I’ve already explained, memory_order_consume is meant to provide ordering guarantees that are similar to those of memory_order_acquire, only restricted to code that lies along the load-consume’s dependency chain at the source code level. In other words, the load-consume must carry-a-dependency to the source code statements we want ordered.
In this example, we are admittedly abusing C++11’s definition of carry-a-dependency by using f in an expression that cancels it out (f - f). Nonetheless, we are still technically playing by the standard’s current rules, and thus, its ordering guarantees should still apply.
## Compiling for AArch64
The compiler bug report mentions AArch64, a new 64-bit instruction set supported by the latest ARM processors. Conveniently enough, I described how to build a GCC cross-compiler for AArch64 in the previous post. Let’s use that cross-compiler to compile the above code and examine the assembly listing for read:
$ aarch64-linux-g++ -std=c++11 -O2 -S consumetest.cpp
$ cat consumetest.s
This machine code is flawed. AArch64 is a weakly-ordered CPU architecture that preserves data dependency ordering, and yet neither compiler strategy has been taken:
• No heavy strategy: There is no memory barrier instruction between the load from Guard and the load from Payload[f - f]. The load-consume has not been promoted to a load-acquire.
• No efficient strategy: There is no dependency chain connecting the two loads at the machine code level. I’ve highlighted the two machine-level dependency chains above, in blue and green. As you can see, the two loads lie along separate chains.
As a result, the processor is free to reorder the loads at runtime so that the second load sees an older value than the first. There is a very real possibility that read will return 0xbadf00d, the initial value of Payload[0], even though the C++ standard forbids it.
## Patching the Cross-Compiler
Andrew Macleod posted a patch for this issue in the bug report. His patch adds the following lines near the end of the get_memmodel function in gcc/builtins.c:
/* Workaround for Bugzilla 59448. GCC doesn't track consume properly, so
be conservative and promote consume to acquire. */
if (val == MEMMODEL_CONSUME)
val = MEMMODEL_ACQUIRE;
Let’s apply this patch and build a new cross-compiler.
$ cd gcc-4.9.2/gcc
$ wget -qO- https://gcc.gnu.org/bugzilla/attachment.cgi?id=33831 | patch
$ cd ../../build-gcc
$ make
$ make install
$ cd ..
Now let’s compile the same source code as before:
$ aarch64-linux-g++ -std=c++11 -O2 -S consumetest.cpp
$ cat consumetest.s
This time, the generated assembly is valid. The compiler now implements the load-consume from Guard using ldar, a new AArch64 instruction that provides acquire semantics. This instruction acts as a memory barrier on the load itself, ensuring that the load will be completed before all subsequent loads and stores (among other things). In other words, our AArch64 cross-compiler now implements the “heavy” strategy correctly.
## This Bug Doesn’t Happen on PowerPC
Interestingly, if you compile the same example for PowerPC, there is no bug. This is using the same GCC version 4.9.2 without Andrew’s patch applied:
$ powerpc-linux-g++ -std=c++11 -O2 -S consumetest.cpp
$ cat consumetest.s
The PowerPC cross-compiler appears to implement the “heavy” strategy correctly, promoting consume to acquire and emitting the necessary memory barrier instructions. Why does the PowerPC cross-compiler work in this case, but not the AArch64 cross-compiler? One hint lies in GCC’s machine description (MD) files. GCC uses these MD files in its final stage of compilation, after optimization, when it converts its intermediate RTL format to a native assembly code listing. Among the AArch64 MD files, in gcc-4.9.2/gcc/config/aarch64/atomics.md, you’ll currently find the following:
if (model == MEMMODEL_RELAXED
|| model == MEMMODEL_CONSUME
|| model == MEMMODEL_RELEASE)
return "ldr<atomic_sfx>\t%<w>0, %1";
else
return "ldar<atomic_sfx>\t%<w>0, %1";
Meanwhile, among PowerPC’s MD files, in gcc-4.9.2/gcc/config/rs6000/sync.md, you’ll find:
switch (model)
{
case MEMMODEL_RELAXED:
break;
case MEMMODEL_CONSUME:
case MEMMODEL_ACQUIRE:
case MEMMODEL_SEQ_CST:
Based on the above, it seems that the AArch64 cross-compiler currently treats consume the same as relaxed at the final stage of compilation, whereas the PowerPC cross-compiler treats consume the same as acquire at the final stage. Indeed, if you move case MEMMODEL_CONSUME: one line earlier in the PowerPC MD file, you can reproduce the bug on PowerPC, too.
It’s fair to call memory_order_consume an obscure subject, and the current status of GCC support reflects that. The C++ standard committee is wondering what to do with memory_order_consume in future revisions of C++.
My opinion is that the definition of carries-a-dependency should be narrowed to require that different return values from a load-consume result in different behavior for any dependent statements that are executed. Let’s face it: Using f - f as a dependency is nonsense, and narrowing the definition would free the compiler from having to support such nonsense “dependencies” if it chooses to implement the efficient strategy. This idea was first proposed by Torvald Riegel in the Linux Kernel Mailing List and is captured among various alternatives described in Paul McKenney’s proposal N4036.
|
|
# Mallard Survival from Local to Immature Stage in Southwestern Saskatchewan
Jay B. Hestbeck, Alexander Dzubin, J. Bernard Gollop and James D. Nichols
The Journal of Wildlife Management
Vol. 53, No. 2 (Apr., 1989), pp. 428-431
DOI: 10.2307/3801146
Stable URL: http://www.jstor.org/stable/3801146
Page Count: 4
## Abstract
We used 3,670 recoveries from 32,647 bandings of mallards (Anas platyrhynchos) in southwestern Saskatchewan during 1956-59 to estimate the probability of surviving from the local, flightless (classes II and III) stage to the flighted, immature stage. The probability of surviving from the local to the immature stage was 0.84 ± 0.05 ($\widehat{\text{SE}}$) for males and females. The geographic distribution of direct recoveries was similar for the birds banded as local and immature. Probabilities of survival for banded mallards can only be estimated from late summer to late summer. The estimate of survival from local to immature stage fills a gap in our knowledge of mallard mortality from female-brood breakup to the time of banding in late summer.
|
|
# Correcting the warnings of Pattern Matching in GHC 6
I've quit working on this. The motivation is explained in the bug report.
GHC's pattern-match warnings are not being shown correctly. There are many bugs reported on this topic, and they are grouped in Ticket 595.
There's a solution for it, and the GHC developers suggested that this paper should be implemented in GHC. I've written a sketch of how the code should work:
## Theoretical code
This is just the direct implementation of the theory in the referred article.
-- base
import Control.Arrow (first)
import Data.Maybe (mapMaybe, fromJust)
data C = C0 | C1 P | C2 P P
deriving (Eq, Show) -- for debugging
data P = Wildcard | C C
deriving (Eq, Show) -- for debugging
newtype V = V [P] deriving (Eq, Show) -- for debugging
newtype M = M [V] deriving (Eq, Show) -- for debugging
-- Get the constructors present on a list of patterns
catCs :: [P] -> [C]
catCs ps = [c | C c <- ps]
same :: C -> C -> Bool
same C0 C0 = True
same (C1 _p1) (C1 _p2) = True
same (C2 _p11 _p12) (C2 _p21 _p22) = True
same _c1 _c2 = False
uRec :: M -> V -> Bool
-- Nothing is matched, so the new vector is useful.
uRec (M []) _q = True
-- The vector is not useful, since it matches nothing.
uRec _p (V []) = False
uRec (M (V [] : _rest)) _q = False
uRec p @ (M ps) q @ (V (q1 : qs))
  = case q1 of
      Wildcard
        -> if isComplete σ
             then any specialized σ
             else uRec (d p) $ V qs
      -- In case the first column of the vector is a constructor, we need to
      -- extract it from the vector and from the matrix.
      C c -> uRec (s c p) $ s c q
  where
    σ :: [C]
    σ = catCs $ map headV ps
    specialized :: C -> Bool
    specialized c = uRec (s c p) $ s c q
class S a where
s :: C -> a -> a
instance S M where
  s c (M m) = M $ mapMaybe (sV c) m

instance S V where
  s = (.)(.)(.) fromJust sV

sV :: C -> V -> Maybe V
sV _c (V []) = error "sV _c []"
sV c (V (Wildcard : ps))
  = Just $ V $ map (const Wildcard) (extract c) ++ ps
sV c (V (C c_ : cs))
  | same c c_ = Just $ V $ extract c_ ++ cs
  | otherwise = Nothing

extract :: C -> [P]
extract C0 = []
extract (C1 p) = [p]
extract (C2 p1 p2) = [p1, p2]

isComplete :: [C] -> Bool
isComplete σ = all present complete
  where
    present :: C -> Bool
    present cons = any (same cons) σ
    complete :: [C]
    complete = [C0, C1 undefined, C2 undefined undefined]

d :: M -> M
d (M m) = M $ mapMaybe dV m
dV :: V -> Maybe V
dV (V (Wildcard : ps)) = Just $ V ps
dV _ = Nothing

exhaustive :: M -> Bool
exhaustive (M []) = False
exhaustive m @ (M (V v : _vs))
  = not $ uRec m $ V $ replicate (length v) Wildcard
useful :: M -> Bool
useful (M v) = all (uncurry uRec . first M) $ individuals v

individuals :: [a] -> [([a], a)]
individuals xs = map (remove xs) [0 .. pred $ length xs]
remove :: [a] -> Int -> ([a], a)
remove = remove_ []
remove_ :: [a] -> [a] -> Int -> ([a], a)
remove_ _ [] _ = error "remove_ _ [] _"
remove_ ys (x : _) 0 = (ys, x)
remove_ ys (x : xs) i = remove_ (ys ++ [x]) xs $ pred i
This is working for the tests I've made. Now it's necessary to adapt this code to the GHC data types.
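To illustrate, here is a small hand-checked usage example on top of the definitions above. It is a sketch: the matrices are mine, and I assume a headV helper (used by uRec but not shown in the code above) that extracts the first pattern of a vector:

```haskell
-- Assumed helper: first pattern of a vector.
headV :: V -> P
headV (V ps) = head ps

-- A match covering only C0 and (C1 _) misses C2, so it is not exhaustive;
-- adding a row for (C2 _ _) covers every constructor.
partial, total :: M
partial = M [ V [C C0]
            , V [C (C1 Wildcard)] ]
total   = M [ V [C C0]
            , V [C (C1 Wildcard)]
            , V [C (C2 Wildcard Wildcard)] ]

-- exhaustive partial == False  (the all-wildcard vector is still useful)
-- exhaustive total   == True
```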
## GHC code
The function I have to make, I think, is:
check :: [EquationInfo] -> ([ExhaustivePat], [EquationInfo])
-- Second result is the shadowed equations
-- if there are view patterns, just give up - don't know what the function is
This function is in the deSugar/Check.lhs file in the compiler directory of the source code of GHC.
This function takes the equations of a pattern and returns:
- The patterns that are not recognized
- The equations that are not overlapped
### Data types
#### EquationInfo
data EquationInfo
= EqnInfo { eqn_pats :: [Pat Id], -- The patterns for an eqn
eqn_rhs :: MatchResult } -- What to do after match
##### Pat
From hsSyn/HsPat.lhs:
data Pat id
= ------------ Simple patterns ---------------
WildPat PostTcType -- Wild card
-- The sole reason for a type on a WildPat is to
-- support hsPatType :: Pat Id -> Type
| VarPat id -- Variable
| VarPatOut id (DictBinds id) -- Used only for overloaded Ids; the
-- bindings give its overloaded instances
| LazyPat (LPat id) -- Lazy pattern
| AsPat (Located id) (LPat id) -- As pattern
| ParPat (LPat id) -- Parenthesised pattern
| BangPat (LPat id) -- Bang pattern
------------ Lists, tuples, arrays ---------------
| ListPat [LPat id] -- Syntactic list
PostTcType -- The type of the elements
| TuplePat [LPat id] -- Tuple
Boxity -- UnitPat is TuplePat []
PostTcType
-- You might think that the PostTcType was redundant, but it's essential
-- data T a where
-- T1 :: Int -> T Int
-- f :: (T a, a) -> Int
-- f (T1 x, z) = z
-- When desugaring, we must generate
-- f = /\a. \v::a. case v of (t::T a, w::a) ->
-- case t of (T1 (x::Int)) ->
-- Note the (w::a), NOT (w::Int), because we have not yet
-- refined 'a' to Int. So we must know that the second component
-- of the tuple is of type 'a' not Int. See selectMatchVar
| PArrPat [LPat id] -- Syntactic parallel array
PostTcType -- The type of the elements
------------ Constructor patterns ---------------
| ConPatIn (Located id)
(HsConPatDetails id)
| ConPatOut {
pat_con :: Located DataCon,
pat_tvs :: [TyVar], -- Existentially bound type variables (tyvars only)
pat_dicts :: [id], -- Ditto *coercion variables* and *dictionaries*
-- One reason for putting coercion variable here, I think,
-- is to ensure their kinds are zonked
pat_binds :: DictBinds id, -- Bindings involving those dictionaries
pat_args :: HsConPatDetails id,
pat_ty :: Type -- The type of the pattern
}
------------ View patterns ---------------
| ViewPat (LHsExpr id)
(LPat id)
PostTcType -- The overall type of the pattern
-- (= the argument type of the view function)
-- for hsPatType.
------------ Quasiquoted patterns ---------------
-- See Note [Quasi-quote overview] in TcSplice
| QuasiQuotePat (HsQuasiQuote id)
------------ Literal and n+k patterns ---------------
| LitPat HsLit -- Used for *non-overloaded* literal patterns:
-- Int#, Char#, Int, Char, String, etc.
| NPat (HsOverLit id) -- ALWAYS positive
(Maybe (SyntaxExpr id)) -- Just (Name of 'negate') for negative
-- patterns, Nothing otherwise
(SyntaxExpr id) -- Equality checker, of type t->t->Bool
| NPlusKPat (Located id) -- n+k pattern
(HsOverLit id) -- It'll always be an HsIntegral
(SyntaxExpr id) -- (>=) function, of type t->t->Bool
(SyntaxExpr id) -- Name of '-' (see RnEnv.lookupSyntaxName)
------------ Generics ---------------
| TypePat (LHsType id) -- Type pattern for generic definitions
-- e.g f{| a+b |} = ...
-- These show up only in class declarations,
-- and should be a top-level pattern
------------ Pattern type signatures ---------------
| SigPatIn (LPat id) -- Pattern with a type signature
(LHsType id)
| SigPatOut (LPat id) -- Pattern with a type signature
Type
------------ Pattern coercions (translation only) ---------------
| CoPat HsWrapper -- If co::t1 -> t2, p::t2,
-- then (CoPat co p) :: t1
(Pat id) -- Why not LPat? Ans: existing locn will do
Type -- Type of whole pattern, t1
-- During desugaring a (CoPat co pat) turns into a cast with 'co' on
-- the scrutinee, followed by a match on 'pat'
##### Id
From basicTypes/Var.lhs:
type Id = Var
Every @Var@ has a @Unique@, to uniquify it and for fast comparison, a
@Type@, and an @IdInfo@ (non-essential info about it, e.g.,
strictness). The essential info about different kinds of @Vars@ is
in its @VarDetails@.
-- | Essentially a typed 'Name', that may also contain some additional information
-- about the 'Var' and it's use sites.
data Var
= TyVar {
varName :: !Name,
realUnique :: FastInt, -- Key for fast comparison
-- Identical to the Unique in the name,
-- cached here for speed
varType :: Kind, -- ^ The type or kind of the 'Var' in question
isCoercionVar :: Bool
}
| TcTyVar { -- Used only during type inference
-- Used for kind variables during
-- inference, as well
varName :: !Name,
realUnique :: FastInt,
varType :: Kind,
tcTyVarDetails :: TcTyVarDetails }
| Id {
varName :: !Name,
realUnique :: FastInt,
varType :: Type,
idScope :: IdScope,
id_details :: IdDetails, -- Stable, doesn't change
id_info :: IdInfo } -- Unstable, updated by simplifier
#### ExhaustivePat
From deSugar/Check.lhs:
type ExhaustivePat = ([WarningPat], [(Name, [HsLit])])
##### WarningPat
type WarningPat = InPat Name
From hsSyn/HsPat.lhs:
type InPat id = LPat id -- No 'Out' constructors
type LPat id = Located (Pat id)
From basicTypes/SrcLoc.lhs:
-- | We attach SrcSpans to lots of things, so let's have a datatype for it.
data Located e = L SrcSpan e
##### Name
From basicTypes/Name.lhs:
-- | A unique, unambigious name for something, containing information about where
-- that thing originated.
data Name = Name {
n_sort :: NameSort, -- What sort of name it is
n_occ :: !OccName, -- Its occurrence name
n_uniq :: FastInt, -- UNPACK doesn't work, recursive type
--(note later when changing Int# -> FastInt: is that still true about UNPACK?)
n_loc :: !SrcSpan -- Definition site
}
-- NOTE: we make the n_loc field strict to eliminate some potential
-- (and real!) space leaks, due to the fact that we don't look at
-- the SrcLoc in a Name all that often.
### Understanding the current implementation
It's important to see what's currently done, in order to understand which problems have already been solved and which may not be solved by the new proposal.
#### check is a wrapper for check'
The check function is just a wrapper around a check' function, which receives the patterns in a canonical form. Notice that the original form of the patterns is preserved, so that anything wrong can be reported to the user in the same way it is written in the code.
From deSugar/Check.lhs:
It simplifies the patterns and then calls @check'@ (with the same semantics), and it
needs to reconstruct the patterns again ....
The problem appears with things like:
\begin{verbatim}
f [x,y] = ....
f (x:xs) = .....
\end{verbatim}
We want to put the two patterns with the same syntax, (prefix form) and
then all the constructors are equal:
\begin{verbatim}
f (: x (: y [])) = ....
f (: x xs) = .....
\end{verbatim}
We would prefer to have a @WarningPat@ of type @String@, but Strings and the
Pretty Printer are not friends.
We use @InPat@ in @WarningPat@ instead of @OutPat@ because we need to print the
warning messages in the same way they are introduced, i.e. if the user
wrote:
\begin{verbatim}
f [x,y] = ..
\end{verbatim}
He doesn't want a warning message written as:
\begin{verbatim}
f (: x (: y [])) ........
\end{verbatim}
Then we need to use InPats.
\begin{quotation}
Juan Quintela 5 JUL 1998\\
User-friendliness and compiler writers are no friends.
\end{quotation}
This function is the same as check; the only difference is that the
boring work has been done already. That work needs to be done only once, which is
the reason to have two functions: check is the external interface, while
@check'@ is called recursively.
check' :: [(EqnNo, EquationInfo)]
-> ([ExhaustivePat], -- Pattern scheme that might not be matched at all
EqnSet) -- Eqns that are used (others are overlapped)
#### A deeper look to check
##### The second return
The second component returned is the set of equations that are overlapped, that is, that can never be matched. This can be seen from the comment in the check' function and the code of check, in deSugar/Check.lhs:
check qs = (untidy_warns, shadowed_eqns)
    where
      (warns, used_nos) = check' ([1..] `zip` map tidy_eqn qs)
      untidy_warns = map untidy_exhaustive warns
      shadowed_eqns = [eqn | (eqn,i) <- qs `zip` [1..],
                             not (i `elementOfUniqSet` used_nos)]
check' returns the used equations, and check reports the ones that were not returned by check' as shadowed.
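For readers less fluent in Haskell, the division of labour can be paraphrased in Python (names and the stub check' are mine, not GHC's): check' reports which equation numbers were used, and check derives the shadowed equations by set difference.

```python
def check(eqns, check_prime):
    # Number the equations 1..n, mirroring the Haskell zip with [1..].
    numbered = list(enumerate(eqns, start=1))
    warns, used_nos = check_prime(numbered)  # used_nos: set of used equation numbers
    # An equation that check' never used is shadowed (it can never match).
    shadowed = [eqn for no, eqn in numbered if no not in used_nos]
    return warns, shadowed

# A stub check' that claims only equations 1 and 3 are reachable:
warns, shadowed = check(["f [x,y] = ...", "f (x:xs) = ...", "f _ = ..."],
                        lambda numbered: ([], {1, 3}))
print(shadowed)  # ['f (x:xs) = ...']
```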
##### The first return
The first component returned is a representation of the values that are not recognized, i.e. that don't match any of the equations. This can be inferred from the comment on the function check.
It seems that the first component of the tuple in ExhaustivePat is the list of the related patterns, and the second is a list of fields together with the values that are not recognized by each field. It's not clear yet what the Name relates to.
## Trying to build with the rest of GHC
The file HACKING in the source code has good links for working with the GHC source.
### Building the current GHC
A good tip from http://hackage.haskell.org/trac/ghc/wiki/Building/Hacking is to check mk/build.mk. This file offers some flavours of builds, and I chose the fastest one.
Another good tip was to use more processes with -j than the number of processors, since a lot of the time spent on building is I/O.
Now I'm trying to build the darcs version of ghc.
### Building a version that does nothing but compiles correctly
module Check ( check , ExhaustivePat ) where
import HsSyn
import DsUtils
import Name
type WarningPat = InPat Name
type ExhaustivePat = ([WarningPat], [(Name, [HsLit])])
check :: [EquationInfo] -> ([ExhaustivePat], [EquationInfo])
check = undefined
#### Building
It was not trivial to build. I first built the compiler as it was, and it worked. But when I changed this code and asked for a rebuild of this module only, I got errors:
Then I noticed that the code was being used in the stage 1 of the compiler, which is used to build the stage 2. As the build uses -Wall, the undefined was being reached. I'm not sure why, but even with -w the undefined is being reached, so I'll have to change the test.
### Second try of building something
I'll just return two empty lists, meaning that there should never be a printed warning.
check = ([], [])
Which, of course, caused:
compiler/deSugar/Check.lhs:12:8:
    Couldn't match expected type `[EquationInfo]
                                  -> ([ExhaustivePat], [EquationInfo])'
        against inferred type `([a], [a1])'
    In the expression: ([], [])
    In the definition of `check': check = ([], [])
Duh! I corrected it to:
check qs = ([], [])
And it worked.
## Talking to the GHC guys
### Problem in Trac
I've received the e-mail with my lost password, but when I try to login I get this message:
The page isn't redirecting properly.
Iceweasel has detected that the server is redirecting the request for this
address in a way that will never complete.
This problem can sometimes be caused by disabling or refusing to accept cookies.
I've mailed Igloo about it, but he said he had no time yet to check on this.
### Owning the bug
I've read that I should “Take ownership of the bug in Trac”. So I changed the ownership of the bug, and Simon Peyton-Jones told me to read a paper and to study about View Patterns. I'll sure do it.
### Test case
Now I should create a test case for the bug. I'll work on this.
#### Open bugs
The bugs I could find open in the GHC Trac are:
##### 322
I could reproduce this bug and it seems to be a complicated bug to deal with, since it's related to the implicit conversion of types in the type class Num done by the compiler. To fix this, it would be necessary to have the already converted code passed to the Pattern Matching checker.
The code in it can be used to generate a test case.
##### 1307
The only important thing I see in this bug is correcting the message to the user with better syntax. I think it's unrelated to the rewrite task and could be done immediately, and it isn't worth a test case.
##### 2204
This is the same bug as 1307.
# Other Free Software Projects
Parent page: marco
# The difference 942 − 249 is a positive multiple of 7. If a, b, and c a
Senior Manager
Joined: 04 Sep 2017
Posts: 291
The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
21 Sep 2019, 13:59
Difficulty: 95% (hard). Question stats: 43% (02:22) correct, 57% (02:27) wrong, based on 274 sessions.
The difference 942 − 249 is a positive multiple of 7. If a, b, and c are nonzero digits, how many 3-digit numbers abc are possible such that the difference abc − cba is a positive multiple of 7 ?
A. 142
B. 71
C. 99
D. 20
E. 18
PS01661.01
##### Most Helpful Community Reply
Director
Status: Manager
Joined: 27 Oct 2018
Posts: 821
Location: Egypt
Concentration: Strategy, International Business
GPA: 3.67
WE: Pharmaceuticals (Health Care)
The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
21 Sep 2019, 17:07
13
9
The difference between the two numbers is $$(abc − cba) = 100a+10b+c - 100c-10b-a = 99a-99c = 99(a-c)$$
As 99 is not divisible by 7, $$(a-c)$$ must be divisible by 7 (and b has no effect on the overall outcome).
$$(a-c)$$ has only 2 possible realizations: $$(9-2)$$ and $$(8-1)$$, since a, b, c are nonzero and a must be > c to keep the difference positive.
b has $$9$$ possible values (all except 0)
So the total number of possible outcomes = $$2*9 = 18$$
E
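The counting argument above is easy to confirm by brute force; this short Python check (not part of the original discussion) enumerates all nonzero-digit triples:

```python
count = 0
for a in range(1, 10):          # a, b, c are nonzero digits
    for b in range(1, 10):
        for c in range(1, 10):
            diff = (100*a + 10*b + c) - (100*c + 10*b + a)  # abc - cba
            if diff > 0 and diff % 7 == 0:  # positive multiple of 7
                count += 1
print(count)  # 18
```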
##### General Discussion
Manager
Joined: 20 Aug 2017
Posts: 104
Re: The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
21 Sep 2019, 22:38
2
gmatt1476 wrote:
The difference 942 − 249 is a positive multiple of 7. If a, b, and c are nonzero digits, how many 3-digit numbers abc are possible such that the difference abc − cba is a positive multiple of 7 ?
A. 142
B. 71
C. 99
D. 20
E. 18
PS01661.01
The numbers are 100a + 10b +c and 100c + 10b + a.
After taking the difference, we are left with 99(a-c).
Now for this number to be a multiple of 7, a-c = 7 or 0.
for (a-c) = 7, there are two possibilities ---> (9,2) and (8,1)
and for a-c = 0, a = c, so there are 9 possibilities that are 1 to 9.
The value of b does not matter, and since b cannot take the value 0, there are 9 possibilities for b in each case.
There are a total of 11 cases so the total number of numbers possible are 9*11 = 99.
IMO, answer is C
GMAT Club Legend
Joined: 18 Aug 2017
Posts: 5748
Location: India
Concentration: Sustainability, Marketing
GPA: 4
WE: Marketing (Energy and Utilities)
Re: The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
22 Sep 2019, 01:32
Given:
100a+10b+c-100c-10b-a = 99(a-c)
For 99(a-c) to be divisible by 7, (a-c) has to be divisible by 7, and a, b, c are all single-digit and non-zero;
the only possibility is (a-c) = 7: (9-2) & (8-1), i.e. 2 choices of a & c; for b, the middle digit, we have 9 options; so the total number of possible values is 9*2 = 18
IMO E
gmatt1476 wrote:
The difference 942 − 249 is a positive multiple of 7. If a, b, and c are nonzero digits, how many 3-digit numbers abc are possible such that the difference abc − cba is a positive multiple of 7 ?
A. 142
B. 71
C. 99
D. 20
E. 18
PS01661.01
SVP
Joined: 03 Jun 2019
Posts: 1950
Location: India
GMAT 1: 690 Q50 V34
Re: The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
26 Sep 2019, 08:58
2
gmatt1476 wrote:
The difference 942 − 249 is a positive multiple of 7. If a, b, and c are nonzero digits, how many 3-digit numbers abc are possible such that the difference abc − cba is a positive multiple of 7 ?
A. 142
B. 71
C. 99
D. 20
E. 18
PS01661.01
Given: The difference 942 − 249 is a positive multiple of 7.
Asked: If a, b, and c are nonzero digits, how many 3-digit numbers abc are possible such that the difference abc − cba is a positive multiple of 7 ?
(100a + 10b + c) - (100c + 10b + a) = 99(a-c)
Since 99 is not a multiple of 7, (a-c) should be a multiple of 7.
There are 2 possibilities = {(8,1),(9,2)}
There are 9 possibilities for b
Total such numbers = 2*9 = 18
IMO E
Manager
Joined: 09 May 2018
Posts: 77
Re: The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
26 Sep 2019, 22:10
One question on this - If a, b and c are given as different alphabets, should we consider them as distinct or not?
Director
Status: Manager
Joined: 27 Oct 2018
Posts: 821
Location: Egypt
Concentration: Strategy, International Business
GPA: 3.67
WE: Pharmaceuticals (Health Care)
Re: The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
26 Sep 2019, 22:24
Kanika3agg wrote:
One question on this - If a, b and c are given as different alphabets, should we consider them as distinct or not?
No, you can't consider them distinct without being told that they are different/distinct integers, or something such as a≠b≠c or a>b>c.
Manager
Joined: 09 May 2018
Posts: 77
Re: The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
26 Sep 2019, 22:56
MahmoudFawzy - Thank you!
Manager
Joined: 27 Mar 2017
Posts: 119
Re: The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
11 Oct 2019, 07:28
Brilliant question. Can someone please post questions like this one ?
Intern
Joined: 04 Sep 2019
Posts: 22
Location: India
Concentration: Finance, Sustainability
GMAT 1: 750 Q50 V42
WE: Education (Non-Profit and Government)
Re: The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
04 Nov 2019, 23:32
uchihaitachi wrote:
gmatt1476 wrote:
The difference 942 − 249 is a positive multiple of 7. If a, b, and c are nonzero digits, how many 3-digit numbers abc are possible such that the difference abc − cba is a positive multiple of 7 ?
A. 142
B. 71
C. 99
D. 20
E. 18
PS01661.01
The numbers are 100a + 10b +c and 100c + 10b + a.
After taking the difference, we are left with 99(a-c).
Now for this number to be a multiple of 7, a-c = 7 or 0.
for (a-c) = 7, there are two possibilities ---> (9,2) and (8,1)
and for a-c = 0, a = c, so there are 9 possibilities that are 1 to 9.
The value of b does not matter and since b cannot the value 0, so there 9 possibilities for b in each case.
There are a total of 11 cases so the total number of numbers possible are 9*11 = 99.
IMO, answer is C
By this method, the number 252, 343, 515 etc. all qualify to represent 'abc'.
Let's take 252 as abc.
cba = 252.
abc - cba = 0.
But the question wants the difference to be a positive multiple of 7.
Now, 0 is neither positive nor negative.
Thus, when abc = 252, then abc – cba is not a positive multiple of 7.
The condition a = c is not valid.
Only (a,c) = (9,2) and (8,1) is possible.
In both cases b can take 9 possible values as b ≠ 0.
2 x 9 = 18 possible choices.
Intern
Joined: 24 Jan 2016
Posts: 3
The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
14 Dec 2019, 19:11
Can we solve the problem this way?
942 = 3^2 2^2 2^1
Total Factors = (2+1) * (2+1) * (1+1) = 18
Intern
Joined: 26 Dec 2019
Posts: 2
The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
21 Jan 2020, 02:57
Notice something cool in this question, for abc - cba to be divisible by 7, a-c must equal 7 (you can see that from the question stem) if you don't have a strong number sense, using algebra you could reach the same conclusion.
(1) a, b, c are non-zero digits (nothing in the question says they must be distinct)
(2) a-c will equal 7 in two different cases: a=9, c=2 and a=8, c=1. In both cases b doesn't affect the divisibility of abc-cba by 7, so it can take any value from 1-9 (that is 9 values in both cases)
Case 1: a (1 value), b (9 values), and c (1 value) = 9
Case 2: a (1 value), b (9 values), and c (1 value) = 9
Total possible numbers that satisfy the above restrictions: 9+9=18
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 10032
Location: Pune, India
Re: The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
21 Jan 2020, 05:43
eduardolarrranaga wrote:
Can we solve the problem this way?
942 = 3^2 2^2 2^1
Total Factors = (2+1) * (2+1) * (1+1) = 18
942 is given as an example to show what the question means. The number of factors of 942 has nothing to do with the solution.
CrackVerbal Quant Expert
Joined: 12 Apr 2019
Posts: 369
The difference 942 − 249 is a positive multiple of 7. If a, b, and c a [#permalink]
21 Jan 2020, 07:00
The difference of a 3-digit number and its reverse is always divisible by 99.
Now, 99 does not have a factor of 7 in it.
Expressing the 3-digit number as xyz and its reverse as zyx, their difference is 99(x-z) {because xyz can be written as 100x + 10y + z and zyx as 100z + 10y + x}. So (x-z) has to be a multiple of 7 if the difference is to be a multiple of 7, as the question requires.
Considering that all the digits involved here are non-zero, (x-z) can be a multiple of 7 in only two ways, i.e. x=9 and z=2 OR x=8 and z=1.
For each of these cases, y can be chosen in 9 ways, i.e. y can be any digit from 1 to 9. Therefore, we have a total of 2*9 = 18 numbers.
The correct answer option is E.
Hope that helps!
William H. Knapp III
This homework was due on Friday, November 23 at 06:00 a.m. Turkish time. Late submissions receive half credit.
By checking the box below, you certify that the answers you will submit here represent your own work.
1. Analysis of variance divides total variance into portions related to effects and error.
True
False
2. Error variance and effect variance are correlated in ANOVA.
True
False
3. When sampling from a normal population, the sums of squares of samples are distributed according to which distribution?
Binomial
Chi-square
F
Normal
4. When sampling from normal populations, the ratios of different mean squares are distributed according to which distribution?
Binomial
Chi-square
F
Normal
5. As the sample size increases, which distribution approaches the normal distribution?
Binomial
Chi-square
F
All of the above.
6. Which assumption does ANOVA depend on? (Choose all that apply)
That the different populations sampled are normally distributed.
That the different populations sampled have the same variance.
That the observations in each sample were made independently of the others.
That the population variances are known.
That the sample sizes for each group are equal.
7. When the results of an ANOVA involving three groups are significant, what claims are reasonable to make?
That the highest mean is greater than the middle mean.
The means of the groups were not all equal.
The variance of some particular group was different from the mean of some other group.
The variances of the groups were not all equal.
The variance of the effect was large compared to the variance of the error.
8. Imagine you had a study with 8 groups and 20 observations per group. How many degrees of freedom do you have total?
9. How many degrees of freedom do you have for the effect?
10. How many degrees of freedom do you have for the error?
11. If the sum of squares for the effect was 86, what is the mean square for the effect?
12. If the sum of squares for the error was 789, what is the mean square for the error?
13. What would the observed F statistic be for the previous information?
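Questions 8 through 13 are plain arithmetic; as a worked sketch (my own, assuming the usual one-way ANOVA formulas; questions 14 through 16 additionally need an F table or a stats library for the critical value and p-value):

```python
groups, per_group = 8, 20
n = groups * per_group            # 160 observations in total
df_total  = n - 1                 # Q8: 159
df_effect = groups - 1            # Q9: 7
df_error  = n - groups            # Q10: 152
ms_effect = 86 / df_effect        # Q11: SS_effect / df_effect ≈ 12.286
ms_error  = 789 / df_error        # Q12: SS_error / df_error ≈ 5.191
f_obs     = ms_effect / ms_error  # Q13: observed F ratio
print(df_total, df_effect, df_error, round(f_obs, 3))  # 159 7 152 2.367
```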
14. What is the critical value for F in this example? Use the traditional alpha level.
15. What should you do?
Fail to reject the null.
Reject the null.
Not enough information to tell.
16. What is the probability of observing an F that extreme or more extreme? If you're getting this wrong, make sure it's not due to a silly rounding error, by using variables instead of copied and pasted values.
17. For the last set of questions, you need the data from the last homework.
If you forgot how to get the data from the different samples out, you can use the following code. We're only using sample 1 and 2 so I've only provided the code for those two samples.
s1=data$observations[data$sample==1]  # observations belonging to sample 1
s2=data$observations[data$sample==2]  # observations belonging to sample 2
What's the mean square for the effect of 'sample'?
18. What's the mean square for the error?
19. What's the F ratio?
20. If your alpha is .01, what should you do?
Fail to reject the null.
Reject the null.
Not enough information to tell.
# NCERT Solutions for Class 12 Maths Chapter 7
NCERT Solutions for Class 12 Maths Chapter 7 (Exercises 7.1 to 7.11 and the Miscellaneous Exercise of Integrals) in English and Hindi medium, free to download in PDF or to use online. Also included are the Class 12 Maths NCERT Solutions apps, previous years' CBSE Board papers, important questions, assignments on integration, test papers, etc. For NCERT Books, click here.
Class 12: Maths | Chapter 7: Integrals
## NCERT Solutions for class 12 Maths Chapter 7
### Class 12 Maths Solutions – Integrals
• 12 Maths Chapter 7 Exercise 7.1 Solutions
• 12 Maths Chapter 7 Exercise 7.2 Solutions
• 12 Maths Chapter 7 Exercise 7.3 Solutions
• 12 Maths Chapter 7 Exercise 7.4 Solutions
• 12 Maths Chapter 7 Exercise 7.5 Solutions
• 12 Maths Chapter 7 Exercise 7.6 Solutions
• 12 Maths Chapter 7 Exercise 7.7 Solutions
• 12 Maths Chapter 7 Exercise 7.8 Solutions
• 12 Maths Chapter 7 Exercise 7.9 Solutions
• 12 Maths Chapter 7 Exercise 7.10 Solutions
• 12 Maths Chapter 7 Exercise 7.11 Solutions
• 12 Maths Chapter 7 Miscellaneous Exercise 7 Solutions
The NCERT chapter can be studied online, with answers given at the end of the NCERT books.
These books are very good for revision and extra practice, and they are also confined to the NCERT syllabus.
###### Previous Year’s Questions
1. Evaluate: [Delhi 2017]
2. Find [Delhi 2017]
3. Find [Delhi 2017]
4. Evaluate: [Delhi 2017]
5. Evaluate: [Delhi 2017]
6. Evaluate as limit of sums. [CBSE Sample Paper 2017]
7. Using properties of integral, evaluate [CBSE Sample Paper 2017]
8. Find: [CBSE Sample Paper 2017]
9. Find: [Delhi 2016]
10. Evaluate: [Delhi 2016]
11. Find: [Delhi 2016]
12. Evaluate: [Delhi 2016]
Methods of finding Integration: The following are the four important methods of integration.
1. Integration by decomposition into sum or difference.
2. Integration by substitution.
3. Integration by parts
4. Integration by successive reduction.
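As a quick illustration of method 2 (this example is mine, not taken from the NCERT text), a substitution reduces an integral to a standard form:

```latex
\int 2x\cos(x^2)\,dx
  \;\overset{t = x^2,\; dt = 2x\,dx}{=}\;
  \int \cos t\,dt
  \;=\; \sin t + C
  \;=\; \sin(x^2) + C .
```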
#### Historical Facts!
Integration (anti-differentiation) is an operation inverse to differentiation. From the historical point of view, the concept of integration originated earlier than the concept of differentiation. In fact, the concept of integration owes its origin to the problem of finding areas of plane regions, surface areas, volumes of solid bodies, etc. At first the definite integral was expressed as a limit of a certain sum representing the area of some region. Archimedes, Eudoxus and others developed it as a numerical value equal to the area under the curve of a function over some interval. The word integration originates from addition, and the verb 'to integrate' means to merge. Later, the link between the apparently different concepts of differentiation and integration was established by the well-known mathematicians Newton and Leibniz in the 17th century. This relation is known as the fundamental theorem of integral calculus. In the 19th century, Cauchy and Riemann developed the concept of Riemann integration.
2015-09-18
# Clumsy Algorithm
Sorted Lists are thought beautiful while permutations are considered elegant. So what about sequence (1, 2, … , n) ? It is the oracle from god. Now Coach Pang finds a permutation (p1, p2, … , pn) over {1, 2, … , n} which has been shuffled by some evil guys. To show Coach Pang’s best regards to the oracle, Coach Pang decides to rearrange the sequence such that it is the same as (1, 2, 3, … , n). The time cost of swapping pi and pj is 2|i- j| – 1. Of course, the minimum time cost will be paid since Coach Pang is lazy and busy. Denote the minimum time cost of the task as f(p). But Coach Pang is not good at maths. He finally works out a clumsy algorithm to get f(p) as following:
Coach Pang’s algorithm is clearly wrong. For example, n = 3 and (3, 2, 1) is the permutation. In this case, f(p) = 3 but g(p) = 0 + 0 + 2 = 2. The question is that how many permutations p of {1, 2, … , n} such that f(p) = g(p). To make the problem more challenge, we also restrict the prefix of p to (a1, a2, … , ak). To sum up, you need to answer the question that how many permutations p of {1, 2, … , n} with the fixed prefix p1 = a1, p2 = a2, … , pk = ak such that f(p) = g(p). Since the answer may be very large, for convenience, you are only asked to output the remainder divided by (109 + 7).
The first line contains a positive integer T(1 <= T <= 100), which indicates the number of test cases. T lines follow. Each line contains n, k, a1, a2, a3, … , ak. (1 <= n <= 100, 0 <= k <= n, 1 <= ai <= n and all ai are distinct.)
2
3 0
5 2 1 4
Case #1: 5
Case #2: 3
Hint
Among all permutations over {1, 2, 3}, {3, 2, 1} is the only counter-example.
|
|
# Yuri Sulyma
Office 313 Kassar House | firstname_lastnamebrown.edu | Department of Mathematics, Box 1917, Brown University, 151 Thayer Street, Providence, RI 02912
I am a Tamarkin Assistant Professor of Mathematics at Brown University. I received my PhD from the University of Texas at Austin under the supervision of Andrew Blumberg.
I am interested in algebraic K-theory, particularly trace methods; equivariant stable homotopy theory; higher category theory; and Goodwillie calculus. I am also frequently interested in how these interact with arithmetic geometry, particularly crystalline/prismatic cohomology.
I have compiled a reading list for $$p$$-adic cohomology and its connections with THH.
## Teaching
### Spring 2022
MATH 0540 Honors Linear Algebra
### Fall 2021
MATH 0180 Intermediate Calculus
### Spring 2021
Math 2420 Algebraic Topology
### Fall 2020
Math 2410 Algebraic Topology
### Spring 2020
Math 520 Linear Algebra
Math 1410 Topology
### Fall 2019
Math 0090 Introductory Calculus I
## Publications / Preprints
1. Floor, ceiling, slopes, and $$K$$-theory. Preprint, 2021.
2. A slice refinement of Bökstedt periodicity. Preprint, 2020.
3. $$\infty$$-categorical monadicity and descent. New York Journal of Mathematics 23 (2017), 749–777. arxiv version.
## In preparation
Drafts available by request.
1. $$RO(\mathbb T)$$-graded $$\mathrm{TF}$$ of perfectoid rings
2. Stable module categories as categorified Tate cohomology (joint with Aaron Royer)
## Talks / Slides
• A slice refinement of Bökstedt periodicity Video Slides
## Visualizations / Tools
I make videos about math at Epiplexis. Here are some additional math interactives I have created.
I am also the developer of Liqvid, a library for making interactive videos on the web.
|
|
## Intermediate Algebra (12th Edition)
Published by Pearson
# Chapter 7 - Section 7.2 - Rational Exponents - 7.2 Exercises: 71
#### Answer
$x^{3}y^{8}$
#### Work Step by Step
$\bf{\text{Solution Outline:}}$ Use the laws of exponents to simplify the given expression, $\dfrac{\left( x^{1/4}y^{2/5}\right)^{20}}{x^2} .$ $\bf{\text{Solution Details:}}$ Using the extended Power Rule of the laws of exponents which is given by $\left( x^my^n \right)^p=x^{mp}y^{np},$ the expression above is equivalent to \begin{array}{l}\require{cancel} \dfrac{x^{\frac{1}{4}\cdot20}y^{\frac{2}{5}\cdot20}}{x^2} \\\\= \dfrac{x^{5}y^{8}}{x^2} .\end{array} Using the Quotient Rule of the laws of exponents which states that $\dfrac{x^m}{x^n}=x^{m-n},$ the expression above simplifies to \begin{array}{l}\require{cancel} x^{5-2}y^{8} \\\\= x^{3}y^{8} .\end{array}
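A quick numerical sanity check of this simplification (my own addition, not part of the textbook solution): evaluate both sides at sample positive values and confirm they agree.

```python
x, y = 2.0, 3.0  # arbitrary positive test values
lhs = (x**(1/4) * y**(2/5))**20 / x**2  # the original expression
rhs = x**3 * y**8                       # the simplified form
print(abs(lhs - rhs) / rhs < 1e-9)  # True: the two expressions agree
```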
# Traces for homogeneous Sobolev spaces in infinite strip-like domains
created by leoni on 29 Aug 2018
[BibTeX]
preprint
Inserted: 29 aug 2018
Year: 2018
ArXiv: 1808.09305 PDF
Abstract:
In this paper we construct a trace operator for homogeneous Sobolev spaces defined on infinite strip-like domains. We identify an intrinsic seminorm on the resulting trace space that makes the trace operator bounded and allows us to construct a bounded right inverse. The intrinsic seminorm involves two features not encountered in the trace theory of bounded Lipschitz domains or half-spaces. First, due to the strip-like structure of the domain, the boundary splits into two infinite disconnected components. The traces onto each component are not completely independent, and the intrinsic seminorm contains a term that measures the difference between the two traces. Second, as in the usual trace theory, there is a term in the seminorm measuring the fractional Sobolev regularity of the trace functions with a difference quotient integral. However, the finite width of the strip-like domain gives rise to a screening effect that bounds the range of the difference quotient perturbation. The screened homogeneous fractional Sobolev spaces defined by this screened seminorm on arbitrary open sets are of independent interest, and we study their basic properties. We conclude the paper with applications of the trace theory to partial differential equations.
# How to Smooth a Path
Context-aware interpolation even when spline fails.
Written on November 4, 2018
One of the issues I ran into when writing about Map of BART was that literally connecting the dots didn't produce paths as aesthetically pleasing as I had hoped. They are a bit too edgy for my taste, especially where the lines connect.
In some cases, the spline interpolation method could come in handy. However, it is not inconceivable to have a path so twisted that no function (in the mathematical sense) would be adequate to characterize the trajectories entirely, and this is where an alternative approach is called for.
## Problem Formulation
Given path $ABC$, find the optimal point $P$ such that path $APBC$ is visually smooth.
## A Heuristic Solution
where $\cos \angle{ABC}$ is given by:
point $D$ is the midpoint between point $A$ and point $B$:
and point $E$ is the projection of point $C$ on line $AB$:
($k$ is a constant)
## Discussion
Intuitively, the position of point $P$ is a function of several factors:
• how big $\lambda$ is
• how big $\angle{ABC}$ is
• how big $\| \overrightarrow{ AB } \|$ is
## Extension
What if we want more than one point between point $A$ and $B$? After solving for point $P$, it naturally becomes part of the given path, so any level of interpolation can be achieved iteratively.
What about the last segment $BC$ where there’s no more point ahead? The simplest answer would be to start all over again for the reversed path $CBA$. In fact, applying the algorithm twice (once forward and once backward) has at least one practical implication: by taking the average between the two sets of interpolated points, we are further taking into account the curvature defined by the points behind as well as those ahead. This is arguably superior to interpolation along either direction alone.
## Putting It All Together
As the name indicates, there isn’t much the forward/backward-looking approach can do when it comes to the last/first segment of the path, and this is where the average method shines the brightest.
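Since the post's formulas are rendered as images and don't survive here, the sketch below only illustrates the forward/backward/average framework; the per-triple rule is a hypothetical stand-in of my own (midpoint of AB pulled toward C by a factor t), not the author's heuristic.

```python
def rule(a, b, c, t=0.25):
    # Hypothetical interpolation for triple (A, B, C): take the midpoint D
    # of AB and pull it slightly toward C. Stands in for the post's formula.
    dx, dy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    return ((1 - t) * dx + t * c[0], (1 - t) * dy + t * c[1])

def smooth(path):
    n = len(path)
    # Forward pass inserts a point in segments 0 .. n-3.
    fwd = {i: rule(path[i], path[i + 1], path[i + 2]) for i in range(n - 2)}
    rev = path[::-1]
    # Backward pass on the reversed path; reversed segment i is original segment n-2-i,
    # so the backward pass covers segments 1 .. n-2 (including the last one).
    bwd = {n - 2 - i: rule(rev[i], rev[i + 1], rev[i + 2]) for i in range(n - 2)}
    out = [path[0]]
    for seg in range(n - 1):
        pts = [p for p in (fwd.get(seg), bwd.get(seg)) if p is not None]
        # Inner segments: average the forward and backward points.
        # First/last segment: only one direction is available.
        out.append((sum(p[0] for p in pts) / len(pts),
                    sum(p[1] for p in pts) / len(pts)))
        out.append(path[seg + 1])
    return out

path = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(len(smooth(path)))  # 7: one new point inserted in each of the 3 segments
```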
## Final Thoughts: the Effect of $\lambda$
$\lambda$ determines how sensitive the position of the interpolated point $P$ is to the change in $\angle{ABC}$ and $\| \overrightarrow{ AB } \|$, so how does the choice of $\lambda$ affect the outcome? A small $\lambda$ probably won’t be very useful as it is going to produce something too similar to the original path. Meanwhile, a large $\lambda$ will break the interpolation in a different way by overstating the curvature. Somewhere in between lies the sweet spot, which turns out to be $0.25$ in my case.
|
|
# English THCS
#### miniminiaiden
Complete the second sentence so that it has a similar meaning to the first sentence, using the word given. Do not change the word given. You must use between two and five words, including the word given.
47. He was offered a job abroad but rejected it for family reasons.
TURNED He was offered a job but … family reasons.
48. The cyclist had to stop because his bicycle had a flat tyre.
CONTINUE The cyclist … his tyre had been repaired.
49. Today’s football match is postponed and will be held next Wednesday.
PUT Today’s football match has … next Wednesday.
50. Unfortunately, Jessica couldn’t go on holiday because she didn’t have any money.
ABLE If Jessica had had some money, she … go on holiday.
51. There’s no chance of Mark getting to the train on time.
POSSIBLE It won’t be … to the train on time.
52. Cars couldn’t turn down the street because of road works.
PREVENTED Road works … turn down the street.
53. The restaurant we ate in was the best one that we could have chosen for last night.
MADE We couldn’t … choice than the restaurant we ate in last night.
54. “Did you go to the beach on Saturday?” David asked me.
BEEN David wanted to know … the beach on Saturday.
55. Joe had not expected the concert to be so good.
BETTER The concert … had expected.
56. This is a no smoking office.
ALLOWED You … in this office.
57. Paperback books are a lot cheaper than hardback books.
FAR Hardback books … paperback books.
58. My brother is too young to drive a car.
NOT My brother … drive a car.
59. Why are you interested in getting a new job?
WANT Why … a new job?
60. “Have you seen my gloves anywhere, Amy?” asked Mrs Wheatley.
SEEN Mrs Wheatley asked Amy … her gloves anywhere.
61. Suzanne was too excited to sleep.
THAT Suzanne was … not sleep.
62. She finished her last painting while staying in Paris.
DURING The painter’s last painting … stay in Paris
63. The newspaper offered Lynda $5000 for her story, but she refused.
TURNED Lynda … of $5000 from the newspaper for her story.
64. She pretended to be ill in order to avoid going to school.
SO She pretended to be ill … to go to school.
65. Jamie found the instructions for assembling the furniture very difficult to understand.
IN Jamie had great … instructions for assembling the furniture.
66. We last went abroad a long time ago.
NOT We … a long time.
67. When did they start living in the suburbs?
HAVE How … in the suburbs?
68. I haven’t caught a cold for ages.
DOWN I last … ages ago.
69. I’m certain that Alice didn’t intend to keep my book.
INTENTION I’m certain Alice … my book.
70. I saw the film although I strongly dislike thrillers.
SPITE I saw the film … dislike of thrillers.
#### Lucasta
Complete the second sentence so that it has a similar meaning to the first sentence, using the word given. Do not change the word given. You must use between two and five words, including the word given.
48. The cyclist had to stop because his bicycle had a flat tyre.
CONTINUE The cyclist could not continue until his tyre had been repaired.
57. Paperback books are a lot cheaper than hardback books.
FAR Hardback books are far more expensive than paperback books.
58. My brother is too young to drive a car.
NOT My brother is not old enough to drive a car.
63. The newspaper offered Lynda $5000 for her story, but she refused. TURNED Lynda turned down the offer of$5000 from the newspaper for her story.
67. When did they start living in the suburbs?
HAVE How long have they lived in the suburbs?
68. I haven’t caught a cold for ages.
DOWN I last went down with a cold ages ago.
70. I saw the film although I strongly dislike thrillers.
SPITE I saw the film in spite of my… dislike of thrillers.
|
|
21.06.2019 19:00 Hazy095
# What term makes it inconsistent y=2x - 4
|
|
I've always wanted to play with 3d images and it now turns out that Blender has a python console that will help you do just that (thanks EuroPython 2014). Blender is an open source 3d editing tool with a sizeable community. It does lack beginner code tutorials with simple examples. Most tutorials are focussed on using the tool by hand because in the end, I imagine that that is how the tool will be used most of the time. My interests are to generate visualisations only based on data and code so in this document I will share some simple, albeit a bit verbose, python code to generate cubes with blender. To keep things simple I will only use cubes. Hopefully this will help get people started with the joy of 3d images.
### Set Up with Blender
When you open up blender you can access a python console by clicking on the change editor button and then selecting the python console.
This python console runs python3 and you can verify that basic python commands work as you would expect. This means that we can define any function in python here and it will run. We also have access to everything that blender can do through python commands.
### Deleting everything
Define the following function:
import bpy

def delete_all():
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.delete(use_global=True)
This function will select everything in the scene and then delete everything that is selected. Now every time you create something you can run delete_all() to remove it. After using this function in the console you should see an empty scene.
The rest of the document will be python scripts to generate cubes. To show the resulting 3d image I will be using Sketchfab to show what the resulting 3d shape should look like in your blender view.
### A Simple Cube
In blender a cube has a location, which represents the center of the cube, and a radius, which describes how large the cube is.
bpy.ops.mesh.primitive_cube_add(radius=1, location = (0,0,0))
### Cube of Cubes
We can create many cubes in a regular pattern.
numcubes = 6
rcubes = 0.3
for x in range(numcubes):
    for y in range(numcubes):
        for z in range(numcubes):
            bpy.ops.mesh.primitive_cube_add(
                radius=rcubes, location=(x, y, z)
            )
### Cubes of Different Sizes
Note that the radius is half the edge length of the cube, so doubling the radius increases the volume of the cube by a factor of eight.
bpy.ops.mesh.primitive_cube_add(radius=4, location = (0,0,0))
bpy.ops.mesh.primitive_cube_add(radius=3, location = (10,0,0))
bpy.ops.mesh.primitive_cube_add(radius=2, location = (20,0,0))
bpy.ops.mesh.primitive_cube_add(radius=1, location = (30,0,0))
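The note above about radius and volume is easy to check with plain Python (no Blender needed):

```python
def cube_volume(radius):
    # Blender's "radius" is half the edge length, so the edge is 2 * radius.
    return (2 * radius) ** 3

print(cube_volume(2) / cube_volume(1))  # -> 8.0
```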
### The Sine and Cosine
import math

def f(x, y):
    return 5*math.sin(x/15.0*math.pi) + 5*math.cos(y/15.0*math.pi)

numcubes = 8
for x in range(-numcubes, numcubes):
    for y in range(-numcubes, numcubes):
        bpy.ops.mesh.primitive_cube_add(
            radius=0.2, location=(x, y, f(x, y))
        )
### Recursion in all directions
This is my favorite. It shows you how you can make fractal like 3d constructs with only a few lines of python.
def new_cube(old_loc, direction, rad, dimmer):
    res = []
    for i in [0, 1, 2]:
        res.append(old_loc[i] + direction[i]*dimmer + 2*direction[i]*rad)
    return [rad*dimmer, res]

def rec(cube, depth):
    if depth == 4:
        return None
    else:
        bpy.ops.mesh.primitive_cube_add(
            radius=cube[0], location=cube[1]
        )
        print(cube)
        rec(new_cube(cube[1], (1, 0, 0), cube[0], 0.4), depth + 1)
        rec(new_cube(cube[1], (0, 1, 0), cube[0], 0.4), depth + 1)
        rec(new_cube(cube[1], (0, 0, 1), cube[0], 0.4), depth + 1)
        rec(new_cube(cube[1], (-1, 0, 0), cube[0], 0.4), depth + 1)
        rec(new_cube(cube[1], (0, -1, 0), cube[0], 0.4), depth + 1)
        rec(new_cube(cube[1], (0, 0, -1), cube[0], 0.4), depth + 1)

rec([1, (0, 0, 0)], 0)
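You can also predict how many cubes this recursion creates without running Blender: each call places one cube and spawns six children until the depth cutoff, giving $6^0 + 6^1 + 6^2 + 6^3 = 259$ cubes. A plain-Python sketch of the same recursion that counts instead of drawing:

```python
def count_cubes(depth=0, max_depth=4, branches=6):
    # Mirrors the recursion above, but counts the cubes that would be
    # created instead of calling into bpy.
    if depth == max_depth:
        return 0
    return 1 + sum(count_cubes(depth + 1, max_depth, branches)
                   for _ in range(branches))

print(count_cubes())  # -> 259
```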
### Random Ant Path
We can simulate a random path that an ant might walk underground. We sample a new cube based on the location of its previous cube to get a 3d random path. Note that for this to work we need to keep in mind the radius of the previous cube and the new cube.
import random

def randdir():
    choices = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-1, 0, 0), (0, -1, 0), (0, 0, -1)]
    return random.choice(choices)

def new_cube(old_loc, direction):
    res = []
    for i in [0, 1, 2]:
        res.append(old_loc[i] + direction[i])
    return tuple(res)

cube = (0, 0, 0)
for i in range(500):
    cube = new_cube(cube, randdir())
    bpy.ops.mesh.primitive_cube_add(radius=0.5, location=cube)
### Normal Distribution
Recent Blender versions ship with numpy, but this wasn't always the case. This code was written on a Blender version without numpy, to show that you can make 3d plots of normal distributions with the standard library alone.
import random

def r():
    return round(random.gauss(0, 6))

def bins(s):
    minx = min([i[0] for i in s])
    maxx = max([i[0] for i in s])
    miny = min([i[1] for i in s])
    maxy = max([i[1] for i in s])
    res = {}
    for x in range(minx, maxx + 1):
        xdict = {}
        for y in range(miny, maxy + 1):
            xdict[y] = sum([1 for c in s if c[0] == x and c[1] == y])
        res[x] = xdict
    return res

bind = bins([[r(), r()] for i in range(6000)])
for x in bind.keys():
    for y in bind[x].keys():
        h = bind[x][y]
        if h != 0:
            bpy.ops.mesh.primitive_cube_add(
                radius=0.5, location=(x, y, h)
            )
### Conclusion
Blender is a lot of fun. High schools should be jumping on it for educational purposes pronto.
|
|
## VR 20 A certain toll station on a highway has 7 tollbooths
### VR 20 A certain toll station on a highway has 7 tollbooths
by NandishSS » Sun Oct 20, 2019 3:37 am
A certain toll station on a highway has 7 tollbooths, and each tollbooth collects $0.75 from each vehicle that passes it. From 6 o'clock yesterday morning to 12 o'clock midnight, vehicles passed each of the tollbooths at the average rate of 4 vehicles per minute. Approximately how much money did the toll station collect during that time period?

A. $1,500
B. $3,000
C. $11,500
D. $23,000
E. $30,000
by swerve » Sun Oct 20, 2019 1:38 pm
From 6 am to 12 am - 18hrs
Vehicles per minute - 4
Vehicles per hour - 60×4 = 240
Therefore total number of vehicles passed - approx = 250×18 = 4500
Number of toll booths = 7
Money collected = 7×4500×0.75 ≈ $23,000

### GMAT/MBA Expert

by [email protected] » Wed Oct 23, 2019 6:00 pm

We can create the expression:

7 tollbooths × $0.75 per vehicle × 4 vehicles per minute × 60 minutes × 18 hours = $22,680 ≈ $23,000
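The un-rounded arithmetic is easy to check:

```python
tollbooths = 7
fee = 0.75         # dollars collected per vehicle at each booth
rate = 4           # vehicles per minute per booth
minutes = 18 * 60  # 6 a.m. to midnight is 18 hours

total = tollbooths * fee * rate * minutes
print(total)  # -> 22680.0, which rounds to answer D, $23,000
```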
|
|
# $4\sin(x)+7\cos(x)=6$ where $0 \leq x \leq 360^{\circ}$

I put the equation into the form $a\sin(x)+b\cos(x)=R\sin(x+a)$, but after determining that $R\cos(a)=4,\ R\sin(a)=7$ and $R\sin(x+a)=6$, I don't know how to proceed.
ramirezhereva
Starting from $R=\sqrt{65}$, $a=\arcsin\frac{7}{\sqrt{65}}$ we have

$$\sqrt{65}\sin(x+a)=6$$

$$\Rightarrow x=\arcsin\frac{6}{\sqrt{65}}-a=\arcsin\frac{6}{\sqrt{65}}-\arcsin\frac{7}{\sqrt{65}}$$

Using

$$\arcsin u-\arcsin v=\arcsin\left(u\sqrt{1-v^2}-v\sqrt{1-u^2}\right)$$

$$x=\arcsin\left(\frac{6}{\sqrt{65}}\cdot\frac{4}{\sqrt{65}}-\frac{7}{\sqrt{65}}\cdot\frac{\sqrt{65-6^2}}{\sqrt{65}}\right)$$

$$x=\arcsin\left(\frac{24-7\sqrt{29}}{65}\right)$$
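A quick numerical check of this closed form using Python's math module (note that this arcsin value is negative, so for the stated domain $0 \leq x \leq 360^{\circ}$ you would add $360^{\circ}$, or take the supplementary solution):

```python
import math

x = math.asin((24 - 7 * math.sqrt(29)) / 65)
# The closed form satisfies the original equation:
print(4 * math.sin(x) + 7 * math.cos(x))  # -> 6.0 (up to rounding)
```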
Vasquez
HINT:
NSK
The R is related to
so that $$\sin a=\frac{7}{\sqrt{65}}, \cos a=\frac{4}{\sqrt{65}}$$
which is more convenient. Divide both sides by $$\sqrt{65}$$
$$\sin(x+a)=\frac{6}{\sqrt{65}}$$
where
$$\tan \alpha=\frac74$$
|
|
# C# Fastest way to iterate over certain tiles overlapping multiple chunks?
What's the fastest way of iterating over certain tiles overlapping multiple chunks in C#?
Ideally without having to iterate over everything, just a specific set of coordinates? (green cubes example below)
I had some partial solutions, but i'd like to see what you guys come up with.
For example:
I would like to set every tile.IsBuiltOn = true for every tile beneath the building.
I could also use this to check if tile.IsBuiltOn = true to refuse placement of another building.
Chunks and each array of tiles of chunks start at [0,0].
class World
{
public Chunk[][] Chunks { get; set; }
}
class Chunk
{
public Tile[][] Tiles { get; set; }
public Dictionary<CoordinateKey, Building> Buildings { get; set; }
}
class Building
{
public BuildingID BuildingID { get; set; }
public CoordinateKey ChunkCoordinate { get; set; }
public CoordinateKey TileCoordinate { get; set; }
public int WidthInTiles { get; set; }
public int HeightInTiles { get; set; }
}
class Tile
{
public bool IsBuiltOn { get; set; } = false;
}
CoordinateKey is just a struct with int X, Y.
Here is my solution.
First I created a Google sheets to help me map out the chunks and tiles with their coordinates, what I want and what I get from my current iteration code:
I want to iterate over the green tiles, across multiple chunks and receive the correct tile coordinates and chunk coordinates, with only the building Width, Height, start tile X,Y and chunk start X,Y.
This allowed me to finally come up with a working solution (below), which is fast.
Note: To get this solution working quickly, I have a lot of code in the class constructors; this is not advisable, and I will refactor it later when I move this solution into my main project.
I used https://csharppad.com/ to quickly test the following code, which can be pasted in the "interactive shell" panel, then click "Go":
class World
{
public Chunk[][] Chunks;
public int ChunkSizeTilesXY = 3;
public World()
{
Chunks = new Chunk[3][];
for (int x = 0; x < 3; x++)
{
Chunks[x] = new Chunk[3];
for (int y = 0; y < 3; y++)
{
Chunks[x][y] = new Chunk()
{
chunkX = x,
chunkY = y
};
}
}
Test();
}
private void Test(){
int buildingWidthInTiles = 2;
int buildingHeightInTiles = 3;
int startTileX = 2;
int startTileY = 2;
int endTileX = startTileX + buildingWidthInTiles;
int endTileY = startTileY + buildingHeightInTiles;
int startChunkX = 0;
int startChunkY = 0;
int chunkX = startChunkX;
int chunkY = startChunkY;
int ctx = startTileX;
int cty = startTileY;
for (int tx = startTileX; tx < endTileX; tx++)
{
// Out of bounds of current chunk, increment chunk and ctx (chunk tile x)
if (ctx > ChunkSizeTilesXY-1)
{
chunkX++;
ctx = 0;
}
for (int ty = startTileY; ty < endTileY; ty++)
{
// Out of bounds of current chunk, increment chunk and cty (chunk tile y)
if (cty > ChunkSizeTilesXY-1)
{
chunkY++;
cty = 0;
}
Chunks[chunkX][chunkY].Tiles[ctx][cty].IsBuiltOn = true;
Console.WriteLine("tx:" + tx + ", ty: " + ty + ", ctx:" + ctx + ", cty: " + cty + ", chunkX: " + chunkX + ", chunkY:" + chunkY);
cty++;
}
ctx++;
// Reset cty and chunkY for the next column (cty must return to the
// starting tile row, otherwise buildings whose height is not a
// multiple of the chunk size start the next column misaligned)
cty = startTileY;
chunkY = startChunkY;
}
// Reset chunkX
chunkX = startChunkX;
}
}
class Chunk
{
public Tile[][] Tiles;
public int chunkX;
public int chunkY;
public Chunk()
{
Tiles = new Tile[3][];
for (int x = 0; x < 3; x++)
{
Tiles[x] = new Tile[3];
for (int y = 0; y < 3; y++)
{
Tiles[x][y] = new Tile()
{
chunkX = chunkX,
chunkY = chunkY
};
}
}
}
}
class Tile
{
public bool IsBuiltOn = false;
public int chunkX;
public int chunkY;
public Tile()
{
}
}
var world = new World();
I am still interested in hearing from others on alternate or faster solutions.
• This is basically how I would do it. There are a few code conventions that are a bit off but this isn’t codereview se and the concept is right. – Ed Marty Jun 30 '18 at 12:32
• @EdMarty Thanks for your feedback. Please by all means feel free to suggest small changes/improvements, I may submit this to codereview se at a later date for a thorough review. One thing I will be doing though is moving all the code from the constructors into separate methods. – Drominus Jun 30 '18 at 12:56
Your solution is close; what you should do is divide each coordinate to find which chunk each coordinate is in.
enum POSITION_TYPE
{
TOP_LEFT,
TOP_RIGHT,
BOTTOM_LEFT,
BOTTOM_RIGHT
};
TILE_WIDTH = 3;
for each coordinate
findChunk(Math.Floor(coordinate.X / TILE_WIDTH),
Math.Floor(coordinate.y / TILE_WIDTH));
markBuilt(coordinate.PositionType(),
          coordinate.X % TILE_WIDTH,  // finds offset into local coordinates of chunk
          coordinate.Y % TILE_WIDTH); // finds offset into local coordinates of chunk
Each position type requires a different algorithm for filling squares, but they're all simple alterations (and they could possibly be done away with through clever use of negatives or positives).
What this does is find which chunk you're in by realizing that:
• you're always a positive number
• Your tile-width is constant, so dividing by 3 gives you your chunk given a coordinate
• Your tile-local-grid is figured out by the modulus (the remainder of a divide) of Tile Width, for the same reasons as above
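The same divide/modulus mapping can be sketched in a language-agnostic way (Python here; `CHUNK` plays the role of `TILE_WIDTH` above):

```python
CHUNK = 3  # tiles per chunk along each axis (TILE_WIDTH in the pseudocode)

def to_chunk_local(x, y):
    # Integer division picks the chunk; the remainder is the tile's
    # offset inside that chunk.
    return (x // CHUNK, y // CHUNK), (x % CHUNK, y % CHUNK)

# Global tile (4, 2) lives in chunk (1, 0) at local position (1, 2):
print(to_chunk_local(4, 2))  # -> ((1, 0), (1, 2))
```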
NOTE: Given that you'll only be marking up the grid when a building is created or destroyed, the divides/modulus aren't so bad. You may consider some hackery to avoid them; such as "x * .33" if you're using floats or moving to 2x2 or 4x4 super-tiles so that you can use bit-shifting to perform your divide for you (in the case of longs or ints.)
• Thanks for your solution, I like your use of the modulus operator and reducing 2 loops into 1. – Drominus Jun 30 '18 at 13:00
|
|
# simple android app that allows user just to choose IP and port
I developed a web application for mobile devices, every customer installs the webapplication on his webserver. So I have this scenario:
Customer 1: application at 210.132.1.23:87
Customer 2: application at 210.2.13.13:9944
...
Customer N: application at 132.1.23.14:112
With Phone Gap Build I was able to make a simple app (= source code) that directly opens the webapp at a fixed address, but I would like to have the option to choose the IP:port, I cannot make N apps one per customer.
Is there a very simple andorid app that does this? Or do you know if it is possible to do this directly with Phone Gap?
Thanks.
Note: Of course it is possbile to develop this feature, anyway since I am totally new to this world, and as far as I can see in the next future all I need is this if Can find it "ready" it would be very good.
-
Do you want customers to select from the list of all available hosts or just customize your application for every customer in a simple manner? – a.ch. Mar 2 '12 at 8:12
i want the user be able to type in the host (like 210.132.1.23:87). The simplest feature is type one only host, and this would be also the best feature. The app can have just a splash screen and then a list with the host/hosts defined (only one in case we allow single host). And then a as the user clicks on a listitem open the browser at the desired address. THne in android menu simply have delete host, add host. – user193655 Mar 2 '12 at 8:32
I have implemented the similar solution in PhoneGap + jQM. In our case though we have kept a public service which returns list of server to avoid manual typing.
I have put a quick (and dirty) solution in following fiddle. This might not be what you are looking for but may give you some idea. It is more or less what @ChrLipp has already mentioned.
http://jsfiddle.net/dhavaln/qcRuD/
Let me know if you would like to add anything.
-
ok thanks for the example. It is anyway a good idea. – user193655 Mar 8 '12 at 8:44
You have to provide an option dialog within your webapp which is also available offline. On first startup you force the user to enter this data. On following startups you redirect the user to a dynamic build URL. You store the ip address with HTML5 storage mechanism.
Alternatively - not possible with Phonegap build since this service only accept the webapp part and therefore doesn't allow native extensions - you mix native options with the webapp. The settings are implemented native, when you enter the phonegap activity, you use the URL from the options.
But it is very hard to provide you with specific answers since you don't state which (gui) framework(s) you are using.
-
I use Delphi with Raudus, but some will apply if using other Delphi RAD web app developement frameworks like UniGUI or Intraweb/VCLforTheWeb. – user193655 Mar 5 '12 at 16:29
what JS-Framework are you using for your webb app? – ChrLipp Mar 5 '12 at 17:02
|
|
## Chemistry (4th Edition)
(a) nitrogen trifluoride: NF$_{3}$ (b) phosphorus pentabromide: PBr$_{5}$ (c) sulfur dichloride: SCl$_{2}$
To name a molecular compound, we keep the name of the first element as-is, adding a prefix denoting the number of atoms of that element that are present in the compound only if that number is greater than one. For the second element in the compound, we change the ending of the element to $-ide$ and add a prefix reflecting how many atoms of that element are in the compound. To write molecular formulas given the structural formulas, we order the elements according to the following list: C, P, N, H, S, I, Br, Cl, O, F; the element that appears earlier in the list goes first, and the one that appears later in the list goes second. (a) nitrogen trifluoride: NF$_{3}$ (b) phosphorus pentabromide: PBr$_{5}$ (c) sulfur dichloride: SCl$_{2}$
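The prefix rule can be expressed as a small sketch (a hypothetical helper in Python; it covers only simple binary compounds and ignores vowel elision such as "monoxide"):

```python
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta", 6: "hexa"}

def name_binary(first, second_ide, n1, n2):
    # Prefix the first element only when more than one atom is present;
    # the second element always takes a prefix plus the -ide ending.
    head = first if n1 == 1 else PREFIXES[n1] + first
    return head + " " + PREFIXES[n2] + second_ide

print(name_binary("nitrogen", "fluoride", 1, 3))   # -> nitrogen trifluoride
print(name_binary("phosphorus", "bromide", 1, 5))  # -> phosphorus pentabromide
print(name_binary("sulfur", "chloride", 1, 2))     # -> sulfur dichloride
```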
|
|
# Why is the electric field outside a capacitor zero?
Date created: Tue, Jun 8, 2021 11:15 AM
Those who are looking for an answer to the question «Why is the electric field outside a capacitor zero?» often ask the following questions:
### 👉 Why is the electric field zero outside a capacitor?
Where q1 is the charge producing electric field, and q2 is the charge under cosideration. Hence, electric field is given by: E=F/q2; hence it can be seen that electric field is 0 only when either force between them is zero or the charge which produces the electric field is zero.
### 👉 Why is the electric field outside a capacitor zero force?
The electric field in the air around a capacitor is small, but not zero. A charged capacitor forms an electric dipole. But importantly for this case, the electric field in the leads of the capacitor become 0 only when the potential difference (voltage) at a capacitor plate is equal to the voltage of the battery terminal it is connected to.
### 👉 Why is the electric field outside a capacitor zero light?
The usual way you'd show that the electric field outside an infinite parallel-plate capacitor is zero is by using the fact (derived using Gauss's law) that the electric field above an infinite plate, lying in the $xy$-plane for example, is given by $\vec{E}_1 = \frac{\sigma}{2\epsilon_0}\hat{k}$, where $\sigma$ is the surface charge density of the plate.
Because the charge on the two plates is the same magnitude, and of opposite sign, so that the net charge is zero. A spherical capacitor is spherically symmetric, so that the electric field must point radially outward everywhere, and be equal in ma...
The charge distributions $+Q$ and $-Q$ are effectively very close to the surface of the metal between each plate. From Coulomb’s law, when a positive test charge is placed anywhere within this region and released, it will mov...
The electric field due to a plate of the capacitor is independent of the distance from it (it's uniform) provided it's not infinite. So if the finite identical plates have uniform charge density, away from the edges outside the capacitor the field should be 0.
The problem of determining the electrostatic potential and field outside a parallel plate capacitor is reduced, using symmetry, to a standard boundary value problem in the half space $z>0$.
The electric field outside the capacitor will still be zero as before, since a Gaussian surface enclosing both plates will still contain zero net charge. The electric field inside will still be $\frac{\sigma}{\epsilon}$ as before, where $\sigma = \sigma_1 + \sigma_2$, because a Gaussian surface enclosing a single plate will still contain a net ...
Now, if another, oppositely charge plate is brought nearby to form a parallel plate capacitor, the electric field in the outside region (A in the images below) will fall to essentially zero, and that means $$E_\text{inside} = \frac{\sigma}{\epsilon_0}$$ There are two ways to explain this: The simple explanation is that in the outside region, the electric fields from the two plates cancel out.
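Numerically, the superposition argument looks like this ($\sigma$ here is an arbitrary example value):

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
sigma = 1e-6       # surface charge density, C/m^2 (arbitrary example value)

E_plate = sigma / (2 * EPS0)   # field of one infinite charged plate

E_outside = E_plate - E_plate  # opposite-sign plates: fields cancel outside
E_inside = E_plate + E_plate   # between the plates: fields add to sigma/eps0

print(E_outside)  # -> 0.0
```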
As your voltage source moves past zero deg. it has 0 volts of output. However, the voltage is increasing quickly. So, the electric field strength in the dielectric of the cap is changing quickly, and as the field gets stronger, it pushes more electrons out of the positive side plate (due to increasing electric force on them created by the field).
If an imperfect constant current source charges the capacitor with infinite capacitance, the voltage drop across the capacitor will stay constantly zero and the constant DC current will ...
The electric field inside the conductors is also zero. If there were an electric field then the mobile charge carriers would feel a force on them and move, so it would cease to be static electricity.
We've handpicked 24 related questions for you, similar to «Why is the electric field outside a capacitor zero?» so you can surely find the answer!
### When electric field zero?
Gauss law states that the total electric flux through a hypothetical closed surface is always equal to (1/ε0) times the net charge enclosed by the surface. The electric flux is nothing but the rate of flow electric field through the given area. Since the electric charges are not present inside the conductors, the electric field will remain zero.
### Does zero electric flux imply zero electric field?
No. Flux is only the electric field component normal to some surface. Zero flux might just mean the electric field is parallel to the surface.
### Zero electric potential but non-zero electric field?
Hence the physical significance of a point where the electric field is non-zero, is that the electric field is non-zero, and that the potential is zero there has no physical significance, because the point-values of the potential are not unique.
### Does electric field change in capacitor?
Correct answer:Halved. The voltage drop through the capacitor needs to be equal to the voltage of the battery. The voltage drop of a parallel plate capacitor is equal to the internal electric field times the distance between them. From this, it can be seen that doubling the separation will halve the electric field.
### Electric potential is zero but non zero electric field?
Hence the physical significance of a point where the electric field is non-zero, is that the electric field is non-zero, and that the potential is zero there has no physical significance, because the point-values of the potential are not unique.
### Is electric potential zero when electric field is zero?
Yes, electric potential can be zero at a point even when the electric field is not zero at that point. Considering the case of the electric dipole will help us understand this concept.
### Can electric field be zero?
Where is the electric field equal to zero?
### Is electric field ever zero?
There are two charges. Charge 1 has a value of 1 nC and is located at the origin. Charge 2 is 5 nC at a position on the x-axis at a location of x = 0.3 met...
### Net electric field is zero?
Where is the net electric field zero? In Region II, between the charges, both vectors point in the same direction so there is no possibility of cancelling out. In Region III, the fields again point in opposite directions and there is a point where their magnitudes are the same. It is at this point where the net electric field is zero.
### When is electric field zero?
Where can the electric field be zero? There is a spot along the line connecting the charges, just to the “far” side of the positive charge (on the side away from the negative charge) where the electric field is zero. In general, the zero field point for opposite sign charges will be on the “outside” of the smaller magnitude charge…
### Where electric field equals zero?
Where is the electric field equal to zero?
### Where is electric field zero?
Where is the electric field equal to zero?
### Electric potential is zero when electric field strength is zero?
You're talking about masses in the opening sentence - there won't be a point where the potential is zero. There will be a point where the field strength is zero. The electric field strength is related to the GRADIENT of the potential, i.e. whenever the potential is max or min, the field strength and hence the resultant force is zero.
### Electrostatics - electric potential is zero but non zero electric field?
Hence the physical significance of a point where the electric field is non-zero, is that the electric field is non-zero, and that the potential is zero there has no physical significance, because the point-values of the potential are not unique.
### Is electric field strength zero when electric potential is zero?
Yes, electric potential can be zero at a point even when the electric field is not zero at that point. Considering the case of the electric dipole will help us understand this concept. At the midpoint of the charges of the electric dipole, the electric field due to the charges is non zero, but the electric potential is zero.
### Does dielectric effect electric field in capacitor?
• The polarization of the dielectric in the capacitor does reduce the effective electric field of the capacitor, but doesn't completely cancel it out. The reason is the molecules of the dielectric material are not perfectly polarized by the capacitor's electric field.
### What produces a uniform electric field capacitor?
You know that field in the region between the plates of parallel plate capacitor is uniform if the plates are separated by distance fairly small compared to size of the plates. The electric field of uniformly charged very large (in principle infinite) plane is uniform 3.3K views
### Can electric force zero on electric field?
Because F = qE, if there is no electric field at a point then a test charge placed at that point would feel no force. How can we calculate where that point is? If the point is a distance x from the +3Q charge, then it is x - 4 away from the -Q charge. If we define right as positive, we can write this as: k(3Q/x^2) - k(Q/(x - 4)^2) = 0
### Electric potential when electric field is zero?
Is electric potential zero when the electric field is zero? If the electric field is zero, then the potential has no gradient, i.e. the potential is constant across space. But potential is always measured relative to a baseline, so it can be taken to be zero. Where is the electric potential zero?
### When electric field is zero why isn't potential zero?
In those equations, V is the potential gradient/difference, not the electric potential. The electric potential is a scalar quantity. Even if the electric potential in a region is non-zero but constant, no electric field is generated - so in that region the electric field strength is zero but the electric potential is not zero.
### Calculate where electric field is zero?
In Region III, the fields again point in opposite directions and there is a point where their magnitudes are the same. It is at this point where the net electric field is zero. What happens at this point? Because F= qE, if there is no electric field at a point then a test charge placed at that point would feel no force.
### Can net electric field be zero?
(c) Thus, we can conclude that there is only one point at which the net electric field is zero. Let's say this point is a distance x to the left of the +2Q charge. Equating the magnitude of the field from one charge at that point to the magnitude of the field from the second charge gives an equation that can be solved for x.
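The zero-field condition worked out above (for the layout assumed in that answer: +3Q at the origin and -Q at x = 4) can be solved in closed form; a small sketch:

```python
# Solving k(3Q/x^2) = k(Q/(x-4)^2), i.e. 3(x - 4)^2 = x^2, so
# sqrt(3)*(x - 4) = +/- x.  The physical root lies beyond the smaller
# (-Q) charge, i.e. x > 4, as the text notes for opposite-sign charges.
import math

x = 4 * math.sqrt(3) / (math.sqrt(3) - 1)   # = 6 + 2*sqrt(3)

print(round(x, 3))   # 9.464
```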
|
|
# Vector Space R^3 and rigorous proof
1. Mar 5, 2012
### bugatti79
1. The problem statement, all variables and given/known data
1) Consider the 3 norms on the vector space R^3, $\|\cdot\|_i$ where $i = 1, 2, \infty$. Given x = (2, -5, 3) and y = (-3, 2, 0).
Calculate $\|x\|_1, \|x+y\|_2, \|x-2y\|_\infty$
2)Prove Rigorously that
$\displaystyle \lim_{n \to \infty}\frac{4n^2+1}{2n^2-1}=2$
2. Relevant equations
$x=(x_1,x_2,x_3), \quad \|x\|_1= \sum^{3}_{i=1} |x_i|, \quad \|x\|_2= \left(\sum^{3}_{i=1} |x_i|^2\right)^{1/2}, \quad \|x\|_\infty=\max_{i=1,2,3} |x_i|$
I calculate
1) $\|x\|+1= |x_1|+|x_2|+|x_3|=10$
$\|x+y\|_2=\sqrt{(2^2)+(-5)^2+(3^2)+(-3^2)+(2^2)+(0^2)}=\sqrt{51}$
$\|x-2y\|_\infty= |3-2(-3)|=9$
2) Proof:
let $\epsilon > 0$ be given.
Find $\displaystyle n_0 \in \mathbb{N}$ s.t. $\left | \frac{4n^2+1}{2n^2-1} -2 \right | < \epsilon \ \forall n> n_0$
$\left | \frac{4n^2+1-4n^2+2}{2n^2-1} \right | =\left | \frac{3}{2n^2-1} \right |=\frac{3}{2n^2-1}< \epsilon$ iff
$1+\frac{3}{\epsilon} < 2n^2$, i.e.
$\forall n>n_0 > \sqrt{\frac{1}{2} \left(1+\frac{3}{\epsilon}\right)}$ we have
$\left | \frac{4n^2+1}{2n^2-1} -2 \right | < \epsilon \ \forall n > n_0$
Last edited: Mar 6, 2012
2. Mar 5, 2012
### lanedance
are x and y given in the question? the way its written is pretty confusing
3. Mar 6, 2012
### Staff: Mentor
Where did the 10 come from? Are you given a specific vector x? If so, you didn't include this information in the problem statement.
The 2nd problem looks fine.
Last edited: Mar 6, 2012
4. Mar 6, 2012
### bugatti79
I have updated original post. Thanks
5. Mar 6, 2012
### Fredrik
Staff Emeritus
You need to get into the habit of reading the stuff you write an extra time before you post it. It took me a while to figure out that when you wrote $\|x\|+1$, you meant $\|x\|_1$. Assuming that I'm right about that, you did that one correctly.
Your calculations of $\|x+y\|_2$ and $\|x-2y\|_\infty$ are however horribly wrong. You showed in the other thread that you don't know what x+y means. Here you're making it clear that you don't know what 2y means. You will not be able to solve any problem that involves expressions like x+y or 2y until you have made sure that you understand what they mean. So please, look up the definition of the addition and scalar multiplication operations on ℝ² and ℝ³. Forget everything else, and just use your books to try to answer this:
If x = (2,-5,3) and y = (-3, 2,0), then what is
a) x+y
b) 2y
c) x-2y
Edit: By "scalar multiplication operation", I mean the rule for how to multiply a vector by a number. (In particular, I don't mean the rule for how to multiply two vectors to get a number. That operation is often called a "scalar product". I prefer the term "inner product" for that, so that it sounds less similar to "scalar multiplication", which is just multiplication by a scalar (i.e. a number)).
Last edited: Mar 6, 2012
6. Mar 6, 2012
### bugatti79
x+y = (-1,-3,3)
2y = (-6,4,0)
x-2y = (8, -9,3)
above ok? Will have a look at other thread.
7. Mar 6, 2012
### Staff: Mentor
#1 is incorrect and #3 is incorrect. #2 looks fine.
For #1, it's not ||x|| + 1 (which you have already been told - please read the responses you get more carefully); it's ||x||1. IOW, it's the "1" norm (or taxicab norm).
For #3, evaluate x - 2y (you already did), and take the infinity norm of that vector. I don't see how you came up with |-1 - (-5)|, which by the way, happens to be 4, not 5. In any case, neither 4 nor 5 is the answer.
IMO, you spend too much time crafting your stuff in LaTeX, and not enough time dealing with the actual mathematics. It is preferable to have something crude-looking that is correct, than something very nicely formatted that is completely wrong.
8. Mar 6, 2012
### bugatti79
Disastrous typos.
It should read |2-(-6)|=8 for #3. Ie, I have taken the maximum calculated value from (8,-9,3)
Ok, but I thought having crude looking stuff people wont read it, they'll just skim over it and exit.
9. Mar 6, 2012
### Fredrik
Staff Emeritus
It looks like you're still not thinking about how the various notations you're working with are defined. $\max_i |x_i|$ is the largest member of the set $\big\{|x_1|,|x_2|,|x_3|\big\}$.
I think you're confusing yourself by trying to do several things at once. To evaluate $\|x-2y\|_\infty$, you must first use the definitions of x, y, scalar multiplication and addition to find x-2y. Now you can rewrite $\|x-2y\|_\infty$ in the form $\|(a,b,c)\|_\infty$. Then you use the definition of $\|\ \|_\infty$.
Last edited: Mar 6, 2012
10. Mar 6, 2012
### Staff: Mentor
I don't think that they are merely typos.
Where does 2 - (-6) come from?
To evaluate this expression: ||x - 2y||
focus on one thing at a time.
1. Evaluate x - 2y. This is a vector in R3. It is NOT the difference of two numbers.
2. Calculate the maximum norm of this vector.
I guarantee that people will be more impressed by something that makes sense, over something that makes no sense, but isn't quite as pretty. Certainly it's nice to have both, but if you have to choose, lean toward getting the mathematics right.
Besides, and I've said this before to you, posts with lots and lots of LaTeX take an inordinate amount of time to load in some browsers, and that ticks me off when it takes forever for a page to load. For that reason I tend to use LaTeX only where I need to use it.
11. Mar 6, 2012
### Fredrik
Staff Emeritus
Is that really still a problem? Some old version of IE (that very few people use) had problems before, but I thought it was fixed by the recent upgrade of MathJax. I'm using Firefox, and I've never had any problems.
12. Mar 6, 2012
### Staff: Mentor
Yes, it's still a problem in IE9, which is not an old version.
13. Mar 6, 2012
### bugatti79
1) $x-2y=(x_1-2y_1, x_2-2y_2, x_3-2y_3)$
2)$\|x-2y\|_\infty=max (x_1-2y_1, x_2-2y_2, x_3-2y_3)$ for i=1,2,3
$=(8, -9, 3)$
$=8$?
14. Mar 6, 2012
### Staff: Mentor
= ? You are given specific vectors.
You haven't taken the max norm yet, so why did that go away?
No, but at least you're not committing grievous errors.
The max norm is the maximum |xi|, for i = 1, 2, 3.
15. Mar 6, 2012
### Fredrik
Staff Emeritus
This is a good start. Edit: But as Mark said, you were given specific vectors x and y, so you should use the numbers you've been given.
This notation is weird. The words "for i=1,2,3" add no information. Either write $\max_{i\in\{1,2,3\}}\{|x_i-2y_i|\}$ or $\max\{|x_1-2y_1|,|x_2-2y_2|,|x_3-2y_3|\}$.
A real number is never equal to a triple of real numbers.
A triple of real numbers is never equal to a real number. Also, you seem to have forgotten about the absolute value symbols in the definition of $\|\ \|_\infty$.
16. Mar 6, 2012
### bugatti79
It is equal to (8,-9,3)
So the maximum norm is 9?
Not sure I follow what you are trying to say.
17. Mar 6, 2012
### Staff: Mentor
It's not clear to me what "it" refers to.
||<8, -9, 3>|| = 9
What Fredrik was saying is that you are saying that incomparable things are equal. A vector is not a number; the norm of a vector is a number. You can't compare (i.e., with =) a vector with a number.
Boiled down a bit, what you said was
||<x1 - 2y1, x2 - 2y2, x3 - 2y3>|| = <8, -9, 3> = 8
The first thing above is a number. The second thing is a vector. The third thing is a number. Again, a number can never be equal to a vector in R3 and vice-versa.
Also notice that I used no LaTeX in the above.
18. Mar 6, 2012
### bugatti79
Ok, one of my problems is that I'm very sloppy with definitions. I need to buckle up.
Thanks, at least this thread is finished.
19. Mar 6, 2012
### Fredrik
Staff Emeritus
OK, since we have arrived at the correct final result (but still no acceptable way of arriving at that result), I will show you how you should have done this.
Since x = (2,-5,3) and y = (-3, 2,0), we have x-2y=(2,-5,3)-2(-3,2,0)=(8,-9,3). So
$$\|x-2y\|_\infty=\|(8,-9,3)\|_\infty=\max\big\{|8|,\,|-9|,\,|3|\big\}=9.$$ As you can see, this is a trivial problem if you just use the definitions and do one thing at a time.
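For reference, the three norm values can also be checked numerically; a quick NumPy sketch (an addition for verification, not part of the thread's pen-and-paper work):

```python
import numpy as np

x = np.array([2, -5, 3])
y = np.array([-3, 2, 0])

norm1 = np.abs(x).sum()                 # ||x||_1 = |2| + |-5| + |3|
norm2 = np.sqrt(((x + y) ** 2).sum())   # ||x + y||_2 with x + y = (-1, -3, 3)
norm_inf = np.abs(x - 2 * y).max()      # ||x - 2y||_inf with x - 2y = (8, -9, 3)

print(norm1, norm_inf)   # 10 9
```

This confirms $\|x\|_1=10$ and $\|x-2y\|_\infty=9$, and gives the correct value $\|x+y\|_2=\sqrt{19}$.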
20. Mar 6, 2012
### Staff: Mentor
And it wouldn't hurt to spend some time reviewing vector algebra. It doesn't seem that you have a good handle on that area, which is preventing you from making progress in the area you're currently studying.
|
|
# Math Help - A tank of water.
1. ## A tank of water.
I am getting perplexed with this problem. I have my method below following the question but the more I look at it the more it doesn't seem right.
A tank contains a perfectly mixed solution of 5kg of salt and 500 litres of water. Starting at t=0, fresh water is poured into the tank at a rate of 4 litres/min. A mixing device maintains homogeneity. The solution leaves the tank at a rate of 4 litres/min.
a). What is the differential equation governing the amount of salt in the tank at any time?
There is a part b to this question but I can figure that part out if I can do this part.
I need to find $\frac{dm}{dt}$ where m is the mass of salt in kg and t is the time in minutes. Since we have 5kg and 500 litres to start with the concentration is $\frac{5}{500}=\frac{1}{100}$kg/litre.
Using the relationship $\frac{dm}{dt}=\frac{dm}{dv} \frac{dv}{dt}$ (from the chain rule) I get:
$\frac{dm}{dt}=\frac{1}{100}-4t\frac{1}{100}$.
Here $\frac{dm}{dv}=\frac{1}{100}-4t\frac{1}{100}$ and $\frac{dv}{dt}=1$ since the volume of the tank is staying the same.
To me it makes sense because the mass of the salt with respect to time is getting smaller, but as time increases the concentration would get further from $\frac{1}{100}$ so it wouldn't be $4t\frac{1}{100}$ at the end of the expression. I can't see a way past this unless I could somehow incorporate natural logarithms (ie. exponential decay).
Am I approaching this from the right angle?
A tank contains a perfectly mixed solution of 5kg of salt and 500 litres of water. Starting at t=0, fresh water is poured into the tank at a rate of 4 litres/min. A mixing device maintains homogeneity. The solution leaves the tank at a rate of 4 litres/min.
a). What is the differential equation governing the amount of salt in the tank at any time?
Let $y(t)$ be the amount of salt at time $t$.
This means,
$\frac{dy}{dt} = \text{ rate in } - \text{ rate out }$
The rate (of salt) in is 0 since only clean water is coming in.
The rate (of salt) out is $\frac{y}{500}\cdot 5 = \frac{y}{100}$.
Thus, $\frac{dy}{dt} = - \frac{y}{100} \text{ and }y(0)=5$
3. awesome!
This allowed me to do part B (i figured i'd post this because something I predicted happens!)
b). In how many minutes will the concentration of salt reach a 0.1% level (ie. initial concentration is 1%)?
$\frac{dy}{dt}=\frac{-y}{100}$
Solving this gives:
$y=Ae^{\frac{-t}{100}}$ <---- The natural logarithm I mentioned! =O
$t=0, \ y=5$
$5=A$
$y=5e^{\frac{-t}{100}}$
$\frac{0.5}{5}=e^{\frac{-t}{100}}$
$\ln\frac{0.5}{5}=\frac{-t}{100}$
$t=-100\ln\frac{0.5}{5}$
$t=230 \ minutes \ and \ 16 \ seconds$
EDIT: I was talking to one of my friends today and she pointed out that the step $
\frac{y}{500}\cdot 5 = \frac{y}{100}
$
was a little strange. Wouldn't it make more sense if it was $
\frac{y}{500}\cdot 4 = \frac{y}{125}
$
since 4 litres of water are leaving the tank?
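The correction raised in the EDIT (an outflow of 4 litres/min gives a rate of $\frac{y}{125}$ rather than $\frac{y}{100}$) would change the answer to part b; a quick sketch comparing the two, where the helper function is illustrative:

```python
import math

def time_to_fraction(k, fraction=0.1):
    # y(t) = 5 * exp(-t/k); solve 5 * exp(-t/k) = 5 * fraction for t
    return -k * math.log(fraction)

print(time_to_fraction(100))   # ~230.26 min, matching the thread's answer
print(time_to_fraction(125))   # ~287.82 min with the corrected 4 L/min rate
```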
|
|
## Top new questions this week:
### Is it worthwhile to give off-topic talks?
I am a graduate student. Occasionally for some reason I am asked to give a talk on my research at a conference whose stated purpose is almost completely unrelated to my research. To preserve my …
soft-question career
### Does Fermat's last theorem hold in the ordinals?
My question is whether there are no nontrivial solutions in the ordinals of the equations arising in Fermat's last theorem $$x^n+y^n=z^n$$ where $n\gt 2$, and where we use the natural ordinal …
set-theory diophantine-equations
asked by Joel David Hamkins 23 votes
### In any Lie group with finitely many connected components, does there exist a finite subgroup which meets every component?
This question concerns a statement in a short paper by S. P. Wang titled “A note on free subgroups in linear groups" from 1981. The main result of this paper is the following theorem. Theorem (Wang, …
gr.group-theory lie-groups
### The exponent of Ш of y^2 = x^3 + px, where p is a Fermat prime
For $d$ a non-zero integer, let $E_d$ be the elliptic curve $$E_d \colon y^2 = x^3+dx.$$ When we let $d$ be $p = 2^{2^k}+1$, for $k \in \{1,2,3,4\}$, sage tells us that, conditionally on BSD, \# …
nt.number-theory elliptic-curves tate-shafarevich-groups
### Verlinde's formula
"Verlinde's formula" predicts the dimension of the space of conformal blocks of a chiral CFT. Depending on... • which chiral CFT one considers (does one restrict to WZW models, or not?) …
ag.algebraic-geometry reference-request conformal-field-theory vertex-algebras
### Counting 2m X 2m 0-1 matrices with m ones in each row and each column.
Given $m>1$, what is the number of $2m\times 2m$ matrices, made of $0$ and $1$, such that each row has exactly $m$ ones, and each column has exactly $m$ zeros. I am not sure if this is a …
co.combinatorics
### A function composed with itself produces the identity
Let $B$ be the closed unit ball in $\mathbb R^3$ and $f: B\to B$ continuous, such that $f\circ f$ is the identity (i.e., $f\circ f=\mathbb 1_B$) and $f$ restricted on $\partial B$ is also the identity …
gt.geometric-topology gn.general-topology involutions
## Greatest hits from previous weeks:
### Linear Algebra Texts?
Can anyone suggest a relatively gentle linear algebra text that integrates vector spaces and matrix algebra right from the start? I've found in the past that students react in very negative ways to …
books big-list ra.rings-and-algebras linear-algebra
### Why is differentiating mechanics and integration art?
It is often said that "Differentiation is mechanics, integration is art." We have more or less simple rules in one direction but not in the other (e.g. product rule/simple <-> integration by …
ca.analysis-and-odes real-analysis integration
## Can you answer these?
### Equiareal shapes in $\mathbb{R}^d$
There was quite a bit of work on the so-called equichordal problem throughout the 20th century, to decide if some plane convex curve could have two equichordal points. A point is equichordal for a …
mg.metric-geometry
### List of cubical spaces
Suppose I have a three-dimensional cube (I tend to think of it as a regular ideal cube in $\mathbb{H}^3,$ but you don't have to). I glue up its sides in some way to obtain topological spaces. The …
gt.geometric-topology
|
|
# The Length-Constrained Brachistochrone¶
Things you'll learn through this example
• How to connect the outputs from a trajectory to a downstream system.
This is a modified take on the brachistochrone problem. In this instance, we assume that the quantity of wire available is limited. Now, we seek the minimum-time brachistochrone trajectory subject to an upper limit on the arc length of the wire.
The most efficient way to approach this problem would be to treat the arc-length $S$ as an integrated state variable. Here, as is often the case in real-world MDO analyses, the implementation of our arc-length function is not integrated into our pseudospectral approach. Rather than rewrite an analysis tool to accommodate the pseudospectral approach, the arc-length analysis simply takes the result of the trajectory in its entirety and computes the arc-length constraint via the trapezoidal rule:
\begin{align} S &= \frac{1}{2} \sum_{i=1}^{N-1} \left( \sqrt{1 + \frac{1}{\tan^2{\theta_{i-1}}}} + \sqrt{1 + \frac{1}{\tan^2{\theta_{i}}}} \right) \left(x_i - x_{i-1} \right) \end{align}
The OpenMDAO component used to compute the arclength is defined as follows:
from __future__ import print_function, division, absolute_import

import numpy as np

from openmdao.api import ExplicitComponent


class ArcLengthComp(ExplicitComponent):

    def initialize(self):
        self.options.declare('num_nodes', types=(int,))

    def setup(self):
        nn = self.options['num_nodes']

        self.add_input('x', val=np.ones(nn), units='m',
                       desc='x at points along the trajectory')

        self.add_input('theta', val=np.ones(nn), units='rad',
                       desc='wire angle with vertical along the trajectory')

        self.add_output('S', val=1.0, units='m', desc='arclength of wire')

        self.declare_partials(of='S', wrt='*', method='cs')

    def compute(self, inputs, outputs, discrete_inputs=None, discrete_outputs=None):
        x = inputs['x']
        theta = inputs['theta']

        dy_dx = -1.0 / np.tan(theta)
        dx = np.diff(x)

        f = np.sqrt(1 + dy_dx**2)

        # trapezoidal rule
        fxm1 = f[:-1]
        fx = f[1:]

        outputs['S'] = 0.5 * np.dot(fxm1 + fx, dx)
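As a standalone sanity check of the trapezoidal rule used in `ArcLengthComp.compute` (synthetic inputs, not part of the Dymos example): a constant 45-degree wire angle gives a straight line of slope -1, whose arc length is known in closed form.

```python
import numpy as np

n = 25
theta = np.full(n, np.pi / 4)   # constant 45-degree wire angle
x = np.linspace(0.0, 10.0, n)   # x locations, m

dy_dx = -1.0 / np.tan(theta)    # dy/dx = -cot(theta)
f = np.sqrt(1.0 + dy_dx ** 2)   # integrand sqrt(1 + (dy/dx)^2)

# Same trapezoidal rule as in compute() above
S = 0.5 * np.dot(f[:-1] + f[1:], np.diff(x))

# A constant 45-degree angle is a straight line of slope -1, so the
# arc length over x in [0, 10] is 10 * sqrt(2).
print(round(S, 6))   # 14.142136
```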
Note
In this example, the number of nodes used to compute the arclength is needed when building the problem. The transcription object is initialized and its attribute grid_data.num_nodes is used to provide the number of total nodes (the number of points in the timeseries) to the downstream arc length calculation.
import openmdao.api as om
import dymos as dm
import matplotlib.pyplot as plt
from dymos.examples.brachistochrone.brachistochrone_ode import BrachistochroneODE
from dymos.examples.length_constrained_brachistochrone.arc_length_comp import ArcLengthComp
MAX_ARCLENGTH = 11.9
OPTIMIZER = 'SLSQP'
p = om.Problem(model=om.Group())
if OPTIMIZER == 'SNOPT':
p.driver = om.pyOptSparseDriver()
p.driver.options['optimizer'] = OPTIMIZER
p.driver.opt_settings['Major iterations limit'] = 1000
p.driver.opt_settings['Major feasibility tolerance'] = 1.0E-6
p.driver.opt_settings['Major optimality tolerance'] = 1.0E-5
p.driver.opt_settings['iSumm'] = 6
p.driver.opt_settings['Verify level'] = 3
else:
p.driver = om.ScipyOptimizeDriver()
p.driver.declare_coloring()
# Create the transcription so we can get the number of nodes for the downstream analysis
# (the transcription definition was missing from this excerpt; the segment count is illustrative)
tx = dm.GaussLobatto(num_segments=20, order=3)
traj = dm.Trajectory()
phase = dm.Phase(transcription=tx, ode_class=BrachistochroneODE)
traj.add_phase('phase0', phase)
p.model.add_subsystem('traj', traj)
phase.set_time_options(fix_initial=True, duration_bounds=(.5, 10))
# (the start of this add_control call, and the phase.add_state calls, were lost in the
# excerpt; the control bounds are illustrative)
phase.add_control('theta', units='deg', lower=0.01, upper=179.9,
                  continuity=True, rate_continuity=True)
# Minimize time at the end of the phase
phase.add_objective('time', loc='final')
# p.model.options['assembled_jac_type'] = top_level_jacobian.lower()
# p.model.linear_solver = DirectSolver(assemble_jac=True)
# Add the arc length component
p.model.add_subsystem('arc_length_comp',
                      subsys=ArcLengthComp(num_nodes=tx.grid_data.num_nodes))
# Limit the total arc length of the wire (this constraint is implied by MAX_ARCLENGTH above)
p.model.add_constraint('arc_length_comp.S', upper=MAX_ARCLENGTH, ref=1)
p.model.connect('traj.phase0.timeseries.controls:theta', 'arc_length_comp.theta')
p.model.connect('traj.phase0.timeseries.states:x', 'arc_length_comp.x')
p.setup(check=True)
p.set_val('traj.phase0.t_initial', 0.0)
p.set_val('traj.phase0.t_duration', 2.0)
p.set_val('traj.phase0.states:x', phase.interpolate(ys=[0, 10], nodes='state_input'))
p.set_val('traj.phase0.states:y', phase.interpolate(ys=[10, 5], nodes='state_input'))
p.set_val('traj.phase0.states:v', phase.interpolate(ys=[0, 9.9], nodes='state_input'))
p.set_val('traj.phase0.controls:theta', phase.interpolate(ys=[5, 100], nodes='control_input'))
p.set_val('traj.phase0.parameters:g', 9.80665)
p.run_driver()
p.record(case_name='final')
# Plot results
SHOW_PLOTS = True  # this flag was undefined in the excerpt
if SHOW_PLOTS:
# Generate the explicitly simulated trajectory
exp_out = traj.simulate()
# Extract the timeseries from the implicit solution and the explicit simulation
x = p.get_val('traj.phase0.timeseries.states:x')
y = p.get_val('traj.phase0.timeseries.states:y')
t = p.get_val('traj.phase0.timeseries.time')
theta = p.get_val('traj.phase0.timeseries.controls:theta')
x_exp = exp_out.get_val('traj.phase0.timeseries.states:x')
y_exp = exp_out.get_val('traj.phase0.timeseries.states:y')
t_exp = exp_out.get_val('traj.phase0.timeseries.time')
theta_exp = exp_out.get_val('traj.phase0.timeseries.controls:theta')
fig, axes = plt.subplots(nrows=2, ncols=1)
axes[0].plot(x, y, 'o')
axes[0].plot(x_exp, y_exp, '-')
axes[0].set_xlabel('x (m)')
axes[0].set_ylabel('y (m)')
axes[1].plot(t, theta, 'o')
axes[1].plot(t_exp, theta_exp, '-')
axes[1].set_xlabel('time (s)')
axes[1].set_ylabel(r'$\theta$ (deg)')
plt.show()
return p
INFO: checking out_of_order
INFO: checking system
INFO: checking solvers
INFO: checking dup_inputs
INFO: checking missing_recorders
WARNING: The Problem has no recorder of any kind attached
INFO: checking comp_has_no_outputs
INFO: checking auto_ivc_warnings
Full total jacobian was computed 3 times, taking 0.468075 seconds.
Total jacobian shape: (220, 296)
Jacobian shape: (220, 296) ( 3.57% nonzero)
FWD solves: 4 REV solves: 14
Total colors vs. total size: 18 vs 220 (91.8% improvement)
Sparsity computed using tolerance: 1e-25
Time to compute sparsity: 0.468075 sec.
Time to compute coloring: 0.049731 sec.
Optimization terminated successfully. (Exit mode 0)
Current function value: 1.808598520786357
Iterations: 5
Function evaluations: 5
|
|
# please i need help fast and need why
if total liabilities increased by $14,000 during a period of time and owner's equity decreased by $6,000 during the same period, then the amount and direction (increase or decrease) of the period's change in total assets is:
1) $14,000 increase
2) $20,000 increase
3) $8,000 decrease
4) $8,000 increase
Answer: (4). Change in total assets = Change in total liabilities + Change in total equity = $14,000 + (-$6,000) = $8,000 increase.
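The answer follows from the accounting equation, Assets = Liabilities + Owner's Equity; a one-line check:

```python
# The accounting equation Assets = Liabilities + Owner's Equity implies
# (change in assets) = (change in liabilities) + (change in equity).
delta_liabilities = 14_000
delta_equity = -6_000

delta_assets = delta_liabilities + delta_equity
print(delta_assets)   # 8000 -> an $8,000 increase, answer (4)
```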
|
# How do you prove (cos[theta]+cot[theta])/(csc[theta]+1)= cos[theta]?
Using the definitions of $\cot \left(\theta\right)$ and $\csc \left(\theta\right)$, for $\sin \left(\theta\right) \ne 0$ and $\sin \left(\theta\right) \ne - 1$, we have
$\frac{\cos \left(\theta\right) + \cot \left(\theta\right)}{\csc \left(\theta\right) + 1} = \frac{\cos \left(\theta\right) + \frac{\cos \left(\theta\right)}{\sin \left(\theta\right)}}{\frac{1}{\sin \left(\theta\right)} + 1}$
$= \cos \left(\theta\right) \cdot \frac{1 + \frac{1}{\sin \left(\theta\right)}}{1 + \frac{1}{\sin \left(\theta\right)}}$
$= \cos \left(\theta\right) \cdot 1$
$= \cos \left(\theta\right)$
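A quick numerical spot-check of the identity (an illustration, not a substitute for the algebra above):

```python
# Check (cos t + cot t)/(csc t + 1) = cos t at a few angles where
# sin t is neither 0 nor -1.
import math

def lhs(theta):
    return (math.cos(theta) + math.cos(theta) / math.sin(theta)) / (1.0 / math.sin(theta) + 1.0)

checks = [math.isclose(lhs(t), math.cos(t), rel_tol=1e-9) for t in (0.3, 1.0, 2.0, 4.0)]
print(all(checks))   # True
```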
|
|
# FizzBuzz from a casual rubyist
As a casual Rubyist I am mainly interested in how idiomatic my solution is.
class Integer
def divisible_by?(n)
(self % n).zero?
end
end
def fizzbuzz(upper_bound)
1.upto(upper_bound).map do |number|
next "fizzbuzz" if number.divisible_by? 3 and number.divisible_by? 5
next "fizz" if number.divisible_by? 3
next "buzz" if number.divisible_by? 5
next number
end
end
puts fizzbuzz 100
Remarks: I borrowed the idea for monkeypatching Integer from @tokland here
If you're aiming for readability rather than efficiency or maintainability — and that is a reasonable tradeoff for FizzBuzz — then I would say "Well done." (You could test for divisibility by 15 for efficiency and avoid monkey-patching for maintainability.)
Conventionally, the output should be one entry per line, rather than an array.
If you're going to take an upper bound as a parameter, you may as well take a range, and the parameter should default to 1..100.
Therefore, a better way to call the code would be
fizzbuzz(1..100).each { |output| puts output }
or, using the default,
fizzbuzz.each { |output| puts output }
Since we have hardcoded the divisor as 3 and 5, we could do this:
next "fizzbuzz" if number.divisible_by? 15
If you wish to go by the fact that your program should dictate the problem as it is, we could use the following:
• use a flag: isFizz = (num%3 == 0)
• use a flag: isBuzz = (num%5 == 0)
• use the boolean to decide the output.
Analyse the number of times the modulo operation is performed here:
// for n numbers
next "fizzbuzz" if number.divisible_by? 3 and number.divisible_by? 5
100 times (once for each number) + 33 times (if number.divisible_by? 3 will be true 33 times and thus the second operation will be performed)
next "fizz" if number.divisible_by? 3
fizzbuzz above will be false 94 times, so this will happen 94 times.
next "buzz" if number.divisible_by? 5
fizz above will be false 67 times, so this will happen 67 times.
Total = 100 + 33 + 94 + 67 = 294
For using 15, total = 100 + 94 + 67 = 261
Using the booleans that I suggested, you will end up with 200
• I am not sure what this answer is trying to say. How are you counting the number of divisions? There are no divisions in the code. Maybe you mean “modulo operations” or “method invocations”? Currently, your code snippet is full of magic numbers without much indication what they could mean. Your analysis is probably interesting, but you need to explain what you did. Also, Ruby uses a hash/octothorpe # to introduce comments. – amon Dec 27 '14 at 16:23
• @amon yes, it seems like thepace was trying to count the number of modulo operations. I also found the answer unclear, but I managed to "decipher" it and tried to improve it. – Simon Forsberg Dec 27 '14 at 16:29
• By the way, it is perfectly possible to do a FizzBuzz with 0 modulo operations: codereview.stackexchange.com/a/56896/31562 . It would increase other operations though. And honestly, 294, 261, or 200 modulo operations... is the minor possible performance gain really worth it? – Simon Forsberg Dec 27 '14 at 16:31
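The modulo-operation counts discussed above can be reproduced with a small instrumented sketch (written in Python purely for counting; it mirrors the Ruby control flow, including the short-circuiting `and`):

```python
def modulo_ops_chained(n=100):
    ops = 0
    for i in range(1, n + 1):
        ops += 1                  # divisible_by? 3 in the fizzbuzz test
        if i % 3 == 0:
            ops += 1              # 'and' short-circuits, so % 5 runs only here
            if i % 5 == 0:
                continue          # -> "fizzbuzz"
        ops += 1                  # divisible_by? 3 in the fizz test
        if i % 3 == 0:
            continue              # -> "fizz"
        ops += 1                  # divisible_by? 5 in the buzz test
    return ops

def modulo_ops_flags(n=100):
    return 2 * n                  # one % 3 and one % 5 per number

print(modulo_ops_chained(), modulo_ops_flags())   # 294 200
```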
|
|
# A researcher is studying the population of a small town of N=3000 people.
#### soklang
##### New Member
A researcher is studying the population of a small town of N=3000 people. She's interested in estimating p for several yes/no questions on a survey. The estimate is to be within E=0.05 of the true proportion.
Q1: If, from the pilot study, the proportion is estimated to be 0.27 and the 90% confidence level is used, what is the minimum sample size required?
Q2: If the 95% confidence level is used, what is the minimum sample size required?
Q1:
200
213
243
90
Q2:
341
385
384
95
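For reference, a sketch of the standard sample-size formula $n_0 = z^2\,p(1-p)/E^2$, with and without the finite-population correction for N = 3000. Which convention the course intends (correction or not, ceiling or nearest rounding) is an assumption, but both conventions reproduce values among the answer choices:

```python
import math

def sample_size(z, p, E, N=None):
    n0 = z ** 2 * p * (1 - p) / E ** 2
    if N is not None:
        # finite-population correction: n = n0 / (1 + (n0 - 1)/N)
        n0 = n0 / (1 + (n0 - 1) / N)
    return math.ceil(n0)

# Q1: 90% confidence (z ~ 1.645), pilot estimate p = 0.27
print(sample_size(1.645, 0.27, 0.05))          # 214 (213 if rounded to nearest)
print(sample_size(1.645, 0.27, 0.05, N=3000))  # 200 with the correction

# Q2: 95% confidence (z ~ 1.96), conservative p = 0.5
print(sample_size(1.96, 0.5, 0.05))            # 385
print(sample_size(1.96, 0.5, 0.05, N=3000))    # 341 with the correction
```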
|
|
# Show that all roots are complex
Show that all roots of $p(x) = 11x^{10} - 10x^9 - 10x + 11$ lie on the unit circle $|x| = 1$ in the complex plane.
My progress so far is shown in the attached image.
I tried messing around with the polar form complex numbers but my proof turned out to be incorrect. Any help/suggestions would be greatly appreciated. I simply assumed all the roots were complex because the graph of the polynomial shows that there are no real zeroes. The rational root theorem fails and I tried to use the Fundamental Theorem of Algebra. Also, I couldn't LaTeX 11x^10 for some reason...
• Try 11 x^{10} for $11 x^{10}\,$.Note that $(11x)(11x^9) \ne 11 x^{10}$ if that's what you meant. – dxiv Mar 2 '17 at 0:58
• whoops typo thank you @dxiv – Sanjoy Kundu Mar 2 '17 at 0:59
• @dxiv but the LaTeX for that still gave me problems. Maybe someone can edit this problem's formatting. – Sanjoy Kundu Mar 2 '17 at 1:00
Hint: $\;p(x)=x^5\left(11(x^5+\frac{1}{x^5}) - 10(x^4+\frac{1}{x^4})\right)$. Let $z=x+\frac{1}{x}\,$, express $x^2+\frac{1}{x^2}=z^2-2$ etc in terms of $z\,$, and derive an equation in $z$.
[ EDIT ] The resulting equation in $z$ is the quintic $\,11 z^5 - 10 z^4 - 55 z^3 + 40 z^2 + 55 z - 20 = 0\,$ which can be shown to have $5$ real distinct roots in $(-2,2)\,$. For each of those $z$ roots, the corresponding equation $x^2 - z\,x + 1 =0$ will have a pair of complex conjugate $x$ roots because the discriminant $\Delta = z^2-4 \lt 0\,$, and since their product is $1$ by Vieta's relations, they will all have magnitude $1$ i.e. lie on the unit circle.
• @IntegralBatman The calculations are not the prettiest, but they work out in the end. – dxiv Mar 2 '17 at 1:25
• I would emphasize a point I had not known, if $t$ is real and $x + \frac{1}{x}=t,$ we conclude that $x$ is on the unit circle if $-2 \leq t \leq 2,$ otherwise $x$ is real with absolute value $|x| \neq 1.$ – Will Jagy Mar 2 '17 at 1:55
• @WillJagy Thanks. Edited to make that point more clear. – dxiv Mar 2 '17 at 2:22
When I look it up on the knowledge engine it says "factorization over infinite fields" is
$$(x+1)^2(x^4+x^3+x^2+x+1)^2$$
although neither $(x-i)$ nor $(x+i)$ is a factor, strangely.
The proof in this one is showing that there are no real factors, I believe.
• That's not the factored form of my polynomial though... – Sanjoy Kundu Mar 3 '17 at 15:22
• If you are allowed to use wolfram, then you can show that all the solutions lie on $|x|=1$. I was just surprised your polynomial didn't have +/- i as a factor. – Tim2see Mar 4 '17 at 0:52
|
|
# Non-targeted detection of food adulteration using an ensemble machine-learning model
### Normal and spiked raw milk samples
Archive data of 65,547 normal bovine raw milk samples collected between 2017 and 2019 were provided by Mengniu and retrieved from in-house laboratory information management systems (LIMS). The data included results from tests routinely performed during industrial quality checks; one such routine test was performed on a MilkoScan FT120 (FOSS Analytical, Denmark) using FTIR spectroscopy. Compositional data from the MilkoScan FT120 comprised eight physiochemical properties of the milk samples: fat, protein, NFS, TS, lactose, RD, FPD, and acidity. The numerical values for the different milk components were determined by a series of calculations based on a multiple linear regression (MLR) model that considered the absorbance of light energy by the sample in specific wavelength regions obtained using FTIR equipment. The readings were performed once. Among the 65,547 raw milk samples, 1,469 (2.21%) were removed, including samples that were labelled as “testing in progress”, (normal) samples that were labelled as “fail”, and samples labelled as “pass”, “unlabelled”, or “untreated” but with one or more compositional features falling outside the range of mean ± 3 standard deviations (SD) based on the three-sigma rule31.
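The three-sigma screening described above can be sketched as follows; the column names and toy data are placeholders, not the actual LIMS schema:

```python
import numpy as np
import pandas as pd

def three_sigma_filter(df: pd.DataFrame, feature_cols) -> pd.DataFrame:
    """Keep rows whose every feature lies within mean +/- 3 SD (three-sigma rule)."""
    mu = df[feature_cols].mean()
    sd = df[feature_cols].std()
    within = ((df[feature_cols] - mu).abs() <= 3 * sd).all(axis=1)
    return df[within]

# Toy illustration with two of the eight compositional features.
rng = np.random.default_rng(0)
toy = pd.DataFrame({"fat": rng.normal(3.8, 0.3, 1000),
                    "protein": rng.normal(3.2, 0.2, 1000)})
toy.loc[0, "fat"] = 99.0          # obvious outlier, should be screened out
clean = three_sigma_filter(toy, ["fat", "protein"])
```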
Because no real adulterated milk had been found, spiked samples were used to train and test the model. From April to August 2020, 912 raw bovine milk samples were tested by Mengniu using MilkoScan FT120. A total of 27 samples (2.96%), which included samples that were labelled as “unlabelled” but with one or more compositional features that fell outside the range of mean ± 3 SD, were excluded. Among the remaining 885 samples, 834 were normal (94.24%) and 51 (5.76%) were intentionally spoilt with cow smell, improperly stored for 36 h, and spiked with potassium sulfate, potassium dichromate, water, citric acid, and sodium citrate. Table 5 shows the concentrations of the adulterants added. The compositional data (n = 885) were obtained using FTIR spectroscopy, and the readings were performed once.
From September 2020 to February 2021, 770 raw bovine milk samples were tested by Mengniu using the FOSS FT120. A total of 113 samples (14.68%), which included samples that were labelled as “unlabelled” but with one or more compositional features falling outside the range of mean ± 3 SD, were removed. Among the 657 remaining samples, 372 (56.62%) were normal raw milk samples and 285 (43.38%) were spiked raw milk samples. The spiked raw milk samples included samples spiked with potassium sulfate, citric acid, potassium dichromate, ammonium sulfate, melamine, urea, lactose, glucose, sucrose, maltodextrin, fructose, water, whole milk powder, whey protein, skimmed milk powder, starch, soy milk, and trisodium citrate. Table 5 shows the number of samples spiked with the corresponding concentrations of adulterants. The compositional data and full absorbance spectra with a wavenumber range of 1000–3550 cm−1 were collected in triplicate. Infrared spectra were obtained using the FTIR technique and comprised 1056 points measured at wavenumbers ranging from 3000 to 1000 cm−1.
In April 2021, 155 raw bovine milk samples were tested by Mengniu using the FOSS FT120 and used for cross-validation. A total of 65 (41.93%) samples were normal raw milk samples, and 90 (58.06%) samples were spiked raw milk samples. The spiked raw milk samples included samples spiked with hydrogen peroxide, glucose, sodium hydroxide, salt, fructose, and sucrose. Table 5 shows the number of spiked samples with the corresponding concentrations of adulterants. The compositional data and full absorbance spectra with a wavenumber range of 1000–3550 cm−1 were collected in triplicate.
Table 6 presents a summary of the number of normal, spiked, and total raw milk samples in their respective years of sampling. Potassium dichromate, potassium sulfate, and hydrogen peroxide are common chemicals used to increase shelf life; sodium citrate, citric acid, sodium hydroxide, and salt are common chemicals used to maintain correct pH. Nitrogen-based adulterants, such as ammonium sulfate and urea, are used to increase shelf life and volume, while melamine, whey protein, soy milk, and whole and skimmed milk powder are used to artificially alter the protein content after dilution with water. Carbohydrate-based adulterants, such as starch, sucrose, glucose, lactose, fructose, and maltodextrin, are used to increase the carbohydrate content and density of the milk. Finally, water is commonly used as a diluent in milk32. None of the abovementioned adulterants are commonly tested for in the dairy industry, and specific tests for them are not required by the national standard GB 19,301-2010^17.
### Standardization of full absorbance spectra into selected coordinates of 7 peaks and 1 average
Standardisation of the full absorbance spectra into eight coordinates was performed by the selection of seven peaks within the spectrum regions 1000–1100, 1500–1600, 1730–1800, 2840–2940, and 3450–3550 cm−1 and an average absorbance value for 1250–1450 cm−1 for each sample (Fig. 2)33,34,35.
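A sketch of this standardisation step follows. Note the assumptions: the paper extracts seven peaks across the five listed regions (eight coordinates in total with the window average), while this simplified sketch takes one peak per region, and the max-absorbance-in-window peak rule is a guess rather than the paper's stated procedure:

```python
import numpy as np

# Wavenumber windows (cm^-1) from the text; peak-per-window is a simplification.
PEAK_WINDOWS = [(1000, 1100), (1500, 1600), (1730, 1800), (2840, 2940), (3450, 3550)]
AVG_WINDOW = (1250, 1450)

def standardise_spectrum(wavenumbers, absorbance):
    """Reduce a full spectrum to a small fixed-length coordinate vector."""
    wavenumbers = np.asarray(wavenumbers)
    absorbance = np.asarray(absorbance)
    features = []
    for lo, hi in PEAK_WINDOWS:
        mask = (wavenumbers >= lo) & (wavenumbers <= hi)
        features.append(absorbance[mask].max())   # assumed peak rule: window maximum
    lo, hi = AVG_WINDOW
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    features.append(absorbance[mask].mean())      # the one window average
    return np.array(features)
```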
### Squared Mahalanobis distance (MD) scoring method
The performances of the decision tree and non-decision tree methods were compared. The MD scoring method is a non-decision tree method used to authenticate raw milk samples. The compositional and absorbance spectral data were used to calculate the squared MD score between each sample and the centroid. After iterating over a range of MD scores, the MD score with the highest F1 score was taken as the cutoff for distinguishing atypical from typical raw milk. The F1 score accounts for both false positives and false negatives, being the harmonic mean of precision and recall.
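The distance scoring and cutoff search can be sketched as below; the synthetic data and the grid of candidate cutoffs are illustrative assumptions:

```python
import numpy as np

def squared_mahalanobis(X, reference):
    """Squared Mahalanobis distance of each row of X to the reference centroid."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    diff = X - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

def best_cutoff(scores, is_spiked, grid):
    """Pick the cutoff whose rule 'score > cutoff means spiked' maximises F1."""
    best, best_f1 = None, -1.0
    for c in grid:
        pred = scores > c
        tp = np.sum(pred & is_spiked)
        fp = np.sum(pred & ~is_spiked)
        fn = np.sum(~pred & is_spiked)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best, best_f1 = c, f1
    return best, best_f1

# Toy demonstration: typical samples around the origin, atypical ones shifted.
rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=(200, 2))
atypical = rng.normal(6.0, 1.0, size=(50, 2))
X = np.vstack([reference, atypical])
labels = np.array([False] * 200 + [True] * 50)
scores = squared_mahalanobis(X, reference)
cutoff, f1 = best_cutoff(scores, labels, np.linspace(0.0, 50.0, 101))
```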
### ExtraTrees
ExtraTrees is a machine-learning algorithm proposed by Pierre Geurts et al. in 2006 that consists of multiple decision trees36. Compared with RF, ExtraTrees has a high discrimination ability and can be more resilient to noise in the dataset because it uses the entire original sample instead of a bootstrap replica to train each decision tree. In this study, we used compositional and spectral data to evaluate how ExtraTrees can be used for the binary classification of a sample as typical or atypical37,38. The original dataset was randomly split into training and testing datasets. The training dataset was first used to train the ExtraTrees predictive model, and the model was verified using the testing dataset by comparing the actual and predicted labels. The selection of the best proportion for splitting into the training and test datasets and the number of iterations are discussed in the next section.
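A minimal sketch of this workflow with scikit-learn's `ExtraTreesClassifier`, on synthetic stand-in data (the split ratio and `n_estimators` here are illustrative, not the tuned values from the study):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 8 compositional features.
rng = np.random.default_rng(42)
X_normal = rng.normal(0.0, 1.0, size=(400, 8))
X_spiked = rng.normal(1.5, 1.0, size=(60, 8))
X = np.vstack([X_normal, X_spiked])
y = np.array([0] * 400 + [1] * 60)   # 0 = typical, 1 = atypical

# Random split, train, then verify on the held-out testing dataset.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1,
                                          stratify=y, random_state=0)
model = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
test_f1 = f1_score(y_te, model.predict(X_te))
```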
### XGBoost
XGBoost is an ensemble learning approach based on classification and regression trees (CART)30. XGBoost ensembles trees in a top-down manner. Each tree consists of internal (or split) and terminal (or leaf) nodes. Each split node makes a binary decision, and the final decision is based on the terminal node reached by the input feature. Tree-ensemble methods regard different decision trees as weak learners and then construct a strong learner by either bagging or boosting. Mathematically, the model can be represented by the following objective function with respect to the model parameter $\theta$:
$$\mathrm{obj}(\theta) = L(\theta) + \Omega(\theta),$$
where $L(\theta)$ is the empirical loss that must be minimised and $\Omega(\theta)$ is a regularisation of the model complexity to prevent overfitting. Considering a tree-ensemble model where the overall prediction is the summation of $K$ predictive values across all trees $f_k(x_i)$,
$$p_i = \sum_{k=1}^{K} f_k(x_i),$$
the objective function can be expressed as:
$$\mathrm{obj}(\theta) = \sum_{i=1}^{n} l(p_i, t_i) + \sum_{k=1}^{K} \Omega(f_k),$$
where $l(p_i, t_i)$ is the mean-squared loss imposed on each sample $i$, $p_i$ is its predictive value, $t_i$ is its label, and $\Omega(f_k)$ is the regularisation constraint imposed on each tree.
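The objective above can be made concrete with a toy computation. A sketch, assuming the squared loss for $l$ and an XGBoost-style regulariser $\Omega(f) = \gamma T + \tfrac{\lambda}{2}\sum_j w_j^2$ over the $T$ leaf weights $w_j$ (the exact $\Omega$ used in the paper is not spelled out here):

```python
import numpy as np

def xgb_style_objective(preds, targets, leaf_weights, reg_lambda=1.0, gamma=0.0):
    """obj = sum_i l(p_i, t_i) + sum_k Omega(f_k), with squared loss and
    Omega(f) = gamma * (number of leaves) + (lambda/2) * sum of squared leaf weights."""
    loss = np.sum((np.asarray(preds) - np.asarray(targets)) ** 2)
    reg = sum(gamma * len(w) + 0.5 * reg_lambda * np.sum(np.square(w))
              for w in leaf_weights)
    return loss + reg

# Two tiny "trees" with 2 and 3 leaves respectively.
obj = xgb_style_objective(preds=[0.9, 0.2], targets=[1.0, 0.0],
                          leaf_weights=[np.array([0.5, -0.5]),
                                        np.array([0.1, 0.2, -0.3])])
# loss = 0.01 + 0.04 = 0.05; reg = 0.25 + 0.07 = 0.32; obj = 0.37
```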
In this study, we used compositional and spectral data to evaluate how XGBoost could be used to classify atypical raw milk samples. The original dataset was randomly split into training and testing datasets. The training dataset was first used to train the XGBoost predictive model, and the model was then evaluated on the testing dataset by comparing the actual and predicted labels. The two basic hyperparameters, the learning rate of XGBoost and the maximum depth of the tree, were set empirically at 0.01 and 5, respectively. The hyperparameters “min_child_weight” and “col_sample_by_tree” were also tuned carefully with a grid search with tenfold cross-validation, and different seeds were applied in each search process to increase the variance of the model and to find an optimal parameter setting that could maximise generalisation. For each search iteration, we used the prediction score and calculated the binary cross-entropy with respect to the ground-truth labels, that is, the labels indicating whether the testing samples were normal or spiked. The minimum sum of instance weights (Hessian) required in a child was set to 0.5, the subsample ratio of columns when constructing each tree was set to 0.8, and the objective specifying the learning task and the corresponding learning objective was linear. The hyperparameters “subsample” and “num_boost_weight” required for the selection of the best proportion for splitting into training and test datasets and the number of boosting iterations are discussed in the next section.
### Ensemble model: voting and weighting
The ensemble results of the three methods (MD, ExtraTrees, and XGBoost) were investigated to improve the model performance. First, a voting strategy was adopted to combine the results of each method. Training data were used to individually train the MD, ExtraTrees, and XGBoost models. After obtaining three sets of the initial predicted results from each model, the final predicted result was reported as the majority vote among the three results. The voting strategy was evaluated by comparing the voted result with the label.
In addition to voting, a weighting strategy was adopted. Weights for each of the three methods were assigned based on the individual F1 scores. After training the MD, ExtraTrees, and XGBoost models individually, the initial predicted results on the testing data were obtained in binary form ($r_1$, $r_2$, $r_3$). The F1 score of each model ($f_{m_1}$, $f_{m_2}$, $f_{m_3}$) and the weights ($w_1$, $w_2$, $w_3$) were related as follows:
$$w_{1} + w_{2} + w_{3} = 1,$$
$$\frac{f_{m_1}}{w_1} = \frac{f_{m_2}}{w_2} = \frac{f_{m_3}}{w_3}.$$
The final predicted result, calculated as $r = w_1 r_1 + w_2 r_2 + w_3 r_3$, was evaluated against the labels.
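Both combination rules can be sketched in a few lines of NumPy; the 0.5 binarisation threshold for the weighted score is an assumption, as the text does not state how $r$ is binarised:

```python
import numpy as np

def majority_vote(r1, r2, r3):
    """Final label = majority of three binary predictions."""
    return (np.asarray(r1) + np.asarray(r2) + np.asarray(r3)) >= 2

def f1_weighted(r1, r2, r3, f1_scores):
    """Weights proportional to each model's F1 score (w_i / f_i constant, sum = 1)."""
    w = np.asarray(f1_scores, dtype=float)
    w = w / w.sum()
    score = w[0] * np.asarray(r1) + w[1] * np.asarray(r2) + w[2] * np.asarray(r3)
    return score >= 0.5              # assumed binarisation threshold

# Toy predictions from the three models on four samples.
r1 = np.array([1, 0, 1, 1])
r2 = np.array([1, 1, 0, 0])
r3 = np.array([0, 1, 1, 0])
voted = majority_vote(r1, r2, r3)                       # [True, True, True, False]
weighted = f1_weighted(r1, r2, r3, f1_scores=[0.9, 0.6, 0.6])
```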
### Selection of the best proportion for splitting into training and test datasets and number of iterations for ExtraTrees and XGBoost
An arbitrary range of proportions for splitting into training and test datasets was examined to determine the optimal proportion of training and testing datasets for ExtraTrees and XGBoost. Each arbitrary splitting was repeated thrice and with one iteration. The splitting proportions of training-to-testing ratios attempted were 50:50, 60:40, 70:30, 80:20, and 90:10. The proportion with the highest F1 score was selected as the optimal proportion for the corresponding model.
Similarly, a range of iterations was performed to determine the optimal number of iterations for ExtraTrees and XGBoost. Splitting was performed with the selected proportion of the training and testing datasets, and each splitting was repeated thrice. The iterations were attempted 1, 5, 10, 50, and 100 times. The iteration with the highest F1 score was selected as the optimal iteration for the corresponding model.
The results were reported in terms of accuracy, sensitivity or recall, specificity, precision or positive predictive value, negative predictive value, false alarm, and F1 score. TP, TN, FP, and FN represent true positive, true negative, false positive, and false negative, respectively, and normal raw milk was considered negative whereas spiked raw milk was considered positive.
$$\text{Accuracy} = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}}$$
$$\text{Sensitivity or recall} = \frac{\text{TP}}{\text{TP} + \text{FN}}$$
$$\text{Specificity} = \frac{\text{TN}}{\text{FP} + \text{TN}}$$
$$\text{Precision or positive predictive value} = \frac{\text{TP}}{\text{TP} + \text{FP}}$$
$$\text{Negative predictive value} = \frac{\text{TN}}{\text{TN} + \text{FN}}$$
$$\text{False alarm} = \frac{\text{FP}}{\text{FP} + \text{TN}}$$
$$F_1\ \text{score} = 2 \times \frac{\text{Precision} \times \text{recall}}{\text{Precision} + \text{recall}}$$
The model parameters were selected based on the highest F1 scores. In detecting food adulteration, outlier detection implies an unbalanced dataset, with the vast majority of samples being normal raw milk. With such an uneven class distribution, the costs of false positives and false negatives in our dataset can differ significantly. Hence, the F1 score was used instead of accuracy to select the best model, as the F1 score accounts for both false positives and false negatives through the harmonic mean of precision and recall. The MD calculations, ExtraTrees, and XGBoost were implemented in Python and visualised using PyCharm Community Edition 2021.3.
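For reference, the metrics defined above can all be computed from the four confusion-matrix counts; a minimal sketch:

```python
def classification_metrics(tp, tn, fp, fn):
    """Metrics as defined above; normal milk is negative, spiked milk positive."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": recall,
        "specificity": tn / (fp + tn),
        "precision": precision,
        "npv": tn / (tn + fn),
        "false_alarm": fp / (fp + tn),
        "f1": 2 * precision * recall / (precision + recall),
    }

# Illustrative counts, not results from the study.
m = classification_metrics(tp=45, tn=90, fp=10, fn=5)
```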
### Assessment of seasonal and annual variations in raw milk samples
Normal raw milk samples were sub-grouped according to season and year using SPSS. Statistical analysis of annual variations was performed using analysis of variance (ANOVA) with Fisher’s least significant difference (LSD) post hoc test. To further examine whether drift effects affected the modelling results, raw milk samples from 2020 (n = 1002, of which 821 were normal and 181 treated) were used to train the models using XGBoost (found to be the best model in terms of compositional data), and the same trained model was used to predict different samples from 2020 (n = 273, of which 226 were normal and 47 treated) and 2021 (n = 276, of which 171 were normal and 105 treated). A comparison of the models for the raw milk samples from 2020 and 2021 was performed using an independent-samples t-test (equal variances not assumed) in SPSS. Statistical significance was set at P < 0.05.
### Cross validation of the selected machine-learning model with blinded samples
Cross-validation was performed by testing spiked samples blinded from the training data. Model training was performed on previously available samples using both compositional and spectral data (n = 657, of which 372 were normal raw milk samples and 285 were spiked raw milk samples). Model testing was performed on 65 normal raw milk samples and 90 raw milk samples spiked with hydrogen peroxide (n = 12), sodium hydroxide (n = 15), salt (n = 15), glucose (n = 15), fructose (n = 15), and sucrose (n = 15) at serial dilutions of 0.01, 0.02, 0.05, 0.1, and 0.2 g/100 g of raw milk provided by Mengniu. Hydrogen peroxide, sodium hydroxide, and salt represented new adulterants not present in the previous dataset. The compositional data and full absorbance spectra were both used for the testing. The weighting method of ExtraTrees and XGBoost was used to model the compositional data and selected coordinates from the full absorbance spectra of raw milk.
To address the problem of drift effect, the inclusion and exclusion of sugar (glucose, fructose, and sucrose) adulterants (n = 45) from the cross-validation dataset into the training dataset were studied and compared. To examine the effect of training the model with more data from the cross-validation dataset, a comparison analysis by including and excluding other adulterants not excluded from the training dataset was also performed. Furthermore, model testing with each adulterant blinded from the training dataset was evaluated.
### Effect of sample size using 8 compositional features
A range of sample sizes was used to determine the relationship between the sample size and predictive power for ExtraTrees and XGBoost. Each splitting was repeated thrice, with one iteration and a training-to-testing ratio of 90:10. The sample sizes attempted were 20%, 40%, 60%, 80%, and 100% of the original sample size (n = 65,632).
### Performance comparison to GB 19,301-2010
Each sample was labelled as “pass” or “fail” according to the national standards, as described in GB 19,301-2010.
### Ethical approval
None to declare as no human subjects or animal models were required in this study.
|
|
# Questions about analytic functions, and zeros.
I'm studying Silverman's complex analysis, but this book seems rather loose in places. I have a question about page 273:
Suppose $f(z)$ is a nonconstant function analytic at $z_0$ with $f(z_0)=0$. Then, by the corollary to Theorem 8.13, there exists a neighborhood of $z_0$ that contains no other zeros of $f(z)$. Thus we may express $f(z)$ as $f(z)=(z-z_0)^k F(z)$ ($k$ a positive integer), where $F(z)$ is analytic at $z_0$ with no zeros in the neighborhood or on its boundary $C$.
I can't understand the last sentence(bold fonts). How can we know there is a factor $(z-z_0)^k$ and even more $F(z)$ is analytic?
I also have Ahlfors' book so you can give me references in Silverman or Ahlfors.
Express $f$ as a power series at $z_0$, namely $\sum_n a_n(z-z_0)^n$. Take $k$ the smallest integer $n$ such that $a_n\neq 0$. Then you get a factor $(z-z_0)^k$ and we can defined $F(z)$. Since it's expressed as a power series, it's analytic. – Davide Giraudo Sep 5 '12 at 15:58
Since $f$ is analytic at $z_0$, we can write $f(z)=\sum_{n=0}^{+\infty}a_n(z-z_0)^n$. Let $$k:=\inf\{n\geq 0, a_n\neq 0\}.$$ We have $$f(z)=\sum_{n=k}^{+\infty}a_n(z-z_0)^n=\sum_{j=0}^{+\infty}a_{k+j}(z-z_0)^{k+j},$$ so we define $F(z):=\sum_{j=0}^{+\infty}a_{k+j}(z-z_0)^j$. It's an analytic function, and doesn't have any zero in the neighborhood (otherwise so will have $f$).
$k$ exists since $f$ is not constant, of course... – Thomas Andrews Sep 5 '12 at 17:02
In a neighborhood of $z_0$, $$f(z)=c_0+c_1(z-z_0)+c_2(z-z_0)^2+c_3(z-z_0)^3+\dots\tag{1}$$ If $f(z_0)=0$, then $c_0 = 0$. Each subsequent term has at least one factor of $(z-z_0)$. Suppose that $c_k(z-z_0)^k$ is the first non-zero term in the series $(1)$. Divide $f(z)$ by $(z-z_0)^k$, then we get $$F(z)=\frac{f(z)}{(z-z_0)^k}=c_k+c_{k+1}(z-z_0)+c_{k+2}(z-z_0)^2+\dots\tag{2}$$ The series in $(2)$ converges on the same neighborhood of $z_0$ that the series in $(1)$ does. Since $f(z)$ only vanishes at $z_0$ and $F(z_0)=c_k$, we get that $F(z)$ does not vanish anywhere in this neighborhood.
|
|
# Recent posts tagged job
1
Formula for the no. of calls: f(n) makes 2f(n) - 1 calls when f(0) = 1. This applies only to the Fibonacci series with f(0) = 1, f(1) = 1, f(2) = 2 (i.e., f(0) + f(1)), f(3) = 3, ..., f(7) = 21, so the no. of calls is 2(21) - 1 = 41. Remember, when f(0) = 0 and f(1) = 1, the formula changes to 2f(n+1) - 1.
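The formula is easy to verify by instrumenting the recursion (assuming f(0) = f(1) = 1):

```python
def fib_count(n):
    """Return (f(n), number of calls) for the naive recursion with f(0) = f(1) = 1."""
    if n < 2:
        return 1, 1
    a, calls_a = fib_count(n - 1)
    b, calls_b = fib_count(n - 2)
    return a + b, 1 + calls_a + calls_b

value, calls = fib_count(7)   # value = 21, calls = 2*21 - 1 = 41
```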
2
When window scaling happens, a 14-bit shift count is used in the TCP header. Sir, why only 14 bits? Is there a reason, or is it just an assumption?
4
Consider the relation R(ABCD) with FD set F = {AB->CD, C->A, D->B}. Which of the following is false? 1. C->A is a partial dependency 2. C->A is a transitive dependency 3. D->B is a partial dependency 4. All of these
5
Sir, can we do like this Let there be 4 process P,Q,R,S and 5 resources. Let they be allocated one resource each initially.Now after allocation need of P is 1 , need of Q is 1, need of R is 2 and need of S is 2. Now only 1 resource is available as 1 has been ... to R.....and hence R could complete.....Now available becomes 4. We can now allocate 2 resources to S and hence S can complete......
6
Instead of finding a function 'f' for the first MUX (in terms of z,x and y) and the second MUX (in terms of f , x and y), we can logically find out the answer.We can put in 4 values of A and B and analyse the output C. We can get the values appropriate values of x and y by drawing and comparing with a 2x1 MUX when A and B are given as inputs respectively.
7
Super Key is any set of attributes that uniquely determines a tuple in a relation. Since $E$ is the only key, $E$ should be present in any super key. Excluding $E$, there are three attributes in the relation, namely $F, G, H$. Hence, if we add $E$ to any subset of those three attributes, we get a super key, so the answer is $8$. The following are the super keys: $\{E, EF, EG, EH, EFG, EFH, EGH, EFGH\}$.
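The count can be confirmed by enumeration, e.g.:

```python
from itertools import combinations

attrs = ["F", "G", "H"]          # attributes other than the key E
super_keys = []
for r in range(len(attrs) + 1):
    for subset in combinations(attrs, r):
        super_keys.append("E" + "".join(subset))

print(len(super_keys))   # 8: E, EF, EG, EH, EFG, EFH, EGH, EFGH
```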
To see more, click for the full list of questions or popular tags.
|
|
# What occurs as two atoms of fluorine combine to become a molecule of fluorine?
Mar 14, 2018
#### Explanation:
When two fluorine atoms combine, they become more stable, and form a molecule called ${F}_{2}$. When referring to fluorine itself, ${F}_{2}$ is usually used, as fluorine never exists alone due to its reactivity.
At room temperature and pressure, fluorine exists as a yellowish-greenish gas, and will react with almost anything just to get that extra electron.
Since the bond in ${F}_{2}$ is between two atoms of the same element, which are non-metals, non-polar covalent bonding occurs.
I hope this helps!
|
|
# Talk:Injective function
WikiProject Mathematics (Rated Start-class, Mid-importance)
Sorry, is the number of injective functions from X to Y (m elements in X, n in Y) really n^m? I thought that was the number of functions in general. What about n!/(n-m)! ?
Physicproducer (talk) 00:26, 21 November 2012 (UTC)
I don't like the title 'injections are invertible'. To me invertible means having an inverse (what the paragraph I am criticising calls a 'full inverse').
I would prefer something like 'injections have left inverses' or maybe 'injections are left-invertible'. Then the section on bijections could have 'bijections are invertible', and the section on surjections could have 'surjections have right inverses'.
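For what it's worth, the left-inverse/full-inverse distinction is easy to see on finite sets (a toy example, not from the article):

```python
# An injection f : A -> B with |A| < |B| has a left inverse g (g(f(a)) = a for
# all a in A) but no full inverse, since some elements of B are never hit.
A = {1, 2, 3}
f = {1: "a", 2: "b", 3: "c"}          # injective: distinct inputs, distinct outputs

g = {v: k for k, v in f.items()}      # inverts f on its image
g.setdefault("d", 1)                  # arbitrary choice off the image of f

assert all(g[f[x]] == x for x in A)   # g is a left inverse of f
# But f[g["d"]] == "a" != "d", so g is not a right (hence not a full) inverse.
```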
I don't like that the main heading of this article is "Injective function" and not "One-to-one function". I am not a PhD or anything, but it seems to me that one-to-one function is the term used within high school and college mathematics, not "Injective function". Psyadam 20:23, 29 March 2007 (UTC)Adam Henderson, March 29, 2007
Perhaps in high-school, but the formal term injection is quite regular in even introductory college courses. --Jeff Wheeler (talk) 22:41, 15 December 2010 (UTC)
I added the paragraph in the first screen with the ugly notice in parenthesis, in hopes that someone of greater mathematical expertise will soon happen upon it and fix/remove the notice and accompanying text, while providing some explanation in the article that clarifies a possible misunderstanding that readers (especially new to the subject) might have in understanding an injection.
I'm attempting to address the following question a student reader might have:
"What if a function is defined as:
F(x) : A -> B
where A = {1} and B = {a,b,c,d}
and the function F is defined as:
1 -> a
1 -> b
1 -> c
1 -> d
Then isn't this 'function' F an injection according to the mathematical rule invoked to determine the property of injection?"
The problem here (as I understand it) is that a mathematical function cannot be designated as having multiple outputs, unless they are specified as an ordered set. So in the example given,
1 -> (a,b,c,d)
would be the appropriate way to define the desired function, with F(x) : A -> BxBxBxB serving as the function's context (or perhaps a better revision, F(x) : A -> {a}x{b}x{c}x{d}).
Austinflorida 20:13, 23 April 2007 (UTC)
Yep. They don't even have to be ordered or numbered; you can have a variable number of outputs in the form of an unordered set (set operators like "union" work like this). Of course, they will then map the parameter to the set of sets, so the argument is the same.
Being a comp sci kiddie, I don't have the hubris to change it, but it looks right to me. Maybe except for the counter-intuitive bit :P
80.203.114.197 11:52, 29 April 2007 (UTC)
I removed the paragraph on the first screen with the ugly notice in parentheses. Although correct, it badly disrupted the flow of the article, and the issue of whether a function can have multiple outputs is already answered on the first page of the article function, which is conspicuously linked to in this article. Besides, if a student conceives of a multivalued "function" such as the one described by Austinflorida as being injective, they will be absolutely correct, so there's no real danger of confusion here. If it's still an issue, a link to the article Multivalued function would probably be more appropriate than an exposition on the page such as the one provided here. 70.58.35.214 10:53, 30 April 2007 (UTC)
All examples given by functions were bijective, so I replaced exp's codomain R+ by R. I hope that's okay. Tendays 11:20, 28 October 2007 (UTC)
## Inverses and intuitionism
Hello, User:Chinju has added a statement about absence of inverses for injectives in intuitionistic systems. By {0,1}, do you mean the open interval (0,1)? Also, do you have a handy reference for this? Thanks Sam Staton (talk) 07:47, 23 May 2008 (UTC)
I don't have a reference, but constructivist analysis might be a better link than intuitionistic logic. I think Chinju meant the two-point set {0,1}, though what's written applies equally well to (0,1). Algebraist 13:52, 26 May 2008 (UTC)
Thanks (I had misread the sentence). I understand this has to do with indecomposability, as now linked; correct me if I'm wrong. Sam Staton (talk) 15:52, 27 May 2008 (UTC)
I don't know that this is such an important point to include here; intuitionism is an extreme minority view within mathematics, and we don't want to discuss it in every mathematics article as if it is a common viewpoint. That is: the mathematics community has soundly rejected intuitionism over the last 50 years, so we don't want to give a biased picture of contemporary mathematics by discussing intuitionism at every possible point. It is difficult to find a mathematics text that mentions intuitionism at all, apart from texts specifically about it and texts in proof theory. I don't think any generic undergraduate analysis text that covers inverse functions even mentions intuitionism.
Finding the right balance (how often to mention these minority views) is somewhat difficult. I will try to pare down what is here without completely removing it; the article on intuitionism can discuss these things in more detail. I think this article is likely of interest to particularly young readers, so giving everything due weight is important. — Carl (CBM · talk) 16:49, 27 May 2008 (UTC)
Hi Carl, I agree that it is important to get the right balance, and that it's tricky. I think your footnote is appropriate. But I re-added the text "in conventional mathematics" as an aside, in brackets. I don't think this adds undue weight to constructivism.
It might be inappropriate to mention constructivism in every wikipedia article, but this article is a topic in discrete mathematics and set theory, and I think it is appropriate to mention at this stage that the foundations might be questioned. For instance, when teaching, I often find that students find non-constructive principles harder to understand, and it is sometimes nice to tell the keen ones that these principles are debatable. I don't think it's fair to say that "the mathematics community has soundly rejected intuitionism over the last 50 years", since various related topics are still active research areas. Note also that many computer scientists do accept constructivism, and that injective functions and discrete maths are also a part of every undergraduate computer science course (though explicit constructivism admittedly isn't). Sam Staton (talk) 10:14, 29 May 2008 (UTC)
My research is in proof theory and computability theory, so I have spent some time learning and working with intuitionistic systems. While it is true that there is active research in constructive mathematics within mathematical logic, that's not because the principles of ordinary mathematics are being debated. The main contemporary motivations for studying intuitionistic systems are (1) they are interesting on their own, and (2) properties of intuitionistic systems can be used to obtain results about classical systems.
Outside of mathematical logic, it appears to me that the community of constructive mathematicians is on the same scale as the community of ultrafinitists, and both are very small minorities within the broader mathematics community. This is reflected in the complete lack of coverage of either intuitionism or finitism in standard undergraduate mathematics texts. I don't know the CS community well enough to know how many of them might espouse some sort of constructivism or finitism, but I haven't seen those philosophies discussed in the few computer science texts I have seen.
We would do a disservice to readers (especially students) if we gave them the impression that there is some sort of active debate within the mathematics or mathematical logic community about the validity of classical mathematics. So I don't think the "in conventional mathematics" disclaimer should be included here. The entire article (apart from the footnote) is about conventional mathematics, as are virtually all other math articles on WP. I think the footnote is reasonable, though, as a pointer for those who want to learn more. — Carl (CBM · talk) 10:51, 29 May 2008 (UTC)
Hi Carl. I agree it would be wrong if readers got the idea that there was some kind of large guerrilla group trying to rise up against conventional mathematics. (Though there is some active debate about the validity of classical mathematics, I admit it is not within the general mathematics community.)
I'd like to add a third item to your list, though: (3) constructive proofs are often more informative than non-constructive ones. I've just been in our library here, and all the "discrete maths" textbooks I looked in discuss constructivism (Rosen, Ross & Wright, Huth & Ryan, not to mention Paulson, Forster, Taylor). So I think it is quite normal to let students know about this. All the books go further, suggesting that it is best to write constructive proofs where possible. I think that many students are thus aware that there is something better about constructive proofs, and I think those people will be interested to know when something as basic as inverting an injection is non-constructive.
I'm not sure whether the footnote alone is enough. My concern is that footnotes are usually used to clarify unclear sentences, whereas the sentence, without the "(in conventional mathematics)", is not unclear. Sam Staton (talk) 18:09, 29 May 2008 (UTC)
Are you convinced those textbooks are actually discussing mathematical constructivism (e.g. intuitionism) and not just a vague sense that some classical proofs are "constructive" and others aren't? — Carl (CBM · talk) 17:10, 9 June 2008 (UTC)
Sorry to show up so late to this discussion; in response to the first (already answered) question, yes, I had been talking about the two point set, and indecomposability is exactly the right thing to link to. As for the more ongoing debate, I agree with Carl's concern about the infeasibility of discussing constructive logic (or other well-known but "non-standard" systems of math/logic) differences in every mathematics article, but in this particular case, it struck me as a reasonable thing to point out, not because I want to (falsely) suggest that there is some very large controversy over the use of classical systems, but simply to present interesting and illustrative illumination of the concept under discussion (injective functions) as viewed from other, not necessarily competing, perspectives. I had actually been a bit uncomfortable myself with giving my own parenthetical injection of intuitionistic logic so much prominence but couldn't figure out how to present it better; however, I think the current solution, with the parenthetical qualifier of "conventional mathematics" and then the footnote explaining the constructive difference, is pretty good. -Chinju (talk) 20:20, 3 June 2008 (UTC)
Though I don't know if it's fair to say that "This principle may fail in constructive mathematics, where the concepts of function and set are treated differently than in mainstream mathematics." In a sense they are treated differently, sure, but not generally due to any direct difference of definition, but only because of the inevitable effects of differences elsewhere in the system. That is, in such systems as I have in mind, a set remains a collection of elements and a function remains a mapping between such sets (as given by a binary relation satisfying the appropriate properties of "totality" and "single-valuedness"). At this level, the concepts are treated no differently. The only problem is that some binary relations which could classically be proved to be "total", in the relevant sense, can no longer be proved "total" constructively; but that's hardly a difference in the concepts of set and function, merely a difference in the strength of the logic (to prove certain things to actually be total functions). -Chinju (talk) 20:34, 3 June 2008 (UTC)
I agree, and I've simplified the footnote. Sam (talk) 15:39, 9 June 2008 (UTC)
I don't agree. The counterexample given in the footnote is extremely subtle, as it permits the inclusion "function" f from the "set" {0,1} into the real numbers, but denies the existence of the same "function" when {0,1} is viewed as a subset of the reals rather than as an independent set. From a classical viewpoint, this inclusion map f literally is its own inverse (f and f⁻¹ are the same set {{0,{0}},{1,{1}}}), so only a difference in the definitions of "set" and "function" could permit the "function" f to exist without the inverse of f existing simultaneously.
You may have intended to mean that the domain of f was the set of natural numbers {0,1}. In that case, the above argument doesn't hold, but a stronger one still does: a set, classically, is the collection of elements that satisfy a given property. The property "is either the image of 0 or the image of 1 under the map f" thus defines a set. In order to claim that the inverse of f doesn't exist, you would either have to claim its domain isn't a set (thus departing from the classical definition of sets) or that despite its domain existing, the function itself doesn't exist (departing from the classical definition of function, as even constructively it is possible to distinguish the real number 0 from the real number 1, as they are separated at nonzero distance). This is another facet of the subtlety of the issue.
My motivation in adding that explanatory text in the first place was to include some vague explanation, for a reader grounded only in classical math, telling why the classical principle in question may fail. I didn't want to get into so much detail in the footnote, though. Can you reword it to add some explanation? — Carl (CBM · talk) 17:10, 9 June 2008 (UTC)
Hi Carl, I'm having trouble with your comment. I don't think that the function f⁻¹ that you mention is total. (Sure, it is constructively true that every injection has a partial inverse.) Sam (talk) 17:24, 9 June 2008 (UTC)
I see. I am so used to thinking about functions without an explicitly given codomain that I missed the terminology here that the left inverse needs to be total on the original codomain. Of course that's impossible constructively, like the example shows. My last comment was under the impression that the footnote made a stronger claim.
I went back to see what was confusing me, and I'm going to change the wording slightly to prevent this happening to anyone else, by being more explicit that the left inverse is meant to be a retraction. — Carl (CBM · talk) 18:35, 9 June 2008 (UTC)
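The point raised above, that every injection does have a partial inverse defined on its image, while a total retraction on the whole codomain is what may fail, can be illustrated computationally for a finite domain. A small Python sketch (the function names are illustrative, not from any of the cited texts):

```python
def partial_inverse(f, domain):
    """Invert an injective function by inverting its (finite) graph."""
    inv = {}
    for x in domain:
        y = f(x)
        assert y not in inv, "f is not injective on this domain"
        inv[y] = x
    return inv  # a partial function: defined only on the image of f

g = partial_inverse(lambda n: n * n, [0, 1, 2, 3])
print(g[4], g.get(5))  # defined on the image, undefined off it
```

The dictionary `g` is exactly the inverted graph; extending it to a total function on the codomain is where a non-constructive choice would be needed.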
I've never heard the phrase "information-preserving function", and a Google search confirms that the phrase is extremely rare. The claim that injective functions are sometimes called information-preserving may be technically true, but it is very misleading. Listing the phrase before "one-to-one function" (a very common phrase) is simply absurd. 204.77.35.12 (talk) 16:58, 31 January 2009 (UTC)
I agree. I've removed the phrase from the intro now. 87.114.27.44 (talk) 21:52, 2 February 2009 (UTC)
## Notation
How frequent is the notation $f\colon X\rightarrowtail Y$? I'm a professional mathematician and I never encountered it. On the other hand I've sometimes seen $f:X\hookrightarrow Y$. 82.229.188.151 (talk) 16:07, 12 March 2012 (UTC)
## I think an edit is required, opinions?
The 3rd paragraph has real problems, I think. I'm not a mathematician, or even skilled in the art, but please consider the following. "A function f that is not injective is sometimes called many-to-one. However, this terminology is also sometimes used to mean "single-valued", i.e., each argument is mapped to at most one value; this is the case for any function, but is used to stress the opposition with multi-valued functions." (as of Nov 1, 2013)

1. I find "this terminology" confusing in the second sentence. The entire first sentence is jargon-laden, so which "terminology" is being specified? I do know what the sentence means, but believe it WILL confuse many others. Why not say something like: "However "many-to-one" is sometimes used to mean "all-to-one", i.e. all arguments are mapped to the same single value." I also suggest removal of the "at most", since, in my ignorance, I don't believe you can map an argument to null, so it is NOT "at most one", it is "exactly one". I am not confident enough in my knowledge here to make the change. Someone help, please?

2. The second part is really terrible and must be fixed. It seems to have been written by a non-English speaker (non-native?). Opposition? This is awful. First: WHAT is used to stress the opposition? Second, "stress the opposition" is virtually incomprehensible and terrible usage (did the author mean "stress the contrast" or simply "contrast x with"? I am not clear what is supposedly "used" here). Third, use of the phrase "multivalued functions" is really going to confuse anyone who knows that functions are NOT multivalued (see Multivalued_function). It seems this clause is saying that even though it isn't saying anything, it is used anyway. I am removing the clause; perhaps someone can fix what was being communicated and add another sentence here about the contrast with multivalued functions (which are not functions). Thanks. Abitslow (talk) 23:33, 1 November 2013 (UTC)
|
|
# Projectile motion problem
narutoish
## Homework Statement
Starting 2.00 m away from a waterfall 0.55 m in height, at what minimum speed must a salmon jumping at an angle of 32.0° leave the water to continue upstream?
## Homework Equations
Δx = vi(cos θ)Δt
Δy = vi(sin θ)Δt − (1/2)g(Δt)²
## The Attempt at a Solution
There were some other equations in the book, but I just can't make the connection. I know I can find vx,i and vy,i if I had vi, but I don't know any velocities. I tried using cos 32° = (2.0 m / h), but I can't get any further, so a little help would be appreciated. Also, I am new.
## Answers and Replies
voko
From the first equation, you can express Δt (unknown) in terms of everything else in it.
You can plug that Δt into the second equation, thus getting an equation for the unknown initial speed.
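That elimination can be checked numerically. A sketch in Python, assuming g = 9.81 m/s² and the numbers from the problem statement:

```python
import math

# Eliminate dt between  dx = v*cos(theta)*dt  and  dy = v*sin(theta)*dt - g*dt^2/2:
#   dy = dx*tan(theta) - g*dx^2 / (2*v^2*cos^2(theta))   ->   solve for v.
g = 9.81              # m/s^2, assumed value of gravitational acceleration
dx, dy = 2.00, 0.55   # horizontal distance and height of the waterfall (m)
theta = math.radians(32.0)

v_squared = g * dx**2 / (2 * math.cos(theta)**2 * (dx * math.tan(theta) - dy))
v = math.sqrt(v_squared)
print(round(v, 2))    # minimum launch speed in m/s
```

Note that the solution only exists when dx·tan θ exceeds dy, i.e. when the straight launch line clears the top of the waterfall.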
azizlwl
starting at 2.00m away from a waterfall .55m in height, at what minimum speed must a salmon jumping at an angle of 32.0° leave the water to continue upstream?
..........
First you have to know about vector operations.
$\vec{A}=\vec{B}+\vec{C}$
You have to think of the reversal.
The salmon is jumping at minimum speed at a 32.0° angle.
So we will call this velocity $\vec{A}$.
Thus $\vec{A}$ has 2 components: $\vec{B}$, say, in the forward direction, and $\vec{C}$ in the upward direction.
Horizontal velocity is constant.
Vertical motion is affected by gravity.
Last edited:
|
|
# LHCb observes four new tetraquarks
3 March 2021
The LHCb collaboration has added four new exotic particles to the growing list of hadrons discovered so far at the LHC. In a paper posted to the arXiv preprint server yesterday, the collaboration reports the observation of two tetraquarks with a new quark content (cc̄us̄): a narrow one, Zcs(4000)+, and a broader one, Zcs(4220)+. Two other new tetraquarks, X(4685) and X(4630), with a quark content cc̄ss̄, were also observed. The results, which emerged thanks to adding the statistical power from LHC Run 2 to previous datasets, follow four tetraquarks discovered by the collaboration in 2016 and provide grist for the mill of theorists seeking to explain the nature of tetraquark binding mechanisms.
The new exotic states were observed in an almost pure sample of 24 thousand B+→J/ψφK+ decays, which, as a three-body decay, may be visualised using a Dalitz plot (see “Mountain ridges” figure). Horizontal and vertical bands indicate the temporary production of tetraquark resonances which subsequently decay to a J/ψ meson and a K+ meson or a J/ψ meson and a φ meson, respectively. The most prominent vertical bands correspond to the cc̄ss̄ tetraquarks X(4140), X(4274), X(4500) and X(4700) which were first observed in June 2016. The collaboration has now resolved two new horizontal bands corresponding to the cc̄us̄ states Zcs(4000)+ and Zcs(4220)+, and two additional vertical bands corresponding to the cc̄ss̄ states X(4685) and X(4630).
These states may have very different inner structures
Liming Zhang
The results have already triggered theoretical head scratching. In November, the BESIII collaboration at the Beijing Electron–Positron Collider II reported the discovery of the first candidate for a charged hidden-charm tetraquark with strangeness, tentatively dubbed Zcs(3985) (CERN Courier January/February 2021 p12). It is unclear whether the new Zcs(4000)+ tetraquark can be identified with this state, say physicists. Though their masses are consistent, the width of the BESIII particle is ten times smaller. “These states may have very different inner structures,” says lead analyst Liming Zhang of the LHCb collaboration. “The one seen by BESIII is a narrow and longer-lived particle, and is easier to understand with a nuclear-like hadronic molecular picture, where two hadrons interact via a residual strong force. The one we observed is much broader, which would make it more natural to interpret as a compact multiquark candidate.”
The new observations take the tally of new hadronic states discovered at the LHC – which includes several pentaquarks as well as rare and excited mesons and baryons – to 59 (see “Diagram of discovery” figure). Though quantum chromodynamics naturally allows the existence of states beyond conventional two- and three-quark mesons and baryons, the detailed mechanisms responsible for binding multi-quark states are still largely mysterious. Tetraquarks, for example, could be tightly bound pairs of diquarks or loosely bound meson-meson molecules – or even both, depending on the production process.
Who would have guessed we’d find so many exotic hadrons?
Patrick Koppenburg
“Who would have guessed we’d find so many exotic hadrons?” says former LHCb physics coordinator Patrick Koppenburg, who put the plot together. “I hope that they bring us to a better modelling of the strong interaction, which is very much needed to understand, for instance, the anomalies we see in B-meson decays.”
|
|
## Pooled variance procedure
Let $s_x$ and $s_y$ be the sample standard deviations constructed from the samples $X_1,\dots,X_m$ and $Y_1,\dots,Y_n$, respectively. When it is reasonable to assume "$\sigma_x = \sigma_y$," we can construct the pooled sample variance
$$s_p^2 = \frac{(m-1)s_x^2 + (n-1)s_y^2}{m+n-2}.$$
The test statistic
$$t = \frac{\bar{X} - \bar{Y}}{s_p\sqrt{\tfrac{1}{m} + \tfrac{1}{n}}}$$
has the $t$-distribution with $m+n-2$ degrees of freedom under the null hypothesis $H_0\colon \mu_x = \mu_y$. Thus, we reject the null hypothesis at significance level $\alpha$ when the observed value $t^*$ of the statistic satisfies $|t^*| \ge t_{\alpha/2,\,m+n-2}$. Or, equivalently, we can compute the $p$-value
$$p = 2\,P(T \ge |t^*|)$$
with $T$ having a $t$-distribution with $m+n-2$ degrees of freedom, and reject $H_0$ when $p \le \alpha$.
Confidence interval. The following table shows the corresponding confidence interval for the population mean difference $\mu_x - \mu_y$ when the null hypothesis is rejected.

| Hypothesis testing | $(1-\alpha)$-level confidence interval |
| --- | --- |
| $H_0\colon \mu_x = \mu_y$ vs. $H_1\colon \mu_x \neq \mu_y$ | $\bar{X} - \bar{Y} \pm t_{\alpha/2,\,m+n-2}\, s_p\sqrt{\tfrac{1}{m}+\tfrac{1}{n}}$ |
| $H_0\colon \mu_x \le \mu_y$ vs. $H_1\colon \mu_x > \mu_y$ | $\left(\bar{X} - \bar{Y} - t_{\alpha,\,m+n-2}\, s_p\sqrt{\tfrac{1}{m}+\tfrac{1}{n}},\ \infty\right)$ |
| $H_0\colon \mu_x \ge \mu_y$ vs. $H_1\colon \mu_x < \mu_y$ | $\left(-\infty,\ \bar{X} - \bar{Y} + t_{\alpha,\,m+n-2}\, s_p\sqrt{\tfrac{1}{m}+\tfrac{1}{n}}\right)$ |
Example. Suppose that we use the significance level $\alpha = 0.01$, and that we have obtained $\bar{x}$ and $s_x$ from the control group of size $m$, and $\bar{y}$ and $s_y$ from the experimental group of size $n$. Here we have assumed that $\sigma_x = \sigma_y$. Then we can compute the square root $s_p$ of the pooled sample variance and the test statistic $t^*$. If $|t^*| \ge t_{0.005,\,m+n-2}$, we reject $H_0$ and conclude that the two population means are significantly different; the 99% confidence interval for the mean difference is then $\bar{x} - \bar{y} \pm t_{0.005,\,m+n-2}\, s_p\sqrt{\tfrac{1}{m}+\tfrac{1}{n}}$.
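The pooled procedure can be carried out in a few lines of code. The summary statistics below are hypothetical, chosen only to illustrate the formulas:

```python
import math

# Hypothetical summary statistics (illustrative values, not real data)
m, xbar, sx = 10, 5.2, 1.1   # control group: size, mean, sample sd
n, ybar, sy = 12, 3.8, 1.3   # experimental group: size, mean, sample sd

# pooled sample variance and the two-sample t statistic
sp2 = ((m - 1) * sx**2 + (n - 1) * sy**2) / (m + n - 2)
t = (xbar - ybar) / (math.sqrt(sp2) * math.sqrt(1 / m + 1 / n))
df = m + n - 2
print(round(t, 3), df)
```

The observed value of $t$ is then compared against the $t$-distribution critical value with `df` degrees of freedom at the chosen significance level.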
Generated by MATH GO: 2006-03-21
|
|
# American Institute of Mathematical Sciences
November 2020, 13(11): 3029-3045. doi: 10.3934/dcdss.2020117
## Dynamical stabilization and traveling waves in integrodifference equations
1 Department of Mathematics and Statistics, University of Ottawa, Ottawa, ON K1N 6N5, Canada 2 Department of Mathematics and Statistics, Department of Biology, University of Ottawa, Ottawa, ON K1N 6N5, Canada
* Corresponding author: flutsche@uottawa.ca
Received December 2018 Revised April 2019 Published October 2019
Fund Project: VL and FL are supported by respective Discovery Grants from the Natural Sciences and Engineering Research Council (NSERC) of Canada (RGPIN-2016-04318 and RGPIN-2016-04795). FL is grateful for a Discovery Accelerator Supplement award from NSERC (RGPAS 492878-2016). FL thanks the participants of the workshop "Integrodifference equations in spatial ecology: 30 years and counting" (16w5121) at the Banff International Research Station for their feedback on an oral presentation of this material
Integrodifference equations are discrete-time analogues of reaction-diffusion equations and can be used to model the spatial spread and invasion of non-native species. They support solutions in the form of traveling waves, and the speed of these waves gives important insights about the speed of biological invasions. Typically, a traveling wave leaves in its wake a stable state of the system. Dynamical stabilization is the phenomenon that an unstable state arises in the wake of such a wave and appears stable for potentially long periods of time, before it is replaced with a stable state via another transition wave. While dynamical stabilization has been studied in systems of reaction-diffusion equations, we here present the first such study for integrodifference equations. We use linear stability analysis of traveling-wave profiles to determine necessary conditions for the emergence of dynamical stabilization and relate it to the theory of stacked fronts. We find that the phenomenon is the norm rather than the exception when the non-spatial dynamics exhibit a stable two-cycle.
Citation: Adèle Bourgeois, Victor LeBlanc, Frithjof Lutscher. Dynamical stabilization and traveling waves in integrodifference equations. Discrete & Continuous Dynamical Systems - S, 2020, 13 (11) : 3029-3045. doi: 10.3934/dcdss.2020117
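The model class in the abstract can be iterated directly. A minimal numerical sketch of an integrodifference equation with Ricker growth and a Laplace dispersal kernel (the grid, normalization, and parameter values below are illustrative choices, not taken from the paper):

```python
import numpy as np

# N_{t+1}(x) = integral K(x - y) F(N_t(y)) dy  with Ricker growth
# F(N) = N*exp(r*(1 - N)) and Laplace kernel K(x) = (a/2)*exp(-a*|x|).
L, M = 100.0, 4096
x = np.linspace(-L, L, M)
dx = x[1] - x[0]
r, a = 0.8, 15.0                     # N* = 1 is a stable fixed point for r < 2

K = 0.5 * a * np.exp(-a * np.abs(x))
K /= K.sum() * dx                    # normalize the discretized kernel

def F(N):                            # Ricker growth
    return N * np.exp(r * (1.0 - N))

N = np.where(x <= -50.0, 1.0, 0.0)   # step initial condition: an invasion front
for _ in range(30):                  # iterate the map
    N = dx * np.convolve(F(N), K, mode="same")

# well behind the advancing front, the wave leaves the stable state N* = 1
print(round(float(N[(x > -80) & (x < -60)].mean()), 3))
```

For $r$ beyond the stability threshold of the non-spatial map, the same iteration is the setting in which the paper's dynamical stabilization appears behind the front.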
Solution of the integrodifference equation (left) and its 'phase plane' (right), where $F$ is the Ricker function with $r = 0.8$ (solid) and $r = 1.03$ (dashed) and $K$ is the Laplace kernel with $a = 15$
Numerical solution of the IDE in 1, plotted for even (top panel) and odd (bottom panel) generations every 10 time steps. The solid lines correspond to the Ricker growth function 2 with $r = 2.2$ and the dashed line to the logistic growth function 3 with $r = 2.44.$ The dispersal kernel is the Laplace kernel 4 with $a = 15.$ The initial condition was the function $N_0(x) = n_-\chi_{x\leq 10}.$
Plot of the implicit functions defined by equations 15 (thin blue lines) and 16 (thicker red line) with $c = c^*$ for the Ricker function with $r = 1.0327$ (left) and $r = 2.526$ (right). Note that there are no negative real roots as we chose $r>r^*.$ Only the upper half plane is plotted; the lower half plane is symmetric
Solution of the integrodifference equation (left) and its phase plane (right), where $F$ is the Ricker function with $r = 1.8$ and $K$ is the Laplace kernel with $a = 15$
Phase plane of the solution in Figure 1. The solid curve corresponds to the Ricker function, the dashed curve to the logistic function, both with parameter $r = 2.2.$ $K$ is the Laplace kernel with $a = 15.$
Solution of the integrodifference equation (left) and its phase plane (right), where $F$ is the Ricker function with $r = 2.525$ and $K$ is the Laplace kernel with $a = 15$
Solution of the integrodifference equation (left) and its phase plane (right), where $F$ is the logistic function with $r = 2.5$ and $K$ is the Laplace kernel with $a = 15$
Solution of the IDE, with Ricker function and Laplace kernel ($a = 15$). The growth parameter is $r = 2.6,$ so that the two-cycle $n_\pm$ is unstable for the Ricker dynamics and a stable four-cycle exist (denoted by $n^-_-, n^-_+, n^+_-, n^+_+$). Initial conditions are $N_0 = n^+_+\chi_{[x\geq 10]}.$
Dynamic behavior of the map $N \mapsto F(N)$ with $F$ as in 2 or 3. The abbreviation 'g.a.s.' stands for globally asymptotically stable within all non-stationary, non-negative solutions

| Dynamic behavior | Ricker function 2 | Logistic function 3 |
| --- | --- | --- |
| $N^*=1$ g.a.s., monotone approach | $0<r<1$ | $0<r<1$ |
| $N^*=1$ g.a.s., oscillatory approach | $1<r<2$ | $1<r<2$ |
| $N^*=1$ unstable | $2<r<2.526$ | $2<r<\dots$ |
Shape of the traveling profile emerging from $N^* = 0$ in IDE 1 as a function of parameter $r$ for the Ricker and the logistic function and with Laplace dispersal kernel. When the kernel has compact support, monotone traveling waves may not exist even if they do with a Laplace kernel [29]
| Shape of the traveling profile | Ricker function | Logistic function |
| --- | --- | --- |
| Monotone on $[0,1]$ | $0<r<1.0327$ | $0<r<1.0686$ |
| Damped oscillations at $N=1$ | $1.0327<r<2.5072$ | $1.0686<r<2.570$ |
| Wavetrain around $N=1$ | $2.5072<r<2.692$ | NA |
|
|
## probability theory – Brownian motion first exit time from interval with deterministic structure
I’ve been working on the following problem from a survey paper. It says the following:
Let $(B_t : t \geq 0)$ be a standard Brownian motion and $\mu$ a probability measure on $\mathbb{R}$ such that $\int_{\mathbb{R}} x\,\mu(dx) = 0$. Then for any $\lambda \geq 0$, define

$$-\rho(\lambda) = \inf\left\{y \in \mathbb{R} : \int_{\mathbb{R}} \mathbf{1}_{\{x \leq y\} \cup \{x \geq \lambda\}}\, x\, \mu(dx) \leq 0\right\}.$$

Let $R > 0$ be a random variable independent of the Brownian motion. The distribution function of $R$ is $$\mathbb{P}(R \leq x) = \int_{\mathbb{R}} \mathbf{1}_{\{y \leq x\}}\left(1 + \frac{y}{\rho(y)}\right)\mu(dy).$$

If we set the stopping time $T = \inf\{t \geq 0 : B_t \notin (-\rho(R), R)\}$, then the stopped process $B_T$ also has law $\mu$.

Now, I know how to deal with this kind of exit time in the classical setting, when for example $T = \inf\{t \geq 0 : B_t \notin (-a, b)\}$. What I've tried so far is to write the following:

$$\mathbb{P}(B_T \geq z) = \int_z^\infty \mathbb{P}(B_T \geq z \mid R = x)\,\mathbb{P}(R \in dx) = \int_z^\infty \mathbb{P}(B_T \geq z \mid R = x)\left(1 + \frac{x}{\rho(x)}\right)\mu(dx).$$

I'm stuck on how to find $\mathbb{P}(B_T \geq z \mid R = x)$, since at the end, after computing the integral, I should arrive at something like $\int_z^\infty \mu(dx)$.
Any help would be appreciated. Thank you!
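For the classical case mentioned in the question, $T = \inf\{t \geq 0 : B_t \notin (-a,b)\}$, the exit probability is $\mathbb{P}(B_T = b) = a/(a+b)$, which is the ingredient needed for the conditional probability. A quick sanity check via the equivalent gambler's-ruin random walk:

```python
import random

# Gambler's-ruin check of the classical identity: for
# T = inf{t : B_t not in (-a, b)} one has P(B_T = b) = a / (a + b).
# A simple +-1 random walk started at 0 has the same exit
# probabilities from an integer interval as Brownian motion.
random.seed(0)
a, b, trials = 1, 2, 20000
hits_b = 0
for _ in range(trials):
    s = 0
    while -a < s < b:
        s += random.choice((-1, 1))
    hits_b += (s == b)
print(hits_b / trials)   # close to a/(a+b) = 1/3
```

Plugging $\mathbb{P}(B_T \geq z \mid R = x) = \mathbf{1}_{\{x \geq z\}}\,\rho(x)/(\rho(x)+x)$ into the integral is then the natural next step.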
## unity – What is the standard practice for animating motion — move character or not move character?
I’ve downloaded a bunch of (free) 3d warriors with animations. I’ve noticed for about 25% of them, the ‘run’ animation physically moves the character forward in the z direction. For the other 75%, the animation just loops with the characters feet moving etc., but does so in place, without changing the character’s physical location.
I could fix this by:
1.) Manually updating the transform in code for this 75%, to physically move the character
2.) Alter the animation by re-recording it with a positive z value at the end (when I did this it caused the character to shift really far away from the rest of the units, probably something to do with local space vs world space that I haven't figured out yet).
But before I go too far down this rabbit hole, I wonder if there is any kind of standard? In the general case, are ‘run’ / ‘walk’ animations supposed to move the character themselves, or is it up to the coder to manually update the transform while the legs move and arms swing in place? Is one approach objectively better than the other, or maybe it depends on the use case? If so, what are the drawbacks of each? I know nothing about animation, so I don’t want to break convention (if there is one).
## How to Make GameObject move in Circular Motion
I’ve been trying to work an enemy that moves in a circular motion for my RPG game, but for some reason, whenever I press play the GameObject instantly goes hundreds of units in the X and Y coordinates. It also move back to -1 on the Z axis. Here’s the script to my enemy’s movement:
``````
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class EnemyScript : MonoBehaviour
{
    private Rigidbody2D rb;

    [SerializeField]
    float rotationRadius = 2f, angularSpeed = 2f;
    float posX, posY, angle = 0f;

    // Start is called before the first frame update
    void Start()
    {
        // Gets the Rigidbody2D component
        rb = GetComponent<Rigidbody2D>();
    }

    // Update is called once per frame
    void Update()
    {
        Movement_1();
    }

    void Movement_1()
    {
        posX = rb.position.x + Mathf.Cos(angle) + rotationRadius;
        posY = rb.position.y + Mathf.Sin(angle) + rotationRadius;
        transform.position = new Vector2(posX, posY);

        angle = angle + Time.deltaTime * angularSpeed;
        if (angle >= 360f)
        {
            angle = 0f;
        }
    }
}
``````
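The runaway motion happens because `Movement_1` adds the circle offset to the rigidbody's *current* position every frame, so the displacement compounds instead of tracing a circle. Two smaller issues: the radius should multiply the cos/sin terms rather than be added, and `Mathf.Cos`/`Mathf.Sin` take radians, so wrapping at 360 mixes units. A common fix is to capture a fixed center once and set position = center + radius · (cos θ, sin θ) each frame. A minimal Python sketch of that update rule (names illustrative, not Unity API):

```python
import math

# Place the object on a circle around a fixed center each frame:
#   position = center + radius * (cos(angle), sin(angle))
def circular_position(center, radius, angle):
    return (center[0] + radius * math.cos(angle),
            center[1] + radius * math.sin(angle))

center, radius, angular_speed = (3.0, 4.0), 2.0, 2.0
angle, dt = 0.0, 0.02
for _ in range(100):                       # simulate 100 frames
    pos = circular_position(center, radius, angle)
    angle += angular_speed * dt            # like Time.deltaTime * angularSpeed
    if angle >= 2 * math.pi:               # wrap in radians, not degrees
        angle -= 2 * math.pi

dist = math.dist(pos, center)
print(round(dist, 6))                      # stays exactly at the radius
```

In Unity terms this would be something like `transform.position = center + new Vector3(Mathf.Cos(angle), Mathf.Sin(angle), 0f) * rotationRadius;`, with `center` captured once in `Start()`.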
## stochastic processes – Is this integral with respect a Brownian motion a Brownian motion?
Is there any theorem to see that $M = (M_t)_{t \geq 0}$ is a Brownian motion? Or should I check the sole definition? $M$ is defined by

$$M_t = \int_0^{e^t - 1} \frac{1}{\sqrt{1+s}}\, dB_s$$

where $B = (B_t)_{t \geq 0}$ is a BM.

My attempt is to apply Itô's formula to $\varphi(t,x) = \frac{x}{\sqrt{1+t}}$ to get that

$$\frac{B_t}{\sqrt{1+t}} = \int_0^t \frac{1}{\sqrt{1+s}}\, dB_s - \frac{1}{2}\int_0^t B_s (1+s)^{-3/2}\, ds.$$

Defining

$$N_t = \int_0^t \frac{1}{\sqrt{1+s}}\, dB_s,$$

we see that

$$M_t = N_{e^t - 1} = e^{-t/2} B_{e^t - 1} + \frac{1}{2}\int_0^{e^t - 1} B_s (1+s)^{-3/2}\, ds.$$

Doing this, I'm struggling to see that $M_t - M_s \sim M_{t-s}$. Am I on the right track? Is there another way? Maybe Lévy's theorem? Thanks.
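Lévy's theorem is indeed the quickest route: $N_t$ has quadratic variation $\langle N\rangle_t = \int_0^t \frac{ds}{1+s} = \ln(1+t)$, so the deterministic time change $t \mapsto e^t - 1$ gives $\langle M\rangle_t = \ln(e^t) = t$; a continuous local martingale with $\langle M\rangle_t = t$ is a Brownian motion. A seeded Monte Carlo sketch checking $\operatorname{Var}(M_1) \approx 1$ (the discretization and parameters are illustrative):

```python
import numpy as np

# Monte Carlo check that Var(M_1) is close to 1, consistent with <M>_t = t.
rng = np.random.default_rng(1)
t, n, paths = 1.0, 2000, 4000
s = np.linspace(0.0, np.exp(t) - 1.0, n + 1)    # time grid on [0, e^t - 1]
ds = np.diff(s)
dB = rng.standard_normal((paths, n)) * np.sqrt(ds)  # Brownian increments
M1 = ((1.0 + s[:-1]) ** -0.5 * dB).sum(axis=1)  # Ito sum for M_t at t = 1
print(round(float(M1.var()), 3))                # close to t = 1
```

The same computation at other values of $t$ reproduces $\operatorname{Var}(M_t) = t$, which is what the quadratic-variation argument predicts.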
Art in Motion is a boutique creative production company founded in 2009, specializing in the design and development of audiovisual projects with an emphasis on stereoscopic (3DS) techniques.
We develop and produce digital videos, stereoscopic graphics and high-impact performances for projects and online campaigns, television and cultural events.
Art in Motion designs and develops high-impact videos for:
– Corporate events
– Product and service releases
– Shows and concerts
– …
## Make a Motion Video Animation From a Still Photo for \$40
#### Make a Motion Video Animation From a Still Photo
Nowadays everyone adds a cool video effect to the STILL IMAGE. Such effects are called Plotagraph.
I will convert your still image into a cool video. Turn any static photo into an eye-catching dynamic masterpiece, adding motion and giving any image a “Live Photo” feel with moving water, fire, snow, cloud movement, and more.
Tell your story in a fun and unique way with animated images that can now be shared on most of your favorite social media platforms as a looping video.
Why choose me?
✔️There will be a cool effect and a great High-Quality photo
✔️Fast Delivery
✔️1 Images + Music 10s duration
– To avoid any inconvenience please send me your photo before ordering
- This is not for creating animation from scratch; I add motion to the existing elements, with some additions if needed
– Lots of images? Contact me for a Great Deal
Thanks
.
## I will send 1000 projects after effects title motion animated for \$3
#### I will send 1000 projects after effects title motion animated
Animated After Effects Templates
______________________________
______________________________
Simply amazing design, great quality images in Format: After Effects.
Main features:
– More than 1000 exclusive titles and lower thirds
– After Effects CS5 and above
– More than 120 shape elements
– Full HD and 4k resolution
– 100% after effects
– No plug-ins required
– Easy to customize
– Works for all language of after effects
– Video tutorial included
– Fast Rendering
------------------------------------------------------------------------
The package includes:
– TypeMax – library
– TypeMax – project
– Video tutorials
– Helpfile
– PDF Tutorial
Any doubt we are at your disposal.
We guarantee the operation of all items in the collection.
Created by: Baixxar
.
## python – Active Brownian Motion
I am attempting to write a Python code to simulate many particles in a confined box. These particles behave in such a way that they move in the box in straight lines with a slight angular noise (small changes in the direction of the particle path). They should interact by acknowledging the other particle and ‘shuffle/squeeze’ past each other and continue on their intended path, much like humans on a busy street. Eventually, the particles should cluster together when the density of particles (or packing fraction) reaches a certain value, but I haven’t got to this stage yet.
The code currently has particle interactions and I have attempted the angular noise but without success so far.
However, I have a feeling there are parts of the code that are inefficient or which could be either sped up or written more conveniently.
If anyone has any improvements for the code speed or ideas which may help with the interactions and/or angular noise that would be much appreciated. I will also leave an example of an animation which is my aim: https://warwick.ac.uk/fac/sci/physics/staff/research/cwhitfield/abpsimulations
The above link shows the animation I am looking for, although I don’t need the sliders, just the box, and moving particles. The whole code is shown below:
``````
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation


def set_initial_coordinates():
    x_co = [np.random.uniform(0, 2) for i in range(n_particles)]
    y_co = [np.random.uniform(0, 2) for i in range(n_particles)]
    return x_co, y_co


def set_initial_velocities():
    x_vel = np.array([np.random.uniform(-1, 1) for i in range(n_particles)])
    y_vel = np.array([np.random.uniform(-1, 1) for i in range(n_particles)])
    return x_vel, y_vel


def init():
    ax.set_xlim(-0.05, 2.05)
    ax.set_ylim(-0.07, 2.07)
    return ln,


def update(dt):
    xdata = initialx + vx * dt
    ydata = initialy + vy * dt
    fx = np.abs((xdata + 2) % 4 - 2)
    fy = np.abs((ydata + 2) % 4 - 2)
    for i in range(n_particles):
        for j in range(n_particles):
            if i == j:
                continue
            dx = fx[j] - fx[i]  # distance in x direction
            dy = fy[j] - fy[i]  # distance in y direction
            dr = np.sqrt((dx ** 2) + (dy ** 2))  # distance between particles
            if dr <= r:
                force = k * ((2 * r) - dr)  # size of the force if distance is less than or equal to radius
                # Imagine a unit vector going from i to j
                x_comp = dx / dr  # x component of force
                y_comp = dy / dr  # y component of force
                fx[i] += -x_comp * force  # x force
                fy[i] += -y_comp * force  # y force
    ln.set_data(fx, fy)
    return ln,


# theta = [np.random.uniform(0, 2) for i in range(n_particles)]
n_particles = 10
initialx, initialy = set_initial_coordinates()
vx, vy = set_initial_velocities()
fig, ax = plt.subplots()
x_co, y_co = [], []
ln, = plt.plot([], [], 'bo', markersize=15)  # radius 0.05
plt.xlim(0, 2)
plt.ylim(0, 2)
k = 1
r = 0.1
t = np.linspace(0, 10, 1000)
ani = FuncAnimation(fig, update, t, init_func=init, blit=True, repeat=False)
plt.show()
``````
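The "slight angular noise" described in the question is usually implemented as rotational diffusion: each particle carries a heading angle that receives a small Gaussian kick every step while the particle moves at constant speed along its current heading. A minimal sketch of that update rule (the speed `v0` and diffusion constant `D_r` are assumed values, not taken from the code above):

```python
import numpy as np

rng = np.random.default_rng(0)

n_particles = 10
v0 = 1.0    # self-propulsion speed (assumed)
D_r = 0.5   # rotational diffusion constant (assumed)
dt = 0.01

theta = rng.uniform(0, 2 * np.pi, n_particles)  # heading of each particle
x = rng.uniform(0, 2, n_particles)
y = rng.uniform(0, 2, n_particles)

for step in range(1000):
    # straight-line motion along the current heading
    x += v0 * np.cos(theta) * dt
    y += v0 * np.sin(theta) * dt
    # small Gaussian kicks to the heading: rotational diffusion
    theta += np.sqrt(2 * D_r * dt) * rng.standard_normal(n_particles)
```

In the posted code this would replace the fixed `vx`, `vy` velocities, with `theta` updated once per frame inside `update`.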
## Instagram stories and local videos play in slow motion but sound is fine
A friend of mine started facing this problem yesterday night: when she browses through her Instagram stories, the video plays at a slower rate than normal, however the sound is totally fine. She went ahead and checked the gallery as well – same problem there. In TikTok though, there are no issues at all.
As she did not provide any more information (she is not tech savvy at all) here is the info that I know about her phone: it’s a Huawei Y5 2018, with Android 8.
I searched online about this, but I didn’t come up with anything related to Android. Any help is appreciated.
## Motion camera app using video
I just started using Alfred Camera, which is a security camera app. I noticed on my Samsung Note 9 that when I switch from photo to video, the quality improves and the placement changes, which tells me it is using another lens, since this phone has several lenses. But I think the Alfred Camera app is only using the photo lens, as the quality is bad and the placement seems to come from that lens.
Do apps like Alfred Camera only access the photo lens and not the video lens? Are there any apps that can access the video lens so the quality is better? I need a motion sensor app that can tell me when there is motion. It does not have to do live view, as it can just record motion and email or text me. Or do all these apps not access the better-quality lenses?
More on the Z lineshape at LHC (December 19, 2008)
Posted by dorigo in personal, physics, science.
Yesterday I posted a nice-looking graph without going into much detail on how I determined it. Let me fill that gap here today.
A short introduction
Z bosons will be produced copiously at the LHC in proton-proton collisions. What happens is that a quark from one proton hits an antiquark of the same flavour in the other proton, and the pair annihilates, producing the Z. This is a weak interaction: a relatively rare process, because weak interactions are much less frequent than strong interactions. Quarks carry colour charge as well as weak hypercharge, and most of the time when they hit each other what "reacts" is their colour, not their hypercharge. Similarly, when you meet John at the coffee machine you discuss football more often than Chinese checkers: in particle physics terms, that is because your football coupling with John is stronger than your Chinese-checkers coupling.
Result now, explanation later (December 18, 2008)
Posted by dorigo in personal, physics, science.
Tonight I feel accomplished, since I have completed a crucial update of the cornerstone of the algorithm which provides the calibration of the CMS momentum scale. I have no time to discuss the details tonight, but I will share with you the final result of a complicated multi-part calculation (at least, for my mediocre standards): the probability distribution function of measuring the Z boson mass at a certain value $M$, using the quadrimomenta of two muon tracks which correspond to an estimated mass resolution $\sigma_M$, when the rapidity of the Z boson is $Y_Z$.
The above might -and should, if you are not a HEP physicist- sound rather meaningless, but the family of two-dimensional functions $P(M,\sigma_M)_Y$ is needed for a precise calibration of the CMS tracker. They can be derived by convoluting the production cross-section of Z bosons at a given rapidity $Y$ with the proton’s parton distribution functions using a factorization integral, and then convoluting the resulting functions with a smearing Gaussian distribution of width $\sigma_M$.
Still confused ? No worry. Today I will only show one sample result – the probability distribution as a function of $M$ and $\sigma_M$ for Z bosons produced at a rapidity $2.8< |Y| <2.9$, and tomorrow I will explain in simple terms how I obtained that curve and the other 39 I have extracted today.
In the three-dimensional graph above, one axis has the reconstructed mass of muon pairs $M$ (from 71 to 111 GeV), the other has the expected mass resolution $\sigma_M$ (from 0 to 10 GeV). The height of the function is the probability of observing the mass value $M$, if the expected resolution is $\sigma_M$. On top of the graph one also sees in colors the curves of equal probability displayed on a projected plane. It will not escape to the keen eye that the function is asymmetric in mass around its peak: that is entirely the effect of the parton distribution functions…
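The Gaussian-smearing step of that convolution chain can be sketched numerically. The toy below smears a plain relativistic Breit-Wigner lineshape (standing in for the full PDF-convoluted cross-section, which is not reproduced here) with a resolution of width $\sigma_M$:

```python
import numpy as np

# Relativistic Breit-Wigner as a stand-in for the Z lineshape (illustration only)
M_Z, Gamma_Z = 91.1876, 2.4952  # GeV

def breit_wigner(m):
    return m**2 * Gamma_Z**2 / ((m**2 - M_Z**2)**2 + m**2 * Gamma_Z**2)

m = np.linspace(71, 111, 801)  # reconstructed-mass grid, GeV
sigma_M = 2.0                  # assumed mass resolution, GeV

# smear the lineshape with a Gaussian of width sigma_M (discrete convolution)
dm = m[1] - m[0]
kernel_m = np.arange(-10, 10 + dm, dm)
kernel = np.exp(-0.5 * (kernel_m / sigma_M) ** 2)
kernel /= kernel.sum()
smeared = np.convolve(breit_wigner(m), kernel, mode='same')
smeared /= smeared.sum() * dm  # normalise to a probability density in M
```

Replacing the Breit-Wigner by the PDF-weighted cross-section at each rapidity, and repeating over a grid of $\sigma_M$ values, gives the family $P(M,\sigma_M)_Y$ described above; the asymmetry around the peak comes from the PDF factor, which this toy omits.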
Hectic week (December 4, 2008)
Posted by dorigo in personal, physics, science.
The regulars here will have already noticed by now that my posting rate has fallen this week. I have been busy with three different physics analyses, trying to make some progress in each.
The first project is the calibration of the momentum scale in CMS. I have discussed the issue elsewhere a couple of times; I am slowly converging to an understanding of how to treat the Z boson lineshape -which receives contributions from a number of different sources and effects: parton distribution functions in the projectiles, electromagnetic and weak radiation effects, interaction of the final state products of Z decay with the material of the tracker. All this must be dealt with in a coherent fashion to extract the most information possible from the Z decays we will reconstruct in CMS. We have a small but focused group working at the momentum scale calibration, including worthy physicists from Torino University, plus Marco and me. This week, I have tried to determine the effect of parton distribution functions alone, to insert it in our algorithm, but something still escapes me, and I want to do things as well as I can -which sometimes takes only a little extra effort beyond a mediocre result, but in this case seems to require a lot more care.
The second is the search for Higgs boson decays in the final state arising when H decays to two Z bosons, and one of the Z decays to a lepton pair, while the other decays to a pair of jets. Usually this final state, which is very hard to exploit at low Higgs masses due to the large backgrounds, is used for high-mass searches only (above 200 GeV). We want to extend it to lower masses, where the Higgs is more likely to be, using the $Z \to b \bar b$ decay, which Mia and I have a lot of experience in detecting in hadronic environments. Mia will present some results of this study tomorrow at CERN, so we have been working at this heavily this week.
The third topic is the evaluation of the chances of CMS to detect a similar signature of multi-muon events that CDF has seen in its data. The CDF signal is probably just a not well understood background, but it makes sense to size up the capability of CMS to detect a similar signature with early data. This requires understanding muon sources without using real data, and it is a bit far-fetched, but it is perfectly sound as a masters’ thesis topic, one on which Franco and I in fact have a student working. I have not worked much on this topic this week, but it has still absorbed a little of my CPU time.
I have a thick agenda of pending things to do, which has grown longer in the last few days. One thing is to post more commentaries on the multi-muon analysis by CDF here. Another is to progress with a document I am writing. A third is to review a 40-pages long CDF paper draft for the Spokespersons Reading Group, to which I proudly belong. A fourth is to organize the upcoming meeting of the CMS-Padova software-analysis group, which will convene in ten days. A fifth is to prepare my next trip to CERN, which will be from next Monday to next Friday. I do hope that I will be able to post more in the next few days… if I survive.
# Gaps in $$\sqrt{n} \bmod 1$$ and Ergodic Theory
Title: Gaps in $$\sqrt{n} \bmod 1$$ and Ergodic Theory
Author: McMullen, Curtis T.; Elkies, Noam David
Note: Order does not necessarily reflect citation order of authors.
Citation: Elkies, Noam D., and Curtis T. McMullen. 2004. Gaps in $$\sqrt{n} \bmod 1$$ and ergodic theory. Duke Mathematical Journal 123(1): 95-139. Revised 2005.
Full Text & Related Files: McMullen_GapsErgodoticTheory.pdf (426.9Kb; PDF)
Abstract: Cut the unit circle $$S^1 = \mathbb{R}/\mathbb{Z}$$ at the points $$\{\sqrt{1}\}, \{\sqrt{2}\}, \ldots, \{\sqrt{N}\}$$, where $$\{x\} = x \bmod 1$$, and let $$J_1, \ldots, J_N$$ denote the complementary intervals, or gaps, that remain. We show that, in contrast to the case of random points (whose gaps are exponentially distributed), the lengths $$|J_i|/N$$ are governed by an explicit piecewise real-analytic distribution $$F(t)\,dt$$ with phase transitions at $$t=\frac{1}{2}$$ and $$t=2$$. The gap distribution is related to the probability $$p(t)$$ that a random unimodular lattice translate $$\Lambda \subset \mathbb{R}^2$$ meets a fixed triangle $$S_t$$ of area $$t$$; in fact $$p''(t) = -F(t)$$. The proof uses ergodic theory on the universal elliptic curve $$E = (SL_2(\mathbb{R}) \ltimes \mathbb{R}^2) / (SL_2(\mathbb{Z}) \ltimes \mathbb{Z}^2)$$.
Published Version: doi:10.1215/S0012-7094-04-12314-0
Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:3637161
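The contrast with exponentially distributed random gaps is easy to probe numerically; a quick empirical sketch of the normalised gap lengths $$N\,|J_i|$$ for the first $$N$$ fractional parts of $$\sqrt{n}$$:

```python
import numpy as np

N = 100000
pts = np.sort(np.sqrt(np.arange(1, N + 1)) % 1.0)  # fractional parts {sqrt(n)}
# gap lengths on the circle R/Z, including the wrap-around gap
gaps = np.diff(np.concatenate([pts, [pts[0] + 1.0]]))
scaled = N * gaps  # normalised gap lengths N * |J_i|

# the mean normalised gap is 1 by construction; the *distribution* of `scaled`
# is what the paper's F(t) dt describes, with phase transitions at 1/2 and 2
frac_small = (scaled < 0.5).mean()
```

A histogram of `scaled` shows the piecewise real-analytic density $$F(t)$$ rather than the exponential density one would get from uniformly random points.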
# How should US SSNs be anonymized?
I have just gotten access to a US government dataset. It is not open, but could eventually be made open. The dataset includes hashed US SSNs. It looks like they used some general hashing function that is causing collisions. The collisions are already a problem in the relatively small version of the dataset I have now. Once the full version is pulled, the collisions will be even worse. How should one anonymize US SSNs to avoid collisions while still protecting the private information?
Since an SSN has only 9 digits, changing the hash function will not suffice, because an attacker can simply apply the function to all 10^9 SSNs and match the results against the database.
One option is to use a permutation cipher, destroying the private key afterward. Make sure that the cipher is resilient to known-plaintext attacks (since attackers are likely to know the content of the database that pertains to them, and possibly to a handful of others).
Another option is to generate a private permutation of the 10^9-element set using "true" randomness (from /dev/random to specialized hardware). There is extensive literature on how to generate random permutations. Of course, it is important to apply the random permutation to the entire set of 10^9 elements, not just to the subset you have!
The best way is to combine the two approaches.
No matter what you do, do not forget to buy the liability insurance.
To anonymize the data for sharing AND to keep an id field for joining, you need to make a list of all unique SSNs, generate a random string for each, and then re-write the random string in the place of the SSN. This way, multiple tables/files are still joinable but the SSN is no longer part of the data.
Say I have 2 SSNs in one file:
1112223333, A, 2
2223334444, B, 3
And another 2 in another file (where one is the same and can join the files)
2223334444, B, 3
9998887777, C, 4
A small program would then read these files and create a list (set) of unique SSNs.
1112223333
2223334444
9998887777
Then the code would generate a random string for each SSN that is independent of the SSN itself (not a hash function). The random strings are not constrained to the 10^9 possible SSN values. There is a constraint that no two random strings be the same, but that is not hard to satisfy: either generate a new random string whenever one already exists, or use strings long enough that duplicates are practically impossible.
1112223333,M2dMCUl80c6WNHYbBKvJ
2223334444,7kDZBCWAmuS9UpyKT9JV
9998887777,zIHKMe7DYfrRNDb0FirU
Or with a dictionary form:
my_dict = {1112223333: 'M2dMCUl80c6WNHYbBKvJ',
           2223334444: '7kDZBCWAmuS9UpyKT9JV',
           9998887777: 'zIHKMe7DYfrRNDb0FirU'}
Then re-write the files/tables with the value from the dictionary instead of the key. With Python:
# my_dict = create_random_dict(ssn_list)  # no such function exists in this example
rows = []  # sample input data for first file
rows.append([1112223333, 'A', 2])
rows.append([2223334444, 'B', 3])
for row in rows:
    print(','.join([str(my_dict.get(row[0]))] + [str(x) for x in row[1:]]))
Gives as an output for the first file:
M2dMCUl80c6WNHYbBKvJ,A,2
7kDZBCWAmuS9UpyKT9JV,B,3
And as an output for the second file:
7kDZBCWAmuS9UpyKT9JV, B, 3
zIHKMe7DYfrRNDb0FirU, C, 4
If my_dict is only in memory, then there is no chance to map the anonymized string back to the SSN, but you can still link different records by the anonymized string.
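A minimal sketch of this random-mapping approach, using Python's `secrets` module for the token generation (the function name and token length are illustrative choices, not requirements):

```python
import secrets

def build_anonymization_map(ssns):
    """Map each unique SSN to a fresh random token, independent of the SSN."""
    mapping = {}
    used = set()
    for ssn in set(ssns):
        token = secrets.token_urlsafe(15)  # 120 random bits per token
        while token in used:               # belt-and-braces duplicate check
            token = secrets.token_urlsafe(15)
        used.add(token)
        mapping[ssn] = token
    return mapping

# the same SSN always gets the same token, so files remain joinable
mapping = build_anonymization_map(['111223333', '222334444', '111223333'])
```

Because each token is drawn at random rather than computed from the SSN, there is nothing for an attacker to brute-force; the mapping dictionary itself is the only secret, and it can be discarded after the rewrite.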
Since the SSNs you received are already hashed (poorly), why not just replace the existing hashes with random unique strings? That way there's absolutely no way to attack the encryption, since the original data is completely destroyed -- and it solves your collision problem as well (assuming there aren't any actual duplicates in the data). This also assumes that the only reason you're keeping around the SSN data at all is to use it as a unique identifier. If you don't need it for that, you should ask yourself if you need to distribute it at all, or can simply delete that portion of the data set.
• There are "duplicates" in the data, such that each person can have more than one entry, which is why each person needs a unique identifier. That identifier in the unanonymized dataset is the SSN. – StrongBad Jul 20 '15 at 2:48
• So it is just a unique identifier -- so just generate a random string (UUID, perhaps?) for each person, assign it to the relevant rows, and drop the SSNs altogether. Safe and secure. – lazyreader Jul 20 '15 at 12:38
How do you graph f(x)=x^4-3x^2+2x?
Mar 29, 2015
I assume that you want to graph this without technology.
The graph is at the end of the solution. (I believe it's more instructive to not have it to start.)
$f \left(x\right) = {x}^{4} - 3 {x}^{2} + 2 x$
$f$ is a polynomial, so, of course its domain is the set of all real numbers.
$y$-intercept $f \left(0\right) = 0$
The $y$-intercept is $0$. (Or $\left(0 , 0\right)$ if you prefer.)
$x$- intercept(s) :
Solve $f \left(x\right) = {x}^{4} - 3 {x}^{2} + 2 x = 0$
Factor out the $x$: to get $x \left({x}^{3} - 3 x + 2\right) = 0$
By inspection or by the Rational Zero Theorem (or "Test"), $1$ is a zero of $\left({x}^{3} - 3 x + 2\right)$.
By the Factor Theorem, $\left(x - 1\right)$ is a factor.
Use division (in some form) to get the quadratic factor:
$\left({x}^{3} - 3 x + 2\right) = \left(x - 1\right) \left({x}^{2} + x - 2\right)$
The quadratic is straightforward to factor: $\left({x}^{2} + x - 2\right) = \left(x + 2\right) \left(x - 1\right)$
So we see that $f \left(x\right) = x \left(x + 2\right) {\left(x - 1\right)}^{2}$; the zeros (the $x$-intercepts) are: $- 2 , 0 , 1$ ($1$ is a multiple zero of even multiplicity.)
(Write the intercepts: $\left(- 2 , 0\right) , \left(0 , 0\right)$, and $\left(1 , 0\right)$ if you prefer.)
Analysis of $f '$
Although we can work with $f \left(x\right)$ in any form, I prefer the standard polynomial form over the 3 factor form.
$f \left(x\right) = {x}^{4} - 3 {x}^{2} + 2 x$
So $f ' \left(x\right) = 4 {x}^{3} - 6 x + 2$, which exists everywhere, so we only need to solve: $f ' \left(x\right) = 4 {x}^{3} - 6 x + 2 = 0$
By inspection or by the Rational Zero Theorem or by observing that multiple zeros of a polynomial are also zeros of the derivative, we see that $1$ is again a zero, so $\left(x - 1\right)$ is a factor and:
$f ' \left(x\right) = 4 {x}^{3} - 6 x + 2 = 2 \left(2 {x}^{3} - 3 x + 1\right) = 2 \left(x - 1\right) \left(2 {x}^{2} + 2 x - 1\right)$
The quadratic factor has irrational zeros: $\frac{- 1 \pm \sqrt{3}}{2}$.
Observe that $1 < \sqrt{3} < 2$, so one of the zeros is negative and the other positive. And $\frac{- 1 + \sqrt{3}}{2} < \frac{- 1 + 2}{2} = \frac{1}{2} < 1$.
The critical numbers for $f$ are, left to right: $\frac{- 1 - \sqrt{3}}{2} , \frac{- 1 + \sqrt{3}}{2} , \text{ and } 1$.
For ease of reference, let's call the negative critical number ${z}_{1} = \frac{- 1 - \sqrt{3}}{2}$ and the positive one ${z}_{2} = \frac{- 1 + \sqrt{3}}{2}$
We need to investigate the sign of $f '$ on each of the intervals:
$\left(- \infty , {z}_{1}\right)$, $\left({z}_{1} , {z}_{2}\right)$, $\left({z}_{2} , 1\right)$, $\left(1 , \infty\right)$
If you like test numbers, I'd suggest: $- 10 , 0 , \frac{1}{2} , 10$.
It may not be clear that $\frac{1}{2}$ is in $\left({z}_{2} , 1\right)$ until it is observed that:
$\sqrt{3} < 2$ $\implies$ $\frac{- 1 + \sqrt{3}}{2} < \frac{- 1 + 2}{2} = \frac{1}{2}$
If you prefer, use a factor table.
Whichever method you use, you'll find that:
Increasing/Decreasing
$f ' \left(x\right) < 0$ on $\left(- \infty , {z}_{1}\right)$, so $f$ is decreasing on $\left(- \infty , {z}_{1}\right)$
$f ' \left(x\right) > 0$ on $\left({z}_{1} , {z}_{2}\right)$, so $f$ is increasing on $\left({z}_{1} , {z}_{2}\right)$
$f ' \left(x\right) < 0$ on $\left({z}_{2} , 1\right)$, so $f$ is decreasing on $\left({z}_{2} , 1\right)$
$f ' \left(x\right) > 0$ on $\left(1 , \infty\right)$, so $f$ is increasing on $\left(1 , \infty\right)$
Local extrema:
$f \left({z}_{1}\right) = f \left(\frac{- 1 - \sqrt{3}}{2}\right)$ is a local minimum (also global)
$f \left({z}_{2}\right) = f \left(\frac{- 1 + \sqrt{3}}{2}\right)$ is a local maximum
$f \left(1\right) = 0$ is a local minimum.
Analysis of $f ' '$
$f ' ' \left(x\right) = 12 {x}^{2} - 6 = 6 \left(2 {x}^{2} - 1\right)$
Whose zeros are : $\pm \frac{1}{\sqrt{2}} = \pm \frac{\sqrt{2}}{2}$
Investigating the sign of $f ' '$ on the appropriate intervals leads us to:
Concavity:
$f ' ' \left(x\right) > 0$ on $\left(- \infty , - \frac{\sqrt{2}}{2}\right)$ So $f$ is concave up.
$f ' ' \left(x\right) < 0$ on $\left(- \frac{\sqrt{2}}{2} , \frac{\sqrt{2}}{2}\right)$ So $f$ is concave down.
$f ' ' \left(x\right) > 0$ on $\left(\frac{\sqrt{2}}{2} , \infty\right)$ So $f$ is concave up.
There are two inflection points. They are:
$\left(- \frac{\sqrt{2}}{2} , f \left(- \frac{\sqrt{2}}{2}\right)\right)$ which is $\left(- \frac{\sqrt{2}}{2} \text{,} - \frac{5}{4} - \sqrt{2}\right)$
and
$\left(\frac{\sqrt{2}}{2} , f \left(\frac{\sqrt{2}}{2}\right)\right)$ which is $\left(\frac{\sqrt{2}}{2} \text{,} - \frac{5}{4} + \sqrt{2}\right)$
Now sketch the graph. (It may take a couple of rough sketches first.)
graph{y=x^4-3x^2+2x [-10, 10, -5, 5]}
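The zeros, critical numbers, and inflection points found above can be double-checked numerically, for instance with NumPy's `poly1d` (a quick verification sketch, not part of the hand analysis):

```python
import numpy as np

# f(x) = x^4 - 3x^2 + 2x and its first two derivatives
f = np.poly1d([1, 0, -3, 2, 0])
fp = f.deriv()    # 4x^3 - 6x + 2
fpp = fp.deriv()  # 12x^2 - 6

zeros = np.sort(f.roots.real)   # -2, 0, 1, 1 (double zero at 1)
crit = np.sort(fp.roots.real)   # (-1 - sqrt(3))/2, (-1 + sqrt(3))/2, 1
infl = np.sort(fpp.roots.real)  # -sqrt(2)/2, sqrt(2)/2
```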
# Mechanism of size reduction
The mechanism of size reduction depends upon the nature of the material, and each material requires different treatment. Fracture generally occurs along lines of weakness. During size reduction, fresh surfaces are created or existing cracks and fissures are opened up; the former requires more energy. After processing, there may be a tendency for particles to form agglomerates. Size reduction is an energy-inefficient process because only a small fraction of the supplied energy is actually used to subdivide the particles. Most of the energy is spent overcoming the friction and inertia of machine parts, overcoming friction between particles, and deforming the particles without breaking them. This energy is released as heat.
## Compression
In this mechanism, the material is crushed by the application of pressure. Compressive forces are used for the coarse crushing of hard materials. Coarse crushing implies reduction to a size of about 3 mm.
## Impact
When a substance is more or less immobile and is struck by a fast-moving object, or when a moving particle collides with a stationary surface, impact occurs. In both cases the material is crushed into smaller pieces. Usually both happen together: the substance is hit by a moving hammer, and the particles created are then hurled against the machine’s shell. Impact forces are general-purpose forces that can be found in the coarse, medium and fine grinding of a wide range of materials.
## Attrition
Attrition applies pressure to the material in the same way compression does, but the surfaces move relative to each other, resulting in shear forces that break the particles. When the size of the products can reach the micrometre range, shear or attrition forces are used in fine pulverization. Ultra-fine grinding is a phrase that is sometimes used to describe procedures that produce particles in the sub-micron range.
## Cutting
Cutting lowers the size of solid materials by separating them into smaller particles via mechanical action (sharp blade/s). Cutting is a technique for breaking down big chunks of material into smaller bits with a defined shape that can be processed further, such as powders and granules.
Make sure to check our article: Applications of size reduction
# Trying To Plot Points Around A Disk From Direction Vector.
## Recommended Posts
Hi All.
I'm trying to create a particle thruster effect. My plan is to spawn a ring of particles at the starting location, derived from a 3D direction vector, where the ring of points is perpendicular to the direction vector. Once I have this ring of points I will create a vector from the centre and give it a length to form an apex towards which all points on the ring move.
Here is an image of what I'm doing.
I found some steps which is as follows,
//1. Choose any point P randomly which doesn't lie on the line through P1 and P2
//2. Calculate the vector R as the cross product between the vectors P - P1 and P2 - P1. This vector R is now perpendicular to P2 - P1. (If R is 0 then 1. wasn't satisfied)
//3. Calculate the vector S as the cross product between the vectors R and P2 - P1. This vector S is now perpendicular to both R and the P2 - P1.
//4. The unit vectors ||R|| and ||S|| are two orthonormal vectors in the plane perpendicular to P2 - P1.
I tried following the above, but when I set a negative direction such as d3dvector3(0.0, -1.0, 0.0) or (-1, 0, 0), my particles don't render. I'm wondering if I have the math wrong; here is what I'm doing.
float3 plane[3];
float3 n, p, r, s, p1p2;
// the starting point
float3 p1 = gEmitPosW.xyz;
// apex location
float3 p2 = dir * 1900.0; // add some length to the direction
//p.x = rand(); /* Create a random vector */
//p.y = rand();
//p.z = rand();
// now creating a perp vector
randompoint = normalize(Perpendicular(p1p2));
r = randompoint;
s = normalize(cross(p1p2, r));
dtheta = 36.0;
float th = 0.0;
for (theta = 0; theta < 360; theta += dtheta)
{
    th = radians(theta); // segment angle in radians
    n.x = r.x * cos(th) + s.x * sin(th);
    n.y = r.y * cos(th) + s.y * sin(th);
    n.z = r.z * cos(th) + s.z * sin(th);
    n = normalize(n);
    // set the particle up
    ////////////////////////////////////////////
    Particle p;
    p.initialPosW = p1 + n * radius;
    p.initialVelW = gEmitDirW.xyz;
    p.pDirW = normalize(p2 - p.initialPosW); // p.initialVelW;
    p.sizeW = float2(gParticleWidth, gParticleHeight);
    p.age = 0.0f; // vRandom.x;
    p.type = PT_FLARE;
    p.arrayid = 0;
    p.initialVelW.x = gFlashspeed;
    ptStream.Append(p);
}
Is there something wrong in the above code?
If I change the start and end points I can get it to render in the negative directions, but how do I do that when the direction is unknown ahead of time?
I tried doing this, but it does nothing:
// dir
if (length(p2) - length(p1) < 0)
    p1p2 = dir; // p2 - p1;
else
    p1p2 = normalize(p1 - p2);
This one has an up direction d3dvector3(0, 1, 0).
Don't worry about the bit coming out the bottom; that's just my flame-thrower particle.
Edited by ankhd
##### Share on other sites
PIX is telling me something but I don't know what it is, and Google's not saying.
PIX is telling me all my position and direction vars are equal to 1#QO.
Is this a divide by zero? Because if I set my direction vector to d3dvector3(0.001, -1.0, 0.001) it works, but if I set it to d3dvector3(0.0, -1.0, 0.0) I get 1#QO.
What should I do here?
##### Share on other sites
Why are you passing p1p2 (assuming this is the P2-P1 vector from you algorithm ?) and then calculating p1 and p2 separately? Also, isn't the "dir" variable the same as the normalized P2-P1 vector? Why are you passing it separately to your shader (or is it just a constant)? Are you sure all of these variables are according to the algorithm you described? By the looks of it, they aren't. You should only pass P1 and P2 into your shader, then calculate everything else based on those. If you just change of any of these variables (p1, p2, dir, p1p2) - even just the sign of one of their components - it affects what all of the other variables should be. For example, if you just change the sign of dir.y, then P1 and P2's y values should also be swapped with one another (assuming x and z are 0), and the sign of p1p2.y should also be reversed.
Also, you are declaring the local variable "Particle p" with the same name as the global variable "float3 p" (or are they all local variables? - if so, then your shader really makes no sense at all, because most of them are not initialized anywhere)... things can go wrong here as well.
Is that even a shader or just C++ code? :) Anyway, the problem is clear: a lot of your variables are not initialized anywhere.
Edited by tonemgub
##### Share on other sites
OK, I've cleaned the code up, but it's not working in the minus-x or minus-y directions.
It works only when the direction components are positive. The first image shows the thruster with a right direction vector3(1.0, 0.0, 0.0). I increased the radius to show the ring positions.
The second image shows the thruster with vector3(-1.0, 0.0, 0.0).
Oh, and it's strange that the shader compiled with a float3 p and a Particle p in the same function with no error.
But uninitialized vars return errors so I had none of them.
Here's the HLSL code.
float radius = 50.0;
float3 n, r, s;
float3 p1 = gEmitPosW.xyz; // passed from the app
// the app only has a direction
float3 p1p2 = gEmitDirW.xyz; // passed from the app
float3 randompoint = Perpendicular(p1p2); // this is our random point
// apex location defined by user length member
// we don't have a point2; we create it in the direction we have
float3 p2 = p1 + p1p2 * 1900.0; // add some length to the direction
float theta = 0.0;
float dtheta = 36.0; // segment size
r = normalize(cross(p1p2, randompoint));
s = normalize(cross(r, p1p2));
float th = 0.0;
for (theta = 0; theta < 360; theta += dtheta)
{
    th = radians(theta); // segment angle in radians
    n.x = s.x * cos(th) + r.x * sin(th);
    n.y = s.y * cos(th) + r.y * sin(th);
    n.z = s.z * cos(th) + r.z * sin(th);
    n = normalize(n);
    // set the particle up
    ////////////////////////////////////////////
    Particle p;
    p.initialPosW = p1 + n * radius; // places this particle on the ring
    p.initialVelW = gEmitDirW.xyz; // not used in this particle shader
    p.pDirW = normalize(p2 - p.initialPosW); // allows us to move from the outer ring to the apex like a cone
    p.sizeW = float2(gParticleWidth, gParticleHeight);
    p.age = 0.0f;
    p.type = PT_FLARE;
    p.arrayid = 0; // todo: set array index to textures
    p.initialVelW.x = gFlashspeed; // used to end the particle's life
    ptStream.Append(p);
}
// reset the time to emit
gIn[0].age = 0.0f;
##### Share on other sites
Hey Again.
I think I know what the problem is: my Perpendicular function is not returning a perpendicular vector.
So I changed this bit of the code:
// measure the projection of "direction" onto each of the axes
float id = (dot(i, direction));//i.dot (direction);
float jd = (dot(j, direction));//j.dot (direction);
float kd = (dot(k, direction));//k.dot (direction);
To this:
// measure the projection of "direction" onto each of the axes
float id = abs(dot(i, direction));//i.dot (direction);
float jd = abs(dot(j, direction));//j.dot (direction);
float kd = abs(dot(k, direction));//k.dot (direction);
Can someone verify that that's the right way to get a perpendicular vector?
I'm going to see if I can break it now.
Heres the whole Function
//---------------------------------------------------------------
//returns a vector perpendicular to the given direction
//---------------------------------------------------------------
float3 Perpendicular(float3 direction)
{
    float3 quasiPerp; // a direction which is "almost perpendicular"
    float3 result;    // the computed perpendicular to be returned
    // three mutually perpendicular basis vectors
    float3 i = float3(1, 0, 0);
    float3 j = float3(0, 1, 0);
    float3 k = float3(0, 0, 1);
    // measure the projection of "direction" onto each of the axes
    float id = abs(dot(i, direction)); // i.dot(direction);
    float jd = abs(dot(j, direction)); // j.dot(direction);
    float kd = abs(dot(k, direction)); // k.dot(direction);
    // set quasiPerp to the basis which is least parallel to "direction"
    if ((id <= jd) && (id <= kd))
    {
        quasiPerp = i; // projection onto i was the smallest
    }
    else
    {
        if ((jd <= id) && (jd <= kd))
            quasiPerp = j; // projection onto j was the smallest
        else
            quasiPerp = k; // projection onto k was the smallest
    }
    // return the cross product (direction x quasiPerp),
    // which is guaranteed to be perpendicular to both of them
    // result.cross(direction, quasiPerp);
    result = cross(direction, quasiPerp);
    return result;
} // end Perpendicular
/////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////
##### Share on other sites
Can Someone verify that thats the right way to get a perpendicula vector.
The vector perpendicular to two other vectors in 3D is given by the cross product of the two other vectors. The direction of the resulting perpendicular vector follows the "right-hand" rule.
I'm not sure what your "Perpendicular" function is trying to do. All this does is return the (absolute values of the) x, y, and z components of "direction" into id, jd and kd:
// three mutually perpendicular basis vectors
float3 i = float3(1, 0, 0);
float3 j = float3(0, 1, 0);
float3 k = float3(0, 0, 1);
// measure the projection of "direction" onto each of the axes
float id = abs(dot(i, direction));//i.dot (direction);
float jd =abs( dot(j, direction));//j.dot (direction);
float kd = abs(dot(k, direction));//k.dot (direction);
And the final return value from your Perpendicular function will be the cross product between the input "direction" (your p1p2 vector) and one of the i, j, k vectors (the one "least parallel" to "direction"/p1p2)...
Anyway, it seems all other position vectors are in world space, so maybe you need to rotate the i, j and k basis vectors with the rotation part of your world matrix as well...
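For readers following along outside HLSL, here is a rough TypeScript port of the same "least parallel axis" trick. The `Vec3` type and function names are my own, not from the original post:

```typescript
type Vec3 = [number, number, number];

function dot(a: Vec3, b: Vec3): number {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function cross(a: Vec3, b: Vec3): Vec3 {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

// Pick the basis axis least parallel to `direction`, then cross with it.
// For a non-zero input the result is non-zero and perpendicular to `direction`.
function perpendicular(direction: Vec3): Vec3 {
  const axes: Vec3[] = [[1, 0, 0], [0, 1, 0], [0, 0, 1]];
  // absolute projections of `direction` onto each axis
  const proj = axes.map((axis) => Math.abs(dot(axis, direction)));
  // index of the smallest projection = least parallel axis
  let least = 0;
  if (proj[1] < proj[least]) least = 1;
  if (proj[2] < proj[least]) least = 2;
  return cross(direction, axes[least]);
}
```

Any non-zero input yields a non-zero output, because the chosen axis is never the one `direction` points along.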
Edited by tonemgub
# Stock market
Simon decided to invest €62,000 in the stock market. Six months after he invested, on July 25, the stock market fell by 47%. Fortunately for Simon, from July 25 to October 25 his shares rose by 39%.
Is Simon now in profit or at a loss?
#### Solution:
$$k = \left(1-\dfrac{47}{100}\right)\cdot \left(1+\dfrac{39}{100}\right) = 0.7367 = 73.67 \%$$
$$m = 62000 \cdot 0.7367 \approx 45675 \ \text{Eur}$$
$$45675 < 62000$$

Simon therefore ends up at a loss of about 16,325 Eur.
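The same arithmetic, sketched in TypeScript (the helper name `applyChanges` is mine):

```typescript
// Apply a sequence of percentage changes to a starting amount.
function applyChanges(start: number, changesPercent: number[]): number {
  return changesPercent.reduce(
    (value, pct) => value * (1 + pct / 100),
    start,
  );
}

const invested = 62000;                            // Simon's investment in EUR
const after = applyChanges(invested, [-47, +39]);  // fall by 47%, then rise by 39%
console.log(after.toFixed(2));                     // ≈ 45675.40
console.log(after < invested);                     // true: Simon is at a loss
```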
# Factors of 15: Prime Factorization, Methods, and Examples
All the natural numbers that perfectly divide the number 15 leaving a whole number as the quotient and zero as the remainder are called the factors of 15.
Factors of 15 can also be the two numbers that multiply perfectly and produce the number 15.
This article illustrates all the necessary details for a complete understanding of the factors of 15 and how to find them using various methods, of which prime factorization and division are the most commonly used.
### Important Properties
Following are some essential and fundamental properties of the number 15 which must be acknowledged to help find out the factors of 15.
1. 15 is an odd number.
2. 15 is a composite number.
3. 15 is not a perfect square.
## What Are the Factors of 15?
The factors of 15 are 1, 3, 5, and 15.
As 15 is an odd composite number, it has only 4 factors which are mentioned above. When 15 is divided by any of the mentioned numbers, it is divided wholly and does not leave any remainder. So, all of these numbers are said to be the perfect divisors of the number 15.
## How To Calculate the Factors of 15?
The basic division method can be used to find the factors of 15: divide 15 by each natural number in turn; whenever the remainder is 0, the divisor is a factor of 15.
Start by dividing 15 by the smallest natural number, 1.
$\dfrac{15}{1} = 15$
The number 15 has been completely divided by 1 and has not left any remainder. So, 1 is a factor of 15.
Now consider the smallest even prime number to divide 15 into its factors.
$\dfrac{15}{2} = 7.50$
As the number 15 has not been divided evenly by the number 2, 2 is not a factor of 15.
To find the remaining factors of 15, divide 15 by the other natural numbers that divide it completely and leave no remainder.
$\dfrac{15}{3} = 5$
$\dfrac{15}{5} = 3$
$\dfrac{15}{15} = 1$
It can be noticed that the number 15 has completely been divided by these numbers and has left no remainder. Therefore, the only factors of 15 are 1, 3, 5, and 15.
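The trial-division procedure above is easy to mechanize. A small TypeScript sketch (the function name is mine):

```typescript
// Return all natural numbers that divide n with remainder zero.
function factors(n: number): number[] {
  const result: number[] = [];
  for (let d = 1; d <= n; d++) {
    if (n % d === 0) result.push(d);
  }
  return result;
}

console.log(factors(15)); // [1, 3, 5, 15]
```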
Following are some important facts which can help in the further understanding of the factors of 15.
1. The number 1 is the smallest factor of 15.
2. Any given number cannot have a factor larger than itself. So, the largest factor of 15 is the number 15 itself.
3. The number 15 only has the odd numbers as its factors.
4. The number 15 has both prime numbers (3 and 5) and a composite number (15) as its factors, whereas 1 is neither a prime nor a composite number.
5. The number 15 has only one composite factor which is the 15 itself.
6. The digit sum of the number 15 is 6. Since 6 is divisible by 3, 15 is also divisible by 3.
7. The Sum of divisors of 15 is 24.
## Factors of 15 by Prime Factorization
When the number 15 is demonstrated as a product of all of its possible prime factors, it is called the prime factorization of the number 15. This method is most commonly used to calculate the factors of a given number.
First, divide the number 15 by the smallest prime number which has the property to divide 15 completely without leaving any remainder.
The resultant number from this division is divided again by the smallest prime number and the procedure keeps reoccurring until the final quotient is achieved as 1 which cannot be divided further.
Following are the steps in sequence to calculate factors of 15 by the prime factorization method.
The procedure is carried out by dividing the smallest available prime number which, in this case, is 3 with the given number 15.
$\dfrac{15}{3} = 5$
As the quotient 5 is an odd prime number, it can only be divided further by 5.
$\dfrac{5}{5} = 1$
The quotient 1 cannot be divided anymore and thus marks the procedure to stop.
Figure 1
The prime factorization of 15 can be expressed as:
$15 = 3 \times 5$
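The repeated-division procedure can be sketched in TypeScript as well (the function name is mine):

```typescript
// Repeatedly divide n by the smallest number that divides it, collecting
// the divisors (which are necessarily prime), until the quotient reaches 1.
function primeFactors(n: number): number[] {
  const result: number[] = [];
  let m = n;
  for (let p = 2; p <= m; p++) {
    while (m % p === 0) {
      result.push(p);
      m /= p;
    }
  }
  return result;
}

console.log(primeFactors(15)); // [3, 5]
```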
## Factor Tree of 15
A factor tree is a method devised to easily find the factors of 15. It uses the rules of prime factorization presented in the form of a tree where the branching of the tree represents the division of the given number 15.
When a branch splits, it produces either a prime or a composite number. As long as any one of the two branches has a composite number on it, the branching keeps going on until a split produces prime numbers on both of its branches which cannot be divided any further. Here, the branching stops.
Considering the rules of division by factor tree method, If we write 15 into multiples, it would be: $15 = 3 \times 5$
It is very important to note here that the number 15 has produced prime numbers on both of the branches in a single split. Thus, it cannot go on any further and its factor tree appears as follows:
Figure 2
## Factors of 15 in Pairs
Factors of 15 in pairs are the set of two natural numbers that, when multiplied, produce the number 15.
In other words, it is the product of the factors of the number 15 represented in the form of pairs.
$1 \times 15 = 15$
$3 \times 5 = 15$
$5 \times 3 = 15$
$15 \times 1 = 15$
The number 15 has only 4 factors in total which can be written in sets of pairs as follows:
(1, 15)
(3, 5)
The number 15 can have negative pair factors as well because the multiplication of two negative factors also produces a positive product.
$(-1) \times (-15) = 15$
$(-3) \times (-5) = 15$
The negative pair factors of number 15 are as follows:
(-1, -15)
(-3, -5)
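Both the positive and negative factor pairs can be generated mechanically; a TypeScript sketch (the helper name is mine):

```typescript
// Positive factor pairs of n: for each divisor a up to sqrt(n),
// the pair (a, n / a) multiplies to n.
function factorPairs(n: number): [number, number][] {
  const pairs: [number, number][] = [];
  for (let a = 1; a * a <= n; a++) {
    if (n % a === 0) pairs.push([a, n / a]);
  }
  return pairs;
}

const positive = factorPairs(15);                     // [[1, 15], [3, 5]]
const negative = positive.map(([a, b]) => [-a, -b]);  // [[-1, -15], [-3, -5]]
console.log(positive, negative);
```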
### Important Tips
1. Only integers and whole numbers can be the factors of a given number.
2. Factors of a number cannot be in the form of decimals or fractions.
3. A given number has the same pair of factors in both its positive and negative forms.
## Factors of 15 Solved Examples
Following are some solved examples.
### Example 1
Julia has been asked to pick a pair of factors with the following properties from a given set of pair factors of 15.
• A pair factor with both factors as prime numbers.
(1, 15)
(3, 5)
### Solution:
Consider the option given below:
(3, 5)
Both of these factors cannot be divided completely by any other number and are divisible only by themselves and the number 1.
So these numbers fulfill both of the conditions for factors of the pair of prime numbers.
Hence, the correct option for Julia to choose is (3, 5).
### Example 2
John gets a pack of candies on Christmas. He decides to eat 3 candies daily. On the 5th day, the pack gets empty as John takes out 3 candies for the present day. Please help John to find out the total number of candies that the pack contained.
### Solution
The total number of candies that the pack contained can be found by the product of the total number of days John had eaten the candies and the number of candies he ate each day.
Number of days = 5
Number of candies eaten per day = 3
Total number of candies the box contained = 5 x 3
Total number of candies the box contained = 15
Hence, the pack contained 15 candies.
### Example 3
Pick out the false statement about the factors of 15 from the following.
1. All the factors of 15 are odd numbers.
2. Factors of 15 have only one composite number which is 15 itself.
3. 15 can have a pair of one positive and one negative factor.
4. Pair Factors of 15 can have one prime and one composite number.
### Solution
When a positive number is multiplied by a negative number, the result is always a negative number. Since pair factors must multiply to produce the positive number 15, the 3rd option is a false statement.
### Example 4
Stephen has been asked to pick a pair of factors of 15, where any of the two factors of the pair has all the following properties:
• Odd number
• Composite number
(3, 5)
(-3, -5)
(1, 15)
### Solution
Using the basic rules of division and multiplication, it can be found that the first two options (regardless of the negative sign) fulfill the properties of being an odd number but neither 3 nor 5 is a composite number as they divide only by themselves and the number 1.
However, the 3rd option (1, 15) fulfills all the required conditions where 1 serves the condition of being an odd number and 15 fulfills both the conditions of being an odd and composite number for having more than two divisors.
So the right option for Stephen to choose is (1, 15).
Images/mathematical drawings are created with GeoGebra
# How To Check If Two Strings Are Equal In Typescript
Knowing how to check whether two strings are equal in TypeScript helps a lot when working with strings, for example when validating user input. Using the equality operator is one way to do it. Let's go into detail now.
## Check if two strings are equal in Typescript
### Use equality operator
The equality operator (==) checks whether two strings are equal and returns a boolean.
Example:
const str1:string = 'WooLa'
const str2:string = 'WooLa'
let result: boolean
result = str1 == str2
console.log(result)
result = (str1 == str2)
console.log(result)
Output:
[LOG]: true
[LOG]: true
If one of the strings contains extra whitespace, the result will be false.
Example:
const str1:string = 'WooLa'
const str2:string = 'WooLa'
const str3:string = ' WooLa'
let result: boolean
let result2: boolean
result = (str1 == str2)
result2= (str1 == str3)
console.log(result)
console.log(result2)
Output:
[LOG]: true
[LOG]: false
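If leading or trailing whitespace should not count as a difference, one common option (not from this article; the helper name is mine) is to trim both strings before comparing:

```typescript
// Compare two strings, ignoring leading/trailing whitespace.
function equalsIgnoringOuterWhitespace(a: string, b: string): boolean {
  return a.trim() === b.trim();
}

console.log(equalsIgnoringOuterWhitespace('WooLa', ' WooLa')); // true
console.log(equalsIgnoringOuterWhitespace('WooLa', 'wooLa'));  // false
```

Note that this only normalizes outer whitespace; case differences still make the strings unequal.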
Note, however, that the equality operator will try to convert and compare operands of different types.
Example:
const str1:string = '1'
const str2:number = 1
let result: boolean
result = (str1 == str2) // TypeScript warns: the types 'string' and 'number' have no overlap
console.log(result)
Output:
[LOG]: true
TypeScript warned us at compile time, but the comparison still runs if we execute the code, and type coercion makes it evaluate to true.
### Use strict equality operator
To avoid the problem with types, we can use the strict equality operator (===). It behaves like the equality operator but never performs type conversion, so operands of different types are never equal. For that reason, I recommend always using the strict equality operator.
Example:
const str1:string = '1'
const str2:number = 1
let result: boolean
result = (str1 === str2) //This condition will always return 'false' since the types' string' and 'number' have no overlap.
console.log(result)
Output:
[LOG]: false
Strict comparison also gives us the strict inequality operator (!==) to check if two strings are not equal.
Example:
const str1:string = 'Hello'
const str2:string = 'World'
let result1: boolean
let result2: boolean
result1 = (str1 !== str2)
console.log(result1)
result2 = (str1 === str2)
console.log(result2)
Output:
[LOG]: true
[LOG]: false
You can also apply the equality operator inside conditional logic code.
Example:
const str1:string = 'Hello'
const str2:string = 'Hello'
if (str1===str2){
console.log("Hello World")
}
Output:
[LOG]: "Hello World"
## Summary
In this tutorial, we showed how to check if two strings are equal in TypeScript using the equality operator or, preferably, the strict equality operator. You can also apply them in your conditional logic. Good luck!
# 4.4: Latin Squares - Mathematics
Definition: Latin square
A Latin square of order $n$ is an $n \times n$ grid filled with $n$ symbols so that each symbol appears once in each row and column.
Example 1

Here is a Latin square of order 4:

♥ ♣ ♠ ♦
♣ ♠ ♦ ♥
♠ ♦ ♥ ♣
♦ ♥ ♣ ♠

Usually we use the integers $1,\ldots,n$ for the symbols. There are many, many Latin squares of order $n$, so it pays to limit the number by agreeing not to count Latin squares that are "really the same" as different. The simplest way to do this is to consider reduced Latin squares. A reduced Latin square is one in which the first row is $1,\ldots,n$ (in order) and the first column is likewise $1,\ldots,n$.
Example 2

Consider this Latin square:

4 2 3 1
2 4 1 3
1 3 4 2
3 1 2 4

The order of the rows and columns is not really important to the idea of a Latin square. If we reorder the rows and columns, we can consider the result to be in essence the same Latin square. By reordering the columns, we can turn the square above into this:

1 2 3 4
3 4 1 2
2 3 4 1
4 1 2 3

Then we can swap rows two and three:

1 2 3 4
2 3 4 1
3 4 1 2
4 1 2 3
This Latin square is in reduced form, and is essentially the same as the original.
Another simple way to change the appearance of a Latin square without changing its essential structure is to interchange the symbols.
Example 3

Starting with the same Latin square as before:

4 2 3 1
2 4 1 3
1 3 4 2
3 1 2 4

we can interchange the symbols 1 and 4 to get:

1 2 3 4
2 1 4 3
4 3 1 2
3 4 2 1

Now if we swap rows three and four we get:

1 2 3 4
2 1 4 3
3 4 2 1
4 3 1 2
Notice that this Latin square is in reduced form, but it is not the same as the reduced form from the previous example, even though we started with the same Latin square. Thus, we may want to consider some reduced Latin squares to be the same as each other.
Definition: isotopic and Isotopy Classes
Two Latin squares are isotopic if each can be turned into the other by permuting the rows, columns, and symbols. This isotopy relation is an equivalence relation; the equivalence classes are the isotopy classes.
Latin squares are apparently quite difficult to count without substantial computing power. The number of Latin squares is known only up to $n=11$. Here are the first few values for all Latin squares, reduced Latin squares, and non-isotopic Latin squares (that is, the number of isotopy classes):

| $n$ | All | Reduced | Non-isotopic |
|---|---|---|---|
| 1 | 1 | 1 | 1 |
| 2 | 2 | 1 | 1 |
| 3 | 12 | 1 | 1 |
| 4 | 576 | 4 | 2 |
| 5 | 161280 | 56 | 2 |
How can we produce a Latin square? If you know what a group is, you should know that the multiplication table of any finite group is a Latin square. (Also, any Latin square is the multiplication table of a quasigroup.) Even if you have not encountered groups by that name, you may know of some. For example, considering the integers modulo $n$ under addition, the addition table is a Latin square.
Example 4.3.6

Here is the addition table for the integers modulo 6:

0 1 2 3 4 5
1 2 3 4 5 0
2 3 4 5 0 1
3 4 5 0 1 2
4 5 0 1 2 3
5 0 1 2 3 4
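This construction is easy to check by machine. A TypeScript sketch (helper names are mine) builds the mod-$n$ addition table and verifies the Latin property:

```typescript
// Addition table for the integers mod n: entry (i, j) is (i + j) mod n.
function additionTable(n: number): number[][] {
  return Array.from({ length: n }, (_, i) =>
    Array.from({ length: n }, (_, j) => (i + j) % n),
  );
}

// A square is Latin iff every row and every column is a permutation
// of the n symbols 0, ..., n-1.
function isLatin(square: number[][]): boolean {
  const n = square.length;
  const isPerm = (xs: number[]) => new Set(xs).size === n;
  const cols = Array.from({ length: n }, (_, j) => square.map((row) => row[j]));
  return square.every(isPerm) && cols.every(isPerm);
}

console.log(isLatin(additionTable(6))); // true
```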
Example 4.3.7 Here is another way to potentially generate many Latin squares. Start with first row $1,\ldots,n$. Consider the sets $A_i=[n]\backslash\{i\}$. From exercise 1 in section 4.1 we know that this set system has many sdrs; if $x_1,x_2,\ldots,x_n$ is an sdr, we may use it for row two. In general, after we have chosen rows $1,\ldots,j$, we let $A_i$ be the set of integers that have not yet been chosen for column $i$. This set system has an sdr, which we use for row $j+1$.
Definition 4.3.8 Suppose $A$ and $B$ are two Latin squares of order $n$, with entries $A_{i,j}$ and $B_{i,j}$ in row $i$ and column $j$. Form the matrix $M$ with entries $M_{i,j}=(A_{i,j},B_{i,j})$; we will denote this operation as $M=A\cup B$. We say that $A$ and $B$ are orthogonal if $M$ contains all $n^2$ ordered pairs of symbols; for the symbols $\{0,1,\ldots,n-1\}$, that means all elements of $\{0,1,\ldots,n-1\}\times\{0,1,\ldots,n-1\}$.
As we will see, it is easy to find orthogonal Latin squares of order $n$ if $n$ is odd; not too hard to find orthogonal Latin squares of order $4k$; and difficult but possible to find orthogonal Latin squares of order $4k+2$, with the exception of orders 2 and 6. In the 1700s, Euler showed that there are orthogonal Latin squares of all orders not of the form $4k+2$, and he conjectured that there are no orthogonal Latin squares of order $4k+2$. In 1901, the amateur mathematician Gaston Tarry showed that indeed there are none of order 6, by showing that all possibilities for such Latin squares fail to be orthogonal. In 1959 it was finally shown that there are orthogonal Latin squares of all other orders of the form $4k+2$.
Theorem 4.3.9
There are pairs of orthogonal Latin squares of order $n$ when $n$ is odd.
Proof
This proof can be shortened by using ideas of group theory, but we will present a self-contained version. Consider the addition table for addition mod $n$:

$$\begin{array}{c|ccccc}
+ & 0 & \cdots & j & \cdots & n-1 \\\hline
0 & 0 & \cdots & j & \cdots & n-1 \\
\vdots & & & & & \\
i & i & \cdots & i+j & \cdots & n+i-1 \\
\vdots & & & & & \\
n-1 & n-1 & \cdots & n+j-1 & \cdots & n-2
\end{array}$$

(entries are reduced mod $n$). We claim first that this (without the first row and column, of course) is a Latin square with symbols $0,1,\ldots,n-1$. Consider two entries in row $i$, say $i+j$ and $i+k$. If $i+j\equiv i+k \pmod{n}$, then $j\equiv k$, so $j=k$. Thus, all entries of row $i$ are distinct, so each of $0,1,\ldots,n-1$ appears exactly once in row $i$. The proof that each appears once in any column is similar. Call this Latin square $A$. (Note that so far everything is true whether $n$ is odd or even.)

Now form a new square $B$ with entries $B_{i,j}=A_{2i,j}=2i+j$, where by $2i$ and $2i+j$ we mean those values mod $n$. Thus row $i$ of $B$ is the same as row $2i$ of $A$. Now we claim that in fact the rows of $B$ are exactly the rows of $A$, in a different order. To do this, it suffices to show that if $2i\equiv 2k\pmod{n}$, then $i=k$. This implies that all the rows of $B$ are distinct, and hence must be all the rows of $A$.

Suppose without loss of generality that $i\ge k$. If $2i\equiv 2k\pmod{n}$ then $n\mid 2(i-k)$. Since $n$ is odd, $n\mid (i-k)$. Since $i$ and $k$ are in $\{0,1,\ldots,n-1\}$, $0\le i-k\le n-1$. Of these values, only $0$ is divisible by $n$, so $i-k=0$. Thus $B$ is also a Latin square.

To show that $A\cup B$ contains all $n^2$ elements of $\{0,1,\ldots,n-1\}\times\{0,1,\ldots,n-1\}$, it suffices to show that no two entries of $A\cup B$ are the same. Suppose that $(i_1+j_1,\,2i_1+j_1)=(i_2+j_2,\,2i_2+j_2)$ (arithmetic is mod $n$). Then by subtracting the equations, $i_1=i_2$; with the first equation this implies $j_1=j_2$.

$\square$
Example 4.3.10 When $n=3$, $$\left[\begin{matrix}0&1&2\\1&2&0\\2&0&1\end{matrix}\right]\cup\left[\begin{matrix}0&1&2\\2&0&1\\1&2&0\end{matrix}\right]=\left[\begin{matrix}(0,0)&(1,1)&(2,2)\\(1,2)&(2,0)&(0,1)\\(2,1)&(0,2)&(1,0)\end{matrix}\right].$$
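The two squares of Theorem 4.3.9, $A_{i,j}=i+j$ and $B_{i,j}=2i+j$ (mod $n$), can be built and checked mechanically; a TypeScript sketch (helper names are mine):

```typescript
// Squares from Theorem 4.3.9: A[i][j] = (i + j) mod n, B[i][j] = (2i + j) mod n.
function buildPair(n: number): [number[][], number[][]] {
  const make = (f: (i: number, j: number) => number) =>
    Array.from({ length: n }, (_, i) =>
      Array.from({ length: n }, (_, j) => f(i, j) % n),
    );
  return [make((i, j) => i + j), make((i, j) => 2 * i + j)];
}

// Orthogonal iff the n^2 juxtaposed pairs (A[i][j], B[i][j]) are all distinct.
function areOrthogonal(A: number[][], B: number[][]): boolean {
  const pairs = new Set<string>();
  const n = A.length;
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) pairs.add(`${A[i][j]},${B[i][j]}`);
  return pairs.size === n * n;
}

const [A, B] = buildPair(5);
console.log(areOrthogonal(A, B)); // true (n = 5 is odd)
```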
One obvious approach to constructing Latin squares, and pairs of orthogonal Latin squares, is to start with smaller Latin squares and use them to produce larger ones. We will produce a Latin square of order (mn) from a Latin square of order (m) and one of order (n).
Let $A$ be a Latin square of order $m$ with symbols $1,\ldots,m$, and $B$ one of order $n$ with symbols $1,\ldots,n$. Let $c_{i,j}$, $1\le i\le m$, $1\le j\le n$, be $mn$ new symbols. Form an $mn\times mn$ grid by replacing each entry of $B$ with a copy of $A$. Then replace each entry $i$ in this copy of $A$ with $c_{i,j}$, where $j$ is the entry of $B$ that was replaced. We denote this new Latin square $A\times B$. Here is an example, combining a $4\times 4$ Latin square with a $3\times 3$ Latin square to form a $12\times 12$ Latin square:
$$\left[\begin{matrix}1&2&3&4\\2&3&4&1\\3&4&1&2\\4&1&2&3\end{matrix}\right]
\times
\left[\begin{matrix}1&2&3\\2&3&1\\3&1&2\end{matrix}\right]
=$$

$$\left[\begin{matrix}
c_{1,1}&c_{2,1}&c_{3,1}&c_{4,1}&c_{1,2}&c_{2,2}&c_{3,2}&c_{4,2}&c_{1,3}&c_{2,3}&c_{3,3}&c_{4,3}\\
c_{2,1}&c_{3,1}&c_{4,1}&c_{1,1}&c_{2,2}&c_{3,2}&c_{4,2}&c_{1,2}&c_{2,3}&c_{3,3}&c_{4,3}&c_{1,3}\\
c_{3,1}&c_{4,1}&c_{1,1}&c_{2,1}&c_{3,2}&c_{4,2}&c_{1,2}&c_{2,2}&c_{3,3}&c_{4,3}&c_{1,3}&c_{2,3}\\
c_{4,1}&c_{1,1}&c_{2,1}&c_{3,1}&c_{4,2}&c_{1,2}&c_{2,2}&c_{3,2}&c_{4,3}&c_{1,3}&c_{2,3}&c_{3,3}\\
c_{1,2}&c_{2,2}&c_{3,2}&c_{4,2}&c_{1,3}&c_{2,3}&c_{3,3}&c_{4,3}&c_{1,1}&c_{2,1}&c_{3,1}&c_{4,1}\\
c_{2,2}&c_{3,2}&c_{4,2}&c_{1,2}&c_{2,3}&c_{3,3}&c_{4,3}&c_{1,3}&c_{2,1}&c_{3,1}&c_{4,1}&c_{1,1}\\
c_{3,2}&c_{4,2}&c_{1,2}&c_{2,2}&c_{3,3}&c_{4,3}&c_{1,3}&c_{2,3}&c_{3,1}&c_{4,1}&c_{1,1}&c_{2,1}\\
c_{4,2}&c_{1,2}&c_{2,2}&c_{3,2}&c_{4,3}&c_{1,3}&c_{2,3}&c_{3,3}&c_{4,1}&c_{1,1}&c_{2,1}&c_{3,1}\\
c_{1,3}&c_{2,3}&c_{3,3}&c_{4,3}&c_{1,1}&c_{2,1}&c_{3,1}&c_{4,1}&c_{1,2}&c_{2,2}&c_{3,2}&c_{4,2}\\
c_{2,3}&c_{3,3}&c_{4,3}&c_{1,3}&c_{2,1}&c_{3,1}&c_{4,1}&c_{1,1}&c_{2,2}&c_{3,2}&c_{4,2}&c_{1,2}\\
c_{3,3}&c_{4,3}&c_{1,3}&c_{2,3}&c_{3,1}&c_{4,1}&c_{1,1}&c_{2,1}&c_{3,2}&c_{4,2}&c_{1,2}&c_{2,2}\\
c_{4,3}&c_{1,3}&c_{2,3}&c_{3,3}&c_{4,1}&c_{1,1}&c_{2,1}&c_{3,1}&c_{4,2}&c_{1,2}&c_{2,2}&c_{3,2}
\end{matrix}\right]$$
Theorem 4.3.11
If $A$ and $B$ are Latin squares, so is $A\times B$.
Proof
Consider two symbols $c_{i,j}$ and $c_{k,l}$ in the same row. If the positions containing these symbols are in the same copy of $A$, then $i\neq k$, since $A$ is a Latin square, and so the symbols $c_{i,j}$ and $c_{k,l}$ are distinct. Otherwise, $j\neq l$, since $B$ is a Latin square. The argument is the same for columns.

$\square$
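The product construction can be sketched directly; here each symbol $c_{i,j}$ is encoded as the string `"i,j"` (an encoding chosen just for this sketch, with helper names of my own):

```typescript
// Product of Latin squares: the cell at block (y, z), position (w, x)
// holds the symbol c_{A[w][x], B[y][z]}, encoded as a string "i,j".
function product(A: number[][], B: number[][]): string[][] {
  const m = A.length, n = B.length;
  const out: string[][] = Array.from({ length: m * n }, () => []);
  for (let y = 0; y < n; y++)
    for (let z = 0; z < n; z++)
      for (let w = 0; w < m; w++)
        for (let x = 0; x < m; x++)
          out[y * m + w][z * m + x] = `${A[w][x]},${B[y][z]}`;
  return out;
}

// Latin check: each row and each column holds m*n distinct symbols.
function isLatinSquare(sq: string[][]): boolean {
  const N = sq.length;
  const ok = (xs: string[]) => new Set(xs).size === N;
  const cols = Array.from({ length: N }, (_, j) => sq.map((r) => r[j]));
  return sq.every(ok) && cols.every(ok);
}

const A4 = [[1,2,3,4],[2,3,4,1],[3,4,1,2],[4,1,2,3]];
const B3 = [[1,2,3],[2,3,1],[3,1,2]];
console.log(isLatinSquare(product(A4, B3))); // true: a 12x12 Latin square
```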
Remarkably, this operation preserves orthogonality:
Theorem 4.3.12
If $A_1$ and $A_2$ are Latin squares of order $m$, $B_1$ and $B_2$ are Latin squares of order $n$, $A_1$ and $A_2$ are orthogonal, and $B_1$ and $B_2$ are orthogonal, then $A_1\times B_1$ is orthogonal to $A_2\times B_2$.
Proof
We denote the contents of $A_i\times B_i$ by $C_i(w,x,y,z)$, meaning the entry in row $w$ and column $x$ of the copy of $A_i$ that replaced the entry in row $y$ and column $z$ of $B_i$, which we denote $B_i(y,z)$. We use $A_i(w,x)$ to denote the entry in row $w$ and column $x$ of $A_i$.

Suppose that $(C_1(w,x,y,z),C_2(w,x,y,z))=(C_1(w',x',y',z'),C_2(w',x',y',z'))$, where $(w,x,y,z)\neq(w',x',y',z')$. Either $(w,x)\neq(w',x')$ or $(y,z)\neq(y',z')$. If the latter, then $(B_1(y,z),B_2(y,z))=(B_1(y',z'),B_2(y',z'))$, a contradiction, since $B_1$ is orthogonal to $B_2$. Hence $(y,z)=(y',z')$ and $(w,x)\neq(w',x')$. But this implies that $(A_1(w,x),A_2(w,x))=(A_1(w',x'),A_2(w',x'))$, a contradiction. Hence $A_1\times B_1$ is orthogonal to $A_2\times B_2$.

$\square$
We want to construct orthogonal Latin squares of order $4k$. Write $4k=2^m\cdot n$, where $n$ is odd and $m\ge 2$. We know there are orthogonal Latin squares of order $n$, by theorem 4.3.9. If there are orthogonal Latin squares of order $2^m$, then by theorem 4.3.12 we can construct orthogonal Latin squares of order $4k=2^m\cdot n$.

To get orthogonal Latin squares of order $2^m$, we also use theorem 4.3.12. It suffices to find two orthogonal Latin squares of order $4=2^2$ and two of order $8=2^3$. Then repeated application of theorem 4.3.12 allows us to build orthogonal Latin squares of order $2^m$, $m\ge 2$.
Two orthogonal Latin squares of order 4:
$$\left[\begin{matrix}1&2&3&4\\2&1&4&3\\3&4&1&2\\4&3&2&1\end{matrix}\right]\qquad\left[\begin{matrix}1&2&3&4\\3&4&1&2\\4&3&2&1\\2&1&4&3\end{matrix}\right],$$
and two of order 8:
$$\left[\begin{matrix}1&3&4&5&6&7&8&2\\5&2&7&1&8&4&6&3\\6&4&3&8&1&2&5&7\\7&8&5&4&2&1&3&6\\8&7&2&6&5&3&1&4\\2&5&8&3&7&6&4&1\\3&1&6&2&4&8&7&5\\4&6&1&7&3&5&2&8\end{matrix}\right]\qquad\left[\begin{matrix}1&4&5&6&7&8&2&3\\8&2&6&5&3&1&4&7\\2&8&3&7&6&4&1&5\\3&6&2&4&8&7&5&1\\4&1&7&3&5&2&8&6\\5&7&1&8&4&6&3&2\\6&3&8&1&2&5&7&4\\7&5&4&2&1&3&6&8\end{matrix}\right].$$
## The 4x4 Latin Squares and Alphabetical Patterns
The fundamental 4x4 magic carpet can be represented by dots and spaces. Any line of length four in any direction contains two dots, i.e., it sums to 2, and any selected 4x4 area is a pan-magic pattern. We can take four samples of this large carpet, rotate two of them, and make the only four possible order-four magic carpets.
### Alphabetical Substitution
Because they look somewhat like letters of the alphabet they are given a letter to identify them.
### The Composite Alphabetical Magic Carpet
The dot in each of the above squares is replaced with its own letter. The four squares are then combined to make the composite square on the left and then the larger carpet on the right.
Only one composite pattern exists. This one composite square underlies all order 4 pan-magic squares. Any 4x4 area contains each letter twice in every row, every column, and every diagonal. To make an actual 4x4 magic square the letters in this square would be replaced, respectively, by 8, 4, 2, and 1 (see Main 4x4 Page).
### Neat Pattern.
When the pattern is repeated, a large magic carpet emerges - pleasing and symmetrical - which makes the interesting colored pattern on the left.
### Not a Latin Square
Strictly speaking this is not a "Latin Square". A Latin Square for order N uses N letters N times and each row and each column contains one of each letter. The above square can be converted into two Latin Squares.
### Two 4x4 Latin Squares
The illustration below combines two 4x4 Latin squares into a single, so-called Graeco-Latin square. For convenience it employs upper- and lower-case Roman letters instead of using both Roman and Greek characters. The letters in the new square are derived from the square above:

A when neither S nor N is present; B when N is present; C when S is present; D when both S and N are present.
a when neither C nor A is present; b when C is present; c when A is present; d when both C and A are present.

A B C D       a d c b       Aa Bd Cc Db
C D A B   +   d a b c   =   Cd Da Ab Bc
B A D C       b c d a       Bb Ac Dd Ca
D C B A       c b a d       Dc Cb Ba Ad
### Not Pan-Magic.
Inspection of the resulting Graeco-Latin square shows that the rows and columns are inevitably "magic": each contains one of each letter. This is, however, not true for the diagonals; they add up to the magic sum only for appropriately selected numerical substitutions.
### Limited Value for 4x4 Latin Squares.
Because of this limitation, Latin Squares are of only limited use in constructing 4x4 pan-magic squares. There are two other possible 4x4 alphabetical squares and all three are shown below. The third is not even Latin in that, now, only the diagonals contain one of each letter.
## Alternative versions of orthogonality
### (f) Mutually orthogonal partial latin squares
Two partial latin squares (not necessarily distinct) are orthogonal if, when they are juxtaposed, no ordered pair of elements appears more than once. A collection of partial latin squares is called r-compatible if each has r occupied cells and these occupied cells are in corresponding positions.
It follows that, in particular, a partial latin square of order n which has only n of its cells occupied is orthogonal to and n-compatible with itself. Thus, most interest is in partial n × n latin squares which have more than n cells filled. This concept, which is due to Abdel-Ghaffar(1996) , arose in connection with coding theory: namely, in minimizing the retrieval time for items belonging to a large file of data stored on several disks.
Let $M_n(r)$ denote the maximum number of pairwise orthogonal $r$-compatible partial Latin squares. For the above application, Abdel-Ghaffar was interested in finding bounds for $M_n(r)$ when $r > n$, and he derived an upper bound on $M_n(r)$ for $n + 1 \le r \le n^2$.

Note that this result implies in particular that $M_n(n^2) \le n - 1$. Abdel-Ghaffar showed further that, for $n + 1 \le r \le 2n$, $M_n(r) = \lfloor r(r-1)/2(r-n) \rfloor - 2$, and he constructed squares which meet the latter bound.
In the coding theory application mentioned earlier, the above bounds on Mn(r) give bounds on the best possible retrieval time.
## Constructing a 4x4 Magic Square
As a reminder, we should mention here that the order of a magic square is the number of cells on one of its sides.
Recall that the earliest recorded fourth-order magic square appears to have been found in an inscription at Khajuraho, India dating to about 1000-1100 A.D. It was of the form
We will examine a method to create a 4x4 magic square; however, this method does not generate the square found in India. (This is not the only way, but it is quick and sure.) We begin by creating a 4x4 square matrix and then we draw two diagonal lines to get a figure as follows.
We then start at the upper left corner to put the numbers 1, 2, 3, ..., 14, 15, 16 into the cells. However, we do not put a number in any cell where the diagonal line appears. We start with 1, but that cell has a diagonal line in it, so we go to the next cell which is blank and enter a 2, then we put a 3 in the next cell. The last cell in the first row has a diagonal line, so we do not write in the 4. We go to the next row and enter 5 in the first cell, which is blank; the next two cells have a diagonal line, so we skip 6 and 7. We continue this pattern until we get to the last cell in the last row. Our square will look like this:
Now we begin in the lower right-hand corner and work our way back using the numbers 1,4,6,7,10,11,13, and 16. We put these number in the cells which originally had the diagonal lines starting with 1 in the lower right-hand corner. Our finished product looks like this:
We see that in our finished square every row, column and diagonal sums to the magic number 34, which is found, as we mentioned above, by calculating 4(4² + 1)/2.
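The construction just described can be sketched in code. The following Python snippet (an illustration of the method, not taken from the source) numbers the cells 1 to 16 and then gives every cell on a diagonal its complement, which is exactly what the backward pass does:

```python
def magic4():
    """4 x 4 magic square by the diagonal method described above."""
    # Forward pass: number the cells 1..16 row by row.
    square = [[4 * r + c + 1 for c in range(4)] for r in range(4)]
    # Backward pass: a cell on either diagonal receives 17 - value,
    # i.e. the numbers 1, 4, 6, 7, 10, 11, 13, 16 re-entered in reverse.
    for r in range(4):
        for c in range(4):
            if r == c or r + c == 3:
                square[r][c] = 17 - square[r][c]
    return square
```

Every row, column and diagonal of the result sums to 34, and the first row comes out as 16, 2, 3, 13.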
Once we have this square, we can carefully rearrange the rows and columns to get other 4 × 4 magic squares. Below are some rearrangements of our original 4 × 4 magic square.
Notice in the rearrangement that the numbers in our original 4 × 4 magic square stay together. That is, the numbers 16, 2, 3, 13 that appear in the first row will always be together, in some order, in a row or column of a new square. This is true of all of the other sets of four numbers. For example, if you have a row or column that contains the numbers 3 and 6, that row or column must also have the numbers 10 and 15.
In our rearranged examples above, the last magic square is of particular historical interest in mathematics and art. This magic square appears in the background of the engraving Melencolia by Albrecht Dürer, which he made in 1514. Notice that the numbers 15 and 14 (the date of the engraving) appear together in the center of the bottom row.
## Exercises 4.3
Ex 4.3.1 Show that there is only one reduced Latin square of order 3.
Ex 4.3.2 Verify that the isotopy relation is an equivalence relation.
Ex 4.3.3 Find all 4 reduced Latin squares of order 4. Show that there are at most 2 isotopy classes for order 4.
Ex 4.3.4 Show that the second set system defined in example 4.3.7 has an sdr as claimed.
Ex 4.3.5 Show that there are no orthogonal Latin squares of order 2.
Ex 4.3.6 Find the two orthogonal Latin squares of order $5$ as described in theorem 4.3.9. Show your answer as in example 4.3.10.
Ex 4.3.7 Prove that to construct orthogonal Latin squares of order $2^m$, $m\ge2$, it suffices to find two orthogonal Latin squares of order $4=2^2$ and two of order $8=2^3$.
Ex 4.3.8 An $n\times n$ Latin square $A$ is symmetric if it is symmetric around the main diagonal, that is, $A_{i,j}=A_{j,i}$ for all $i$ and $j$. It is easy to find symmetric Latin squares: every addition table modulo $n$ is an example, as in example 4.3.6. A Latin square is idempotent if every symbol appears on the main diagonal. Show that if $A$ is both symmetric and idempotent, then $n$ is odd. Find a $5\times 5$ symmetric, idempotent Latin square.
Ex 4.3.9 The transpose $A^\top$ of a Latin square $A$ is the reflection of $A$ across the main diagonal, so that $A^\top_{i,j}=A_{j,i}$. A Latin square is self-orthogonal if $A$ is orthogonal to $A^\top$. Show that there is no self-orthogonal Latin square of order 3. Find one of order 4.
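For experimenting with the orthogonality exercises above, a small checking helper is convenient. The sketch below is illustrative (the order-3 squares come from addition tables modulo 3, not from the exercises); it tests whether superimposing two squares yields every ordered pair of symbols exactly once:

```python
def is_orthogonal(A, B):
    """True when superimposing latin squares A and B yields every
    ordered pair of symbols exactly once."""
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

# Two orthogonal latin squares of order 3, built from linear maps modulo 3.
A3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]
B3 = [[(i + 2 * j) % 3 for j in range(3)] for i in range(3)]
```

Here `is_orthogonal(A3, B3)` returns True, while `is_orthogonal(A3, A3)` returns False: superimposing any square of order at least 2 with itself only produces the n pairs of equal symbols.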
Researchers in combinatorial design theory and areas of statistics such as design and analysis of experiments. The book may also be of interest to amateur mathematicians interested in magic squares, in designing games tournaments and/or in latin squares related to Sudoku puzzles.
Chapter 1: Elementary properties
• 1.1 The multiplication table of a quasigroup
• 1.2 The Cayley table of a group
• 1.3 Isotopy
• 1.4 Conjugacy and parastrophy
• 1.5 Transversals and complete mappings
• 1.6 Latin subsquares and subquasigroups
Chapter 2: Special types of latin square
• 2.1 Quasigroup identities and latin squares
• 2.2 Quasigroups of some special types and the concept of generalized associativity
• 2.3 Triple systems and quasigroups
• 2.4 Group-based latin squares and nuclei of loops
• 2.5 Transversals in group-based latin squares
• 2.6 Complete latin squares
Chapter 3: Partial latin squares and partial transversals
• 3.1 Latin rectangles and row latin squares
• 3.2 Critical sets and Sudoku puzzles
• 3.3 Fuchs’ problems
• 3.4 Incomplete latin squares and partial quasigroups
• 3.5 Partial transversals and generalized transversals
Chapter 4: Classification and enumeration of latin squares and latin rectangles
• 4.1 The autotopism group of a quasigroup
• 4.2 Classification of latin squares
• 4.3 History of the classification and enumeration of latin squares
• 4.4 Enumeration of latin rectangles
• 4.5 Enumeration of transversals
• 4.6 Enumeration of subsquares
Chapter 5: The concept of orthogonality
• 5.1 Existence questions for incomplete sets of orthogonal latin squares
• 5.2 Complete sets of orthogonal latin squares and projective planes
• 5.3 Sets of MOLS of maximum and minimum size
• 5.4 Orthogonal quasigroups, groupoids and triple systems
• 5.5 Self-orthogonal and other parastrophic orthogonal latin squares and quasigroups
• 5.6 Orthogonality in other structures related to latin squares
Chapter 6: Connections between latin squares and magic squares
• 6.1 Diagonal (or magic) latin squares
• 6.2 Construction of magic squares with the aid of orthogonal latin squares
• 6.3 Additional results on magic squares
• 6.4 Room squares: their construction and uses
Chapter 7: Constructions of orthogonal latin squares which involve rearrangement of rows and columns
## Case 3 Section
In this case, we have different levels of both the row and the column factors. Again, in our factory scenario, we would have different machines and different operators in the three replicates. In other words, both of these factors would be nested within the replicates of the experiment.
We would write this model as:
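In one common notation for a replicated Latin Square with both blocking factors nested in the replicates (the symbols below are an assumed convention, not the course's own), the model is

$$y_{hijk} = \mu + \delta_h + \rho_{i(h)} + \gamma_{j(h)} + \tau_k + \epsilon_{hijk}$$

where $$\delta_h$$ is the effect of replicate h, $$\rho_{i(h)}$$ and $$\gamma_{j(h)}$$ are the row and column effects nested within replicate h, and $$\tau_k$$ is the treatment effect.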
Here we have used nested terms for both of the block factors representing the fact that the levels of these factors are not the same in each of the replicates.
The analysis of variance table would include:
## Anything but square: from magic squares to Sudoku
There is an ancient Chinese legend that goes something like this. Some three thousand years ago, a great flood happened in China. In order to calm the vexed river god, the people made an offering to the river Lo, but he could not be appeased. Each time they made an offering, a turtle would appear from the river. One day a boy noticed marks on the back of the turtle that seemed to represent the numbers 1 to 9. The numbers were arranged in such a way that each line added up to 15. Hence the people understood that their offering was not the right amount.
The markings on the back of the turtle were in fact a magic square. A magic square is a square grid filled with numbers, in such a way that each row, each column, and the two diagonals add up to the same number. Here's what the magic square from the Lo Shu would have looked like. It has three rows and three columns, and if you add up the numbers in any row, column or diagonal, you always get 15.
Here is a partial construction of a 5 by 5 magic square. Starting from 1, I have filled in the numbers up to 10. There is no space northeast of the 1, so I have put the 2 in the bottom row, followed by the 3. Again, because the 3 is on the edge, the 4 goes on the opposite side. The 6 should go in the cell where the 1 is, but because this cell is occupied, I put the 6 immediately below the 5 and continued up to 10. Try completing the square and then try making some of your own.
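The rule just described (place the next number one cell to the northeast, wrap around the edges, and drop down one cell when the target is occupied) can be sketched in Python; starting in the middle of the top row is the usual convention:

```python
def siamese(n):
    """Magic square of odd order n by the Siamese method."""
    square = [[0] * n for _ in range(n)]
    r, c = 0, n // 2                         # start in the middle of the top row
    for v in range(1, n * n + 1):
        square[r][c] = v
        nr, nc = (r - 1) % n, (c + 1) % n    # one step northeast, wrapping
        if square[nr][nc]:                   # occupied: drop down one cell instead
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return square
```

`siamese(5)` fills in the 5 by 5 square begun above; every row, column and diagonal of the result sums to 5(5² + 1)/2 = 65.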
While this, known as the Siamese method, is probably the best known method for making magic squares, other methods do exist. The German schoolmaster Johann Faulhaber published a method similar to the Siamese method before it was discovered by Simon de la Loubère. Another way is the Lozenge method by John Horton Conway, a prolific British mathematician. Proving that these methods work can be done using algebra, but it's not easy!
### Magic squares of even order
Although the Siamese method can be used to generate a magic square for any odd number, there is no simple method that works for all magic squares of even order. Fortunately, there is a nice method that we can use if the order of the square is an even number divisible by 4. (For those that are interested, the LUX method was invented by J. H. Conway to deal with even numbers that are not divisible by 4).
Instead of saying "numbers that are divisible by 4", mathematicians usually say "numbers of the form 4k". For example, 12 is of the form 4k, because you can replace k with 3. Using the same idea, numbers that give a remainder of 2 when you divide them by 4 can be called numbers of the form 4k + 2.
So start by picking the order of the square, making sure that it's of the form 4k, and number the cells 1 to (4k)² starting at the top left and working along the rows. Then split the square up into 4 by 4 subsquares, and mark the numbers that lie on the main diagonals of each subsquare. In the example, these are the coloured numbers; the order of the square is 4, so the only 4 by 4 subsquare is the square itself.
Now switch the lowest marked number with the highest marked number, the second lowest marked number with the second highest marked number, and so on. Another way of saying this is that if the magic square has order n, swap the numbers that add up to n² + 1. In this particular example, the order is 4, so we have to swap the numbers that add up to 17: 1 and 16, 4 and 13, 6 and 11, 7 and 10.
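The whole procedure (number the cells, mark the subsquare diagonals, swap complements) fits in a few lines of Python; this is a sketch of the method for any order of the form 4k:

```python
def magic_doubly_even(n):
    """Magic square for n of the form 4k, by the swapping method above."""
    # Number the cells 1..n*n row by row.
    square = [[n * r + c + 1 for c in range(n)] for r in range(n)]
    for r in range(n):
        for c in range(n):
            # Marked cells lie on a main diagonal of their 4 x 4 subsquare;
            # swapping complements replaces each with n*n + 1 - value.
            if r % 4 == c % 4 or (r % 4) + (c % 4) == 3:
                square[r][c] = n * n + 1 - square[r][c]
    return square
```

For n = 4 every line sums to 34 as before, and for n = 8 every row, column and diagonal sums to 8(8² + 1)/2 = 260.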
If you flip this magic square over, it is identical to the one drawn by the famous German artist, Albrecht Dürer. You can see it in the corner of his engraving Melencolia.
### A Knight's Tale
As any chess player will know, an order 8 magic square has the same number of cells as a chessboard. This similarity means that we can create a special type of magic square based on the moves of a chess piece.
The knight is an interesting piece, because unlike the other pieces, it does not move vertically, horizontally or diagonally along a straight line. Instead, the knight moves in an L-shape as shown in the diagram. But is it possible for a knight that moves in this way to visit every square on the chessboard exactly once?
Using the concept of the knight's tour William Beverley managed to produce a magic square, as shown below. Cells are numbered in sequence, as the knight visits them. Although the rows and columns all add up to 260, the main diagonals do not, so strictly speaking it is a semi-magic square. In fact, a magic square based on a knight's tour is often called a magic tour, so what Beverley produced in 1848 is a semi-magic tour!
At first glance, it seems that the following magic square by Feisthamel fits the bill. The rows, columns and diagonals all sum to 260. Unfortunately, it is only a partial knight's tour, as there is a jump from 32 to 33.
So when is it possible to turn a knight's tour into a magic square? In 2003, Stertenbrink and Meyrignac finally solved this problem by computing every possible combination. They found 140 semi-magic tours, but no magic tours. Checkmate!
### Latin Squares
Latin squares are the true ancestors of Sudoku. You can find examples of Latin squares in Arabic literature over 700 years old. They were rediscovered by Euler a few centuries later, who saw them as a new type of magic square, and it's thanks to him that we call them Latin squares.
Latin squares are grids filled with numbers, letters or symbols, in such a way that no number appears twice in the same row or column. The difference between a magic square and a Latin square is the number of symbols used. For example, there are 16 different numbers in a 4 by 4 magic square, but you only need 4 different numbers or letters to make a 4 by 4 Latin square.
Now if we look at the bottom three boxes, one of the rows already has 6 numbers. I've called the empty cells A, B and C (in order from left to right), and the numbers that are missing are 3, 7 and 8. If you look at cell C, the only number that can go in it is 7. That's because the column that C lies in already contains 3 and 8.
Finding A and B is now pretty simple. There's already a 3 in the same column as B, so B has to be 8. That means A must be 3. Solving the rest of the puzzle is a bit trickier, but well worth the effort.
The Sudoku craze has swept across the globe, and it shows no signs of slowing. Several variations have developed from the basic theme, such as 16 by 16 versions and multi-grid combinations (you can try a duplex difference sudoku in the Plus puzzle). But as with magic squares and Latin squares, the popularity of Sudoku will depend on whether they can continue to offer new challenges.
## Latin Squares Design
On this webpage, we describe the basic concepts of Latin Squares designs. Additional information can be found on the following webpages:
A Latin Square design has two nuisance factors (Rows and Cols) and one treatment factor, each of which has the same number of levels, denoted r. There are no replications and no interactions. If we denote the possible treatment effects by Latin letters, then all the rows and columns are permutations of these letters (with no repeated rows and no repeated columns).
For r = 4 and r = 5, possible configurations are:
Figure 1 – Latin Square Configurations
Note that there are many possible 4 × 4 or larger configurations, although many of these are equivalent in the sense that one can be obtained from another by interchanging one or more rows and/or columns. In fact, there are 4 non-equivalent 4 × 4 configurations and 56 non-equivalent 5 × 5 configurations. It turns out that all the 3 × 3 configurations are equivalent.
Example 1: A factory wants to determine whether there is a significant difference between four different methods of manufacturing an airplane component, based on the number of millimeters by which the part deviates from the standard measurement. Four operators and four machines are assigned to the study. A Latin Squares design is used to account for the operators and machines nuisance factors.
The representation of a Latin Squares design is shown in Figure 2 where A, B, C and D are the four manufacturing methods and the rows correspond to the operators and the columns correspond to the machines.
Figure 2 – Latin Squares Representation
For our purposes, we will use the following equivalent representations (see Figure 3):
Figure 3 – Latin Squares Design
The linear model of the Latin Squares design takes the form:
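In one standard notation (the symbols here are an assumption, not the page's own), the model is

$$x_{ijk} = \mu + \rho_i + \gamma_j + \tau_k + \epsilon_{ijk}$$

where $$\rho_i$$ is the row (operator) effect, $$\gamma_j$$ the column (machine) effect and $$\tau_k$$ the treatment (method) effect, the index k being determined by the cell (i, j) of the Latin square.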
An Excel implementation of the design is shown in Figure 4.
Figure 4 – Latin Square Analysis
The left side of Figure 4 contains the data range in Excel format (equivalent to the left side of Figure 3). The middle part of Figure 4 contains the means of each of the factor levels. Representative formulas used are shown in Figure 5.
| Cell | Factor | Formula |
| --- | --- | --- |
| L4 | Row | =AVERAGE(H4:K4) |
| H8 | Column | =AVERAGE(H4:H7) |
| H11 | Treatment | =AVERAGEIF($B$8:$E$11,H10,$B$4:$E$7) |
Figure 5 – Formulas for factor means
The right side of Figure 4 contains the ANOVA analysis. The degrees of freedom for all three factors is 3 (cells P4, P5, P6), equal to r – 1, as calculated by =COUNT(B4:B7)-1. dfT = r² – 1 = 15, while dfE = (r–1)(r–2) = 6.
Formulas for the sum of squares (SS) terms are shown in Figure 6. The other values in Figure 4 are calculated in the usual way.
| Cell | Factor | Formula |
| --- | --- | --- |
| O5 | Treatment | =DEVSQ(H11:K11)*(P5+1) |
| O6 | Rows | =DEVSQ(L4:L7)*(P6+1) |
| O7 | Columns | =DEVSQ(H8:K8)*(P7+1) |
| O8 | Error | =O9-SUM(O5:O7) |
| O9 | Total | =DEVSQ(H4:K7) |
Figure 6 – Formulas for sums of squares
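The same quantities can be computed outside a spreadsheet. The Python sketch below mirrors the DEVSQ-based formulas of Figure 6; the example data are invented for illustration, since the worksheet values themselves are not reproduced here:

```python
def latin_square_anova(y, layout):
    """Sums of squares and degrees of freedom for an r x r Latin Square.
    y[i][j] is the response in row i, column j; layout[i][j] its treatment."""
    r = len(y)
    cells = [v for row in y for v in row]
    grand = sum(cells) / (r * r)
    row_means = [sum(row) / r for row in y]
    col_means = [sum(y[i][j] for i in range(r)) / r for j in range(r)]
    treatments = sorted({t for row in layout for t in row})
    treat_means = [sum(y[i][j] for i in range(r) for j in range(r)
                       if layout[i][j] == t) / r for t in treatments]
    ss = {"total": sum((v - grand) ** 2 for v in cells),        # DEVSQ of all cells
          "rows": r * sum((m - grand) ** 2 for m in row_means),  # DEVSQ * r
          "cols": r * sum((m - grand) ** 2 for m in col_means),
          "treat": r * sum((m - grand) ** 2 for m in treat_means)}
    ss["error"] = ss["total"] - ss["rows"] - ss["cols"] - ss["treat"]
    df = {"treat": r - 1, "rows": r - 1, "cols": r - 1,
          "error": (r - 1) * (r - 2), "total": r * r - 1}
    return ss, df

# Invented data: a cyclic 4 x 4 layout, response driven purely by the method.
layout = [["A", "B", "C", "D"], ["B", "C", "D", "A"],
          ["C", "D", "A", "B"], ["D", "A", "B", "C"]]
vals = {"A": 1.0, "B": 2.0, "C": 3.0, "D": 4.0}
y = [[vals[t] for t in row] for row in layout]
ss, df = latin_square_anova(y, layout)
```

With r = 4 this reproduces the degrees of freedom quoted above: 3 for each factor, dfE = 6 and dfT = 15.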
We see from Figure 4 that there is a significant difference between the four methods (p-value = 0.03345 < .04 = α). There is no significant difference between the operators or between the machines, and so blocking on these factors may not have been necessary in this case.
The analysis is similar when the standard (i.e. stacked) input format is used (see Figure 7). E.g. the mean for row 1 (cell G4) can be calculated by the formula
The mean for treatment A (cell I4) can be calculated by using the formula
Figure 7 – Latin Square Analysis for stacked format
Observation: In the usual three-factor design, the minimum sample size would be 4 × 4 × 4 = 64, while in this design we only require a sample size of 4 × 4 = 16.
Observation: Latin Squares can also be used for a three-factor ANOVA when there are no replications, even when the row and column factors are not nuisance factors, but factors of interest.
## Latin Square in C++
In this tutorial, we are going to learn about the Latin square.
The latin square is a matrix (3 x 3) of the form

1 2 3
3 1 2
2 3 1
If you carefully observe the pattern of the above matrix, then you will find out that the last number of the previous row comes as the first element of the next row.
We have to write the program that generates the above matrix for the input n.
Let's see the steps to write the program for the generation of the latin square.
• Initialise n with the order of the square you like.
• Initialise a number with the value n + 1; call it mid.
• Write a loop that iterates from 1 to n, both inclusive — one iteration per row.
• Assign the value of mid to a temp variable.
• Write a loop that prints temp and increments it until temp exceeds n.
• Then print the values from 1 up to mid - 1.
• Decrement mid before moving to the next row.
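Although the tutorial targets C++, the steps translate directly into any language; here they are as a Python sketch (the same structure carries over to C++ almost line for line):

```python
def latin_square(n):
    """n x n latin square in which the last entry of each row
    becomes the first entry of the next row."""
    mid = n + 1
    rows = []
    for _ in range(n):
        temp = mid
        row = []
        while temp <= n:              # the values temp, temp+1, ..., n ...
            row.append(temp)
            temp += 1
        row.extend(range(1, mid))     # ... followed by 1 up to mid - 1
        rows.append(row)
        mid -= 1                      # next row starts one value earlier
    return rows
```

For example, `latin_square(3)` returns [[1, 2, 3], [3, 1, 2], [2, 3, 1]].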
If you run the above code with n = 3, then you will get the latin square shown at the beginning of this tutorial.
# Waves: find amplitude, given average power
1. Jun 4, 2008
### wayfarer
1. The problem statement, all variables and given/known data
A sinusoidal transverse wave travels on a string. The string has length 7.60 m and mass 6.50 g. The wave speed is 34.0 m/s and the wavelength is 0.250 m.
If the wave is to have an average power of 55.0 W, what must be the amplitude of the wave?
2. Relevant equations
Average Power = 0.5 * sqrt( u F ) * w^2 * A^2 where:
u = density of string, F =force, w = angular velocity, A = amplitude
3. The attempt at a solution
I put in u = (6.50*10^-3)/(0.25) = 8.55 * 10^-4
F =v^2 * u = (34)^2 * (8.55 * 10^-4 )
w = 2*pi* f = 2*pi* v/lambda = 2*3.14*(34)/(0.25)
and solved for A, and got the wrong answer (A = 2.104 m).
Where have I gone wrong?
2. Jun 4, 2008
### alphysicist
Hi wayfarer,
I think there is an error here; you are using the total mass of the string, so you need to put in the total length of the string (not the wavelength of the wave). But I guess that is just a typo in your post, because you have the right answer for $\mu$ so you did really divide by 7.6 m?
Can you give some more details (what you got for the intermediate values F, w, etc.). I did not get 2.104 m from your numbers, but it's difficult to tell where you might have gone wrong without more details.
Last edited: Jun 4, 2008
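Carrying the calculation through with the corrected linear density $\mu = m/L$ can be scripted as a numerical check (this uses the power formula quoted in the thread; the variable names are ours):

```python
import math

length = 7.60    # string length (m)
mass = 6.50e-3   # string mass (kg)
v = 34.0         # wave speed (m/s)
lam = 0.250      # wavelength (m)
P = 55.0         # required average power (W)

mu = mass / length             # linear density: total mass over total length
F = mu * v ** 2                # tension, from v = sqrt(F / mu)
w = 2 * math.pi * v / lam      # angular frequency
# P = 0.5 * sqrt(mu * F) * w**2 * A**2, solved for A:
A = math.sqrt(2 * P / (math.sqrt(mu * F) * w ** 2))
print(round(A, 4))             # about 0.072 m, i.e. roughly 7.2 cm
```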
$A, S, M$ and $D$ are functions of $x$ and $y$, and they are defined as follows.
• $A(x, y) = x + y$
• $M(x, y) = xy$
• $S(x,y)= x-y$
• $D(x,y)= x/y, y\neq 0$
What is the value of $M(M(A(M(x, y), S(y, x)), x), A(y, x))$ for $x = 2, y = 3$?
1. $60$
2. $140$
3. $25$
4. $70$
Given that,
• $A(x,y)=x+y$
• $M(x,y)=xy$
• $S(x,y)=x−y$
• $D(x,y)=x/y;y\ne0$
Now, the value of $M(M(A(M(x,y),S(y,x)),x),A(y,x))$ for $x=2,y=3:$
$\Rightarrow M(M(A(M(2,3),S(3,2)),2),A(3,2))$
$\Rightarrow M(M(A(6,1),2),5)$
$\Rightarrow M(M(7,2),5)$
$\Rightarrow M(14,5)$
$\Rightarrow 70$
Correct Answer $:\text{D}$
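The evaluation above can be checked mechanically with a few lines of Python:

```python
# The four operations defined in the question
A = lambda x, y: x + y
M = lambda x, y: x * y
S = lambda x, y: x - y
D = lambda x, y: x / y  # defined in the question but not needed here

x, y = 2, 3
value = M(M(A(M(x, y), S(y, x)), x), A(y, x))
print(value)  # 70
```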
# Network analysis abuses of null-models
Gorka Zamora-López
Analysing and interpreting data can be a complicated procedure, a maze made of interlinked steps and traps. There are no official procedures for how one should analyse a network. As happens in many scientific fields, the “standard” approach consists of a set of habits that have been popularised in the literature – repeated over and over again – without it always being clear why we analyse networks the way we do.
Imagine we wanted to study an anatomical brain connectivity made of $$N = 214$$ cortical regions (nodes) interconnected by $$L = 4,593$$ white matter fibers (a density of $$\rho = 0.201$$). Following the typical workflow in the literature we would start the analysis by measuring a few basic graph metrics such as the degree of each node $$k_i$$ and their distribution $$P(k_i)$$, the clustering coefficient $$C$$ and the average pathlength $$l$$ of the network. Imagine we obtain the empirical values $$C_{emp} = 0.497$$ for the clustering and $$l_{emp} = 1.918$$ for the average pathlength.
The typical workflow would then lead us to claim that network properties depend very much on the size and the number of links and that, therefore, the results need to be evaluated in comparison to equivalent random graphs, or to degree-preserving random graphs. At this point, we would generate a set of random graphs of the same $$N$$ and $$L$$ as the anatomical connectome, and we would calculate the (ensemble) average values $$C_{rand} = 0.202$$ and $$l_{rand} = 1.800$$. Finally, we would conclude that since $$C’ = C_{emp} \,/\,C_{rand} = 2.466$$ our connectome has a large clustering coefficient and, since $$l’ = l_{emp} \,/\, l_{rand} = 1.066$$, the connectome has a very short pathlength. Therefore, it is a small-world network.
Unfortunately, this line of reasoning is misleading because it skips some intermediate steps that are necessary for an adequate interpretation of the results.
### Relative metrics are not absolute metrics
Let me expose this problem through a fictional example. Imagine that an old woman goes to the doctor’s and the doctor says: “I have good news and bad news for you. Unfortunately, the recent test shows you have a 90% chance to develop lung cancer. The good news is, since you have been a heavy smoker for the last thirty years, there is nothing abnormal about it. So, go home and continue with your life as usual.”
The doctor’s recommendation is obviously absurd. Why? Because in that room, in that moment, the crucial information is that on a scale from 0 to 100% the patient is very close to developing a fatal disease. What matters in that scenario is to first answer the question “is the patient at risk of developing a cancer?” That information alone (90% chance) is sufficiently relevant to cause a strong reaction by the doctor and the patient, who will need to alter her lifestyle. Whether the result of the clinical test is expected or not as compared to some hypothesis – e.g. because of her age, because the woman is a heavy smoker, because she worked twenty years in a chemical factory or because of some genetic factor – is rather irrelevant for the immediate decision-making. Those other questions could be of importance, however, for a scientist investigating the causes of lung cancer, or for a lawyer considering whether to sue a tobacco company or the chemical factory in which the woman used to work.
This imaginary story exposes two of the things that most often go wrong when analysing data and interpreting the outcomes:
1. Just because an observation is expected doesn’t imply the observation is irrelevant. Nor does the reverse hold: that an observation is unexpected or significant doesn’t always mean it is relevant.
2. Null-models – used to calculate those expectations – are to be considered only when, and only where, they are needed.
In this case, the observation (the absolute or the empirical value) is that the test returns a probability $$P_{obs}(x) = 0.90$$ for the patient to develop lung cancer. Apart from that, the doctors may have wanted to contrast the result with some factors, e.g. the age (a) of the patient and the number of years (y) she has been smoking. The parameters $$a$$ and $$y$$ are constraints which, introduced into a null-model, could have returned an expected probability of $$P_{exp}(x | a,y) = 0.87$$. Finally, the doctor reads the relative value $$P’ = P_{obs} \,/\, P_{exp} = 1.03$$ and, since this is barely above 1, concludes that the patient doesn’t have a problem. The absurdity of the case is that – same as has been popularised for network analyses – the doctor reads the relative metric $$P’$$ to provide an interpretation about the magnitude of the observation $$P_{obs}(x)$$.
We shall keep in mind that absolute values are meant to answer questions about the magnitude of an observation such as “is the clustering of a network large?“, “is the pathlength of a network short or long?“, “is probability $$P(x)$$ large or small?” On the other hand, relative values are only useful to answer comparative questions like “how similar is A to B” or “how does A deviate from B?” In order to judge whether the clustering of a network is large or small, we need to judge the empirical value $$C_{emp}$$. Interpreting $$C’ = C_{emp} \,/\, C_{rand}$$ instead for that purpose is misleading because we are demanding the relative value $$C’$$ to answer a question it cannot answer. For the same reason that $$P’ = P_{obs}(x) \,/\, P_{exp}(x|a,y)$$ is not an answer to whether the patient has a large or a small probability of developing a cancer.
Null-models are built upon two types of ingredients: constraints and generative assumptions. In random graphs the conserved numbers of nodes $$N$$ and links $$L$$ are the constraints. The generative assumption is that during the construction of the network links are added considering that any pair of nodes is equally likely to be connected. The values $$C_{rand}$$ and $$l_{rand}$$ are expectation values out of the null-model because their outcome depends both on the constraints and on the hypotheses we make about how the null-model is generated. Therefore, the relative metrics $$C’ = C_{emp} \,/\, C_{rand}$$ and $$l’ = l_{emp} \,/\, l_{rand}$$ are only meant to investigate whether $$C_{emp}$$ and $$l_{emp}$$ may have originated following the assumptions behind the null-model, not to evaluate how large / small or how relevant / irrelevant $$C_{emp}$$ and $$l_{emp}$$ are.
### So, how to fix this?
The reason we analyse data is that we have some question(s) about the system we are investigating. To analyse data is thus to ask those questions of the data in the form of metrics and comparative analyses. Every metric we measure serves to answer a specific question. Every comparison we make does as well. Besides, analysing data is a step-wise process that requires following a reasonable order. In my experience I came to realise that most inconsistencies in the interpretation of data – including some of the most heated debates – occur when we try to answer the wrong question at the wrong step, or when we evaluate the wrong metric for the question we aim at answering. Therefore, we need to be more aware of the steps we take to study networks and of the questions we are targeting at each step, with every metric.
When a new network falls into our hands, one we have never seen before, our first goal is to understand “what does that network look like?” To answer this we don’t need null-models. At this initial step our job is to apply the variety of graph metrics available, to evaluate their relevance individually and together, such that the “picture” of the network takes form and makes sense. Once we have clarified the main properties of the network and its architecture, only then shall we move on to the second step and start asking higher-level questions such as “where does the clustering coefficient of this network come from?” Or, “why does this network have a rich-club?” Or, “how does this network compare to others of similar kind?” Answering those questions requires testing different hypotheses, posed in the form of generative assumptions on how the network may have arisen. We will then build null-models based on those hypotheses and we will compare their outcomes (the expectation values) to the observations in the empirical network. This comparative exploration helps us determine whether our generative assumptions could be right or wrong.
##### Boundaries and limits, the forgotten members of the family
In summary, analysing a network implies two major steps. The first is to discover the properties and the shape of the network. The second step consists of inquiring where those properties come from. For the latter, we will perform comparative analyses against null-models or against other networks. But how do I properly read the relevance of graph metrics in the first step, if not by comparing to other networks?
Well-defined metrics have clear boundaries and those boundaries correspond to unambiguous, identifiable cases. For example, the Pearson correlation takes absolute values from 0 to 1; $$R(X,Y) = 0$$ when the two variables are (linearly) unrelated and $$R(X,Y) = 1$$ only when the two variables are perfectly related (linearly). Those two boundaries, 0 and 1, and their meanings are the landmarks we need to interpret whether any two variables are correlated or not. If I were studying two variables and found that $$R(X,Y) = 0.21$$, then I would know that $$X$$ and $$Y$$ are barely correlated. To derive this conclusion it doesn’t matter what a null-model returns as an expectation under some given assumptions or how this result compares to other cases. What matters is that, on a scale of 0 to 1, $$R(X,Y) = 0.21$$ is very close to the lower bound.
In networks, many metrics have well-defined upper and lower boundaries as well. For example, a node could minimally have no neighbours and thus its $$k_i = 0$$; or maximally, it could be connected to all other nodes such that $$k_i = N-1$$. In the case of the average pathlength, the smallest value it can take is $$l = 1$$, if the network is a complete graph. The longest possible (connected) graph is a path graph, whose average pathlength is $$l = \frac{1}{3}(N+1)$$. The pathlength of disconnected networks is infinite but their global efficiency can be calculated. Global efficiency is minimal, $$E = 0$$, in the case of an empty graph (a network with no links) and it is maximal, $$E = 1$$, for complete graphs. These boundaries and their corresponding graphical configurations are the natural landmarks that help us evaluate whether a network has a short pathlength (large efficiency) or long (small efficiency).
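These landmarks are easy to verify numerically. The sketch below (illustrative code, not from the post) computes the average pathlength by breadth-first search and checks the two extreme configurations quoted above:

```python
def avg_pathlength(adj):
    """Average shortest-path length of a connected graph.
    adj maps each node to the set of its neighbours."""
    n = len(adj)
    total = 0
    for s in adj:
        # Breadth-first search from s collects the distance to every node.
        dist = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        total += sum(dist.values())
    return total / (n * (n - 1))

N = 7
complete = {i: {j for j in range(N) if j != i} for i in range(N)}
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < N} for i in range(N)}
```

Here `avg_pathlength(complete)` gives 1 and `avg_pathlength(path)` gives (N + 1)/3, the two landmarks for the shortest and longest connected graphs.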
Now, we also have to acknowledge that, on occasion, the upper and the lower bounds may represent impractical solutions. For example, I just mentioned that the boundaries for the pathlength and for the global efficiency are characterised by empty graphs, complete graphs and path graphs. These configurations can only happen for specific combinations of $$N$$ and $$L$$, so what happens with networks of arbitrary size and density? In those cases, there is no need to invoke null-models for help. Instead, what we really need is to identify the limits of the pathlength and efficiency for the given $$N$$ and $$L$$, and use those limits in combination with the boundaries as references to make the judgements.
Identifying the practical limits for all network metrics can be a difficult challenge but it is a necessary effort the community should face. In Zamora-López & Brasselet (2019) we identified the upper and the lower limits of the average pathlength and of the global efficiency for (di)graphs with any arbitrary combination of $$N$$ and $$L$$. Barmpoutis and Murray (2011) accomplished the same for the betweenness centrality, the radius and the diameter of graphs. I am sure that completing the list of graph metrics with known analytical limits will help the field achieve more transparent and more accurate interpretations, and will help restrict the use of null-models to those situations and questions for which they are truly helpful.
### Concluding …
I have tried to expose why the popular workflow for analysing networks overstates the use of null-models. I hope the following take-home messages became clear:
1. Data analysis is all about asking questions to the data. Every network metric (absolute, expectation or relative) serves the purpose to answer a given but different question.
2. Analysing a network is a step-wise process and every step is also meant to answer different types of questions. First, we want to understand what the network looks like and, second, where its architecture comes from or how it compares to other networks.
3. Null-models are an important tool for the questions in the second step because they are built upon constraints and generative hypotheses. Null-models are thus meant for hypothesis testing, not for assessing the magnitude of an observation.
4. Relative metrics such as $$C’ = C_{emp} \,/\, C_{rand}$$ do not inform whether $$C_{emp}$$ is large or small. For that, the boundaries and the practical limits of the metrics need to be considered as the landmarks that allow us to judge the magnitude (and the importance) of $$C_{emp}$$.
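Point 4 can be illustrated numerically. The sketch below computes the average clustering coefficient of a toy graph and forms the relative metric $$C' = C_{emp} / C_{rand}$$; the value of $$C_{rand}$$ here is a made-up placeholder, not the output of an actual rewiring null-model. The point is that a large ratio says nothing, by itself, about where $$C_{emp}$$ sits between its boundaries $$C = 0$$ and $$C = 1$$.

```python
def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph,
    given as a dict of adjacency sets."""
    coeffs = []
    for node, neigh in adj.items():
        k = len(neigh)
        if k < 2:
            coeffs.append(0.0)
            continue
        # Count links among the neighbours of this node.
        links = sum(1 for u in neigh for v in neigh
                    if u < v and v in adj[u])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(adj)

# Toy graph: one triangle (0,1,2) plus a pendant chain 2-3-4-5.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4}, 4: {3, 5}, 5: {4},
}
C_emp = avg_clustering(adj)

# Hypothetical null-model value (assumed here for illustration only):
C_rand = 0.05
C_rel = C_emp / C_rand  # "C_rel times more clustered than random"

# C_rel can look impressive while C_emp itself may sit anywhere
# between the boundaries C = 0 and C = 1.
print(C_emp, C_rel)
```

Here the network appears several times "more clustered than random", yet that ratio alone cannot tell us whether $$C_{emp}$$ is close to the lower or to the upper boundary; only the landmarks can.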
For brevity I had to leave many matters aside in this post. Especially important are the practical implications for how to classify hubs, how to identify rich-clubs without relying on null-models, when a network is small-world, and what the consequences are for community detection. Indeed, I would dare to say that the systematic use of null-models in community detection methods, e.g., via diverse modularity measures, biases their results. I hope to treat those topics in future posts.
For now, feel free to leave your views and comments below. And, if you would like to write your own post, let me know 🙂 Please notice that comments are moderated and therefore will not appear immediately. Comments accept mathematical equations written in LaTeX notation.
###### References
D. Barmpoutis & R.M. Murray, “Extremal Properties of Complex Networks.” arXiv:1104.5532 (2011).
G. Zamora-López & R. Brasselet, “Sizing Complex Networks.” Commun. Phys. 2:144 (2019).
Gorka Zamora-López is a post-doctoral researcher with over 15 years of experience in the field. Nowadays, he replies to e-mails, attends Zoom meetings and makes colourful figures all day long.
## 1 thought on “Network analysis abuses of null-models”
1. Guang Ouyang says:
I do feel that comparing to a random network is absurd. People do it as if there were a process by which things all “evolve” from a random network…
However, comparison is always necessary to make sense of things; of everything, actually. In the case of the 0.9 probability of getting cancer, the value 0.9 is also generated by comparing the test result of the woman to that of the general population. A correlation value of 0.21 is also essentially generated by comparing the covariance value to the distribution of covariances from two independent variables with specific variances and numbers of samples.
For me, it is just that comparing with a random network for no obvious reason sounds strange. Not the comparison itself.
Relatedly, I find the theory that the perception of us human beings is generated by the comparison between the brain’s prediction and the actual stimulus very fascinating. No comparison, no perception…