https://chemistry.stackexchange.com/questions/50744/how-does-lc50-translate-to-real-world-exposure-for-paint-thinner

# How does LC50 translate to “real world” exposure for paint thinner?
Last year I started oil painting, which means (in part) that I use paint thinner for working with and/or cleaning brushes. I recently became concerned about my exposure to the vapor from the paint thinner I use, so I requested a toxicology report from the manufacturer. In the document it specifies that the summated $\mathrm{LC}_{50}$ for the product is 21400 mg$\cdot$ m$^{-3}$.
My (basic) understanding of this is that 50% of test subjects will die with a single exposure at the above level; however, I've really no idea how that translates to "real life" and wonder if someone can shed some light on it.
I'm painting in a reasonably small room (around 3 m x 5 m) with windows open, so I'm essentially trying to determine if this is safe or not, or whether I should be using breathing apparatus and/or move to a different room.
This is a link to the whole document I received from the manufacturer, which contains the substance and other information.
You are right that $\mathrm{LC_{50}}$ is the concentration at which $50~\%$ of test subjects die. For a room of $3~\mathrm{m} \times 5~\mathrm{m}$ with a $3~\mathrm{m}$ ceiling to reach a concentration of $21400~\mathrm{mg/m^3}$, you would have to evaporate $963~\mathrm{g}$ of the material ($3~\mathrm{m} \times 5~\mathrm{m} \times 3~\mathrm{m} \times 21.4~\mathrm{g/m^3} = 963~\mathrm{g}$) within a sealed room. Unless your material is very volatile, it is unlikely you will even come close to this concentration. That said, this does not mean that a lower concentration could not harm a more susceptible person, nor that the prolonged smell won't make you feel sick. It only says that at this concentration you have a $50~\%$ chance of dying if exposed for a specified amount of time (which, in this case, we are not told).
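A quick sanity check of this arithmetic in Python (the $3~\mathrm{m}$ ceiling height is an assumption carried over from the estimate above):

```python
# Back-of-the-envelope check: mass of thinner that must evaporate in a
# sealed room to reach the reported LC50 concentration.
room_volume_m3 = 3 * 5 * 3          # 3 m x 5 m floor, assumed 3 m ceiling
lc50_mg_per_m3 = 21400              # from the manufacturer's report
mass_needed_g = room_volume_m3 * lc50_mg_per_m3 / 1000  # mg -> g
print(mass_needed_g)                # -> 963.0
```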
The $\mathrm{LD_{50}}$ and $\mathrm{LC_{50}}$ (lethal dose and lethal concentration) values are measures of acute toxicity, as you noted. Long-term exposure is generally not captured well by these acute toxicity measures. A.K. pointed out how improbable it is to face a serious risk of acute poisoning here. (Also note that the vapour pressure is given as $2~\mathrm{mmHg}$, which is $2.7~\mathrm{mbar}$ in proper units, meaning that evaporation will be very slow at ambient temperature and pressure.)
http://mathoverflow.net/questions/19828/the-space-of-lie-group-homomorphisms

The space of Lie group homomorphisms
Let $\ \mathrm{Hom}(H,G)\$ be the space of Lie group homomorphisms between compact connected Lie groups $H$, $G$. What is known about homology (or homotopy) groups of $\mathrm{Hom}(H,G)$?
UPDATE: $G$ acts on $\mathrm{Hom}(H,G)$ by conjugation, and the orbits are
a) connected, as $G$ is connected,
b) closed, as easily follows from compactness of $G$, and
c) open, for it is a classical result that any two nearby homomorphisms of $H$ into $G$ are conjugate; this uses compactness of $H$, and a proof can be found in the book by Conner-Floyd, "Differentiable periodic maps", Chapter VIII, Lemma 38.1.
Thus $\mathrm{Hom}(H,G)$ is the disjoint union of the $G$-orbits, and the $G$-orbit that contains a representation $r$ is homeomorphic to $G/Z_G(r(H))$, where $Z_G(r(H))$ is the centralizer of $r(H)$ in $G$.
What I do not yet understand is how to see whether $\mathrm{Hom}(H,G)$ has infinitely many connected components.
UPDATE: the topology on $\mathrm{Hom}(H,G)$ is that of uniform convergence.
What's your topology on $\mathrm{Hom}(H,G)$? It is most likely discrete in many cases.... $G = U(1)$ and $H = U(1)$ being the simplest example. One can begin with the case where $G = U(n)$, then it's actually about representations of $H$, which is, as we know, discrete and indexed by the root lattice. – Bo Peng Mar 30 '10 at 16:59
Bo Peng, I just realized that nearby homomorphisms of $H$ into $G$ are conjugate in $G$, which is what you probably mean when talking about discreteness. Thanks! – Igor Belegradek Mar 30 '10 at 17:35
Here I deleted the comment in which I was stating that the topology on $\mathrm{Hom}(H,G)$ is that of pointwise convergence; this is incorrect, and it should be uniform convergence topology. Hats off to Tom Church for pointing this out. – Igor Belegradek Mar 30 '10 at 22:01
I deleted my answer below, because half of it was wrong, and the other half no longer applies after the update to the question. Thanks to Charles Rezk for pointing out my mistake. – Tom Church Mar 30 '10 at 22:33
In general there are infinitely many connected components. Look at the case when both groups are the one-dimensional torus: every $k\in\mathbb Z$ gives a homomorphism $z\mapsto z^k$, so the set is isomorphic to $\mathbb Z$ in this case. But this phenomenon has to do with central tori. In general, the groups are products of simple Lie groups and central tori. If both groups are simple, they have nontrivial homomorphisms only if they are isomorphic, so it boils down to determining isomorphisms; these induce isomorphisms of the Lie algebras which preserve the Killing form. The latter is definite, hence its isometry group is compact, which should suffice to make the set of conjugacy classes finite.
Thanks for the input. Perhaps there is no "easy to state" way to enumerate components of $\mathrm{Hom}(H,G)$. One extreme is when $G$, $H$ are tori; then everything boils down to $\mathrm{Hom}(T^k, S^1)$, i.e. to enumerating codimension 1 subtori in $H=T^k$. At the other extreme if $H$, $G$ are isomorphic and simple, then mod conjugation we are looking at $\mathrm{Out}(G)$, which I think is finite. – Igor Belegradek Apr 1 '10 at 12:33
http://discretization.de/en/events/9/

# Kis-Sem: Keep it simple Seminar
This weekly seminar is intended for PhD Students and Postdocs to informally discuss topics in the field of Geometry and Mathematical Physics.
• Next Occurrence: 26.05.2017, 14:00 - 15:00
• Thilo Rörig: Pluecker geometry of ruled surfaces
• Type: Seminar
• Location: TU Berlin, MA875 H-Cafe
Contact: Thilo Rörig
Room: MA875 (H-Cafe)
Time: Tuesdays at 14:00, and Fridays at 14:00
#### 30.05.2017
• 14:00 - 15:00 TBA, Isabella
#### 26.05.2017
• 14:00 - 15:00 Pluecker geometry of ruled surfaces, Thilo Rörig
#### 25.04.2017
• 14:00 - 15:00 On conformally equivalent triangle lattices, Ulrike Bücking
#### 07.04.2017
• 14:00 - 15:00 Phenomena of spin transformations with prescribed bi-normal, Christoph Seidel
#### 04.04.2017
• 14:00 - 15:00 A variational principle for isoradial graphs, Niklas Affolter
• I will repeat the basic idea of Rivin's theorem on prescribed dihedral angles and the discrete conformal theory. Then I want to show how these ideas can be reapplied to obtain a variational principle for general isoradial graphs, and how, consequently, one can combine them to get a principle for flat isoradial graphs.
#### 31.03.2017
• 14:00 - 15:00 Some aspects of integrability in the Six Vertex model, Ananth Sridhar
• I will review the basic correspondence between the Hamiltonian and the Lagrangian frameworks and talk about some aspects of integrability in the Six Vertex model.
#### 28.03.2017
• 14:00 - 15:00 Projective differential geometry, Thilo Roerig
• I will give a little introduction to *Projective differential geometry* and maybe give a nice description of the Lie quadric in Pluecker geometry.
#### 17.03.2017
• 14:00 - 15:00 Pluri-Lagrangian systems and KdV hierarchy, Mats Vermeeren
• Today at 2pm I will talk about pluri-Lagrangian systems. The pluri-Lagrangian description of integrable systems is based on a variational principle and on the key fact that an integrable system is generally part of a family of compatible equations. It can be applied both in the continuous case (e.g. KdV hierarchy) and in the discrete case (e.g. quad equations).
#### 14.03.2017
• 14:00 - 15:00 Knots, minimal surfaces, mapping class groups, Benedikt Kolben
• Today, the 14.3. at 2pm I would like to talk to you about knots, how one could go about trying to encode them in a finite symbol, what other sciences have to say, and some of the mathematics involved. I will assume that the tools used are far from common knowledge and begin with an introduction to mapping class groups, a key player in our mathematical set-up alongside orbifolds, after outlining the general idea of my approach to knot enumeration.
#### 10.03.2017
• 14:00 - 15:00 Lagrange multipliers of a maximum entropy distribution, Niklas Affolter
• Last time Jan talked about the maximum entropy principle. In my talk today I want to show how the Lagrange multipliers involved can be seen as the critical points of a convex functional. We then want to apply this method to the Dimer model and see how far we can take it.
#### 07.03.2017
• 14:00 - 15:00 Plausible inference and the maximum entropy principle, part2, Jan Techter
#### 03.03.2017
• 13:00 - 14:00 Plausible inference and the maximum entropy principle, Jan Techter
• Plausible reasoning is a generalization of deductive reasoning. The main ingredient is Cox's theorem, which states that under certain assumptions of consistency and qualitative correspondence to common(?) sense, plausible inference is governed by the laws of probability theory. Anyway, if you want to reason plausibly you still need some prior probabilities to start with, representing your state of knowledge. One possible approach is the maximum entropy principle. It is based on Shannon's finding that (again) given some sensible requirements there is a unique measure for the uncertainty of probability distributions. As an application we will derive the Boltzmann/Gibbs distribution of statistical mechanics.
#### 30.11.2016
• 14:30 - 15:30 Smooth polyhedral surfaces, Felix Günther
#### 27.07.2016
• 15:15 - 15:45 Crystalline structures from hyperbolic tilings, Benedikt Kolben
• The EPINET project enumerates crystalline frameworks that arise as structures derived from hyperbolic tilings. Using combinatorial tiling theory by Dress and Delaney, the 3-dimensional structures arising through this process can be ordered by complexity. The aim is to ultimately construct and classify all possible three-dimensional structures that arise in this way. This approach was recently expanded to include regular examples of so-called free tilings, which are tilings that include unbounded tiles and resulted in many novel 3-dimensional structures that also contained separate but interwoven nets. The goal of this work is to construct three-dimensional nets and weavings from hyperbolic tilings that arise by further generalizing the above approach to incorporate irregular tilings with two distinct edges. These tilings are projected onto some prominent examples of triply periodic minimal surfaces such as the P, D, G and H surface. Using this process, we can systematically construct increasingly complicated 3-dimensional structures. While this work has ties to areas as diverse as the mesoscale structure of soft matter or knot theory, before looking at the arising three-dimensional structures, the first step of the problem is to find a way to order, by complexity, all subsymmetries of an asymmetric patch of the minimal surface that represent the same group of symmetries.
• 16:00 - 16:30 Ricci Flow III: On the topological condition for existence of a circle packing metric with constant Gaussian curvature, Hana Kourimska
• In the earlier KisSem talks we have briefly seen that an existence of a circle packing metric with constant Gaussian curvature is equivalent to a certain topological condition, developed by Thurston. The goal of today's talk will be to understand this condition.
• 16:45 - 17:15 Conjugate Silhouette nets, Thilo Roerig
• We will study Laplace transformations of surfaces with conjugate parametrization and show that degenerate Laplace transformations are characteristic of projective translational surfaces.
#### 22.06.2016
• 14:30 - 15:30 Infinitesimal deformations of discrete surfaces, Wai-Yeung Lam
#### 01.06.2016
• 14:15 - 15:15 Ricci flow, part 2, Hana Kourimska
• After the introduction to the smooth and discrete Ricci flow of a few weeks ago, I will look deeper into the properties of the first of the discrete Ricci flows based on a weighted triangulation. I will discuss some parts of the proof of convergence of this Ricci flow to a metric of constant curvature and the existence and uniqueness of such metric.
#### 25.05.2016
• 14:00 - 15:00 Envelope and orthogonal trajectories of a family of circles, Jan Techter
• We will discuss the elementary problem of finding the two envelope curves and all orthogonal trajectories of a one-parameter family of circles in the plane. The latter case is governed by a Riccati equation, which describes the infinitesimal motion of a Möbius transformation. We will also consider a possible discretization using the local symmetry, which leads to similar equations.
#### 04.05.2016
• 14:00 - 15:00 Projective model of Möbius geometry, Thilo Roerig
#### 22.04.2016
• 12:00 - 13:00 Variational Methods for Discrete Surface Parameterization. Applications and Implementation., Stefan Sechelmann
#### 20.04.2016
• 13:00 - 14:00 Ricci flow, part 1, Hana Kourimska
• Introduced in the 1980's by Richard Hamilton, the Ricci flow is nowadays one of the most useful tools to study the properties of Riemannian manifolds, in particular in dimension three, and it has played an essential role in proving Thurston's geometrization conjecture, thus classifying all closed 3-manifolds. I will start the talk by mentioning the role of the smooth Ricci flow in modern mathematics and then explaining its behaviour, concentrating on manifolds of dimension 2 - surfaces. We will encounter and discuss different discretizations of the flow, depending on the choice of discretization of the metric and the Gaussian curvature.
#### 15.04.2016
• 12:00 - 13:00 Super-Nets, Thilo Roerig
#### 30.03.2016
• 13:00 - 14:00 Teichmüller maps, part 2, Lara Skuppin
#### 23.03.2016
• 13:00 - 14:00 Teichmüller maps, part 1, Lara Skuppin
• In this talk, I will present an introduction to extremal quasiconformal mappings (in the continuous theory). We will start with the definition of quasiconformal mappings and review the Grötzsch problem of finding an extremal quasiconformal mapping between two rectangles. In order to proceed to a more general case, we will then discuss holomorphic quadratic differentials and Teichmüller maps, which are very special quasiconformal maps: Namely, these can be described by a pair of holomorphic quadratic differentials that locally yield conformal coordinates in which the map is just an affine stretch. Our goal is to explain Teichmüller's theorem, which asserts that given two Riemann surfaces of the same (finite, non-exceptional) type, Teichmüller maps are the unique extremal quasiconformal mappings in each homotopy class.
#### 16.03.2016
• 13:00 - 14:00 The dimer model, Niklas Affolter
• We introduce the dimer model, a topic in statistical physics. It deals with perfect matchings in graphs, where the probability of picking a matching comes from the sum of the involved edge energies. There are some surprising geometric results, including the occurrence of a familiar function... This will be an introductory talk, presenting the definitions, some results and details on how to count perfect matchings with determinants.
#### 11.03.2016
• 12:00 - 13:00 Discrete Confocal Quadrics as orthogonal Koenigs nets, Jan Techter
• We introduce discrete confocal quadrics as separable solutions of the discrete Euler-Darboux equation. They are discrete Koenigs nets, and up to component-wise rescaling, satisfy a new discrete orthogonality condition involving a combinatorially dual net. We also show that discrete confocal conics derived from incenters of incircular nets belong to the same class of orthogonal Koenigs nets.
#### 09.03.2016
• 13:00 - 14:00 Rigidity theory, Wai-Yeung Lam
• Basic introduction to rigidity theory for discrete surfaces.
#### 02.03.2016
• 13:00 - 14:00 Minimal surfaces from discrete harmonic functions, Wai-Yeung Lam
• We introduce discrete harmonic functions in the sense of the cotangent Laplacian. We show that given a discrete harmonic function on a planar triangular mesh, there is a family of discrete surfaces sharing properties analogous to smooth minimal surfaces. Certain discrete minimal surfaces, including those from Schramm's orthogonal circle patterns, are in addition critical points of the total area.
#### 04.12.2015
• 11:00 - 12:00 On a discretization of confocal quadrics, part 2, Jan Techter
• discrete part: discrete Euler-Darboux equation and discrete confocal quadrics up to component-wise scaling
#### 27.11.2015
• 11:00 - 12:00 On a discretization of confocal quadrics, part 1, Jan Techter
• smooth part: confocal quadrics and the Euler-Darboux equation
#### 26.06.2015
• 11:00 - 12:00 Zero-sum problems in abelian groups, Florian Frick
#### 12.06.2015
• 11:00 - 12:00 On connections between electric networks, discrete harmonic functions, extremal length and random walks III, Ulrike Bücking
#### 05.06.2015
• 11:00 - 12:00 On connections between electric networks, discrete harmonic functions, extremal length and random walks II, Ulrike Bücking
#### 29.05.2015
• 11:00 - 12:00 On connections between electric networks, discrete harmonic functions, extremal length and random walks, Ulrike Bücking
#### 27.02.2015
• 11:00 - 12:00 Inscribed cyclic polygons, Hanna Kourimska und Lara Skuppin
#### 20.02.2015
• 11:00 - 12:00 Poncelet's Porism, Ulrike Bücking
#### 23.01.2015
• 11:00 - 12:00 Euclidean plane geometry via geometric algebra, Charles Gunn
#### 16.01.2015
• 11:00 - 12:00 Lexell's Theorem in the hyperbolic plane, Christoph Seidel
#### 05.11.2014
• 10:15 - 11:15 Nets with unique nodes and spherical geometry, Thilo Rörig
#### 29.10.2014
• 09:45 - 11:00 Discrete line congruences on triangulated surfaces, Jan Techter
#### 22.10.2014
• 10:15 - 11:15 Isothermic triangulated surfaces, Wayne Lam
#### 15.10.2014
• 10:15 - 11:15 Nerve complexes of arcs in $\mathbb{S}^1$, Florian Frick
#### 18.06.2014
• 16:15 - 17:15 Constrained Willmore Minimizers - Theory and Experiments, Lynn Heller (Uni Tuebingen)
#### 28.05.2014
• 16:15 - 17:15 Geometric invariant theory of the space - a modern approach to solid geometry (with a much simpler proof of Kepler's conjecture as an exemplary application), Wu-Yi Hsiang (UC Berkeley/Hong Kong University)
#### 02.04.2014
• 14:00 - 15:00 Cyclidic and hyperbolic nets, Emanuel Huhnen-Venedey
• A piecewise smooth discretization of orthogonal and asymptotic nets in discrete differential geometry.
#### 26.03.2014
• 14:00 - 15:00 Cyclidic and hyperbolic nets, Emanuel Huhnen-Venedey
• A piecewise smooth discretization of orthogonal and asymptotic nets in discrete differential geometry.
#### 19.03.2014
• 14:00 - 15:00 Thickening Dubins Paths, Thomas El Khatib
#### 26.02.2014
• 14:00 - 15:00 Tverberg's Theorem strikes back, Florian Frick
#### 31.01.2014
• 12:00 - 13:00 Quasi-conformal distortion, Lara Skuppin
#### 17.01.2014
• 12:00 - 13:00 Generalized isoradial circle patterns, Jan Techter
#### 06.12.2013
• 10:15 - 12:00 Elastic curves and knots, Thomas El Khatib
#### 22.11.2013
• 10:15 - 12:00 Splitting Separatrices in Dynamical Systems, Marina Gonchenko
#### 15.11.2013
• 10:15 - 12:00 Hyperbolic Delaunay Triangulations, Thilo Rörig
#### 08.11.2013
• 10:15 - 12:00 Teichmüller spaces, Lara Skuppin
#### 01.11.2013
• 10:15 - 12:00 Teichmüller spaces, Lara Skuppin
#### 25.10.2013
• 10:15 - 12:00 Statistical Mechanics, Andrew Kels
#### 18.10.2013
• 10:15 - 12:00 Subdivision of Koenigs nets, Stefan Sechelmann
#### 21.06.2013
• 10:15 - 12:00 Troyanov Theorem on Riemann surfaces and polyhedral metrics, Michael Joos
#### 14.06.2013
• 10:15 - 12:00 Canonical immersions of complex tori, Andre Heydt
#### 31.05.2013
• 10:15 - 12:00 From Maxwell's equations to Hamiltonian Flows on Phase Space, Christian Lessig
#### 24.05.2013
• 10:15 - 12:00 Triangulations with valence bounds, Florian Frick
#### 17.05.2013
• 10:15 - 12:00 Theorem on circles and lines - old and new, Arseniy Akopyan
#### 03.05.2013
• 10:15 - 12:00 Symmetries on Riemann surfaces, Isabella Thiessen
#### 26.04.2013
• 10:15 - 12:00 Darboux transforms of plane curves, Thilo Rörig
#### 19.04.2013
• 10:00 - 12:00 Direction fields, Felix Knöppel
#### 12.04.2013
• 10:00 - 12:00 Nets on surfaces, Thilo Rörig
#### 15.03.2013
• 10:00 - 12:00 Axis of motions in different geometries/ Discussion: Hopf fibration, Charles Gunn
#### 08.03.2013
• 10:00 - 12:00 Smooth vector fields on discrete surfaces, Felix Knöppel
#### 01.03.2013
• 10:00 - 12:00 Axes of motions via geometric algebra in different metrics, Charles Gunn
#### 22.02.2013
• 10:00 - 12:00 Axes of hyperbolic motions, Thilo Rörig
#### 15.02.2013
• 10:00 - 12:00 From Hyperboloid to Poincare model via Klein model, Thilo Rörig
#### 08.02.2013
• 10:00 - 12:00 A game on graphs, Felix Günther
#### 01.02.2013
• 10:00 - 12:00 Curvature line and asymptotic line parametrizations in Lie and Pluecker Geometry, Emanuel Huhnen-Venedey
#### 18.01.2013
• 10:00 - 12:00 Homology theories, Stefan Born
#### 07.12.2012
• 10:00 - 12:00 , Nikolay Dimitrov
#### 30.11.2012
• 10:00 - 12:00 3D- and 4D-consistency, quad equations, Bäcklund transformations and consistency, Bianchi permutability, Raphael Boll
#### 23.11.2012
• 10:00 - 12:00 Discrete and smooth KdV-equations, Bäcklund transformations, quad equations, 3D-consistency, Raphael Boll
#### 16.11.2012
• 10:00 - 12:00 Cosine-law for spherical triangles and dynamical systems, Matteo Petrera
#### 09.11.2012
• 10:00 - 12:00 Schläfli principle, David Chubelaschwili
http://casa.colorado.edu/~danforth/comp/tex/thesistex.html

# Astro-ThesisTeX
(latest update: 3-20-03, by C. Danforth) Writing a doctoral thesis is painful enough without the added frustration of LaTeX! Many astrophysics grad students are familiar with writing papers using the AASTeX package--an add-on to LaTeX published by the American Astronomical Society and the format of choice for most astronomical journals. JHU and most universities have strict formatting requirements for official dissertations. Fortunately, there is a very useful package to put your generic LaTeX document into the approved thesis format (double spacing, margins that are just so, title pages, and all the rest). Unfortunately, the thesis package and AASTeX are incompatible and result in much bloodshed, angry words and general bad juju. Generations of grad students have coped with these technical difficulties through a mixture of hand-me-down code, vague rumors and experimentation.
Hopefully this document will make life easier for everyone...
Please note: This code is no longer supported (if indeed it ever was). I am very pleased that so many people have found it useful over the past five years, but I have moved on and can't afford the considerable effort required to keep it current with the latest versions of LaTeX and/or AASTeX. If nothing else, I hope that the solutions here will give you a place to start. If you run across a specific problem and find a solution you'd like to share, please send it to me and I'll be happy to incorporate your suggestions in the FAQ. I wish you the best of luck! Charles
## Preliminaries
First things first: do you know how to use LaTeX? If not, much of this will be quite cryptic and not terribly useful.
Learning LaTeX: As always, the best way to learn is by example. Norman Matloff has created a nice page of introductory notes with simple examples and links to lots of resources. There's another nice primer and lexicon maintained by David Wilkins.
Familiar with LaTeX? Great! How about AASTeX? AASTeX is just a package of routines to optimize LaTeX for the astronomical community. There are loads of handy symbols, formatting options, bibliography tools, and other handy goodness. Moving on...
#### Why shouldn't I just use MSWord or some other word processor?
This is a valid question. LaTeX is not a word processing program but a typesetting program. Your document is written in plain text with tags in it (much like HTML but less annoying). This can be done in any word processing program (Emacs, vi, Simpletext, MSWord, clay tablets) on any platform. It is then compiled into a Postscript or PDF file which can be read and printed out by pretty much everyone. The output looks the same for everyone regardless of which fonts they have installed, or what operating system they're using. Word documents and the like are only legible by some of the population and will look different on different machines. More importantly, most of the astronomical community (including all the journals) use LaTeX and some require submissions in this format. You'll be writing all your papers in it; so you might as well get used to it. LaTeX is extremely geeky with more degrees of freedom than you can possibly imagine. Best of all, there are lots of astronomer-specific add-ons one can use to make your life easier.
## The Thesis Class and the files you'll need
Now for the actual thesis writing. You'll need the JHU Thesis style file written by Ian Goh. He has written a nice bit of documentation which you need to read. Be sure to download the thesis class definition file (thesis.cls) and one or more of the point size files (jhu10.clo, jhu11.clo, jhu12.clo) and save them in the local directory where you will be working. The files are also located on Ian's website http://engspec2.cer.jhu.edu/~ian/jhuthesis/new/v03.1/. There are various tags accepted by the thesis class pertaining to what kind of degree you're getting (PhD, MA, MS, etc.) and the type of document (Dissertation, Essay, etc.).
## Writing Your Thesis in Eight Easy Steps
Time to start your thesis. There are several parts which I will go through in order.
1. #### Preamble
Every LaTeX document starts with a preamble where various styles, margins, and other options are defined. Every preamble starts with a document class and in this case you'll want something like \documentclass[10pt]{thesis}. This tells LaTeX to read the 'thesis.cls' class file to learn about margins, formatting and various other conventions different from normal LaTeX. In the example above, I'm using 10pt type, but you can change this to 11pt or 12pt.
There are other lines in the preamble which install various other functionality. We'll deal with these later.
The final line of the preamble is \begin{document} which matches an \end{document} at the end, the last line in your file. All your text, figures, tables and so forth will go between these two lines and are described below.
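Putting the pieces above together, a minimal skeleton might look like this (a sketch only; the comments mark where the parts described below go):

```latex
% Minimal skeleton, assuming thesis.cls and the jhu10.clo point size file
% are in your working directory.
\documentclass[10pt]{thesis}
% ... other package and option lines go here ...
\begin{document}
% title page, front matter, chapters, bibliography, vita ...
\end{document}
```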
2. #### Title Page
\title and \author are pretty self explanatory. Next comes a line defining what kind of document it is. This can be any one of the following: \thesis, \essay, \dissertation; you'll probably want the last. Next comes what kind of degree you're getting: \masterarts, \masterscience, or \doctorphilosophy. Including the line \copyrightnotice puts in a line about how it's copyrighted to you (whether or not you've actually filed the copyright paperwork is a separate matter). You'll also need to specify \degreemonth{} and \degreeyear{}. The command \maketitle turns the above information into a nice-looking title-page.
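Assembled in one place, a title-page block using the tags above might read (the name, title, and date are placeholders):

```latex
% Title-page block, per the tags described above.
\title{My Thesis Title}
\author{Jane Q. Student}
\dissertation          % or \thesis, \essay
\doctorphilosophy      % or \masterarts, \masterscience
\copyrightnotice
\degreemonth{May}
\degreeyear{2003}
\maketitle
```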
3. #### Front Matter
Next comes the front matter (tables of contents, abstracts, tables of figures and so forth). This is defined in a block bracketed by \begin{frontmatter} \end{frontmatter}. Inside you'll have an abstract bracketed by \begin{abstract} \end{abstract}. Your abstract shouldn't be more than 350 words and must include the name of your academic advisor (usually in the last sentence).
Then come the three lines: \tableofcontents, \listoffigures, and \listoftables which will generate the obvious tables. The beauty of LaTeX is that, as the text changes, the page numbers on these contents lists will be updated effortlessly.
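A sketch of the whole front-matter block as described above:

```latex
\begin{frontmatter}
\begin{abstract}
% no more than 350 words; name your academic advisor,
% usually in the last sentence
\end{abstract}
\tableofcontents
\listoffigures
\listoftables
\end{frontmatter}
```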
4. #### Dedications and Acknowledgements
Some people may wish to include additional information in the front matter. If you wish to acknowledge help, ideas or support (financial or otherwise), you can place this text between \begin{acknowledgement}\end{acknowledgement} tags. Typically, this will immediately follow your abstract both in your code and in the final printed text.
Similarly, if you want to dedicate all your colossal thesis-writing efforts to some significant person, pet, deity, event or natural phenomenon, include a \begin{dedication} \end{dedication} tag. The dedication will appear, unlabeled, on a blank page immediately before the first chapter and after all the tables of contents and figure lists. Both dedications and acknowledgements are optional and their use varies by personal preference.
5. ### The Text
This is the real meat of the thesis and the part you should (quite rightfully) spend 99% of your time on. Presumably your thesis will be pretty large and will contain a hundred pages of text or more. It's convenient in a large document such as this to split things up into multiple files corresponding to individual chapters or sections and insert them appropriately with \input{file} or \include{file}. The difference between the two is explained in the FAQ. I have an include for each chapter which keeps my actual thesis.tex file itself pretty small and manageable.
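For instance (a minimal sketch, with hypothetical chapter file names), \include pairs nicely with \includeonly in the preamble, which reprocesses just the listed chapters on a given run while preserving page numbering and cross-references for the others:

```latex
% Preamble: only chapter3.tex is recompiled on this run;
% numbering and references for the other chapters are kept.
\includeonly{chapter3}

\begin{document}
\include{chapter1}
\include{chapter2}
\include{chapter3}
\end{document}
```

This is a big time-saver when a full compile of the thesis takes a while.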
As for actually writing your thesis for you, I'm the wrong person to ask ;-)
6. #### Bibliography
After all the text, figures and whatnot that comprise 90% of the work, comes the bibliography. This should look something like this:
\addcontentsline{toc}{chapter}{Bibliography}
\begin{thebibliography}{} \end{thebibliography}
The first line above adds the Bibliography to the table of contents. All of your references will go between the begin and end lines, as usual. The mechanics of citations and referencing is covered in more detail in my Natbib tutorial.
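For the curious, a minimal hand-written bibliography block might look like this (the entry itself is purely illustrative, and natbib-style author-year labels are assumed):

```latex
\addcontentsline{toc}{chapter}{Bibliography}
\begin{thebibliography}{}
\bibitem[Smith(1999)]{smith99}
  Smith, J.\ 1999, ApJ, 123, 456
\end{thebibliography}
```

Each entry can then be cited in the text with \citet{smith99} or \citep{smith99}.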
7. #### Appendices
Everyone likes appendices! It's where you get to dump your code, your massive quantities of data or huge tables. Readers skip them and it transforms your meager, skinny work into a two-volume bug killer. In the code, simply put in the tag \appendix when you want the appendices to start. From then on, use \chapter{Loads o' Data} as normal. The first chapter after the \appendix tag will now be called "Appendix A: Loads o' Data" instead of just "Chapter 7: Loads o' Data" and all the figure numbers, equations, and tables will be "Figure A.1" instead of "Figure 7.1", etc. Subsequent chapters will be labeled B, C, D, and so on.
8. #### Vita
It may seem silly, but JHU wants a short 'about the author' at the end of your thesis. The vita page is the last page of the thesis and is a brief biographical sketch. This should record the date and location of your birth and the salient facts of your academic training and experience in teaching and research. This may seem egotistical, but it's probably easier and more fun to write than any other part of your thesis, so why not? As usual, there is a \begin{vita} \end{vita} tag. It should go at the end of your document, just before the \end{document} tag.
See Randy Telfer's FAQ for information on how to get these to appear in your table of contents. When in doubt, read the actual thesis.cls file; it's not nearly as bad as you might think!
## Incorporating the AASTeX functionality
Now here's where it gets interesting! Most astronomers don't know what to do without their AASTeX special macros. Unfortunately, the Thesis class and the AASTeX style aren't compatible! In order to take advantage of useful AASTeX commands like \ion, \citep and so forth extreme measures must be taken. Tim Hamilton, Randy Telfer, Larry Bradley and I (with trouble-shooting help from a host of others) have hacked up various files to include much of this functionality. These style files are called in the preamble with the \usepackage command. The package is called aastex_hack.sty and is available in the downloads section below.
I've also written a short routine called mydefs.sty which defines certain functions used frequently in my work. This could also be incorporated into aastex_hack.sty, or into the preamble (between the \documentclass and \begin{document} lines) of the TeX document itself instead.
## File downloads
Here are the files you need:
• everything.tar -- all of the files in a handy tar file (place in your working directory and extract with "tar -xvf everything.tar")
• thesis.cls -- the Thesis class file (updated v03.1)
• jhu10.clo, jhu11.clo, jhu12.clo -- 10, 11, and 12-point type size files (you need at least one of these!) (updated v03.1)
• aastex_hack.sty -- sets up symbols, journal codes, and the \plottwo command
• natbib.sty -- sets up \citet and \citep commands and makes reference sections (see below)
• deluxetable.sty -- sets up the deluxetable environment (see below)
• mydefs.sty -- defines some personal macros and commands
## FAQs and Tutorials
This is definitely a work in progress. Contributions from users have made the whole package and process easier. We have compiled a FAQ list to address some of the major and minor issues which have come up. You will, no doubt, come up with your own unique contribution and are encouraged to submit it. When in doubt, asking anyone who has been through the thesis process lately usually works.
I've also created a limited set of tutorials for some of the more complicated parts of the thesis LaTeX setup. Specifically: natbib, which does the citations and bibliography in a particularly nice way; the deluxetable environment, which produces spiffy-looking tables; and figures, which discusses the finer points of figure inclusion and referencing.
## Putting it all together
That's pretty much it. Here's my thesis!
% ---- Preamble ----
\documentclass[10pt]{thesis}
\usepackage{epsfig, natbib, mydefs, deluxetable}
\usepackage{aastex_hack}
\bibstyle{aa} % <---: (Smith 1999) rather than (Smith, 1999)
\begin{document}
% ---- Title Page ----
\title{Interstellar Matter Kinematics in the Magellanic Clouds}
\author{Charles Weston Danforth}
\doctorphilosophy
\dissertation
\copyrightnotice
\degreemonth{April}\degreeyear{2003}
\maketitle
% ---- Frontmatter ----
\begin{frontmatter}
\begin{abstract}
\input{abstract}
\end{abstract}
\begin{acknowledgement}
\input{acks}
\end{acknowledgement}
\tableofcontents
\listoffigures
\listoftables
\begin{dedication}
{\em \Large \begin{center}
For my fluffy,\\ without whom none of this would have mattered.
\end{center}}
\end{dedication}
\end{frontmatter}
% ---- The Text ----
\include{intro}
\include{fuseatlas}
\include{echelleatlas}
\include{globaltrends}
\include{s119}
\include{n66}
\include{conclusions}
\include{bibliography}
\appendix
\include{appendix1}
\include{appendix2}
\include{vita}
\end{document}
That's All, Folks!
Please send comments, corrections and suggestions to Charles Danforth | http://fuse.pha.jhu.edu/~danforth
Last modified: Mon Nov 12 14:09:25 MST 2007
http://physics.stackexchange.com/tags/time/hot | # Tag Info
A fundamental postulate of QFT establishes that the theory admits a strongly continuous representation of the (proper orthochronous) Poincaré group $\cal P$. A certain one-parameter subgroup of $\cal P$ describes time evolution (with respect to an inertial reference frame) which, as a consequence, turns out to be unitary since it is part of a larger unitary ...
First of all, physics does not ever talk about the question of existence, but about useful descriptions and predictions of observations. No physicist will ever prove to you he is not just a figment of your imagination but he can prove to you that Newton's law works pretty well for what you see. In the scientific method, a theory is indeed used until it ...
Rather than write something unintelligible, I'll quote from a page on cesium clocks. According to quantum theory, atoms can only exist in certain discrete ("quantized") energy states depending on what orbits about their nuclei are occupied by their electrons. Different transitions are possible; those in question refer to a change in the electron and ...
The traveling astronaut is younger. The situation is not reversible between the two astronauts because the traveling astronaut is subject to an effect similar to acceleration, since he is following the curvature of space in the fourth dimension. The solution must consider the geometric/topological configuration. And topologically, the traveling ...
I think what you are missing is that these energies are eigenvalues of the time-independent Hamiltonian. i.e. They correspond to stationary states that do not change in time. The scenario you describe is not time-independent - therefore the difference between the energy levels will carry some uncertainty corresponding to the lifetime of the excited state.
No. My answer is negative, even if I confirm the statements of other answers: "The first thing is almost completely arbitrary, especially in full general relativity. The second thing is an unambiguous result of an experiment."(Jerry Schirmer) "In Einsteinian relativity all observers can still agree on a number of facts, they are just ...
The answer is: solve Newton's second law. Really, $\vec F = m\vec a$ is meant to be a second-order differential equation, with the force dependent on position (and, sometimes, time). Writing it as $$\vec F(\vec x,t) = m \frac{\mathrm{d}^2\vec x}{\mathrm{d}t^2}$$ makes manifest that the distance travelled by something is, in general, obtained from the solution $\vec x(t)$ of this differential equation.
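To make that concrete, here is a small numerical sketch (function names are illustrative): integrating the second-order equation as two first-order updates with the semi-implicit Euler method recovers the familiar constant-acceleration result $x = \frac{1}{2}at^2$.

```python
def distance_travelled(force, mass, total_time, steps=100_000):
    """Integrate F = m*a (a second-order ODE split into two first-order
    updates) with the semi-implicit Euler method, starting from rest."""
    dt = total_time / steps
    a = force / mass   # constant acceleration in this simple case
    v = x = 0.0
    for _ in range(steps):
        v += a * dt    # dv/dt = F/m
        x += v * dt    # dx/dt = v
    return x

# For F = 2 N, m = 1 kg, t = 3 s, the closed form x = a*t^2/2 gives 9 m.
d = distance_travelled(2.0, 1.0, 3.0)
```

For a position-dependent force one simply recomputes the acceleration inside the loop; the structure of the solver stays the same.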
You have to be careful about the difference between speed and velocity. Saying that two clocks are moving at the same speed is different from saying that the relative speed between the two clocks is zero. For example, as measured in some inertial frame of reference, two clocks can be moving at the same speed but in opposite directions, in which case their ...
If we were to try to standardize a unit of time with another alien species based on something fundamental to the laws of physics rather than an arbitrary division of an arbitrary planet rotating an arbitrary sun, do we have anything fundamental and universal reference point to base it on? Yes. For example, the second is currently defined according to an ...
The aim of special relativity and of spacetime (in particular, Minkowski spacetime) is not to explain what time is. Spacetime describes a relation between space and time from an observer's point of view only, whatever time is in reality (including the question of whether time exists or not). The result is that time (i.e. the value measured by clocks) may ...
https://physics.stackexchange.com/questions/520293/riemann-sum-of-completeness-relation-in-continuous-basis | # Riemann sum of completeness relation in continuous basis
Suppose I have a wave function $$\psi$$. We can express it in a continuous basis of position states as
$$\psi= \int_{-\infty}^{\infty} dx\, C (x)\lvert x\rangle = \int_{-\infty}^{\infty} dx\, \lvert x\rangle \langle x \vert \psi \rangle$$
This can be expanded in Riemann sum as
$$\psi= \lim_{\Delta x\to 0} \sum_{i=-\infty}^{\infty} \Delta xC (x_i)\rvert x_i\rangle.$$
This is not symmetric with the expansion of $$\psi$$ in a discrete basis, which is
$$\psi= \sum_{i=-\infty}^{\infty} C (\phi_i)\rvert \phi_i\rangle,$$ where this $$i$$ takes discrete values.
Symmetry is lost due to the appearance of the factor $$\Delta x$$.
There is no paradox. If you have the continuous relation as $$| \psi \rangle = \int_{-\infty}^\infty dx \ \psi(x) |x \rangle = \lim_{\Delta x \rightarrow 0} \sum_i \Delta x \psi(x_i) |x_i \rangle,$$ to draw an analogy with the discrete basis expansion $$\sum_i C_i |e_i \rangle$$, you need to realize that the analogy goes as $$C_i \leftrightarrow \Delta x \psi(x_i)$$, not $$C_i \leftrightarrow \psi(x_i)$$.
For a better intuition, think of it this way, the continuous basis has infinitely many more basis vectors than a discrete basis, meaning that the component of any state $$|\psi \rangle$$ along any of these basis vectors $$|x\rangle$$ has to be infinitesimal for the expansion to make sense. So $$\psi(x) =\langle x | \psi \rangle$$ is more of a component density (for lack of a better term).
This is similar to any other part of physics where densities are involved. For example, the total mass of a system of discrete point masses is: $$M = \sum_i m_i,$$ whereas for a continuous mass distribution it's $$M = \int_\mathcal D d^3\mathbf x \rho(\mathbf x) = \lim_{\Delta V \rightarrow 0} \sum _{i} \Delta V \rho(\mathbf{x}_i).$$ Here you can make the analogy $$m_i \leftrightarrow \Delta V \rho(\mathbf x_i)$$, i.e. the mass of an infinitesimal volume element in the continuous distribution is $$\Delta V \rho(\mathbf x_i)$$ (and not $$\rho$$ itself).
Similarly, the component of $$|\psi \rangle$$ along the basis vector $$|x \rangle$$ is $$\Delta x \psi(x)$$, with $$\psi(x)$$ playing the role of a "density".
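As a small numerical sketch of this point (assuming a Gaussian wave packet as the example state), the Riemann sum built from components $\Delta x\, \psi(x_i)$ reproduces the norm $\langle \psi | \psi \rangle = 1$, while each individual component shrinks with $\Delta x$, exactly as a "component density" should behave:

```python
import math

def gaussian_psi(x, sigma=1.0):
    """A normalized Gaussian wave function psi(x) (real for simplicity)."""
    return (math.pi * sigma**2) ** -0.25 * math.exp(-x**2 / (2 * sigma**2))

def riemann_norm(dx, x_max=10.0):
    """Approximate <psi|psi> = integral of |psi(x)|^2 dx by the Riemann
    sum over components: sum_i (dx * psi(x_i)) * psi(x_i)."""
    n = int(2 * x_max / dx)
    return sum(dx * gaussian_psi(-x_max + i * dx) ** 2 for i in range(n))

norm = riemann_norm(0.01)             # very close to 1
component = 0.01 * gaussian_psi(0.0)  # component along |x=0>: tiny, ~0.0075
```

Halving dx halves each component but leaves the norm unchanged, mirroring the mass-density analogy above.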
There is no paradox. You're comparing two different formulas: one is an approximation that applies to a continuous basis of states $$|x\rangle$$, the other is an exact formula that applies to a discrete Hilbert space. Just because two formulas look different doesn't mean one of them is wrong.
http://scholarpedia.org/article/User:Nicolas_Alamanos/Proposed/The_Holifield_Radioactive_Ion_Beam_Facility | User:Nicolas Alamanos/Proposed/The Holifield Radioactive Ion Beam Facility
Dr. Carl J. Gross accepted the invitation on 20 November 2009 (self-imposed deadline: 20 April 2010).
The Holifield Radioactive Ion Beam Facility refers to a low-energy nuclear physics laboratory located at Oak Ridge National Laboratory, Oak Ridge, TN, USA.
https://www.jobilize.com/course/section/problems-exercises-dispersion-the-rainbow-and-prisms-by-openstax?qcr=www.quizover.com | # 25.5 Dispersion: the rainbow and prisms (Page 2/4)
Index of refraction $n$ in selected media at various wavelengths

| Medium | Red (660 nm) | Orange (610 nm) | Yellow (580 nm) | Green (550 nm) | Blue (470 nm) | Violet (410 nm) |
|---|---|---|---|---|---|---|
| Water | 1.331 | 1.332 | 1.333 | 1.335 | 1.338 | 1.342 |
| Diamond | 2.410 | 2.415 | 2.417 | 2.426 | 2.444 | 2.458 |
| Glass, crown | 1.512 | 1.514 | 1.518 | 1.519 | 1.524 | 1.530 |
| Glass, flint | 1.662 | 1.665 | 1.667 | 1.674 | 1.684 | 1.698 |
| Polystyrene | 1.488 | 1.490 | 1.492 | 1.493 | 1.499 | 1.506 |
| Quartz, fused | 1.455 | 1.456 | 1.458 | 1.459 | 1.462 | 1.468 |
Rainbows are produced by a combination of refraction and reflection. You may have noticed that you see a rainbow only when you look away from the sun. Light enters a drop of water and is reflected from the back of the drop, as shown in [link] . The light is refracted both as it enters and as it leaves the drop. Since the index of refraction of water varies with wavelength, the light is dispersed, and a rainbow is observed, as shown in [link] (a). (There is no dispersion caused by reflection at the back surface, since the law of reflection does not depend on wavelength.) The actual rainbow of colors seen by an observer depends on the myriad of rays being refracted and reflected toward the observer’s eyes from numerous drops of water. The effect is most spectacular when the background is dark, as in stormy weather, but can also be observed in waterfalls and lawn sprinklers. The arc of a rainbow comes from the need to be looking at a specific angle relative to the direction of the sun, as illustrated in [link] (b). (If there are two reflections of light within the water drop, another “secondary” rainbow is produced. This rare event produces an arc that lies above the primary rainbow arc—see [link] (c).)
## Rainbows
Rainbows are produced by a combination of refraction and reflection.
Dispersion may produce beautiful rainbows, but it can cause problems in optical systems. White light used to transmit messages in a fiber is dispersed, spreading out in time and eventually overlapping with other messages. Since a laser produces a nearly pure wavelength, its light experiences little dispersion, an advantage over white light for transmission of information. In contrast, dispersion of electromagnetic waves coming to us from outer space can be used to determine the amount of matter they pass through. As with many phenomena, dispersion can be useful or a nuisance, depending on the situation and our human goals.
## PhET explorations: geometric optics
How does a lens form an image? See how light rays are refracted by a lens. Watch how the image changes when you adjust the focal length of the lens, move the object, move the lens, or move the screen.
## Section summary
• The spreading of white light into its full spectrum of wavelengths is called dispersion.
• Rainbows are produced by a combination of refraction and reflection and involve the dispersion of sunlight into a continuous distribution of colors.
• Dispersion produces beautiful rainbows but also causes problems in certain optical systems.
## Problems & Exercises
(a) What is the ratio of the speed of red light to violet light in diamond, based on [link] ? (b) What is this ratio in polystyrene? (c) Which is more dispersive?
A beam of white light goes from air into water at an incident angle of $75.0º$. At what angles are the red (660 nm) and violet (410 nm) parts of the light refracted?

$46.5º$, red; $46.0º$, violet
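This answer can be reproduced with a few lines of code (a quick sketch, not part of the original text; the water indices for red and violet light are taken from the table above):

```python
import math

def refraction_angle(n_medium, incidence_deg, n_incident=1.0):
    """Angle of refraction from Snell's law:
    n_incident * sin(theta_i) = n_medium * sin(theta_t)."""
    sin_t = n_incident * math.sin(math.radians(incidence_deg)) / n_medium
    return math.degrees(math.asin(sin_t))

theta_red = refraction_angle(1.331, 75.0)     # red (660 nm), ~46.5 degrees
theta_violet = refraction_angle(1.342, 75.0)  # violet (410 nm), ~46.0 degrees
```

The half-degree spread between the two colors is the dispersion responsible for the rainbow discussed above.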
By how much do the critical angles for red (660 nm) and violet (410 nm) light differ in a diamond surrounded by air?
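A quick numerical sketch (again not part of the original text, using the diamond indices from the table above) shows the two critical angles differ by about half a degree:

```python
import math

def critical_angle(n_inside, n_outside=1.0):
    """Critical angle for total internal reflection:
    sin(theta_c) = n_outside / n_inside."""
    return math.degrees(math.asin(n_outside / n_inside))

# Red (660 nm) has the smaller index, hence the larger critical angle.
delta = critical_angle(2.410) - critical_angle(2.458)  # ~0.5 degrees
```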
(a) A narrow beam of light containing yellow (580 nm) and green (550 nm) wavelengths goes from polystyrene to air, striking the surface at a $30.0º$ incident angle. What is the angle between the colors when they emerge? (b) How far would they have to travel to be separated by 1.00 mm?

(a) $0.043º$

(b) $1.33\ \text{m}$
A parallel beam of light containing orange (610 nm) and violet (410 nm) wavelengths goes from fused quartz to water, striking the surface between them at a $60.0º$ incident angle. What is the angle between the two colors in water?

A ray of 610 nm light goes from air into fused quartz at an incident angle of $55.0º$. At what incident angle must 470 nm light enter flint glass to have the same angle of refraction?

$71.3º$
A narrow beam of light containing red (660 nm) and blue (470 nm) wavelengths travels from air through a 1.00 cm thick flat piece of crown glass and back to air again. The beam strikes at a $30.0º$ incident angle. (a) At what angles do the two colors emerge? (b) By what distance are the red and blue separated when they emerge?

A narrow beam of white light enters a prism made of crown glass at a $45.0º$ incident angle, as shown in [link]. At what angles, ${\theta}_{\text{R}}$ and ${\theta}_{\text{V}}$, do the red (660 nm) and violet (410 nm) components of the light emerge from the prism?

$53.5º$, red; $55.2º$, violet
http://www.plosgenetics.org/article/info:doi/10.1371/journal.pgen.1000944?imageURI=info:doi/10.1371/journal.pgen.1000944.g004 | Research Article
# The Relationship among Gene Expression, the Evolution of Gene Dosage, and the Rate of Protein Evolution
• Affiliation (all authors): Laboratoire de Biométrie et Biologie Evolutive, Université de Lyon, Université Lyon 1, CNRS, INRA, INRIA, UMR 5558, Villeurbanne, France
• Corresponding author: duret@biomserv.univ-lyon1.fr
Membership of the Paramecium Post-Genomics Consortium is available in the Acknowledgments.
• Published: May 13, 2010
• DOI: 10.1371/journal.pgen.1000944
Corrections
8 Jun 2010: Gout J-F, Kahn D, Duret L, Paramecium Post-Genomics Consortium (2010) Correction: The Relationship among Gene Expression, the Evolution of Gene Dosage, and the Rate of Protein Evolution. PLoS Genet 6(6): 10.1371/annotation/c55d5089-ba2f-449d-8696-2bc8395978db. doi: 10.1371/annotation/c55d5089-ba2f-449d-8696-2bc8395978db
## Abstract
The understanding of selective constraints affecting genes is a major issue in biology. It is well established that gene expression level is a major determinant of the rate of protein evolution, but the reasons for this relationship remain highly debated. Here we demonstrate that gene expression is also a major determinant of the evolution of gene dosage: the rate of gene losses after whole genome duplications in the Paramecium lineage is negatively correlated to the level of gene expression, and this relationship is not a byproduct of other factors known to affect the fate of gene duplicates. This indicates that changes in gene dosage are generally more deleterious for highly expressed genes. This rule also holds for other taxa: in yeast, we find a clear relationship between gene expression level and the fitness impact of reduction in gene dosage. To explain these observations, we propose a model based on the fact that the optimal expression level of a gene corresponds to a trade-off between the benefit and cost of its expression. This COSTEX model predicts that selective pressure against mutations changing gene expression level or affecting the encoded protein should on average be stronger in highly expressed genes and hence that both the frequency of gene loss and the rate of protein evolution should correlate negatively with gene expression. Thus, the COSTEX model provides a simple and common explanation for the general relationship observed between the level of gene expression and the different facets of gene evolution.
## Author Summary
The analysis of gene evolution is a powerful approach to recognize the genetic features that contribute to the fitness of organisms. It was shown previously that selective constraints on protein sequences increase with expression level. This observation was surprising because there is a priori no reason why lowly expressed genes should be less important than highly expressed genes for the proper function of an organism. Here we show that selective pressure on the evolution of gene dosage, which is another important aspect of gene evolution, is also directly dependent on gene expression level. To explain these observations, we propose a model based on the fact that gene expression is a costly process (notably protein synthesis), so that there is an optimal expression level for each gene corresponding to a trade-off between the benefit and the cost of its expression. This model predicts that selective pressure on gene expression level or on the encoded protein should on average be stronger in highly expressed genes, providing a simple and common explanation for the general relationship observed between gene expression and the different facets of gene evolution.
### Introduction
Mutations can affect the phenotype either by modifying the sequences of proteins or by changing their pattern of expression. Whereas the evolutionary constraints acting on protein-coding sequences are relatively well characterized, those driving the evolution of gene expression have been much less studied. Modifications in gene expression can result from mutations in regulatory elements or through changes in the number of gene copies in the genome (i.e. gene dosage) by gene duplications or gene losses. The phenotypic impact of changes in gene dosage is clearly illustrated by the deleterious effects caused by chromosome aneuploidy [1]. The necessity of an X-chromosome inactivation mechanism to compensate for dosage imbalance between males and females in mammals [2] is another example of the importance of having the correct dosage of genes. Within populations, polymorphism in copy number of genes (Copy Number Variations: CNVs) significantly contributes to variations in transcript abundance [3]. Moreover, some CNVs were shown to be driven by positive selection for increased expression of the corresponding genes [4]–[6], highlighting the fact that gene dosage modifications can be targeted by selection. However, the evolutionary constraints that apply on gene dosage remain poorly understood.
Whole-genome duplications (WGDs) represent interesting cases to study the evolutionary constraints on gene dosage. Immediately after a WGD event, all genes are present in two copies; these paralogs that result from WGD are termed ohnologs, in reference to the pioneering ideas of Susumu Ohno on the role of WGDs in genome evolution [7], [8]. However progressive changes in gene dosage do occur: most ohnologs are lost, while only a subset is retained over long evolutionary times [9], [10]. Different (non-exclusive) models have been proposed to explain the retention of gene duplicates after a genome duplication. First, some ohnologs are retained because one or both copies evolved toward a different function, either by gain of a new function (neo-functionalization [7], [11]) or through partition of ancestral functions [12] ([13] for review). The over-retention of some functional categories suggests that WGDs might have played a role in some important evolutionary transitions by providing opportunities for functional innovations [14], [15]. Second, some ohnologs appear to be retained because of constraints on relative gene dosage (the ‘dosage balance’ hypothesis). For example, the loss of ohnologs encoding subunits of protein complexes is counter-selected because it affects the stoichiometry of complexes [16]–[18].
In yeast, it has been noticed that genes that have been maintained in two copies after WGD tend to be highly expressed [19]. However, the interpretation of this observation remained unclear: does it simply reflect an indirect effect of other parameters (e.g. differences in functional categories between highly and weakly expressed genes) or is there a direct relationship between expression and the probability of retention of ohnologs? The genome of Paramecium tetraurelia, which contains almost 40,000 protein-coding genes, provides a perfect configuration to investigate this issue. Indeed, 3 WGDs occurred during the evolution of the Paramecium lineage [17]. The genome contains about 12,000 pairs of ohnologs resulting from the most recent WGD, compared to less than 600 in yeast [20]. This corresponds to a frequency of gene loss of 49% since the last WGD (frequencies of gene loss after the intermediary and the old WGD are respectively 76% and 92%) [17]. Thus, the Paramecium genome allows the investigation of the fate of gene duplicates over different evolutionary scales.
The analysis of EST abundances suggested that in Paramecium, as in yeast, highly expressed genes tend to be more retained [17]. To investigate in detail the relation between gene expression and gene retention following WGD we measured genome-wide expression patterns in different culture conditions and at different stages of Paramecium life cycle. We show that retention rate is positively correlated with the level of gene expression. This observation does not appear to be due to indirect effects of other parameters known to affect gene retention. To explain these observations we propose a model based on the assumption that gene expression levels before WGD are close to an optimum, which corresponds to a trade-off between the benefit and cost of their expression. This simple COSTEX model provides a general explanation for the relationships between gene expression and gene evolution, not only in terms of gene dosage but also in terms of evolution of the encoded proteins.
### Results
#### Expression level influences gene retention after WGD
We measured the expression level of Paramecium genes in 58 different experiments, spanning different stages of its life cycle, using a DNA microarray covering the 39,642 protein-coding genes annotated in the genome. We define here the expression level of a gene as the median value of its expression across all 58 different experiments. We name ‘ohnologon’ a set of ohnologous genes related by a given WGD event. Since the Paramecium lineage encountered 3 successive WGDs, ohnologons may contain from 1 up to 2, 4 or 8 genes for the recent, intermediary or old WGD respectively.
Ideally, to investigate the relationship between gene expression and retention, one would measure the rate of gene loss per elementary time unit in each ohnologon. However, with only one genome sequenced in the Paramecium clade, it is not possible to quantify this rate for each individual ohnologon. We therefore investigated the relationship between gene expression and retention by grouping ohnologons into bins defined by fixed intervals of expression level (see Materials and Methods). For the recent WGD, there is a striking positive relationship between the frequency of gene retention in each bin and its average expression level (Figure 1). The frequency of gene retention increased 2-fold between the 10% least expressed genes and the 10% most highly expressed genes (0.32 and 0.67 respectively, P < 10^−16). We observed the same trend for the intermediary and the old WGD (frequency of retention = 0.17 vs. 0.31, P < 10^−16 and 0.04 vs. 0.10, P = 2.9×10^−6, when comparing the 10% extreme genes for the intermediary and old WGD respectively). We also found a similar relationship between gene retention in the Paramecium lineage and the expression level of orthologs in Tetrahymena thermophila (Figure S1). The divergence between the T. thermophila and P. tetraurelia lineages occurred before the last two WGDs [17]. Hence, the observed correlation between expression level in T. thermophila and retention rate in Paramecium directly demonstrates that there is a relationship between the expression level of genes – before WGD – and their probability of retention after the WGD event. In other words, the selective pressure against gene loss is positively correlated with the pre-WGD expression level.
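As an illustration of this kind of comparison, the following minimal Python sketch contrasts retention frequencies in the two extreme expression deciles. The data are simulated with an effect size chosen to mimic Figure 1; the function names and the 2×2 chi-square test are ours, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import chi2_contingency, rankdata

rng = np.random.default_rng(0)

# Simulated ohnologons (illustrative only): log2 expression and a retention
# flag whose probability rises with expression, mimicking Figure 1.
n = 12000
expr = rng.normal(8.0, 2.0, size=n)
p_retain = 0.32 + 0.35 * (rankdata(expr) - 1) / (n - 1)
retained = rng.random(n) < p_retain

def decile_mask(values, q):
    """Boolean mask for the q-th decile (q=0 lowest, q=9 highest)."""
    lo, hi = np.quantile(values, [q / 10, (q + 1) / 10])
    return (values >= lo) & (values <= hi)

low, high = decile_mask(expr, 0), decile_mask(expr, 9)
f_low, f_high = retained[low].mean(), retained[high].mean()

# 2x2 contingency test on retained/lost counts in the two extreme deciles.
table = [[retained[low].sum(), (~retained[low]).sum()],
         [retained[high].sum(), (~retained[high]).sum()]]
chi2, p, _, _ = chi2_contingency(table)
print(f"retention: low decile {f_low:.2f}, high decile {f_high:.2f}, P = {p:.2g}")
```

The same bin-then-compare logic applies to any monotone relationship between a continuous covariate and a binary outcome.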
#### Other factors contributing to gene retention
It has been shown that various parameters affect the fate of duplicated genes after WGD. Notably, some functional gene categories are more retained than others, possibly because they contributed to adaptation by functional innovation [11], or because of dosage balance constraints [16]–[18]. We analyzed each of the known factors in order to investigate whether the observed relationship between gene retention and expression could be explained by these other parameters.
#### Gene retention versus phylogenetic distribution
It is expected that widely conserved genes and lineage-specific genes undergo different selective pressures [21], [22]. To investigate the relationship between retention rate and phylogenetic distribution, we classified genes into 3 groups: Paramecium-specific genes (n = 17,896), ciliate-specific genes (n = 4,135) and ancient eukaryotic genes (n = 8,846) (see Materials and Methods). We found that eukaryotic and ciliate-specific genes are more retained than average following the recent WGD (both P < 10^−16), while Paramecium-specific genes were more frequently lost (P < 10^−16). Therefore, genes that are conserved across large evolutionary time scales are more prone to retention following WGD than genes that evolved quickly or arose in the Paramecium lineage. However, all 3 gene categories show a relationship between gene expression and gene retention similar to what we observed on the whole set of Paramecium genes (Figure S2), indicating that this relationship holds independently of gene age or level of conservation.
#### Gene retention versus functional categories
We classified Paramecium genes according to their functional category based on the Gene Ontology (GO) [23]. We computed the average retention rate for each functional category represented by more than 400 genes in the Paramecium genome. On average, genes that have a GO assignment are more retained than other genes (0.57 vs. 0.48, P < 10^−16). This result simply reflects the previous observation: given that functional category assignment is based on homology with genes in other species, and that genes conserved across species are preferentially retained following WGD, genes with a GO assignment tend to be more retained than average. However, a few (3/23) functional categories were significantly under-retained (Table S1). Among them, ‘integral to membrane’ is the category with the lowest retention rate, reflecting differences in post-WGD selective pressure on genes encoding membrane proteins (see Discussion).
We analyzed the relation between gene expression and gene retention across the different functional categories by dividing genes into 4 quartiles according to their expression level (Figure S3). As expected, functional categories differ both in average expression level and in retention rate. For the same level of expression, different GO categories show different retention rates, indicating an effect of functional category that is independent of gene expression. Nevertheless, highly expressed genes (in the upper quartile) are more retained than weakly expressed ones (in the lower quartile) for all 23 functional categories analyzed, indicating that the relationship between gene expression and retention is not driven by specific functional categories (Figure S3 and Table S1).
#### Gene retention versus dosage balance constraints
Aury et al. [17] showed that genes encoding subunits of protein complexes are over-retained after the recent WGD in Paramecium. We used the same data to investigate the relation between expression level and retention rate separately for genes predicted to encode part of protein complexes (n = 1,236) and for other genes (n = 7,025) (see Materials and Methods). We find that genes coding for subunits of protein complexes are over-retained, even when expression is controlled for (Figure 2), confirming the impact of dosage-balance constraints on the fate of genes following WGD. However, both genes encoding protein-complex subunits and other genes show a similar relationship between expression level and retention rate (Figure 2). Hence, expression level appears to influence the retention of genes following WGD, independently of dosage balance constraints.
#### Highly expressed genes show no evidence of a higher tendency for change of function
Some duplicate genes are retained because they evolved toward different functions (by neo- or sub-functionalization) [11], [12]. One possible hypothesis to explain the higher retention of highly expressed genes is that they might be more prone to functional changes, either via changes in the encoded protein or via changes in expression patterns. To test this hypothesis, we first investigated the relation between gene expression and coding sequence divergence, measured by the rate of non-synonymous changes (Ka) between ohnologs of the recent WGD. We found a negative correlation (r = −0.31, P < 10^−16; Figure S4), indicating that the evolutionary rate of coding sequences is lower in highly expressed genes.
We also investigated the relation between gene expression and the rate of evolution of expression patterns between ohnologs of the recent WGD. For this we used two different measures of expression divergence. The first is the Pearson correlation coefficient between ohnologs across the 58 different experiments. The second is the Euclidean distance between the expression levels of ohnologous genes across the 58 different arrays. Both measures show a negative correlation between gene expression and divergence of expression patterns (r = −0.23 and r = −0.13 respectively, both P < 10^−16): highly expressed genes have more conserved expression patterns.
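Both divergence measures can be computed directly from paired expression profiles. The sketch below applies them to hypothetical 58-condition profiles for two simulated ohnolog pairs (one conserved, one diverged); it illustrates the measures themselves, not the study's code.

```python
import numpy as np

def expression_divergence(a, b):
    """Two divergence measures between the expression profiles of an ohnolog
    pair across experiments: 1 - Pearson correlation (higher = more diverged)
    and the Euclidean distance between the two profiles."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    r = np.corrcoef(a, b)[0, 1]
    d = np.linalg.norm(a - b)
    return 1.0 - r, d

rng = np.random.default_rng(1)
base = rng.normal(8.0, 1.0, 58)              # shared ancestral profile (58 arrays)
conserved = base + rng.normal(0, 0.1, 58)    # ohnolog with a conserved pattern
diverged = base + rng.normal(0, 2.0, 58)     # ohnolog with a diverged pattern

div_c = expression_divergence(base, conserved)
div_d = expression_divergence(base, diverged)
print("conserved pair:", div_c, "diverged pair:", div_d)
```

Using two measures with different sensitivities (correlation ignores scale; Euclidean distance does not) guards against conclusions that depend on one metric.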
Thus, highly expressed genes evolve more slowly than weakly expressed genes, both in terms of protein sequence and in terms of expression pattern. These two observations are consistent with the model we propose (see discussion) but are in contradiction with the hypothesis that highly expressed genes undergo functional innovation more frequently than weakly expressed genes. We admit however that this latter hypothesis cannot be formally rejected. Indeed, it can be argued that functional innovations do not necessarily imply a noticeable increase in evolutionary rate (e.g. a very limited number of amino-acid changes might be sufficient to change the function of a protein), and the negative correlations reported above might reflect other evolutionary processes (e.g. selective constraints on amino-acid sequences to avoid protein folding errors [24]). The minimal conclusion is therefore that we found no evidence of a higher propensity for functional innovation among highly expressed genes.
### Discussion
#### Gene expression and dosage sensitivity in Paramecium, yeast, and animals
We studied the constraints acting on the evolution of gene dosage by analyzing the fate of duplicated genes after WGDs. We show that the frequency of gene retention following the recent WGD in Paramecium is positively correlated with gene expression level, which reveals a selective pressure against the loss of highly expressed duplicated genes. Various factors are known to contribute to the retention of gene duplicates, such as a functional shift by neo- or sub-functionalization, or selection for dosage balance in protein complexes. However, these factors do not appear to explain the observed relationship between retention rate and gene expression. Highly expressed genes show no evidence of a higher propensity to evolve toward new functions after a duplication. Moreover, the relationship between retention rate and gene expression holds for most functional categories, independently of their involvement in protein complexes. Hence, the most parsimonious explanation for our observations is that there is a direct link between the expression level of genes and the fitness impact of changes in gene dosage.
To test this hypothesis, we analyzed data from systematic gene knock-out (KO) experiments in the yeast Saccharomyces cerevisiae, in which the fitness of heterozygous strains (i.e. carrying one KO allele and one wild-type allele) was measured by competition experiments [25], and for which expression data were available from [26]. We found a negative correlation between the fitness of heterozygotes and the expression level of the corresponding genes (r = −0.13, P < 10^−16). The mean loss of fitness increased 2-fold between the 10% least expressed genes and the 10% most highly expressed genes (0.027 and 0.053 respectively, P = 10^−10; Figure 3), which indicates a higher selective pressure against reduction of gene dosage for highly expressed genes. Several observations suggest that this rule also holds for multicellular eukaryotes. First, Drosophila and mouse genes with copy number variations (CNVs) tend to be weakly expressed and/or to have a narrow tissue distribution [27], [28]. Second, the small subset of genes on the human Y chromosome that have retained a homolog on the X chromosome is strongly biased toward highly expressed genes [29]. Both observations are consistent with the hypothesis that changes in gene dosage are more deleterious for highly expressed genes.
The strong correlation between gene expression and retention in Paramecium that is apparent in Figure 1 should not be interpreted as evidence that expression is the unique determinant of the variance in the rate of gene loss. Indeed, to analyze the relation between the frequency of gene loss in Paramecium and gene expression, we had to bin the data into groups of expression level. This binning tends to underestimate the variance between individual genes that is caused by other factors (e.g. see [30]). Thus, the strong correlations observed with binned data simply indicate that on average – everything else being equal – the fitness impact of gene loss is correlated with expression level, which does not exclude that other factors contribute to variations in retention rate.
#### The COSTEX model: trade-off between benefit and cost of gene expression
It is clearly established that expression of a gene is a costly process, both because it requires energy (particularly for protein synthesis) and because it mobilizes cellular resources (e.g. the translational machinery), thus competing with the expression of other genes (see [31], [32] for a recent appraisal). Hence natural selection is expected to drive gene expression towards an optimum level at which the cost of increased expression is balanced by the resulting benefit on fitness. In some cases it has been possible to directly measure the cost of gene expression. For instance Dekel and Alon [32] measured the cost of gratuitous induction of the lac operon in Escherichia coli. They could also measure the fitness gain associated with lac induction as a function of available lactose concentration. Moreover, they showed by in-lab evolution experiments that optimal lac expression could be reached in just a few hundred generations, demonstrating the strength of selection for optimal gene expression. The selective pressure to optimize gene expression levels is expected to be particularly strong in microorganisms because of their large effective population sizes [31], but there is clear evidence for such selective pressures in animals too [33].
We now show that this selective pressure can explain the observed relationship between gene expression level and the fitness impact of changes in gene dosage. Our model is based on a simple cost function for gene expression in the presence of limiting resources, proposed by Dekel and Alon [32] on the basis of the Monod equation, which matched their data particularly well:

C(X) = kX / (1 − X/M) (1)
where X is the gene expression level, M is the maximal capacity for expression of a gene, given the cellular resources that can be used for its expression, and k is a scaling factor expressing the fitness cost of resource usage. Let X0 be the optimal expression level of a gene, i.e. the level that maximizes fitness. We use the relative expression level x of this gene with respect to its optimal expression level: x = X/X0. It should be noted that the optimal expression level of a given gene depends on the resources available and therefore on the expression of all the other genes. Hence, X0 for a given gene may change as the expression of other genes evolves. However, at equilibrium, selection should drive the expression level of each gene close to the value that maximizes fitness (that is, x = 1). We express fitness w(x), a function of the relative gene expression level, as the difference between a benefit function B(x) and the cost function C(X0 x):

w(x) = B(x) − C(X0 x) (2)
Note that fitness is expressed relative to the fitness of the optimal genotype (i.e. X = X0). Hence, fitness is equal to 1 for x = 1:

w(1) = B(1) − C(X0) = 1 (3)
For x = 1 the fitness function is also at an optimum, hence:

w′(1) = B′(1) − X0 C′(X0) = 0 (4)
so that B′(1) = X0 C′(X0) is necessarily positive at optimal expression. Therefore w(x) can be approximated by a second order Taylor expansion:

w(x) ≈ 1 + (1/2) w″(1) (x − 1)² (5)
Therefore the selective pressure on changes in relative expression level x can be quantified by the magnitude of the second order derivative:

w″(1) = B″(1) − X0² C″(X0) = B″(1) − 2k X0² / [M (1 − X0/M)³] (6)
which must be negative at maximal fitness. Therefore, everything else being equal, the selective pressure on relative gene expression level is predicted to increase with the optimal expression level X0. This is illustrated in Figure 4, which shows the fitness function w(x) for various values of X0, assuming an affine benefit function B(x). The higher the optimal expression level X0, the sharper the fitness function in the vicinity of this optimum – equation (6) – resulting in increased selective pressure on gene expression.
As a first approximation, the loss of a gene copy after WGD is expected to decrease the level of gene expression by 50%. Under the assumption that most genes were close to their optimal expression at the time of WGD, we can estimate the selection coefficient s associated with the drop in expression following the loss of an ohnolog by setting x = 1/2 in equations (5) and (6):

s = w(1) − w(1/2) ≈ −w″(1)/8 (7)
This Taylor approximation is increasingly accurate as X0 becomes small compared to M. This relationship predicts that the strength of selection against gene loss increases with gene expression, as observed very clearly in the present work for the recent Paramecium WGD (Figure 1). On longer time scales, other processes such as neo- or sub-functionalization are expected to contribute to gene retention, which may explain why the relationship between retention rate and expression level is weaker for the intermediary and old WGDs (Figure 1).
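Under our reconstruction of equations (1)–(7), the model is easy to explore numerically. The sketch below uses an affine benefit function and illustrative values of k and M (all chosen by us, not taken from the paper); it checks that fitness equals 1 at the optimum and that the fitness cost of halving expression (x = 1/2, the loss of one ohnolog) grows with the optimal expression level X0.

```python
# Numerical sketch of the COSTEX model as reconstructed here; k, M and the
# affine benefit slope are illustrative values, not fitted parameters.
def cost(X, k=0.01, M=1.0):
    """Dekel-Alon-style cost of expressing at absolute level X (eq. 1)."""
    return k * X / (1.0 - X / M)

def fitness(x, X0, k=0.01, M=1.0):
    """w(x) = B(x) - C(X0*x), with an affine benefit B chosen so that
    w(1) = 1 (eq. 3) and w'(1) = 0 (eq. 4): the optimum sits at x = 1."""
    dC = k / (1.0 - X0 / M) ** 2        # C'(X0)
    b = X0 * dC                          # benefit slope forced by eq. (4)
    B = 1.0 + cost(X0, k, M) + b * (x - 1.0)
    return B - cost(X0 * x, k, M)

# Selection coefficient against losing one ohnolog: expression halves (x = 1/2).
for X0 in (0.1, 0.3, 0.5):
    s = fitness(1.0, X0) - fitness(0.5, X0)
    print(f"X0 = {X0:.1f}  s = {s:.2e}")
```

The printed selection coefficients increase with X0, which is the model's central prediction: everything else being equal, losing a highly expressed duplicate costs more fitness than losing a weakly expressed one.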
#### The COSTEX model and the evolutionary path to pseudogenization
On shorter time scales, an additional phenomenon may contribute to the selective pressure against the loss of highly expressed genes. Indeed, gene losses are usually caused by the accumulation of small-scale mutational events [17], transiently resulting in the expression of a non-functional peptide. Disabling mutations that disrupt the function of the protein but do not change its expression level clearly bear a cost with no benefit. The corresponding selection coefficient sψ can be derived from equations (2) and (4) at 1st order approximation:

sψ ≈ B(1) − B(0) ≈ B′(1) = X0 C′(X0) = kX0 / (1 − X0/M)² (8)
This cost may even be higher if the non-functional peptide interacts with other proteins and perturbs their functions in a dominant-negative fashion, so that the value given in equation (8) is a lower bound for the selection coefficient. Therefore the COSTEX model predicts that gene expression strongly influences the pseudogenization path to gene loss, because the probability of fixation of disabling mutations decreases with increasing gene expression level. Moreover, this model predicts that once a disabling mutation has been fixed, there should be a selective pressure to decrease the expression level of the pseudogene up to its total silencing, a pressure that is stronger for highly expressed genes.
#### The COSTEX model: gene-specific parameters
Gene expression level is obviously not the unique determinant of gene evolution. As shown in equation 6, there are several other parameters that determine the selective pressure against changes in gene dosage. First, parameters M and k of the cost function are expected to vary from one gene to another, according to the length of encoded proteins and their amino-acid composition. Moreover, the amount of resources available for gene expression depends on the physiological state of the cell, and hence these parameters should also depend on the time at which genes are expressed. Second, the selective pressure against changes in gene expression also depends on the second derivative of the benefit function B(x) (see equations 5–7). Little is known about the shape of the benefit function – except that this function must be increasing in the vicinity of the optimal expression level (see equation 4). It is however clear that B(x) certainly varies widely among genes. Indeed, it is well known that there are some weakly expressed genes that are essential for cell functioning (e.g. transcription factors). In other words, the fact that the optimal expression of a gene is low does not necessarily imply that the fitness impact of mutations affecting its expression is low.
Thus, the selection coefficient s against changes in gene expression is expected to vary according to the gene-specific parameters B″(1), k and M. Indeed, we observed that for the same expression level, the frequency of gene retention among Paramecium ohnologs varies strongly according to functional GO categories (Figure S3). In the absence of knowledge about these parameters it is difficult to predict s for any given gene. However, under the assumption that the distribution of these parameters is similar among genes of different expression levels, the COSTEX model predicts that, on average, selective constraints on gene dosage increase with expression level.
#### Gene expression optimality after WGD
The COSTEX model can explain the observed relationship between gene retention rate and expression level, under the assumption that most genes were close to their optimal expression level right after WGD. This hypothesis is difficult to test but deserves to be discussed because it is a major assumption of the model. In the absence of major changes such as WGDs, most genomes are expected to tend toward this evolutionary equilibrium at which most genes are expressed close to their optimum level [33]. Therefore, the ancestral pre-duplication species in the Paramecium lineage was probably in this situation. The question then becomes: how did the WGD affect this equilibrium? A first point to note is that in-lab polyploidization experiments in plants and yeast indicate that changing the ploidy from 2n to 4n has very little influence per se on the relative expression levels of genes [34]–[36]. Such experiments showed that allopolyploidization (i.e. WGD resulting from inter-species hybridization) affects the expression of many more genes than autopolyploidization, and that these changes can have very important phenotypic consequences [37]. However, even in the case of allopolyploidization, a large majority of genes do not show substantial changes of expression level relative to the parental species (e.g. in Arabidopsis allotetraploids, less than 10% of genes show a 1.5-fold difference in gene expression [35]). Second, the relative dosage between genes remains unchanged until gene losses start to accumulate. Third, it has been observed, both in plants and in yeasts, that cell size increases with the level of ploidy [34], [38], [39]. These three points suggest that a WGD event does not necessarily result in a change in the concentration of cytoplasmic proteins.
It should be noted, however, that when the volume of a cell increases, the surface of its membrane increases proportionally less, and hence the surface concentration of membrane proteins might be too high immediately after WGD. This could explain our observation that genes encoding membrane proteins are under-retained. However, in the specific case of Paramecium, the relation between ploidy and cell volume is unclear because of nuclear dimorphism. Paramecium, like other ciliates, separates germline and somatic functions into two distinct nuclei (named respectively the micronucleus and the macronucleus). The transcriptionally silent micronucleus is diploid while the expressed macronucleus is highly polyploid (~800 n). WGDs resulted in a temporary tetraploidization of the micronucleus, but one can only speculate about the consequences on macronucleus ploidy. Indeed, it has been shown that the macronucleus DNA content is regulated after amitotic divisions [40], leaving open the possibility that micronucleus tetraploidization did not change the total amount of DNA in the macronucleus.
Although we can only speculate on the immediate consequences of WGD in Paramecium, it can be argued that the fixation of a WGD in the population of ancestral species would be highly unlikely if it resulted in a strong decrease in fitness. This is particularly true in microorganisms such as Paramecium for which selection against fixation of deleterious mutations is strong because of their high effective population size [41]. Therefore, assuming that expression level of most genes was close to their optimum immediately after WGD appears to be a reasonable assumption.
#### The trade-off between cost and benefit of gene expression constrains evolutionary rates of coding sequences
One additional prediction of the COSTEX model is that the selective constraints on coding sequences should vary with gene expression level. Indeed, missense mutations in a coding sequence do not change the expression level (and therefore do not change the cost of expression), but they generally decrease the benefit function. Hence, the fitness function for a mutant allele becomes (see equation 2):

wα(x) = (1 − α) B(x) − C(X0 x) (9)
where α denotes the fractional decrease of the benefit function caused by this particular allele, and x and X0 correspond to the expression parameters of the wild-type allele. Therefore the effect of the missense mutation on fitness is:

Δw = w(x) − wα(x) = α B(x) (10)
If the wild-type gene was at its optimal expression level (x = 1), B(1) can be inferred from equation (3), which leads to:

Δw = α B(1) = α [1 + C(X0)] = α [1 + kX0/(1 − X0/M)] (11)
which indicates that the loss of fitness is an increasing function of gene expression. Hence mutations with an equivalent effect on protein function are predicted to have a stronger impact on fitness for highly expressed genes, because of the higher cost incurred for their expression, a price the organism had to ‘pay’ for their function. Note that this relationship also applies for potentially suboptimal expression (x ≠ 1). Note also that the distribution of α for the different mutations that may affect a gene probably differs widely from gene to gene. In other words, there are some genes for which, on average, mutations have a stronger impact on the benefit function than others. Hence, the mean fitness impact of mutations depends not only on X0, but also on the distribution of α, which is gene-specific. Therefore this model does not contradict the observation that some weakly expressed proteins may also be under strong selective constraints. Nevertheless, under the null hypothesis that the distribution of α is independent of the level of gene expression, the COSTEX model predicts that, on average, the selective constraints on coding sequences are higher in highly expressed genes.
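Equation (11), as reconstructed here, can be illustrated with hypothetical numbers: for the same fractional loss of benefit α, the fitness cost of a missense allele is larger for a gene with a higher optimal expression level X0. The values of k, M, α and X0 below are ours, chosen only for illustration.

```python
# Illustration of equation (11) as reconstructed here: a missense allele that
# removes a fraction alpha of the benefit costs delta_w = alpha * (1 + C(X0)),
# which grows with the optimal expression level X0. All parameter values are
# hypothetical.
def cost(X, k=0.01, M=1.0):
    """Dekel-Alon-style expression cost (eq. 1)."""
    return k * X / (1.0 - X / M)

def missense_loss(alpha, X0, k=0.01, M=1.0):
    """Fitness loss of a missense allele at optimal expression (eq. 11)."""
    return alpha * (1.0 + cost(X0, k, M))

low = missense_loss(0.1, X0=0.05)   # weakly expressed gene
high = missense_loss(0.1, X0=0.5)   # highly expressed gene
print(low, high)                    # same alpha, larger loss at high expression
```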
#### Conclusion
It is well established that the expression pattern of genes is an important determinant of the rate of evolution of the encoded proteins [42], [43], although the reasons for this observation are still debated (for review, see [44]). Here we show that gene expression is also a major determinant of the evolution of gene dosage. Thus, many aspects of gene evolution appear to be driven by constraints on gene expression. To explain the observed relationship between gene expression level and the fitness impact of both changes in gene expression and changes in the encoded protein, we propose a model based on the simple assumption that gene expression levels reflect a trade-off between the cost and benefit of gene expression. This model is directly inspired by the work of Dekel and Alon, who demonstrated and quantified experimentally the cost of gene expression in vivo [32]. Put in a simple verbal formulation, the COSTEX model states that, because of the non-linearity of the cost function, the higher a gene's optimal expression level, the more constrained its evolution (in terms of gene expression, gene dosage or the encoded protein). Thus this model can explain simultaneously three observations in Paramecium: i) highly expressed genes are more frequently retained as duplicates after a WGD, ii) they evolve more slowly than other genes in terms of protein divergence, and iii) they evolve more slowly than other genes in terms of expression pattern. Note that the COSTEX model does not imply that gene expression is the unique determinant of gene evolution. Selective constraints notably depend on the shape of the benefit function, which certainly varies widely among genes. However, the COSTEX model can explain why, on average, highly expressed genes are more constrained than others.
Several other hypotheses have been proposed to explain the relationship between gene expression and the rate of protein evolution [44]. According to a popular model, this relationship reflects a selective pressure on protein sequences to prevent folding errors [24]. Indeed, misfolded proteins can affect fitness either directly (they can be toxic for the cell) or indirectly (they represent a waste of resources). In both cases the impact on fitness depends on gene expression level, and hence this model predicts a stronger selective pressure on highly expressed protein-coding sequences. Translational errors represent one important cause of protein misfolding [45]. Thus, one interesting feature of this model is that it provides an explanation for the covariation between codon usage (under selection to optimize translation accuracy) and non-synonymous substitution rate [24]. The ‘misfolding hypothesis’ and the COSTEX model are not mutually exclusive. In fact, the waste of resources linked to the production and degradation of misfolded proteins can be considered one component of the cost of gene expression. But the COSTEX model predicts that even in the absence of folding errors, the rate of protein evolution should be negatively correlated with the expression level. Another interesting aspect of the COSTEX model is that it also provides an explanation for the relationship between gene expression and the evolution of gene dosage or gene expression, an aspect of gene evolution that is not predicted by the ‘misfolding hypothesis’. Thus, the COSTEX model provides a simple and common explanation for the general relationship observed between the level of gene expression and the different facets of gene evolution.
### Materials and Methods
#### Expression data
Expression data for P. tetraurelia were obtained from single-channel NimbleGen arrays with six different 50-mer probes per gene. We analyzed data from a total of 58 different hybridizations, corresponding to six independent series of experiments (raw data are deposited in the Gene Expression Omnibus database [46], under accession numbers GSE18002, GSE17998, GSE17997, GSE17996, GSE17930, GSE14631 and GSE12620). Signals from the 58 arrays were simultaneously normalized using the normalizeBetweenArrays function from the Limma package [47]. The expression of each gene in each condition was taken as the median of the six individual 50-mer signals. We calculated the expression level of each gene as the log2 of the median value across all 58 arrays. Expression levels of ohnologons were taken as those of a randomly chosen gene within each ohnologon [17], [48].
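The expression summary described above reduces to two nested medians. The following sketch assumes a 58 × 6 signal matrix for one gene (arrays × probes); the shapes mirror the design above, but the values are simulated and post-normalization steps are omitted.

```python
import numpy as np

# Simulated probe signals for one gene: 58 hybridizations x 6 probes.
rng = np.random.default_rng(2)
signals = rng.lognormal(mean=6.0, sigma=1.0, size=(58, 6))

per_array = np.median(signals, axis=1)          # one value per hybridization
expression_level = np.log2(np.median(per_array))  # log2 of median across arrays
print(expression_level)
```

Medians rather than means make the summary robust to single outlier probes or hybridizations.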
Ohnologons were sorted according to their expression level and grouped into bins defined by fixed intervals of expression level. Depending on the size of the dataset, this interval was set to 0.2 or to 1. Bins containing less than 30 ohnologons were excluded from the analysis. Retention rate was calculated in each bin as the frequency of ohnologons having retained both gene copies.
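The binning procedure just described can be sketched in Python (an illustrative reconstruction, not the authors' code; `expr` holds per-ohnologon log2 expression levels and `retained` flags ohnologons that kept both copies):

```python
import numpy as np

def retention_by_expression(expr, retained, bin_width=1.0, min_count=30):
    """Bin ohnologons by expression level; return retention rate per bin."""
    expr = np.asarray(expr, dtype=float)
    retained = np.asarray(retained, dtype=bool)
    # Fixed-width bins spanning the observed expression range.
    edges = np.arange(expr.min(), expr.max() + bin_width, bin_width)
    idx = np.digitize(expr, edges)
    rates = {}
    for b in np.unique(idx):
        members = retained[idx == b]
        if members.size >= min_count:      # bins with fewer than 30 ohnologons are excluded
            rates[float(edges[b - 1])] = members.mean()
    return rates
```

The same pattern applies when the binning is done on the expression levels of T. thermophila orthologs, as in Figure S1.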
Microarray data for T. thermophila [49] were downloaded from the Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo/), a public repository of expression data [46]. We normalized data across all 50 available arrays (GEO series: GSE11300) and computed the expression level of each gene as the median value across all 50 arrays. Orthology relationships between P. tetraurelia and T. thermophila were taken from [17].
#### Functional categories
Functional categories were downloaded from parameciumDB (http://paramecium.cgm.cnrs-gif.fr/download/analysis/InterproScan_results_August_2008.txt) and only categories with more than 400 genes were retained. We eliminated redundancy among functional categories by searching for categories for which both gene lists overlapped by more than 90%. In these cases the category with the higher number of assigned genes was retained. This led to the elimination of three functional categories: protein kinase activity (GO:4672), protein serine/threonine kinase activity (GO:4674) and ribosome (GO:5840), that overlapped protein amino acid phosphorylation (GO:6468), protein kinase activity (GO:4672) and structural constituent of ribosome (GO:3735), respectively. Each functional category was divided into 4 bins of equal size according to gene expression level and we computed average retention rates for each quartile.
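The redundancy-elimination step can be sketched as follows (an illustrative Python sketch of the stated 90% mutual-overlap rule; the `categories` mapping from GO ids to gene sets is an assumed input, not the authors' code):

```python
def drop_redundant(categories, overlap=0.9):
    """Remove functional categories whose gene lists mutually overlap by
    more than `overlap`, keeping the category with more assigned genes.

    categories: dict mapping a category id (e.g. a GO number) to a set of gene ids.
    """
    # Visit large categories first, so the larger member of a redundant pair is kept.
    ordered = sorted(categories, key=lambda c: len(categories[c]), reverse=True)
    kept = []
    for cat in ordered:
        genes = categories[cat]
        redundant = any(
            len(genes & categories[k]) > overlap * len(genes)
            and len(genes & categories[k]) > overlap * len(categories[k])
            for k in kept
        )
        if not redundant:
            kept.append(cat)
    return {c: categories[c] for c in kept}
```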
#### Phylogenetic distribution
Lists of orthologous genes were obtained through the BioMart interface of parameciumDB [50]. For Paramecium specific genes we queried the BioMart interface for all Paramecium genes with no ortholog in any other species available. Ciliate-specific genes were obtained by querying for genes with an ortholog in T. thermophila only and ancient eukaryotic genes by querying for genes with an ortholog in H. sapiens.
#### Proteins involved in complexes
Paramecium genes encoding subunits of protein complexes were predicted by Aury and colleagues [17] by homology with yeast proteins annotated in the MIPS database (http://mips.gsf.de/) or in [51]. The rate of retention is also correlated with the level of conservation of genes across the eukaryote phylogeny (see text). In order to investigate the impact of protein complexes on the rate of gene retention independently of their phylogenetic distribution, we selected a set of Paramecium genes having a homolog in yeast (defined as genes having at least one BLASTP hit in the yeast proteome with P < 1×10^−3 and an alignment covering >70% of the Paramecium protein) and compared retention rates for genes involved in protein complexes (n = 615 ohnologons) and for other genes (n = 4,331 ohnologons).
#### Yeast KO data
We defined the fitness associated with a heterozygous KO as the minimal fitness across the different culture conditions tested in [25]. The expression level of each gene corresponds to the log2-transformed value of mRNA abundance per cell given by [26].
### Supporting Information
Figure S1.
Relationship between the rate of gene retention in the Paramecium lineage and the expression level of their orthologs in T. thermophila. Ohnologons were binned according to expression levels of their orthologs in T. thermophila, and for each bin, we computed the frequency of ohnologons having retained both copies since the WGD. Circles: recent WGD (3,601 ohnologons); crosses: intermediary WGD (2,998 ohnologons); diamonds: old WGD (1,589 ohnologons). The histogram in the background represents the distribution of expression levels in Tetrahymena for genes that have an ortholog in Paramecium. For each WGD the locally-weighted polynomial regression (lowess, as implemented in R [52]) is displayed as a solid line for visual aid. For the recent and the intermediary WGDs the frequency of gene retention significantly increased between the 10% least expressed genes and the 10% most highly expressed genes (0.49 vs. 0.84, P < 10^−16 for the recent WGD and 0.24 vs. 0.48, P = 2.6×10^−10 for the intermediary WGD) while it was not significant for the ancient WGD (0.16 vs. 0.19, P = 0.37).
doi:10.1371/journal.pgen.1000944.s001
(2.68 MB TIF)
Figure S2.
Relationship between gene expression and gene retention for genes with different phylogenetic distributions. Retention rates after the recent WGD were computed for bins of expression level for genes that are Paramecium-specific (n = 10,861 ohnologons), ciliate-specific (n = 2,417 ohnologons) or ancient eukaryotic genes (n = 5,048 ohnologons) (see Materials and Methods). The horizontal dashed line represents the average retention rate following the recent WGD. The solid lines correspond to locally-weighted polynomial regression (lowess, as implemented in the R software [52]).
doi:10.1371/journal.pgen.1000944.s002
(6.34 MB TIF)
Figure S3.
Relationship between gene expression and gene retention across different functional categories. Functional categories were taken from the Gene Ontology classification [53] as indicated in each panel. For each category, ohnologons were grouped into four quartiles of expression level and the average retention rate was computed as the frequency of ohnologons having retained both copies since the recent WGD. The dotted line corresponds to the average retention rate of all genes with a GO classification.
doi:10.1371/journal.pgen.1000944.s003
(3.15 MB TIF)
Figure S4.
Relationship between non-synonymous substitution rates and expression level. Values of non-synonymous divergence (Ka) between ohnologs from the recent WGD were taken from [17]. The solid red line shows the linear regression between Ka and expression level.
doi:10.1371/journal.pgen.1000944.s004
(2.56 MB TIF)
Table S1.
Detailed analysis of functional categories. For each functional category, the table gives the following fields:
go: GO number of the functional category.
name: name of the functional category.
type: type of functional category (‘Molecular function’, ‘Biological process’ or ‘Cellular component’).
nbg: the number of genes within a given functional category.
retention: the average retention rate among genes belonging to the functional category.
retention_others: the average retention rate of genes not belonging to the given GO category.
pval_retentions: p-value associated with the comparison of the 2 retention rates by a chi-square test (bold when <0.05; grey background when the retention rate is lower than for other genes).
avg_xp: average expression level of genes belonging to the functional category.
avg_xp_others: average expression level of genes not belonging to the functional category.
pval_xp: p-value associated with the comparison of the 2 average expression levels by a Student's t-test (bold when P<0.05; grey background when the average expression level is lower than for other genes).
retention_quartile#1–4: average retention rate among genes from each quartile of expression level (quartile#1 = low expression level; quartile#4 = high expression level).
avg_xp_quartile#1–4: average expression level in each quartile.
doi:10.1371/journal.pgen.1000944.s005
(0.01 MB PDF)
### Acknowledgments
We thank Sylvain Mousset for his help in improving the mathematical model and two anonymous referees for their very insightful comments.
The members of the Paramecium Post-Genomics Consortium are as follows:
Olivier Arnaiz, Mireille Bétermier, Jean Cohen (leader), Aurélie Kapusta and Linda Sperling, Centre de Génétique Moléculaire, Université Paris-Sud, Centre National de la Recherche Scientifique, FRE3144, Gif-sur-Yvette, France; Laurent Duret and Jean-François Gout, Laboratoire de Biométrie et Biologie Evolutive, Université de Lyon, Université Lyon 1, CNRS, INRA, INRIA, UMR 5558, Villeurbanne, France; Khaled Bouhouche, Eric Meyer and Baptiste Saudemont, Institut de Biologie de l'Ecole Normale Supérieure, CNRS UMR8197, INSERM U1024, Paris, France.
### Author Contributions
Conceived and designed the experiments: LD. Performed the experiments: JFG. Analyzed the data: JFG. Wrote the paper: JFG DK LD. Designed the model: DK. Contributed expression data: PPGC.
### References
1. Torres EM, Williams BR, Amon A (2008) Aneuploidy: cells losing their balance. Genetics 179: 737–746.
2. Payer B, Lee JT (2008) X chromosome dosage compensation: how mammals keep the balance. Annu Rev Genet 42: 733–772.
3. Stranger BE, Forrest MS, Dunning M, Ingle CE, Beazley C, et al. (2007) Relative impact of nucleotide and copy number variation on gene expression phenotypes. Science 315: 848–853.
4. Gonzalez E, Kulkarni H, Bolivar H, Mangano A, Sanchez R, et al. (2005) The influence of CCL3L1 gene-containing segmental duplications on HIV-1/AIDS susceptibility. Science 307: 1434–1440.
5. Perry GH, Dominy NJ, Claw KG, Lee AS, Fiegler H, et al. (2007) Diet and the evolution of human amylase gene copy number variation. Nat Genet 39: 1256–1260.
6. Nair S, Miller B, Barends M, Jaidee A, Patel J, et al. (2008) Adaptive copy number evolution in malaria parasites. PLoS Genet 4: e1000243. doi:10.1371/journal.pgen.1000243.
7. Ohno S (1970) Evolution by gene duplication. London: George Allen and Unwin.
8. Wolfe KH (2001) Yesterday's polyploids and the mystery of diploidization. Nat Rev Genet 2: 333–341.
9. Scannell DR, Frank AC, Conant GC, Byrne KP, Woolfit M, et al. (2007) Independent sorting-out of thousands of duplicated gene pairs in two yeast species descended from a whole-genome duplication. Proc Natl Acad Sci U S A 104: 8397–8402.
10. Semon M, Wolfe KH (2007) Consequences of genome duplication. Curr Opin Genet Dev 17: 505–512.
11. Walsh JB (1995) How often do duplicated genes evolve new functions? Genetics 139: 421–428.
12. Force A, Lynch M, Pickett FB, Amores A, Yan YL, et al. (1999) Preservation of duplicate genes by complementary, degenerative mutations. Genetics 151: 1531–1545.
13. Cusack BP, Wolfe KH (2007) When gene marriages don't work out: divorce by subfunctionalization. Trends Genet 23: 270–272.
14. Maere S, De Bodt S, Raes J, Casneuf T, Van Montagu M, et al. (2005) Modeling gene and genome duplications in eukaryotes. Proc Natl Acad Sci U S A 102: 5454–5459.
15. Conant GC, Wolfe KH (2007) Increased glycolytic flux as an outcome of whole-genome duplication in yeast. Mol Syst Biol 3: 129.
16. Papp B, Pal C, Hurst LD (2003) Dosage sensitivity and the evolution of gene families in yeast. Nature 424: 194–197.
17. Aury JM, Jaillon O, Duret L, Noel B, Jubin C, et al. (2006) Global trends of whole-genome duplications revealed by the ciliate Paramecium tetraurelia. Nature 444: 171–178.
18. Qian W, Zhang J (2008) Gene dosage and gene duplicability. Genetics 179: 2319–2324.
19. Seoighe C, Wolfe KH (1999) Yeast genome evolution in the post-genome era. Curr Opin Microbiol 2: 548–554.
20. Byrne KP, Wolfe KH (2005) The Yeast Gene Order Browser: combining curated homology and syntenic context reveals gene fate in polyploid species. Genome Res 15: 1456–1461.
21. Daubin V, Ochman H (2004) Bacterial genomes as new gene homes: the genealogy of ORFans in E. coli. Genome Res 14: 1036–1042.
22. Alba MM, Castresana J (2005) Inverse relationship between evolutionary rate and age of mammalian genes. Mol Biol Evol 22: 598–606.
23. Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, et al. (2000) Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat Genet 25: 25–29.
24. Drummond DA, Wilke CO (2008) Mistranslation-induced protein misfolding as a dominant constraint on coding-sequence evolution. Cell 134: 341–352.
25. Steinmetz LM, Scharfe C, Deutschbauer AM, Mokranjac D, Herman ZS, et al. (2002) Systematic screen for human disease genes in yeast. Nat Genet 31: 400–404.
26. Holstege FC, Jennings EG, Wyrick JJ, Lee TI, Hengartner CJ, et al. (1998) Dissecting the regulatory circuitry of a eukaryotic genome. Cell 95: 717–728.
27. Dopman EB, Hartl DL (2007) A portrait of copy-number polymorphism in Drosophila melanogaster. Proc Natl Acad Sci U S A 104: 19920–19925.
28. Henrichsen CN, Vinckenbosch N, Zollner S, Chaignat E, Pradervand S, et al. (2009) Segmental copy number variation shapes tissue transcriptomes. Nat Genet 41: 424–429.
29. Skaletsky H, Kuroda-Kawaguchi T, Minx PJ, Cordum HS, Hillier L, et al. (2003) The male-specific region of the human Y chromosome is a mosaic of discrete sequence classes. Nature 423: 825–837.
30. Semon M, Mouchiroud D, Duret L (2005) Relationship between gene expression and GC-content in mammals: statistical significance and biological relevance. Hum Mol Genet 14: 421–427.
31. Wagner A (2005) Energy constraints on the evolution of gene expression. Mol Biol Evol 22: 1365–1374.
32. Dekel E, Alon U (2005) Optimality and evolutionary tuning of the expression level of a protein. Nature 436: 588–592.
33. Bedford T, Hartl DL (2009) Optimization of gene expression by natural selection. Proc Natl Acad Sci U S A 106: 1133–1138.
34. Galitski T, Saldanha AJ, Styles CA, Lander ES, Fink GR (1999) Ploidy regulation of gene expression. Science 285: 251–254.
35. Wang J, Tian L, Lee HS, Wei NE, Jiang H, et al. (2006) Genomewide nonadditive gene regulation in Arabidopsis allotetraploids. Genetics 172: 507–517.
36. Stupar RM, Bhaskar PB, Yandell BS, Rensink WA, Hart AL, et al. (2007) Phenotypic and transcriptomic changes associated with potato autopolyploidization. Genetics 176: 2055–2067.
37. Doyle JJ, Flagel LE, Paterson AH, Rapp RA, Soltis DE, et al. (2008) Evolutionary genetics of genome merger and doubling in plants. Annu Rev Genet 42: 443–461.
38. Masterson J (1994) Stomatal size in fossil plants: evidence for polyploidy in majority of angiosperms. Science 264: 421–424.
39. Andalis AA, Storchova Z, Styles C, Galitski T, Pellman D, et al. (2004) Defects arising from whole-genome duplications in Saccharomyces cerevisiae. Genetics 167: 1109–1121.
40. Berger JD, Schmidt HJ (1978) Regulation of macronuclear DNA content in Paramecium tetraurelia. J Cell Biol 76: 116–126.
41. Snoke MS, Berendonk TU, Barth D, Lynch M (2006) Large global effective population sizes in Paramecium. Mol Biol Evol 23: 2474–2479.
42. Duret L, Mouchiroud D (2000) Determinants of substitution rates in mammalian genes: expression pattern affects selection intensity but not mutation rate. Mol Biol Evol 17: 68–74.
43. Drummond DA, Raval A, Wilke CO (2006) A single determinant dominates the rate of yeast protein evolution. Mol Biol Evol 23: 327–337.
44. Rocha EP (2006) The quest for the universals of protein evolution. Trends Genet 22: 412–416.
45. Drummond DA, Bloom JD, Adami C, Wilke CO, Arnold FH (2005) Why highly expressed proteins evolve slowly. Proc Natl Acad Sci U S A 102: 14338–14343.
46. Edgar R, Domrachev M, Lash AE (2002) Gene Expression Omnibus: NCBI gene expression and hybridization array data repository. Nucleic Acids Res 30: 207–210.
47. Smyth GK, Speed T (2003) Normalization of cDNA microarray data. Methods 31: 265–273.
48. Gout JF, Duret L, Kahn D (2009) Differential retention of metabolic genes following whole-genome duplication. Mol Biol Evol 26: 1067–1072.
49. Miao W, Xiong J, Bowen J, Wang W, Liu Y, et al. (2009) Microarray analyses of gene expression during the Tetrahymena thermophila life cycle. PLoS ONE 4: e4429. doi:10.1371/journal.pone.0004429.
50. Arnaiz O, Cain S, Cohen J, Sperling L (2007) ParameciumDB: a community resource that integrates the Paramecium tetraurelia genome sequence with genetic data. Nucleic Acids Res 35: D439–444.
51. Gavin AC, Aloy P, Grandi P, Krause R, Boesche M, et al. (2006) Proteome survey reveals modularity of the yeast cell machinery. Nature 440: 631–636.
52. Ihaka R, Gentleman R (1996) R: a language for data analysis and graphics. Journal of Computational and Graphical Statistics 5: 299–314.
53. Carbon S, Ireland A, Mungall CJ, Shu S, Marshall B, et al. (2009) AmiGO: online access to ontology and annotation data. Bioinformatics 25: 288–289.
http://mathandmultimedia.com/category/high-school-mathematics/high-school-calculus/

## Understanding Domain and Range Part 4
In this post, we summarize the previous three articles about domain and range. In the first part of the series, we focused on the graphical meaning of domain and range. We have learned that the domain of a function can be interpreted as the projection of its graph to the x-axis. Similarly, the range of the function is the projection of its graph to the y-axis.
Graphical meaning of domain (red) and range (green)
In the second part of the series, we learned to analyze equations of functions to determine their domain and range. We learned that the restrictions on the domain and range of a function come from the following features of its equation: squared expressions, square root signs, absolute value signs, and expressions in a denominator. In exploring these we concluded the following:
• Expressions under a square root sign must evaluate to a positive real number or 0. This means that we have to set up the inequality requiring the expression to be greater than or equal to 0, and then find the permissible values of x.
• Squared expressions evaluate to a positive real number or 0. This affects the range of the function.
• Expressions inside an absolute value sign evaluate to a positive real number or 0. This also affects the range of the function.
• Expressions in the denominator of a fraction cannot be 0 because division by 0 makes the function undefined. So, we need to find the values of x that make the denominator 0. To do this, we equate the expression in the denominator to 0 and solve for x. These values of x are the restrictions on the domain.
In the third part of the series, we examined functions that have more complicated equations than those in the second part of the series.
Before I end this series, there is one more concept about domain that I want you to remember. That is, the domain of all polynomial functions is the set of real numbers. That’s why the domain of linear functions and quadratic functions in Part 1 and Part 2 is the set of real numbers.
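The domain rules summarized in this series can also be checked symbolically, for instance with SymPy's `continuous_domain` (a sketch assuming SymPy is available; this is not part of the original posts):

```python
from sympy import S, Symbol, sqrt
from sympy.calculus.util import continuous_domain

x = Symbol('x', real=True)

# Polynomials: defined for every real number, as noted above.
dom_poly = continuous_domain(x**2 - 3, x, S.Reals)

# Square root: the argument must be greater than or equal to 0.
dom_sqrt = continuous_domain(sqrt(x - 3), x, S.Reals)

# Denominator: the x that makes it 0 is excluded.
dom_frac = continuous_domain(1 / (x - 2), x, S.Reals)

print(dom_poly, dom_sqrt, dom_frac, sep='\n')
```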
## Understanding Domain and Range Part 3
In the previous post, we have learned how to analyze equations of functions and determine their domain and range. We have observed that the range of each of the functions $y = x^2$ and $y = |x|$ is the set of real numbers greater than or equal to $0$, since squaring a number or taking its absolute value results in $0$ or a positive real number. We also learned that for a function to be defined, the number under the square root sign must be greater than or equal to 0. Lastly, we have learned that we cannot divide by zero because it makes the function undefined.
In this post, we are going to continue our discussion by examining functions with equations more complicated than those in the second part of this series.
Squares and Absolute Values
1. $f(x) = x^2 - 3$
Domain: The function is defined for any real number $x$, so the domain of $f$ is the set of real numbers.
Range: The minimum value of $x^2$ is $0$ for any real number $x$, and $f(0) = 0^2 - 3 = -3$. So, the minimum value of the function is $-3$. We can make the value of the function as large as we please by increasing the absolute value of $x$. So, the range of the function is the set of real numbers greater than or equal to $-3$, or $[-3, \infty)$ in interval notation. » Read more
## Understanding Domain and Range Part 2
In the previous post, we have learned the graphical representation of domain and range. The domain of the function $f$ is the shadow or projection of the graph of $f$ to the x-axis (see the red segment in the figure below). The range of $f$ is the projection of the graph of $f$ to the y-axis (see the green segment in the figure below). In this post, we are going to learn how to analyze equations of functions and determine their domain and range without graphing.
If the graph of a function is projected to the x-axis, the projection is the set of x-coordinates of the graph. A point $(a,0)$ on the projection means that the graph contains a point with x-coordinate $a$, which implies that $f(a)$ exists. This means that the function is defined at $x = a$. In effect, the domain of a function is the set of x-coordinates at which the function is defined. In what follows, we work through some examples to illustrate this concept. » Read more
https://mail.queryxchange.com/q/21_2604799/find-the-truth-values-for-each-p-x-y/

# Find the truth values for each P(x, y)???
by John Baek Last Updated January 14, 2018 13:20 PM
Here is the problem
Let $S$ denote the two-element set {0, 1}. Find truth values (i.e. True of False) for each of P(0, 0), P(0, 1), P(1, 0), P(1, 1) so that
$\forall x\in S, \exists y \in S, P(x, y)$ is true
But
$\exists y \in S, \forall x\in S, P(x, y)$ is false.
This exercises illustrates the fact that changing the order of your quantifiers can change the meaning of your statement.
The problem's hint:
Just to clarify, for problem 5 you are assigning the value True or False to each of P(0,0), P(0,1), P(1,0), and P(1,1). That's four choices for you to make.
For example, if you choose
P(0,0)=True
P(0,1)=True
P(1,0)=True
P(1,1)=True
you'll see that both of the given statements become true, and if you choose
P(0,0)=False
P(0,1)=False
P(1,0)=False
P(1,1)=False
you'll see that both of the given statements become false.
What set of 4 choices makes the first given statement true and the second given statement false?
There are two assignments of truth values to P(0,0), P(0,1), P(1,0), P(1,1) which satisfy the two statements: (true, true, false, false) and (false, false, true, true).
And the professor said "You just need to find one such assignment."
Um.. I'm totally lost here. Could anyone help me to solve this problem???
The idea is that, for each $x$, we pick one $y$ so that $P(x,y)$ is true (in order to keep the first statement true), but this shouldn't be the same $y$ for all values of $x$ (in order to keep the second statement false).
So, let's pick $y=0$ for $x=0$ and $y=1$ for $x=1$. (This is one possible example, there are others.) Now set:
$$P(0,0)=\top, P(1,1)=\top$$
but:
$$P(0,1)=\bot, P(1,0)=\bot$$
You can easily check that this choice of truth values does the job.
user8734617
January 14, 2018 13:14 PM
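The answer's check can be carried out mechanically in Python (a sketch, not from the original thread):

```python
# The assignment from the answer: P is true exactly on the "diagonal" pairs.
P = {(0, 0): True, (0, 1): False, (1, 0): False, (1, 1): True}
S = [0, 1]

forall_exists = all(any(P[(x, y)] for y in S) for x in S)  # ∀x ∈ S, ∃y ∈ S, P(x, y)
exists_forall = any(all(P[(x, y)] for x in S) for y in S)  # ∃y ∈ S, ∀x ∈ S, P(x, y)

print(forall_exists, exists_forall)  # prints: True False
```

The first statement is true because each x has its own witness y, while the second is false because no single y works for both values of x.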
https://it.mathworks.com/help/dsp/ref/dsp.firdecimator-system-object.html

# dsp.FIRDecimator
Polyphase FIR decimator
## Description
The `dsp.FIRDecimator` System object™ resamples vector or matrix inputs along the first dimension. The FIR decimator (as shown in the schematic) conceptually consists of an anti-aliasing FIR filter followed by a downsampler.
The FIR filter filters the data in each channel of the input using a direct-form FIR filter. The FIR filter coefficients can be specified through the `Numerator` property, or can be automatically designed by the object using the `designMultirateFIR` function. The `designMultirateFIR` function designs an anti-aliasing FIR filter. The downsampler that follows the FIR filter downsamples each channel of filtered data by taking every M-th sample and discarding the M – 1 samples that follow. M is the value of the decimation factor that you specify. The resulting discrete-time signal has a sample rate that is 1/M times the original sample rate.
Note that the actual object algorithm implements a direct-form FIR polyphase structure, an efficient equivalent of the combined system depicted in the diagram. For more details, see Algorithms.
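Outside MATLAB, the same conceptual pipeline and its polyphase equivalent can be sketched with SciPy (an illustrative Python sketch, not the object's actual implementation; the filter length and test signal are arbitrary choices):

```python
import numpy as np
from scipy import signal

M = 4                                   # decimation factor
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)           # one channel of input data

# Conceptual pipeline: anti-aliasing FIR lowpass, then keep every M-th sample.
h = signal.firwin(97, 1.0 / M)          # cutoff at (1/M) of Nyquist
naive = signal.lfilter(h, 1.0, x)[::M]

# Efficient polyphase equivalent: filter and downsample in a single pass,
# never computing the M-1 output samples that would be discarded.
poly = signal.upfirdn(h, x, up=1, down=M)[: len(naive)]

assert np.allclose(naive, poly)
```

Both paths produce the same samples; the polyphase form simply avoids the wasted multiplies, which is the point of the direct-form FIR polyphase structure mentioned above.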
To resample vector or matrix inputs along the first dimension:
1. Create the `dsp.FIRDecimator` object and set its properties.
2. Call the object with arguments, as if it were a function.
Under specific conditions, this System object also supports SIMD code generation. For details, see Code Generation.
## Creation
### Syntax
``firdecim = dsp.FIRDecimator``
``firdecim = dsp.FIRDecimator(M)``
``firdecim = dsp.FIRDecimator(M,'Auto')``
``firdecim = dsp.FIRDecimator(M,num)``
``firdecim = dsp.FIRDecimator(___,Name,Value)``
``firdecim = dsp.FIRDecimator(M,'legacy')``
### Description
`firdecim = dsp.FIRDecimator` returns an FIR decimator object with a decimation factor of 2. The object designs the FIR filter coefficients using the `designMultirateFIR(1,2)` function.

`firdecim = dsp.FIRDecimator(M)` returns an FIR decimator with the integer-valued `DecimationFactor` property set to `M`. The object designs its filter coefficients based on the decimation factor `M` that you specify while creating the object, using the `designMultirateFIR(1,M)` function. The designed filter corresponds to a lowpass with a cutoff at π/`M` in radial frequency units.

`firdecim = dsp.FIRDecimator(M,'Auto')` returns an FIR decimator with the `NumeratorSource` property set to `'Auto'`. In this mode, every time there is an update in the decimation factor, the object redesigns the filter using `designMultirateFIR(1,M)`.

`firdecim = dsp.FIRDecimator(M,num)` returns an FIR decimator with the `DecimationFactor` property set to `M` and the `Numerator` property set to `num`.

`firdecim = dsp.FIRDecimator(___,Name,Value)` returns an FIR decimator object with each specified property set to the specified value. Enclose each property name in quotes. You can use this syntax with any of the previous input argument combinations.

`firdecim = dsp.FIRDecimator(M,'legacy')` returns an FIR decimator where the filter coefficients are designed using `fir1(35,0.4)`. The designed filter has a cutoff frequency of 0.4π radians/sample.
## Properties
Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the `release` function unlocks them.
If a property is tunable, you can change its value at any time.
### Main Properties
Decimation factor M, specified as a positive integer. The FIR decimator reduces the sampling rate of the input by this factor. The number of input rows must be a multiple of the decimation factor.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
FIR filter coefficient source, specified as one of the following:
• `'Property'` –– The numerator coefficients are specified through the `Numerator` property.
• `'Input port'` –– The numerator coefficients are specified as an input to the object algorithm.
• `'Auto'` –– The numerator coefficients are designed automatically using the `designMultirateFIR(1,M)` function.
Numerator coefficients of the FIR filter, specified as a row vector in powers of z–1. The following equation defines the system function for a filter of length N+1:
$H(z)=\sum_{l=0}^{N} b_l z^{-l}$
The vector b = [b0, b1, …, bN] represents the vector of filter coefficients.
To prevent aliasing as a result of downsampling, the filter transfer function should have a normalized cutoff frequency no greater than 1/`M`. To design an effective anti-aliasing filter, use the `designMultirateFIR` function. For an example, see Decimate Sum of Sine Waves.
#### Dependencies
This property is visible only when you set `NumeratorSource` to `'Property'`.
When `NumeratorSource` is set to `'Auto'`, the numerator coefficients are automatically redesigned using `designMultirateFIR(1,M)`. To access the filter coefficients in the automatic design mode, type `objName.Numerator` in the MATLAB® command prompt.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Complex Number Support: Yes
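As an illustration of the 1/`M` cutoff rule for the anti-aliasing filter (a Python/SciPy sketch, not MATLAB code; the filter length here is an arbitrary choice):

```python
import numpy as np
from scipy import signal

M = 3
h = signal.firwin(24 * M + 1, 1.0 / M)   # firwin normalizes the cutoff to Nyquist,
                                         # so 1/M corresponds to pi/M rad/sample

w, H = signal.freqz(h, worN=2048)        # w is in rad/sample
passband = np.abs(H[w < 0.8 * np.pi / M])
stopband = np.abs(H[w > 1.2 * np.pi / M])

print(passband.min(), stopband.max())
```

The magnitude response stays near 1 below the π/`M` cutoff and drops sharply above it, so the frequencies that would alias after downsampling by `M` are attenuated first.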
Specify the implementation of the FIR filter as either `Direct form` or `Direct form transposed`.
### Code Generation Properties
Allow arbitrary frame length for fixed-size input signals in the generated code, specified as `true` or `false`. When you specify:
• `true` –– The input frame length does not have to be a multiple of the decimation factor. The output of the object in the generated code is a variable-size array.
• `false` –– The input frame length must be a multiple of the decimation factor.
When you specify variable-size signals, the input frame length can be arbitrary and the object ignores this property in the generated code. When you run this object in MATLAB, the object supports arbitrary input frame lengths for fixed-size and variable-size signals and this property does not affect the object behavior.
Data Types: `logical`
### Fixed-Point Properties
Flag to use full-precision rules for fixed-point arithmetic, specified as one of the following:
• `true` –– The object computes all internal arithmetic and output data types using the full-precision rules. These rules provide the most accurate fixed-point numerics. In this mode, other fixed-point properties do not apply. No quantization occurs within the object. Bits are added, as needed, to ensure that no roundoff or overflow occurs.
• `false` –– Fixed-point data types are controlled through individual fixed-point property settings.
For more information, see Full Precision for Fixed-Point System Objects and Set System Object Fixed-Point Properties.
Rounding method for fixed-point operations. For more details, see rounding mode.
#### Dependencies
This property is not visible and has no effect on the numerical results when the following conditions are met:
• `FullPrecisionOverride` set to `true`.
• `FullPrecisionOverride` set to `false`, `ProductDataType` set to `'Full precision'`, `AccumulatorDataType` set to `'Full precision'`, and `OutputDataType` set to `'Same as accumulator'`.
Under these conditions, the object operates in full precision mode.
Overflow action for fixed-point operations, specified as one of the following:
• `'Wrap'` –– The object wraps the result of its fixed-point operations.
• `'Saturate'` –– The object saturates the result of its fixed-point operations.
For more details on overflow actions, see overflow mode for fixed-point operations.
#### Dependencies
This property is not visible and has no effect on the numerical results when the following conditions are met:
• `FullPrecisionOverride` set to `true`.
• `FullPrecisionOverride` set to `false`, `OutputDataType` set to `'Same as accumulator'`, `ProductDataType` set to `'Full precision'`, and `AccumulatorDataType` set to `'Full precision'`.
Under these conditions, the object operates in full precision mode.
Data type of the FIR filter coefficients, specified as:
• `Same word length as input` –– The word length of the coefficients is the same as that of the input. The fraction length is computed to give the best possible precision.
• `Custom` –– The coefficients data type is specified as a custom numeric type through the `CustomCoefficientsDataType` property.
Word and fraction lengths of the coefficients data type, specified as an autosigned `numerictype` (Fixed-Point Designer) with a word length of 16 and a fraction length of 15.
#### Dependencies
This property applies when you set the `CoefficientsDataType` property to `Custom`.
Data type of the product output in this object, specified as one of the following:
• `'Full precision'` –– The product output data type has full precision.
• `'Same as input'` –– The object specifies the product output data type to be the same as that of the input data type.
• `'Custom'` –– The product output data type is specified as a custom numeric type through the `CustomProductDataType` property.
For more information on the product output data type, see Multiplication Data Types.
#### Dependencies
This property applies when you set `FullPrecisionOverride` to `false`.
Word and fraction lengths of the product data type, specified as an autosigned numeric type with a word length of 32 and a fraction length of 30.
#### Dependencies
This property applies only when you set `FullPrecisionOverride` to `false` and `ProductDataType` to `'Custom'`.
Data type of an accumulation operation in this object, specified as one of the following:
• `'Full precision'` –– The accumulation operation has full precision.
• `'Same as product'` –– The object specifies the accumulator data type to be the same as that of the product output data type.
• `'Same as input'` –– The object specifies the accumulator data type to be the same as that of the input data type.
• `'Custom'` –– The accumulator data type is specified as a custom numeric type through the `CustomAccumulatorDataType` property.
#### Dependencies
This property applies when you set `FullPrecisionOverride` to `false`.
Word and fraction lengths of the accumulator data type, specified as an autosigned numeric type with a word length of 32 and a fraction length of 30.
#### Dependencies
This property applies only when you set `FullPrecisionOverride` to `false` and `AccumulatorDataType` to `'Custom'`.
Data type of the object output, specified as one of the following:
• `'Same as accumulator'` –– The output data type is the same as that of the accumulator output data type.
• `'Same as input'` –– The output data type is the same as that of the input data type.
• `'Same as product'` –– The output data type is the same as that of the product output data type.
• `'Custom'` –– The output data type is specified as a custom numeric type through the `CustomOutputDataType` property.
#### Dependencies
This property applies when you set `FullPrecisionOverride` to `false`.
Word and fraction lengths of the output data type, specified as an autosigned numeric type with a word length of 16 and a fraction length of 15.
#### Dependencies
This property applies only when you set `FullPrecisionOverride` to `false` and `OutputDataType` to `'Custom'`.
## Usage
### Syntax
`y = firdecim(x)`
`y = firdecim(x,num)`
### Description
`y = firdecim(x)` outputs the filtered and downsampled values, `y`, of the input signal, `x`.
`y = firdecim(x,num)` uses the FIR filter, `num`, to decimate the input signal. This configuration is valid only when the `NumeratorSource` property is set to `'Input port'`.
### Input Arguments
Data input, specified as a column vector or a matrix of size P-by-Q. The columns in the input signal represent Q independent channels.
Under most conditions, the number of input rows, P, can be arbitrary and does not have to be a multiple of the `DecimationFactor` property. See this table for details.
| Input Signal | When You Run the Object in MATLAB | When You Generate Code Using MATLAB Coder™ |
| --- | --- | --- |
| Fixed-size | Object supports arbitrary input frame length | Object supports arbitrary input frame length when you set `AllowArbitraryInputLength` to `true` while generating code |
| Variable-size | Object supports arbitrary input frame length | Object supports arbitrary input frame length |
Variable-size signals can change in frame length after you lock the object, while fixed-size signals remain constant. When the object does not support arbitrary frame lengths, the input frame length must be a multiple of the `DecimationFactor` property.
This object does not support complex unsigned fixed-point inputs.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `fi`
Complex Number Support: Yes
FIR filter coefficients, specified as a row vector.
#### Dependencies
This input is accepted only when the `'NumeratorSource'` property is set to `'Input port'`.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `fi`
Complex Number Support: Yes
### Output Arguments
FIR decimator output, returned as a column vector or a matrix. When the input is of size P-by-Q, and P is not a multiple of the decimation factor M, the output signal has an upper bound size of `ceil`(P/M)-by-Q. If P is a multiple of the decimation factor, then the output is of size (P/M)-by-Q.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `fi`
Complex Number Support: Yes
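As a small sketch of this sizing rule (the frame length here is chosen for illustration):

```
firdecim = dsp.FIRDecimator(4);   % decimation factor M = 4
x = randn(10,1);                  % P = 10 rows, not a multiple of M
y = firdecim(x);                  % output has an upper bound of ceil(10/4) = 3 rows
```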
## Object Functions
To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named `obj`, use this syntax:
`release(obj)`
`freqz` –– Frequency response of discrete-time filter System object
`fvtool` –– Visualize frequency response of DSP filters
`info` –– Information about filter System object
`cost` –– Estimate cost of implementing filter System object
`polyphase` –– Polyphase decomposition of multirate filter
`generatehdl` –– Generate HDL code for quantized DSP filter (requires Filter Design HDL Coder)
`impz` –– Impulse response of discrete-time filter System object
`coeffs` –– Returns the filter System object coefficients in a structure
`step` –– Run System object algorithm
`release` –– Release resources and allow changes to System object property values and input characteristics
`reset` –– Reset internal states of System object
## Examples
Decimate a sum of sine waves by a factor of 2 and by a factor of 4.
Start with a cosine wave that has an angular frequency of $\frac{\pi }{4}$ radians/sample.
`x = cos(pi/4*(0:95)');`
**Design Default Filter**
Create a `dsp.FIRDecimator` object. To prevent aliasing, the object uses an anti-aliasing lowpass filter before downsampling. By default, the anti-aliasing lowpass filter is designed using the `designMultirateFIR` function. The function designs the filter based on the decimation factor that you specify, and stores the coefficients in the `Numerator` property. For a decimation factor of 2, the object designs the coefficients using `designMultirateFIR(1,2)`.
`firdecim = dsp.FIRDecimator(2)`
```
firdecim = 
  dsp.FIRDecimator with properties:
   Main
    DecimationFactor: 2
     NumeratorSource: 'Property'
           Numerator: [0 -1.0054e-04 0 3.8704e-04 0 -0.0010 0 0.0022 0 ... ]
           Structure: 'Direct form'
```
Visualize the filter response using `fvtool`. The designed filter meets the ideal filter constraints that are marked in red. The cutoff frequency is approximately half the spectrum.
`fvtool(firdecim)`
**Decimate by 2**
Decimate the cosine signal by a factor of 2.
`y = firdecim(x);`
Plot the original and the decimated signals. In order to plot the two signals on the same plot, you must account for the output delay of the FIR decimator and the scaling introduced by the filter. Use the `outputDelay` function to compute the `delay` value introduced by the decimator. Shift the output by this delay value.
Visualize the input and the resampled signals. After a short transition, the output converges to a cosine of frequency $\frac{\pi }{2}$ as expected, which is twice the frequency of the input signal $\frac{\pi }{4}$. Due to the decimation factor of 2, the output samples coincide with every other input sample.
```
[delay,FsOut] = outputDelay(firdecim);
nx = (0:length(x)-1);
ty = (0:length(y)-1)/FsOut-delay;
stem(ty,y,'filled',MarkerSize=4);
hold on;
stem(nx,x);
hold off;
xlim([-10,22])
ylim([-2.5 2.5])
legend('Decimated by 2 (y)','Input signal (x)');
```
**Add a High Frequency Component to Input and Decimate**
Add another frequency component to the input signal, a sine with an angular frequency of $\frac{2\pi}{3}$ radians/sample. Since $\omega = \frac{2\pi}{3}$ is above the FIR lowpass cutoff, $\frac{\pi}{2}$, the frequency $\frac{2\pi}{3}$ radians/sample is filtered out from the signal.
```
xhigh = x + 0.2*sin(2*pi/3*(0:95)');
release(firdecim)
yhigh = firdecim(xhigh);
```
Plot the input signal, decimated signal, and the output of the low frequency component. The decimated signal `yhigh` has the high frequency component filtered out. `yhigh` is almost identical to the output of the low frequency component `y`.
```
stem(ty,yhigh,'filled',MarkerSize=4);
hold on;
stem(nx,xhigh);
stem(ty,y,':m',MarkerSize=7);
hold off;
xlim([-10,22])
ylim([-2.5 2.5])
legend('Decimated by 2 (yhigh)',...
    'Input signal with the high tone added (xhigh)',...
    'Decimated by 2 - low tone only (y)');
```
**Decimate by 4 in Automatic Filter Design Mode**
Now decimate by a factor of 4. In order for the filter design to be updated automatically based on the new decimation factor, set the `NumeratorSource` property to `'Auto'`. Alternately, you can pass `'Auto'` as the keyword while creating the object. The object then operates in the automatic filter design mode. Every time there is a change in the decimation factor, the object updates the filter design.
```
release(firdecim)
firdecim.NumeratorSource = 'Auto';
firdecim.DecimationFactor = 4
```
```
firdecim = 
  dsp.FIRDecimator with properties:
   Main
    DecimationFactor: 4
     NumeratorSource: 'Auto'
           Structure: 'Direct form'
```
To access the filter coefficients in the automatic mode, type `firdecim.Numerator` in the MATLAB command prompt.
The designed filter occupies a narrower passband that is approximately a quarter of the spectrum.
`fvtool(firdecim)`
Decimate the cosine signal by a factor of 4. After a short transition, the output converges to a cosine of frequency $\pi$ as expected, which is four times the lower frequency component of the input signal $\frac{\pi}{4}$. This time, the amplitude of the output is half the amplitude of the input since the gain of the FIR at $\omega = \frac{\pi}{4}$ is exactly $\frac{1}{2}$. The high frequency component $\frac{2\pi}{3}$ is attenuated by the lowpass FIR, whose cutoff frequency is $\frac{\pi}{4}$.
`yAuto = firdecim(xhigh);`
Plot the input signal with the high frequency component added, low frequency component scaled by 1/2, and the decimated signal. Recalculate the output delay and the output sample rate since the decimation factor has changed.
```
[delay,FsOut] = outputDelay(firdecim);
tyAuto = (0:length(yAuto)-1)/FsOut-delay;
stem(tyAuto,yAuto,'filled',MarkerSize=4);
hold on;
stem(nx,xhigh);
stem(nx,x/2,'m:',MarkerSize=7);
hold off;
xlim([-20,36])
ylim([-2.5 2.5])
legend('Decimated by 4 (yAuto)',...
    'Input signal with the high frequency component added (xhigh)',...
    'Low tone input scaled by 1/2');
```
Reduce the sample rate of an audio signal by a factor of 2 and play the decimated signal using the `audioDeviceWriter` object.
Note: If you are using R2016a or an earlier release, replace each call to the object with the equivalent `step` syntax. For example, `obj(x)` becomes `step(obj,x)`.
Note: The `audioDeviceWriter` System object™ is not supported in MATLAB Online.
Create a `dsp.AudioFileReader` object. The default audio file read by the object has a sample rate of 22050 Hz.
```
afr = dsp.AudioFileReader('OutputDataType','single');
```
Create a `dsp.FIRDecimator` object and specify the decimation factor to be 2. The object designs the filter using `designMultirateFIR(1,2)` and stores the coefficients in the `Numerator` property of the object.
`firdecim = dsp.FIRDecimator(2)`
```
firdecim = 
  dsp.FIRDecimator with properties:
   Main
    DecimationFactor: 2
     NumeratorSource: 'Property'
           Numerator: [0 -1.0054e-04 0 3.8704e-04 0 -0.0010 0 0.0022 0 ... ]
           Structure: 'Direct form'
```
Create an `audioDeviceWriter` object. Specify the sample rate to be 22050/2.
`adw = audioDeviceWriter(22050/2)`
```
adw = 
  audioDeviceWriter with properties:
        Device: 'Default'
    SampleRate: 11025
```
Read the audio signal using the file reader object, decimate the signal by a factor of 2, and play the decimated signal.
```
while ~isDone(afr)
    frame = afr();
    y = firdecim(frame);
    adw(y);
end
release(afr);
pause(0.5);
release(adw);
```
## Algorithms
The FIR decimation filter is implemented efficiently using a polyphase structure. For more details on polyphase filters, see Polyphase Subfilters.
To derive the polyphase structure, start with the transfer function of the FIR filter:
`$H(z) = b_0 + b_1 z^{-1} + \dots + b_N z^{-N}$`
where N+1 is the length of the FIR filter.
You can rearrange this equation as follows:
`$H(z) = (b_0 + b_M z^{-M} + b_{2M} z^{-2M} + \dots + b_{N-M+1} z^{-(N-M+1)}) + z^{-1}(b_1 + b_{M+1} z^{-M} + b_{2M+1} z^{-2M} + \dots + b_{N-M+2} z^{-(N-M+1)}) + \dots + z^{-(M-1)}(b_{M-1} + b_{2M-1} z^{-M} + b_{3M-1} z^{-2M} + \dots + b_N z^{-(N-M+1)})$`
M is the number of polyphase components, and its value equals the decimation factor that you specify.
You can write this equation as:
`$H(z) = E_0(z^M) + z^{-1} E_1(z^M) + \dots + z^{-(M-1)} E_{M-1}(z^M)$`
$E_0(z^M), E_1(z^M), \ldots, E_{M-1}(z^M)$ are the polyphase components of the FIR filter H(z).
Conceptually, the FIR decimation filter contains a lowpass FIR filter followed by a downsampler.
Replace H(z) with its polyphase representation.
Here is the multirate noble identity for decimation.
Applying the noble identity for decimation moves the downsampling operation to before the filtering operation. This move enables you to filter the signal at a lower rate.
You can replace the delays and the decimation factor at the input with a commutator switch. The switch starts on the first branch 0 and moves in the counterclockwise direction as shown in this diagram. The accumulator at the output receives the processed input samples from each branch of the polyphase structure and accumulates these processed samples until the switch goes to branch 0. When the switch goes to branch 0, the accumulator outputs the accumulated value.
When the first input sample is delivered, the switch feeds this input to branch 0 and the decimator computes the first output value. As more input samples come in, the switch moves counterclockwise through branches M−1, M−2, and so on up to branch 0, delivering one sample at a time to each branch. Each time the switch returns to branch 0, the decimator outputs the next output value y[m]. This process continues as data keeps coming in. The decimator effectively outputs one sample for every M samples it receives. Hence the sample rate at the output of the FIR decimation filter is fs/M.
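As a sanity check of the polyphase implementation, the decimator output should match full-rate filtering followed by keeping every Mth sample. This is a sketch; agreement is expected only up to numerical precision:

```
M = 2;
b = designMultirateFIR(1, M);
firdecim = dsp.FIRDecimator(M, b);
x = randn(64, 1);
y1 = firdecim(x);
yf = filter(b, 1, x);   % filter at the full input rate
y2 = yf(1:M:end);       % keep every Mth sample
% max(abs(y1 - y2)) should be on the order of machine epsilon
```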
## Version History
Introduced in R2012a
https://www.shaalaa.com/question-bank-solutions/in-what-time-will-rs-1500-yield-rs-49650-compound-interest-10-annum-compounded-annually-concept-of-compound-interest-use-of-compound-interest-in-computing-amount-over-a-period-of-2-or-3-years_19027

# In What Time Will Rs. 1500 Yield Rs. 496.50 as Compound Interest at 10% per Annum Compounded Annually? - Mathematics
In what time will Rs. 1500 yield Rs. 496.50 as compound interest at 10% per annum compounded annually?
#### Solution
Given P = Rs. 1500, I = 496.50, R = 10%
A = P + I
⇒ A = Rs. 1500 + Rs. 496.50 = Rs. 1996.50
A = P(1 + R/100)^n
⇒ 1996.50 = 1500(1 + 10/100)^n
⇒ 1996.50/1500 = (1 + 1/10)^n
⇒ 1.331 = (1.1)^n
⇒ (1.1)^3 = (1.1)^n
⇒ n = 3
Hence, the required time is 3 years.
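Equivalently, the number of years can be checked directly by taking logarithms of both sides of 1.331 = (1.1)^n:

```latex
n = \frac{\log(A/P)}{\log\left(1 + \frac{R}{100}\right)}
  = \frac{\log 1.331}{\log 1.1}
  = 3
```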
Concept: Compound Interest - Use of Compound Interest in Computing Amount Over a Period of 2 or 3 Years
http://mymathforum.com/number-theory/339570-similar-sequence-starts.html

My Math Forum: Similar sequence starts
March 18th, 2017, 06:17 PM #1 Senior Member Joined: May 2015 From: Arlington, VA Posts: 289 Thanks: 24 Math Focus: Number theory Similar sequence starts How can I look up on OEIS, https://en.wikipedia.org/wiki/On-Lin...eger_Sequences, sequences that begin with similarly ordered matching integers? Last edited by Loren; March 18th, 2017 at 06:19 PM.
March 18th, 2017, 08:05 PM #2 Senior Member Joined: May 2015 From: Arlington, VA Posts: 289 Thanks: 24 Math Focus: Number theory For example: 1, 4, 7, 8, 10, 12, 15, 20... 1, 4, 7, 8, 10, 12, 13, 14... for fairly long sequences of integers that are easily generated. I believe there are formulas which "generate" primes for many integers before degenerating.
March 18th, 2017, 09:21 PM #3 Senior Member Joined: Aug 2012 Posts: 1,681 Thanks: 437 I think you just put in the common initial segment and it will show you as many distinct continuations as it knows about. As far as formulas for finitely many primes, there are always the Lagrange polynomials. These let you fit n points to a degree n-1 polynomial. 3 points determine a quadratic and so forth. So f(1) = 2, f(2) = 3, f(3) = 5, etc. can always be fitted to a poly. https://en.wikipedia.org/wiki/Lagrange_polynomial And here's a really cool Lagrange polynomial calculator. https://en.wikipedia.org/wiki/Lagrange_polynomial For example I put in (1,2) (2,3) (3,5) (5,7) (6,11) (7,13) and it gave me back the interpolating polynomial. Is that cool or what! Of course we have to remember 6 rational coefficients to get back 6 primes. So having a polynomial that generates primes is less helpful than it might seem. Thanks from Loren Last edited by Maschke; March 18th, 2017 at 09:27 PM.
March 18th, 2017, 10:21 PM #4 Senior Member Joined: May 2015 From: Arlington, VA Posts: 289 Thanks: 24 Math Focus: Number theory Which equation generates more primes? Euler and Legendre are credited, respectively, with the following two polynomials which "generate" primes N^2+N+41=p(N) N^2-N+41=p(N) where p is usually prime for a natural number N. Would anyone here like to speculate on which equation generates more primes? (Watch out for those Big numbers!)
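One way to settle the question empirically is to count the primes each polynomial produces over a range of N (a MATLAB sketch; note that N^2 - N + 41 evaluated at N+1 equals N^2 + N + 41 evaluated at N, so the two polynomials generate essentially the same values, shifted by one in N):

```
% Count primes generated by Euler's and Legendre's polynomials for N = 1..1000
N = (1:1000)';
eulerVals    = N.^2 + N + 41;
legendreVals = N.^2 - N + 41;
[sum(isprime(eulerVals)), sum(isprime(legendreVals))]
```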
https://en.academic.ru/dic.nsf/enwiki/766732/Hyperbolic_partial_differential_equation

Hyperbolic partial differential equation
In mathematics, a hyperbolic partial differential equation is usually a second-order partial differential equation (PDE) of the form
:$A u_{xx} + 2 B u_{xy} + C u_{yy} + D u_x + E u_y + F = 0$
with
:$B^2 - A C > 0.$
The one-dimensional wave equation:
:$\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = 0$
is an example of a hyperbolic equation. The two-dimensional and three-dimensional wave equations also fall into the category of hyperbolic PDEs.
This type of second-order hyperbolic partial differential equation may be transformed to a hyperbolic system of first-order differential equations.
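For example, setting $v = u_t$ and $w = c\, u_x$ converts the one-dimensional wave equation into the first-order system
:$\frac{\partial}{\partial t}\begin{pmatrix} v \\ w \end{pmatrix} - \begin{pmatrix} 0 & c \\ c & 0 \end{pmatrix}\frac{\partial}{\partial x}\begin{pmatrix} v \\ w \end{pmatrix} = 0,$
whose coefficient matrix has the real, distinct eigenvalues $\pm c$; the resulting system is therefore strictly hyperbolic.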
Hyperbolic system of partial differential equations
Consider the following system of $s$ first-order partial differential equations for $s$ unknown functions $\vec u = (u_1, \ldots, u_s)$, $\vec u = \vec u(\vec x, t)$, where $\vec x \in \mathbb{R}^d$:
:$(*) \quad \frac{\partial \vec u}{\partial t} + \sum_{j=1}^d \frac{\partial}{\partial x_j} \vec{f^j}(\vec u) = 0,$
where $\vec{f^j} \in C^1(\mathbb{R}^s, \mathbb{R}^s)$, $j = 1, \ldots, d$, are once continuously differentiable functions, nonlinear in general.
Now define for each $\vec{f^j}$ an $s \times s$ Jacobian matrix
:$A^j := \left(\frac{\partial f^j_i}{\partial u_k}\right)_{i,k=1,\ldots,s}, \quad j = 1, \ldots, d.$
We say that the system $(*)$ is hyperbolic if for all $\alpha_1, \ldots, \alpha_d \in \mathbb{R}$ the matrix $A := \alpha_1 A^1 + \cdots + \alpha_d A^d$ has only real eigenvalues and is diagonalizable.
If the matrix $A$ has distinct real eigenvalues, it follows that it is diagonalizable. In this case the system $(*)$ is called strictly hyperbolic.
Hyperbolic system and conservation laws
There is a connection between a hyperbolic system and a conservation law. Consider a hyperbolic system of one partial differential equation for one unknown function $u = u(\vec x, t)$. Then the system $(*)$ has the form
:$(**) \quad \frac{\partial u}{\partial t} + \sum_{j=1}^d \frac{\partial}{\partial x_j} f^j(u) = 0.$
Now $u$ can be some quantity with a flux $\vec f = (f^1, \ldots, f^d)$. To show that this quantity is conserved, integrate $(**)$ over a domain $\Omega$:
:$\int_{\Omega} \frac{\partial u}{\partial t} \, d\Omega + \int_{\Omega} \nabla \cdot \vec f(u) \, d\Omega = 0.$
If $u$ and $\vec f$ are sufficiently smooth functions, we can use the divergence theorem and change the order of integration and $\partial / \partial t$ to get a conservation law for the quantity $u$ in the general form
:$\frac{d}{dt} \int_{\Omega} u \, d\Omega + \int_{\Gamma} \vec f(u) \cdot \vec n \, d\Gamma = 0,$
which means that the time rate of change of $u$ in the domain $\Omega$ is equal to the net flux of $u$ through its boundary $\Gamma$. Since this is an equality, it can be concluded that $u$ is conserved within $\Omega$.
See also
* Elliptic partial differential equation
* Parabolic partial differential equation
* Hypoelliptic operator
External links
* [http://eqworld.ipmnet.ru/en/solutions/lpde/lpdetoc2.pdf Linear Hyperbolic Equations] at EqWorld: The World of Mathematical Equations.
* [http://eqworld.ipmnet.ru/en/solutions/npde/npde-toc2.pdf Nonlinear Hyperbolic Equations] at EqWorld: The World of Mathematical Equations.
Wikimedia Foundation. 2010.
http://www.ams.org/bookstore?fn=20&arg1=memoseries&ikey=MEMO-218-1025
General Relativistic Self-Similar Waves that Induce an Anomalous Acceleration into the Standard Model of Cosmology
Joel Smoller, University of Michigan, Ann Arbor, MI, and Blake Temple, University of California, Davis, CA
Memoirs of the American Mathematical Society
2012; 69 pp; softcover
Volume: 218
ISBN-10: 0-8218-5358-9
ISBN-13: 978-0-8218-5358-0
List Price: US$58
Individual Members: US$34.80
Institutional Members: US$46.40
Order Code: MEMO/218/1025
The authors prove that the Einstein equations for a spherically symmetric spacetime in Standard Schwarzschild Coordinates (SSC) close to form a system of three ordinary differential equations for a family of self-similar expansion waves, and the critical ($$k=0$$) Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology is embedded as a single point in this family. Removing a scaling law and imposing regularity at the center, they prove that the family reduces to an implicitly defined one-parameter family of distinct spacetimes determined by the value of a new acceleration parameter $$a$$, such that $$a=1$$ corresponds to the Standard Model.
The authors prove that all of the self-similar spacetimes in the family are distinct from the non-critical $$k\neq0$$ Friedmann spacetimes, thereby characterizing the critical $$k=0$$ Friedmann universe as the unique spacetime lying at the intersection of these two one-parameter families. They then present a mathematically rigorous analysis of solutions near the singular point at the center, deriving the expansion of solutions up to fourth order in the fractional distance to the Hubble Length. Finally, they use these rigorous estimates to calculate the exact leading order quadratic and cubic corrections to the redshift vs luminosity relation for an observer at the center.
• Self-similar coordinates for the $$k=0$$ FRW spacetime
• Canonical co-moving coordinates and comparison with the $$k\neq0$$ FRW spacetimes
• A foliation of the expanding wave spacetimes into flat spacelike hypersurfaces with modified scale factor $$R(t)=t^{a}$$
http://science.sciencemag.org/content/343/6171/580.1 | Ecology
Science 07 Feb 2014:
Vol. 343, Issue 6171, pp. 580
DOI: 10.1126/science.343.6171.580-a
Understanding the balance between climatic changes and weather-driven mortality requires data on both long-term climate trends and the toll taken by extreme weather. Boersma and Rebstock looked at the cause of every recorded chick mortality in an Argentinian colony of Magellanic penguins, over a nearly 30-year period, and compared these with changes in temperature and precipitation over the same time. They found that the majority of deaths were due to predation and starvation, common causes of mortality in juvenile animals. However, in a few unusual years, where extreme storms occurred during the critical period after the young are protected by the brood pouch but before they develop protective plumage, large numbers of chicks were killed by weather. Although looking at the rarity of these events one might presume that weather extremes have little effect, the number of animals killed in the storms left a persistent recruitment legacy. Rainstorms increased in frequency over the study period, and the authors suggest that this, as well as the synchronization between rainstorms and chick vulnerable periods, is likely to increase with climate change. Further, such extreme events will affect other species in the region, which have long existed under more predictable weather regimes.
PLOS One 10.1371/journal.pone.0085602 (2014).
http://link.springer.com/article/10.1007%2FJHEP09%282011%29049 | Journal of High Energy Physics
, 2011:49
# Discriminating top-antitop resonances using azimuthal decay correlations
Article
DOI: 10.1007/JHEP09(2011)049
Baumgart, M. & Tweedie, B. J. High Energ. Phys. (2011) 2011: 49. doi:10.1007/JHEP09(2011)049
## Abstract
Top-antitop pairs produced in the decay of a new heavy resonance will exhibit spin correlations that contain valuable coupling information. When the tops decay, these correlations imprint themselves on the angular patterns of the final quarks and leptons. While many approaches to the measurement of top spin correlations are known, the most common ones require detailed kinematic reconstructions and are insensitive to some important spin interference effects. In particular, spin-1 resonances with mostly-vector or mostly-axial couplings to top cannot be easily discriminated from one another without appealing to mass-suppressed effects or to more model-dependent interference with continuum Standard Model production. Here, we propose to probe the structure of a resonance's couplings to tops by measuring the azimuthal angles of the tops' decay products about the production axis. These angles exhibit modulations which are typically O(0.1-1), and which by themselves allow for discrimination of spin-0 from higher spins, measurement of the CP-phase for spin-0, and measurement of the vector/axial composition for spins 1 and 2. For relativistic tops, the azimuthal decay angles can be well-approximated without detailed knowledge of the tops' velocities, and appear to be robust against imperfect energy measurements and neutrino reconstructions. We illustrate this point in the highly challenging dileptonic decay mode, which also exhibits the largest modulations. We comment on the relevance of these observables for testing axigluon-like models that explain the top quark A_FB anomaly at the Tevatron, through direct production at the LHC.
### Keywords
Beyond Standard Model; Heavy Quark Physics
http://physics.stackexchange.com/users/15873/thanos?tab=questions&sort=views | # Thanos
reputation 312; website users.ntua.gr/ge05032; Athens, Greece; member for 3 years
I have just (October 2012) finished my undergraduate studies in Applied Physics (D.Sc., D.Eng.) at the National Technical University of Athens (NTUA). My thesis was about the MicroMEGAS detector, which I constructed, studied, simulated and analysed data from. I have enrolled (October 2012) in an M.Sc. program called Physics and Technological Applications (NTUA) and I hope to continue with a Ph.D.
My working experience includes 7 years of student tutoring (university, high school, primary school), 1 year of developing laser systems on behalf of the NTUA Laser Group, 1 year of detector studies on behalf of the NTUA High Energy Physics Group (HEP-NTUA) and a month working at CERN on behalf of the RD51 collaboration, in detector R&D.
My future plans include Ph.D. studies in the USA or Europe in the field of experimental physics (I should decide on a particular field), marrying my girlfriend and working in research.
Other interests include culinary arts, pastry, photography, basketball, cinema, completing tutorials in various fields and computing.
# 23 Questions

### How does a Fresnel rhomb work (half and quarter wave plate)?
### Constant Pressure lines in evaporation diagram
### Fourier Transform of ribbon's beam Electric Field
https://winvector.github.io/wrapr/reference/let.html | let implements a mapping from desired names (names used directly in the expr code) to names used in the data. Mnemonic: "expr code symbols are on the left, external data and function argument names are on the right."
let(alias, expr, ..., envir = parent.frame(), subsMethod = "langsubs",
strict = TRUE, eval = TRUE, debugPrint = FALSE)
## Arguments
alias: mapping from free names in expr to target names to use (mapping must have both unique names and unique values).
expr: block to prepare for execution.
...: force later arguments to be bound by name.
envir: environment to work in.
subsMethod: character; substitution method, one of 'langsubs' (preferred), 'subsubs', or 'stringsubs'.
strict: logical; if TRUE, names and values must be valid un-quoted names, and not dot.
eval: logical; if TRUE, execute the re-mapped expression (else return it).
debugPrint: logical; if TRUE, print debugging information when in stringsubs mode.
## Value
result of expr executed in calling environment (or expression if eval==FALSE).
## Details
Please see the wrapr vignette for some discussion of let and crossing function call boundaries: vignette('wrapr','wrapr'). For formal documentation please see https://github.com/WinVector/wrapr/blob/master/extras/wrapr_let.pdf. Transformation is performed by substitution, so please be wary of unintended name collisions or aliasing.
Something like let is only useful to get control of a function that is parameterized (in the sense it take column names) but non-standard (in that it takes column names from non-standard evaluation argument name capture, and not as simple variables or parameters). So wrapr:let is not useful for non-parameterized functions (functions that work only over values such as base::sum), and not useful for functions take parameters in straightforward way (such as base::merge's "by" argument). dplyr::mutate is an example where we can use a let helper. dplyr::mutate is parameterized (in the sense it can work over user supplied columns and expressions), but column names are captured through non-standard evaluation (and it rapidly becomes unwieldy to use complex formulas with the standard evaluation equivalent dplyr::mutate_). alias can not include the symbol ".".
The intent, from the user perspective, is to have (if a <- 1; b <- 2) let(c(z = 'a'), z+b) behave a lot like eval(substitute(z+b, c(z=quote(a)))).
let deliberately checks that it is mapping only to legal R names; this is to discourage the use of let to map names to arbitrary values, as that is more properly left to R's environment systems. let is intended to transform "tame" variable and column names to "tame" variable and column names. Substitution outcomes that are not valid simple R variable names (produced without use of back-ticks) are forbidden. It is suggested that substitution targets be written ALL_CAPS style to make them stand out.
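For readers coming from other languages, here is a toy Python sketch of the same substitute-then-evaluate idea (my analogy, not wrapr's actual implementation): rename the symbols in a parsed expression, then evaluate the result.

```python
import ast

def let(alias, expr_src, env):
    # Rename identifiers in expr_src according to alias, then evaluate:
    # a toy analogue of wrapr::let's language-substitution ('langsubs') idea.
    tree = ast.parse(expr_src, mode="eval")
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in alias:
            node.id = alias[node.id]
    return eval(compile(tree, "<let>", "eval"), env)

env = {"a": 1, "b": 2}
print(let({"z": "a"}, "z + b", env))   # evaluates a + b -> 3
```

As in the R version, the substitution happens on the language object itself, not on strings, so only genuine identifiers are renamed.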
## Examples
d <- data.frame(Sepal_Length=c(5.8,5.7),
Sepal_Width=c(4.0,4.4),
Species='setosa',
rank=c(1,2))
RANKCOLUMN <- NULL # optional, make sure macro target does not look like unbound variable.
GROUPCOLUMN <- NULL # optional, make sure macro target does not look like unbound variable.
mapping = c(RANKCOLUMN= 'rank', GROUPCOLUMN= 'Species')
let(alias = mapping,
expr = {
# Notice code here can be written in terms of known or concrete
# names "RANKCOLUMN" and "GROUPCOLUMN", but executes as if we
# had written mapping specified columns "rank" and "Species".
# restart ranks at zero.
dres <- d
dres$RANKCOLUMN <- dres$RANKCOLUMN - 1  # notice, using $ not [[ ]]
# confirm set of groups.
groups <- unique(d$GROUPCOLUMN)
},
debugPrint = TRUE
)
#> $RANKCOLUMN
#> [1] "rank"
#>
#> $GROUPCOLUMN
#> [1] "Species"
#>
#> {
#>   dres <- d
#>   dres$rank <- dres$rank - 1
#>   groups <- unique(d$Species)
#> }
print(groups)
#> [1] setosa
#> Levels: setosa
print(length(groups))
#> [1] 1
print(dres)
#>   Sepal_Length Sepal_Width Species rank
#> 1          5.8         4.0  setosa    0
#> 2          5.7         4.4  setosa    1
https://www.physicsforums.com/threads/series-with-hyperbolic-and-trigonometric-functions.198948/ | # Series with Hyperbolic and Trigonometric functions
1. Nov 18, 2007
### azatkgz
1. The problem statement, all variables and given/known data
Determine whether the series converges or diverges.
$$\sum_{n=3}^{\infty}\ln \left(\frac{\cosh \frac{\pi}{n}}{\cos \frac{\pi}{n}}\right)$$
3. The attempt at a solution
$$\sum_{n=3}^{\infty}\ln \left(\frac{1+\frac{\pi^2}{2n^2}+O(\frac{1}{n^4})}{1-\frac{\pi^2}{2n^2}+O(\frac{1}{n^4})}\right)$$
$$=\sum_{n=3}^{\infty}\ln \left(\left(1+\frac{\pi^2}{2n^2}+O(\frac{1}{n^4})\right)\left(1+\frac{\pi^2}{2n^2}+O(\frac{1}{n^4})\right)\right)=\sum_{n=3}^{\infty}\ln \left(1+\frac{\pi^2}{n^2}+O(\frac{1}{n^4})\right)$$
$$=\sum_{n=3}^{\infty}\left(\frac{\pi^2}{n^2}+O(\frac{1}{n^4})\right)$$
series converges
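For reference, the manipulations above rest on these standard expansions (a sketch of the omitted steps, with $$x = \pi/n$$):

$$\cosh x = 1 + \frac{x^2}{2} + O(x^4),\qquad \cos x = 1 - \frac{x^2}{2} + O(x^4),$$

$$\frac{1}{1-u} = 1 + u + O(u^2),\qquad \ln(1+u) = u + O(u^2),$$

so the general term is $$\frac{\pi^2}{n^2} + O\left(\frac{1}{n^4}\right)$$, and convergence follows by comparison with $$\sum \frac{1}{n^2}$$.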
2. Nov 19, 2007
### Gib Z
Sorry, it's not immediately obvious to me how you got from your first line of working to your second.
https://research-repository.uwa.edu.au/en/publications/broadening-frequency-range-of-a-ferromagnetic-axion-haloscope-wit | # Broadening frequency range of a ferromagnetic axion haloscope with strongly coupled cavity–magnon polaritons
Research output: Contribution to journal › Article › peer-review
37 Citations (Scopus)
## Abstract
With the axion being a prime candidate for dark matter, there has been some recent interest in direct detection through a so called 'Ferromagnetic haloscope.' Such devices exploit the coupling between axions and electrons in the form of collective spin excitations of magnetic materials with the readout through a microwave cavity. Here, we present a new, general, theoretical treatment of such experiments in a Hamiltonian formulation for strongly coupled magnons and photons, which hybridise as cavity–magnon polaritons. Such strongly coupled systems have an extended measurable dispersive regime. Thus, we extend the analysis and operation of such experiments into the dispersive regime, which allows any ferromagnetic haloscope to achieve improved bandwidth with respect to the axion mass parameter space. This experiment was implemented in a cryogenic setup, and initial search results are presented setting laboratory limits on the axion–electron coupling strength of g_aee > 3.7 × 10^-9 in the range 33.79 μeV < m_a < 33.94 μeV with 95% confidence. The potential bandwidth of the Ferromagnetic haloscope was calculated to be in two bands, the first of about 1 GHz around 8.24 GHz (or a 4.1 μeV mass range around 34.1 μeV) and the second of about 1.6 GHz around 10 GHz (a 6.6 μeV mass range around 41.4 μeV). Frequency tuning may also be easily achieved via an external magnetic field, which changes the ferromagnetic resonant frequency with respect to the cavity frequency. The requirements necessary for future improvements to reach the DFSZ axion model band are discussed in the paper.
Original language: English
Article number: 100306
Journal: Physics of the Dark Universe
Volume: 25
DOI: https://doi.org/10.1016/j.dark.2019.100306
Published: 1 Sep 2019
http://physics.stackexchange.com/questions/4789/what-is-the-usefulness-of-the-wigner-eckart-theorem/4811 | # What is the usefulness of the Wigner-Eckart theorem?
I am doing some self-study in between undergrad and grad school and I came across the beastly Wigner-Eckart theorem in Sakurai's Modern Quantum Mechanics. I was wondering if someone could tell me why it is useful and perhaps just help me understand a bit more about it. I have had two years of undergrad mechanics and I think I have a reasonably firm grasp of the earlier material out of Sakurai, so don't be afraid to get a little technical.
-
Great question :) My understanding of the W-E theorem is pretty flaky too so I'll be very interested to see what people come up with. – David Z Feb 8 '11 at 4:11
I will not get into theoretical details -- Luboš and Marek did that better than I'm able to.
Let me give an example instead: suppose that we need to calculate this integral:
$\int d\Omega (Y_{3m_1})^*Y_{2m_2}Y_{1m_3}$
Here $Y_{lm}$ -- are spherical harmonics and we integrate over the sphere $d\Omega=\sin\theta d\theta d\phi$.
This kind of integrals appear over and over in, say, spectroscopy problems. Let us calculate it for $m_1=m_2=m_3=0$:
$\int d\Omega (Y_{30})^*Y_{20}Y_{10} = \frac{\sqrt{105}}{32\sqrt{\pi^3}}\int d\Omega \cos\theta\,(1-3\cos^2\theta)(3\cos\theta-5\cos^3\theta)=$
$= \frac{\sqrt{105}}{32\sqrt{\pi^3}}\cdot 2\pi \int d\theta\,\left(3\cos^2\theta\sin\theta-14\cos^4\theta\sin\theta+15\cos^6\theta\sin\theta\right)=\frac{3}{2}\sqrt{\frac{3}{35\pi}}$
Hard work, huh? The problem is that we usually need to evaluate this for all values of $m_i$. That is 7*5*3 = 105 integrals. So instead of doing all of them we got to exploit their symmetry. And that's exactly where the Wigner-Eckart theorem is useful:
$\int d\Omega (Y_{3m_1})^*Y_{2m_2}Y_{1m_3} = \langle l=3,m_1| Y_{2m_2} | l=1,m_3\rangle = C_{m_1m_2m_3}^{3\,2\,1}(3||Y_2||1)$

$C_{m_1m_2m_3}^{3\,2\,1} = \langle 2\,m_2;\,1\,m_3\,|\,3\,m_1\rangle$ -- is the Clebsch-Gordan coefficient (nonzero only when $m_1 = m_2 + m_3$)

$(3||Y_2||1)$ -- is the reduced matrix element, which we can derive from our expression for $m_1=m_2=m_3=0$ using $C_{0\,0\,0}^{3\,2\,1} = \langle 2\,0;1\,0|3\,0\rangle = \sqrt{3/5}$:

$\frac{3}{2}\sqrt{\frac{3}{35\pi}} = C_{0\,0\,0}^{3\,2\,1}(3||Y_2||1)\quad \Rightarrow \quad (3||Y_2||1)=\frac{3}{2}\sqrt{\frac{1}{7\pi}}$

So the final answer for our integral is:

$\int d\Omega(Y_{3m_1})^*Y_{2m_2}Y_{1m_3}=\frac{3}{2}\sqrt{\frac{1}{7\pi}}\;C_{m_1m_2m_3}^{3\,2\,1}$
It is reduced to calculation of the Clebsch-Gordan coefficient and there are a lot of, tables, programs, reduction and summation formulae to work with them.
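As a quick numerical cross-check (my addition, using sympy, whose wigner and cg modules implement exactly these quantities):

```python
from sympy.physics.wigner import gaunt
from sympy.physics.quantum.cg import CG

# gaunt(l1, l2, l3, m1, m2, m3) is the integral of Y_{l1 m1} Y_{l2 m2} Y_{l3 m3}
# over the sphere; with every m = 0 the complex conjugate on Y_30 is harmless.
integral = gaunt(3, 2, 1, 0, 0, 0)
print(float(integral))   # ~0.24777, which is (3/2)*sqrt(3/(35*pi))

# Clebsch-Gordan coefficient <2 0; 1 0 | 3 0>
c = CG(2, 0, 1, 0, 3, 0).doit()
print(float(c))          # ~0.77460, i.e. sqrt(3/5)

# their ratio is the reduced matrix element (3||Y_2||1) in this convention
reduced = integral / c
print(float(reduced))    # ~0.31987, i.e. (3/2)*sqrt(1/(7*pi))
```

The same two lines replace all 105 hand integrations: loop over the allowed $m_i$ and multiply the reduced element by the tabulated coefficient.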
-
+1, this is a nice example. – Marek Feb 8 '11 at 18:47
This is a really nice, practical example. – Cogitator Feb 9 '11 at 3:22
I think I have to give most helpful to this answer, because although Marek gave one that obviously had more theoretical detail, this one helped me to better understand where the W-E theorem falls within what I already know in quantum mechanics. – Cogitator Feb 9 '11 at 3:31
@Kostya, $7\times 5\times 3 = 105$, is because of $2(l+1)$ values. – lavkush Jul 1 at 14:16
@lavkush, yep. $2\ell + 1$, to be more precise... – Kostya Jul 1 at 21:08
First, the W-E theorem is just a simple (bear with me, I know the theorem can appear formidable if not explained properly) statement about the decomposition of a tensor product of representations into its irreducible components.

Suppose we have a group $G$ and a tensor operator $A_r$ transforming under a(n irreducible) representation $\Gamma^1$ of it ($r$ counting the number of components of the operator, e.g. 3 for the usual angular momentum in 3 dimensions). Also suppose that you have an additional (irreducible) representation $\Gamma^2$ with basis $\left\{ \left| \psi_n \right>\right\}$. It is easy to check that the vectors $\Psi_{rn} \equiv A_r \left|\psi_n\right>$ then transform under the tensor product $\Gamma^1 \otimes \Gamma^2$.
In the following we use that the group $G$ is compact in order to be able to decompose its representations into irreducible components. This can be weakened but in general representations of non-compact groups don't have to be reducible to their irreducible components; there's pretty subtle mathematics behind representation theory of non-compact groups; even seemingly simple ones as $SL(2, \mathbb{R})$.
So, we'll decompose the representation as $$\Gamma^1 \otimes \Gamma^2 = \bigoplus_{\alpha} \Gamma^{\alpha}$$ where $\alpha$ runs over irreducible representations of $G$ (possibly repeated). This amounts to finding a more suitable basis for vectors $\Psi_{rn}$. We will write $\Phi_{\alpha m}$ for that basis (with $m$ indexing the components of representation $\Gamma^{\alpha}$). Then we can write $$\Psi_{rn} = \sum_{\alpha, m} U^{\alpha m}_{rn} \Phi_{\alpha m}$$.
Now, back to the problem: we are interested in computing some element such as $\left<\omega_k \right| A_r \left | \psi_n\right>$ with $\omega_k$ transforming in some $\Gamma^3$ representation. Thanks to Schur orthogonality relations and the fact that we decomposed $A_r \left | \psi_n\right>$ into its irreducible components we can see that for this element to be non-zero, there has to be a $\Gamma^3$ representation in the $\Gamma^{\alpha}$ decomposition. That already saves us a lot of computation. If there is no $\Gamma^3$ present there then we're finished, all those elements will be zero. And if it is present, we only have to carry out calculations corresponding to just the $\Gamma^3$ part of the decomposition (which will usually be a small part of the total).
Okay, I think the above might have been a little confusing, so let's try an example. The canonical one is with $SO(3)$ of course. Suppose we want to compute something like $\left<j m\right| \mathbf X \left|j' m'\right>$ with $\mathbf X$ being the position operator (which transforms under the $SO(3)$ vector irrep). So we are interested in the tensor product ${\mathbf 3} \otimes (\mathbf{2j +1})$, which can be shown to be equal to $(\mathbf{2j-1}) \oplus (\mathbf {2j+1}) \oplus (\mathbf {2j+3})$ (supposing $j$ is high enough for simplicity). So $j'$ has to be equal to one of $j-1, j, j+1$, and moreover $m = m_X + m'$ will need to hold (if $\mathbf X$ is written in a diagonal basis of the vector representation, one can decompose it into operators having eigenvalues $m_X = -1, 0, 1$); this is again a consequence of orthogonality relations.
Anyway, the most important point (at least for me) is about the decomposition of the tensor product into irreducible components. Supposing you can carry out this decomposition (which can be pretty hard at times), the calculation of matrix elements will simplify greatly and you can immediately decide stuff such as whether some molecule with a given symmetry will radiate in IR (which amounts to computing matrix elements of a dipole operator between eigenstates of that molecule). I am sure you can imagine lots of other similar stuff you may compute yourself. Applications of this theorem are pretty much limitless.
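To make the selection-rule bookkeeping concrete, here is a small sympy sketch; the specific values $j'=2$, $m'=1$, $q=-1$ are my own illustrative choices, not from the answer. Since $\langle j\,m|X_q|j'\,m'\rangle \propto \langle j'\,m';1\,q|j\,m\rangle$, scanning the Clebsch-Gordan coefficients shows which final states survive:

```python
from sympy.physics.quantum.cg import CG

# ket |j'=2, m'=1> and spherical operator component X_q with q = -1
jp, mp, q = 2, 1, -1

# By Wigner-Eckart, <j m| X_q |j' m'> is proportional to <j' m'; 1 q | j m>.
# Scan which final (j, m) give a nonvanishing Clebsch-Gordan coefficient.
allowed = []
for j in range(jp - 1, jp + 2):        # triangle rule: j in {j'-1, j', j'+1}
    for m in range(-j, j + 1):
        if CG(jp, mp, 1, q, j, m).doit() != 0:
            allowed.append((j, m))

print(allowed)   # every surviving m equals q + m' = 0
```

Out of the whole $(2j+1)$-dimensional grid of candidate final states, only a handful of coefficients are nonzero, which is exactly the saving the theorem promises.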
-
Excellent explanation @Marek. +1 – user346 Feb 8 '11 at 17:44
@space_cadet: thank you :) – Marek Feb 8 '11 at 18:47
I know what a group is and what an irreducible representation is. Though I am a little shaky on the rest of the group theory and the notation (the first part), this answer is helpful, particularly the last three paragraphs. – Cogitator Feb 9 '11 at 3:18
@Cogitator: glad to hear that. Of course, my answer is nowhere near complete (I am sure books can be written about W-E and its applications) and in particular it misses the usual formulation found in QM (which was nicely addressed by Kostya). I provided this answer because for me that classical formulation didn't work at all and I only understood W-E after learning some group theory :) – Marek Feb 9 '11 at 7:35
The Wigner-Eckart Theorem
http://en.wikipedia.org/wiki/Wigner_Eckart_theorem
is a formula that tells us about all "simple constraints" that group theory - the mathematical incarnation of the wisdom about symmetries, especially in the $SO(3)\approx SU(2)$ case (rotations in a three-dimensional space) - implies about matrix elements of tensor operators - those that transform as some representation of the same symmetry.
The dependence on the indices labeling basis vectors of the representations - $m_1, m_2, m_3$ in the $SU(2)$ case - is totally determined. Instead of thinking that some matrix elements depend on many variables, physicists may realize that the symmetry guarantees that the matrix elements only depend on a few labels labeling the whole "multiplets" of the states and operators rather than on all the labels identifying the individual components. It's always critically important to know how much freedom or how much uncertainty there is about some observables - for example, experimenters don't want to repeat their experiment $(2J+1)^3$ times without a good reason - and we would get a totally wrong idea without this theorem.
To see that the theorem is used all the time, check e.g. these 5700 papers
many of which are highly cited ones. The topics of the papers include optics, nanotubes, X-rays, spectroscopy, condensed matter physics, mathematical physics involving integrable systems, quantum chemistry, nuclear physics, and virtually all other branches of physics that depend on quantum mechanics. In many other cases, the theorem is being used without mentioning its name, and its generalizations are being used all the time in advanced theoretical physics, e.g. in the contexts with groups that are much more complicated than $SO(3)\approx SU(2)$.
It's interesting to note that in some sense, quantum mechanics allows the symmetry to impose as many constraints as classical physics. In classical physics, we could start with an initial state labeled by numbers $I_i$, apply some operations depending on parameters $O_j$, and we would obtain a final state described by parameters $F_k$. Classical physics would tell us Yes/No - whether we can get from $I_i$ via $O_j$ to $F_k$: that's the counterpart of the quantum probability amplitude.
The rotational $SO(3)$ symmetry - which has 3 parameters (3 independent rotations - or the latitude; longitude of the axis, and the angle) would only tell us that if we rotate all objects $I_i,O_j,F_k$ by the same rotation, we obtain a valid proposition again (Yes goes to Yes, No goes to No). So the dependence on 3 parameters - corresponding to the 3 rotations - is eliminated. In quantum mechanics, we also eliminate the dependence on 3 parameters - in this case $m_1,m_2,m_3$, the projections $j_z$ for the two state vectors and for the operator sandwiched in between them. In some proper counting, this is true for any $d$-dimensional group of symmetries, I think.
-
And if you are interested in "where" it might come useful: Some of the selection rules for optical transitions can be obtained from it, and I faintly recall that it helps rewriting the Hamiltonian for Spin-Orbit coupling into a much more convenient form.
-
1) Remember that a tensor operator is a collection (i.e. a set of $2j+1$ operators) which transform amongst themselves under group transformations (let's stick to SU(2) or SO(3)) or under commutation with the generators of su(2) or so(3) (if we use infinitesimal transformations).
2) The Wigner-Eckart theorem ultimately states that some operators (the components of the tensor operator) can be written in terms of the spherical coordinates as a common function of the radial variable r multiplied by a spherical harmonic specified by the component.
3) The simplest examples are the observables x, y and z, which can be written as r times a spherical harmonic $Y_{1m}$, e.g. $z = r\cos(\theta) \sim r\, Y_{10}$.
The common function here is r itself. For more complicated operators, more complicated functions are encountered, but the central point is that the components of a tensor operator of angular momentum L are always of the form $f(r)\, Y_{LM}$, having a common $f(r)$ and different components M.
4) Because the function f(r) is common to all the components, the radial integral between various basis states can be evaluated using any component of the tensor, i.e. any component M. This radial integral is basically the reduced matrix element (or at least very closely related to it, as some definitions have extra factors in the numerator or denominator). The angular part is an integration of spherical harmonics, so it is proportional to a Clebsch-Gordan coefficient involving the angular quantum numbers $(L,M)$ of the component of the tensor and the angular quantum numbers $(L_i, M_i)$ and $(L_f,M_f)$ of the initial and final states, respectively. The selection rules enter because $M_i+M=M_f$ and $L_f$ must be in the decomposition of $L\otimes L_i$ (i.e. must be in the usual range $\vert L_i-L \vert \le L_f \le L_i+L$).
5) The use is very simple:
a) Ratios of matrix elements of tensor operators will not depend on the reduced matrix elements, so for instance ratios of decay rates (or cross sections) of components of a tensor operator can be compared to experiment without any explicit knowledge of the function f(r) or the (often complicated) radial integral needed to evaluate the reduced matrix element: the integration over the radial variable is the same for all components, so it drops out of any ratio involving those components.
b) If the reduced matrix element is known, then the matrix element of any component of the tensor can be calculated, as the integration over the angles (the $Y_{LM}$ part) is easily done analytically.
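The selection rules and ratio arguments in points 4) and 5) can be checked directly with a computer-algebra system. Here is a small illustrative sketch using SymPy's Clebsch-Gordan support (the `CG` class is SymPy's; the particular quantum numbers are just an example):

```python
from sympy.physics.quantum.cg import CG

# CG(j1, m1, j2, m2, j3, m3) represents <j1 m1; j2 m2 | j3 m3>.
# Selection rule: the coefficient vanishes unless m1 + m2 = m3.
assert CG(1, 0, 1, 0, 2, 1).doit() == 0

# A non-vanishing coefficient: <1,1; 1,0 | 2,1> = 1/sqrt(2)
c = CG(1, 1, 1, 0, 2, 1).doit()
print(c)

# Wigner-Eckart: the ratio of two matrix elements of components of a
# rank-1 tensor between fixed multiplets reduces to a ratio of CG
# coefficients; the common radial (reduced matrix element) factor cancels.
ratio = CG(1, 1, 1, 0, 2, 1).doit() / CG(1, 0, 1, 1, 2, 1).doit()
print(ratio)
```

Here the ratio is 1 because the two coefficients coupling to $|2,1\rangle$ are equal by symmetry; with other components the ratio picks out the nontrivial CG dependence.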
-
https://ncertmcq.com/rd-sharma-class-10-solutions-chapter-1-real-numbers-ex-1-6/ | ## RD Sharma Class 10 Solutions Chapter 1 Real Numbers Ex 1.6
These Solutions are part of RD Sharma Class 10 Solutions. Here we have given RD Sharma Class 10 Solutions Chapter 1 Real Numbers Ex 1.6
Question 1.
Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion.
Solution:
Since the denominator is of the form $$2^m \times 5^n$$, the rational number has a terminating decimal expansion.
Question 2.
Write down the decimal expansions of the following rational numbers by writing their denominators in the form $$2^m \times 5^n$$, where m and n are non-negative integers:
Solution:
Question 3.
Write the denominator of the rational number $$\frac { 257 }{ 5000 }$$ in the form $$2^m \times 5^n$$, where m, n are non-negative integers. Hence, write the decimal expansion, without actual division.
Solution:
Question 4.
What can you say about the prime factorisations of the denominators of the following rationals :
Solution:
(i) 43.123456789
This decimal fraction is terminating. Its denominator can be factorised in the form $$2^m \times 5^n$$, where m and n are non-negative integers.
(ii) This decimal fraction is a non-terminating repeating decimal.
The denominator of this fraction will not be of the form $$2^m \times 5^n$$, where m and n are non-negative integers.
(iii) This decimal fraction is terminating.
Its denominator can be factorised in the form $$2^m \times 5^n$$, where m and n are non-negative integers.
(iv) 0.120120012000120000…
This decimal fraction is non-terminating and non-recurring.
Its denominator cannot be factorised in the form $$2^m \times 5^n$$, where m and n are non-negative integers.
Question 5.
A rational number in its decimal expansion is 327.7081. What can you say about the prime factors of q, when this number is expressed in the form $$\frac { p }{ q }$$ ? Give reasons. [NCERT Exemplar]
Solution:
327.7081 is a terminating decimal number. So it represents a rational number, and its denominator must be of the form $$2^m \times 5^n$$.
Hence, the prime factors of q are 2 and 5.
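This reasoning is easy to verify mechanically: write the terminating decimal as p/q in lowest terms and factorise q. A small Python sketch (the helper name is ours, not from the textbook):

```python
from fractions import Fraction

def prime_factors_of_denominator(x: str):
    """Reduce the decimal string to p/q in lowest terms and factor q."""
    q = Fraction(x).denominator
    factors = {}
    d = 2
    while d * d <= q:
        while q % d == 0:
            factors[d] = factors.get(d, 0) + 1
            q //= d
        d += 1
    if q > 1:
        factors[q] = factors.get(q, 0) + 1
    return factors

# 327.7081 = 3277081/10000 in lowest terms, and 10000 = 2^4 * 5^4
print(prime_factors_of_denominator("327.7081"))  # {2: 4, 5: 4}
```

Any terminating decimal fed to this helper will show only 2 and 5 among the prime factors of its reduced denominator.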
https://en.wikibooks.org/wiki/Haskell/Libraries/IO | IO (Solutions)
## The IO Library
Here, we'll explore the most commonly used elements of the `System.IO` module.
```data IOMode = ReadMode | WriteMode | AppendMode | ReadWriteMode
openFile :: FilePath -> IOMode -> IO Handle
hClose :: Handle -> IO ()
hIsEOF :: Handle -> IO Bool
hGetChar :: Handle -> IO Char
hGetLine :: Handle -> IO String
hGetContents :: Handle -> IO String
getChar :: IO Char
getLine :: IO String
getContents :: IO String
hPutChar :: Handle -> Char -> IO ()
hPutStr :: Handle -> String -> IO ()
hPutStrLn :: Handle -> String -> IO ()
putChar :: Char -> IO ()
putStr :: String -> IO ()
putStrLn :: String -> IO ()
readFile :: FilePath -> IO String
writeFile :: FilePath -> String -> IO ()
```
Note
`FilePath` is a type synonym for `String`. So, for instance, the `readFile` function takes a `String` (the file to read) and returns an action that, when run, produces the contents of that file. See the Type declarations chapter for more about type synonyms.
Most of the IO functions are self-explanatory. The `openFile` and `hClose` functions open and close a file, respectively. The `IOMode` argument determines the mode for opening the file. `hIsEOF` tests for end-of file. `hGetChar` and `hGetLine` read a character or line (respectively) from a file. `hGetContents` reads the entire file. The `getChar`, `getLine`, and `getContents` variants read from standard input. `hPutChar` prints a character to a file; `hPutStr` prints a string; and `hPutStrLn` prints a string with a newline character at the end. The variants without the `h` prefix work on standard output. The `readFile` and `writeFile` functions read and write an entire file without having to open it first.
## Bracket
The `bracket` function comes from the `Control.Exception` module. It helps perform actions safely.
```bracket :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
```
Consider a function that opens a file, writes a character to it, and then closes the file. When writing such a function, one needs to be careful to ensure that, if there were an error at some point, the file is still successfully closed. The `bracket` function makes this easy. It takes three arguments: The first is the action to perform at the beginning. The second is the action to perform at the end, regardless of whether there's an error or not. The third is the action to perform in the middle, which might result in an error. For instance, our character-writing function might look like:
```writeChar :: FilePath -> Char -> IO ()
writeChar fp c =
bracket
(openFile fp WriteMode)
hClose
(\h -> hPutChar h c)
```
This will open the file, write the character, and then close the file. However, if writing the character fails, `hClose` will still be executed, and the exception will be reraised afterwards. That way, you don't need to worry too much about catching the exceptions and about closing all of your handles.
We can write a simple program that allows a user to read and write files. The interface is admittedly poor, and it does not catch all errors (such as reading a non-existent file). Nevertheless, it should give a fairly complete example of how to use IO. Enter the following code into "FileRead.hs," and compile/run:
```module Main
where
import System.IO
import Control.Exception
main = doLoop
doLoop = do
putStrLn "Enter a command rFN wFN or q to quit:"
command <- getLine
case command of
'q':_ -> return ()
    'r':filename -> do putStrLn ("Reading " ++ filename)
                       doRead filename
                       doLoop
    'w':filename -> do putStrLn ("Writing " ++ filename)
                       doWrite filename
                       doLoop
    _ -> doLoop

doRead filename =
  bracket (openFile filename ReadMode) hClose
          (\h -> do contents <- hGetContents h
                    putStrLn "The first 100 chars:"
                    putStrLn (take 100 contents))
doWrite filename = do
putStrLn "Enter text to go into the file:"
contents <- getLine
bracket (openFile filename WriteMode) hClose
(\h -> hPutStrLn h contents)
```
What does this program do? First, it issues a short string of instructions and reads a command. It then performs a case switch on the command and checks first to see if the first character is a 'q'. If it is, it returns a value of unit type.
Note
The `return` function is a function that takes a value of type `a` and returns an action of type `IO a`. Thus, the type of `return ()` is `IO ()`.
If the first character of the command wasn't a 'q', the program checks to see if it was an 'r' followed by some string that is bound to the variable `filename`. It then tells you that it's reading the file, does the read, and runs `doLoop` again. The check for 'w' is nearly identical. Otherwise, it matches `_`, the wildcard character, and loops to `doLoop`.
The `doRead` function uses the `bracket` function to make sure there are no problems reading the file. It opens a file in `ReadMode`, reads its contents and prints the first 100 characters (the `take` function takes an integer $n$ and a list and returns the first $n$ elements of the list).
The `doWrite` function asks for some text, reads it from the keyboard, and then writes it to the specified file.
Note
Both `doRead` and `doWrite` could have been made simpler by using `readFile` and `writeFile`, but they were written in the extended fashion to show how the more complex functions are used.
The program has one major problem: it will die if you try to read a file that doesn't already exist or if you specify some bad filename like `*\bs^#_@`. You may think that the calls to `bracket` in `doRead` and `doWrite` should take care of this, but they don't. They only catch exceptions within the main body, not within the startup or shutdown functions (`openFile` and `hClose`, in these cases). To make this completely reliable, we would need a way to catch exceptions raised by `openFile`.
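One way to handle this (a sketch, not part of the original program; `safeDoRead` is our name) is to wrap the entire `bracket` call in `try` from `Control.Exception`, which converts any `IOException` raised anywhere inside it, including one thrown by `openFile` itself, into an `Either` value:

```haskell
import Control.Exception (bracket, try, IOException)
import System.IO

safeDoRead :: FilePath -> IO ()
safeDoRead filename = do
  -- 'try' catches exceptions from openFile as well as from the body
  result <- try (bracket (openFile filename ReadMode) hClose
                         (\h -> do contents <- hGetContents h
                                   putStrLn "The first 100 chars:"
                                   putStrLn (take 100 contents)))
            :: IO (Either IOException ())
  case result of
    Left err -> putStrLn ("Could not read " ++ filename ++ ": " ++ show err)
    Right () -> return ()
```

Substituting a function like this for the bare `bracket` in `doRead` (and similarly in `doWrite`) would let the program survive a bad filename instead of dying.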
Exercises
Write a variation of our program so that it first asks whether the user wants to read from a file, write to a file, or quit. If the user responds with "quit", the program should exit. If they respond with "read", the program should ask them for a file name and then print that file to the screen (if the file doesn't exist, the program may crash). If they respond with "write", it should ask them for a file name and then ask them for text to write to the file, with "." signaling completion. All but the "." should be written to the file.
For example, running this program might produce:
```Do you want to [read] a file, [write] a file, or [quit]?
Enter a file name to read:
foo
...contents of foo...
Do you want to [read] a file, [write] a file, or [quit]?
write
Enter a file name to write:
foo
Enter text (dot on a line by itself to end):
this is some
text for
foo
.
Do you want to [read] a file, [write] a file, or [quit]?
```
http://en.wikipedia.org/wiki/Random_binary_tree | # Random binary tree
In computer science and probability theory, a random binary tree refers to a binary tree selected at random from some probability distribution on binary trees. Two different distributions are commonly used: binary trees formed by inserting nodes one at a time according to a random permutation, and binary trees chosen from a uniform discrete distribution in which all distinct trees are equally likely. It is also possible to form other distributions, for instance by repeated splitting. Adding and removing nodes directly in a random binary tree will in general disrupt its random structure, but the treap and related randomized binary search tree data structures use the principle of binary trees formed from a random permutation in order to maintain a balanced binary search tree dynamically as nodes are inserted and deleted.
For random trees that are not necessarily binary, see random tree.
## Binary trees from random permutations
For any set of numbers (or, more generally, values from some total order), one may form a binary search tree in which each number is inserted in sequence as a leaf of the tree, without changing the structure of the previously inserted numbers. The position into which each number should be inserted is uniquely determined by a binary search in the tree formed by the previous numbers. For instance, if the three numbers (1,3,2) are inserted into a tree in that sequence, the number 1 will sit at the root of the tree, the number 3 will be placed as its right child, and the number 2 as the left child of the number 3. There are six different permutations of the numbers (1,2,3), but only five trees may be constructed from them. That is because the permutations (2,1,3) and (2,3,1) form the same tree.
### Expected depth of a node
For any fixed choice of a value x in a given set of n numbers, if one randomly permutes the numbers and forms a binary tree from them as described above, the expected value of the length of the path from the root of the tree to x is at most 2 log n + O(1), where "log" denotes the natural logarithm function and the O introduces big O notation. For, the expected number of ancestors of x is by linearity of expectation equal to the sum, over all other values y in the set, of the probability that y is an ancestor of x. And a value y is an ancestor of x exactly when y is the first element to be inserted from the elements in the interval [x,y]. Thus, the values that are adjacent to x in the sorted sequence of values have probability 1/2 of being an ancestor of x, the values one step away have probability 1/3, etc. Adding these probabilities for all positions in the sorted sequence gives twice a Harmonic number, leading to the bound above. A bound of this form holds also for the expected search length of a path to a fixed value x that is not part of the given set.[1]
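This 2 log n bound is easy to probe empirically. The sketch below (illustrative code, not from any reference) builds binary search trees from random permutations and measures the depth of a fixed value near the middle of the sorted order:

```python
import math
import random

def insert(tree, x):
    """Insert x into a binary search tree of nested dicts, no rebalancing."""
    if tree is None:
        return {"key": x, "left": None, "right": None}
    side = "left" if x < tree["key"] else "right"
    tree[side] = insert(tree[side], x)
    return tree

def depth_of(tree, x, d=0):
    """Length of the path from the root down to the node holding x."""
    if tree["key"] == x:
        return d
    child = tree["left"] if x < tree["key"] else tree["right"]
    return depth_of(child, x, d + 1)

rng = random.Random(0)
n, trials, target = 500, 100, 250
total = 0
for _ in range(trials):
    perm = list(range(n))
    rng.shuffle(perm)
    root = None
    for v in perm:
        root = insert(root, v)
    total += depth_of(root, target)

avg = total / trials
# The average depth should sit below the 2 log n + O(1) bound (~12.4 here).
print(avg, 2 * math.log(n))
```

With n = 500 the observed average lands near 2 ln(n/2), comfortably under the stated bound.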
### The longest path
Although not as easy to analyze as the average path length, there has also been much research on determining the expectation (or high probability bounds) of the length of the longest path in a binary search tree generated from a random insertion order. It is now known that this length, for a tree with n nodes, is almost surely
$\frac{1}{\beta}\log n \approx 4.311\log n,$
where β is the unique number in the range 0 < β < 1 satisfying the equation
$\displaystyle 2\beta e^{1-\beta}=1.$[2]
### Expected number of leaves
In the random permutation model, each of the numbers from the set of numbers used to form the tree, except for the smallest and largest of the numbers, has probability 1/3 of being a leaf in the tree, for it is a leaf exactly when it is inserted after both of its neighbors, and all six insertion orders of it and its two neighbors are equally likely. By similar reasoning, the smallest and largest of the numbers have probability 1/2 of being a leaf. Therefore, the expected number of leaves is the sum of these probabilities, which for n ≥ 2 is exactly (n + 1)/3.
### Treaps and randomized binary search trees
In applications of binary search tree data structures, it is rare for the values in the tree to be inserted without deletion in a random order, limiting the direct applications of random binary trees. However, algorithm designers have devised data structures that allow insertions and deletions to be performed in a binary search tree, at each step maintaining as an invariant the property that the shape of the tree is a random variable with the same distribution as a random binary search tree.
If a given set of ordered numbers is assigned numeric priorities (distinct numbers unrelated to their values), these priorities may be used to construct a Cartesian tree for the numbers, a binary tree that has as its inorder traversal sequence the sorted sequence of the numbers and that is heap-ordered by priorities. Although more efficient construction algorithms are known, it is helpful to think of a Cartesian tree as being constructed by inserting the given numbers into a binary search tree in priority order. Thus, by choosing the priorities either to be a set of independent random real numbers in the unit interval, or by choosing them to be a random permutation of the numbers from 1 to n (where n is the number of nodes in the tree), and by maintaining the heap ordering property using tree rotations after any insertion or deletion of a node, it is possible to maintain a data structure that behaves like a random binary search tree. Such a data structure is known as a treap or a randomized binary search tree.[3]
## Uniformly random binary trees
The number of binary trees with n nodes is a Catalan number: for n = 1, 2, 3, ... these numbers of trees are
1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, … (sequence A000108 in OEIS).
Thus, if one of these trees is selected uniformly at random, its probability is the reciprocal of a Catalan number. Trees in this model have expected depth proportional to the square root of n, rather than to the logarithm;[4] however, the Strahler number of a uniformly random binary tree, a more sensitive measure of the distance from a leaf in which a node has Strahler number i whenever it has either a child with that number or two children with number i − 1, is with high probability logarithmic.[5]
Due to their large heights, this model of equiprobable random trees is not generally used for binary search trees, but it has been applied to problems of modeling the parse trees of algebraic expressions in compiler design[6] (where the above-mentioned bound on Strahler number translates into the number of registers needed to evaluate an expression[7]) and for modeling evolutionary trees.[8] In some cases the analysis of random binary trees under the random permutation model can be automatically transferred to the uniform model.[9]
## Random split trees
Devroye & Kruszewski (1996) generate random binary trees with n nodes by generating a real-valued random variable x in the unit interval (0,1), assigning the first xn nodes (rounded down to an integer number of nodes) to the left subtree, the next node to the root, and the remaining nodes to the right subtree, and continuing recursively in each subtree. If x is chosen uniformly at random in the interval, the result is the same as the random binary search tree generated by a random permutation of the nodes, as any node is equally likely to be chosen as root; however, this formulation allows other distributions to be used instead. For instance, in the uniformly random binary tree model, once a root is fixed each of its two subtrees must also be uniformly random, so the uniformly random model may also be generated by a different choice of distribution for x. As Devroye and Kruszewski show, by choosing a beta distribution on x and by using an appropriate choice of shape to draw each of the branches, the mathematical trees generated by this process can be used to create realistic-looking botanical trees.
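A minimal sketch of this splitting process (illustrative code; with uniform x it reproduces the random binary search tree shape distribution, and swapping in another distribution for x gives other models):

```python
import random

def split_tree(n, rng):
    """Recursively split n nodes: floor(x*n) go to the left subtree,
    one node becomes the root, and the rest go to the right subtree."""
    if n == 0:
        return None
    x = rng.random()          # x is uniform on (0, 1) here
    left_n = int(x * n)       # floor(x*n) <= n - 1 since x < 1
    return {"left": split_tree(left_n, rng),
            "right": split_tree(n - 1 - left_n, rng)}

def size(tree):
    return 0 if tree is None else 1 + size(tree["left"]) + size(tree["right"])

tree = split_tree(200, random.Random(42))
print(size(tree))  # 200: the split always conserves the node count
```

Since every node is equally likely to become the root under uniform x, the shapes generated this way match those of random binary search trees, as the article notes.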
## Notes
1. ^ Hibbard (1962); Knuth (1973); Mahmoud (1992), p. 75.
2. ^ Robson (1979); Pittel (1985); Devroye (1986); Mahmoud (1992), pp. 91–99; Reed (2003).
3. ^
4. ^ Knuth (2005), p. 15.
5. ^ Devroye & Kruszewski (1995). That it is at most logarithmic is trivial, because the Strahler number of every tree is bounded by the logarithm of the number of its nodes.
6. ^ Mahmoud (1992), p. 63.
7. ^
8. ^
9. ^ Mahmoud (1992), p. 70.
## References
• Aldous, David (1996), "Probability distributions on cladograms", in Aldous, David; Pemantle, Robin, Random Discrete Structures, The IMA Volumes in Mathematics and its Applications 76, Springer-Verlag, pp. 1–18.
• Devroye, Luc (1986), "A note on the height of binary search trees", Journal of the ACM 33 (3): 489–498, doi:10.1145/5925.5930.
• Devroye, Luc; Kruszewski, Paul (1995), "A note on the Horton-Strahler number for random trees", Information Processing Letters 56 (2): 95–99, doi:10.1016/0020-0190(95)00114-R.
• Devroye, Luc; Kruszewski, Paul (1996), "The botanical beauty of random binary trees", in Brandenburg, Franz J., Graph Drawing: 3rd Int. Symp., GD'95, Passau, Germany, September 20-22, 1995, Lecture Notes in Computer Science 1027, Springer-Verlag, pp. 166–177, doi:10.1007/BFb0021801, ISBN 3-540-60723-4.
• Drmota, Michael (2009), Random Trees : An Interplay between Combinatorics and Probability, Springer-Verlag, ISBN 978-3-211-75355-2.
• Flajolet, P.; Raoult, J. C.; Vuillemin, J. (1979), "The number of registers required for evaluating arithmetic expressions", Theoretical Computer Science 9 (1): 99–125, doi:10.1016/0304-3975(79)90009-4.
• Hibbard, T. (1962), "Some combinatorial properties of certain trees with applications to searching and sorting", Journal of the ACM 9 (1): 13–28, doi:10.1145/321105.321108.
• Knuth, Donald M. (1973), "6.2.2 Binary Tree Searching", The Art of Computer Programming III, Addison-Wesley, pp. 422–451.
• Mahmoud, Hosam M. (1992), Evolution of Random Search Trees, John Wiley & Sons.
• Martinez, Conrado; Roura, Salvador (1998), "Randomized binary search trees", Journal of the ACM (ACM Press) 45 (2): 288–323, doi:10.1145/274787.274812.
• Pittel, B. (1985), "Asymptotical growth of a class of random trees", Annals of Probability 13 (2): 414–427, doi:10.1214/aop/1176993000.
• Reed, Bruce (2003), "The height of a random binary search tree", Journal of the ACM 50 (3): 306–332, doi:10.1145/765568.765571.
• Robson, J. M. (1979), "The height of binary search trees", Australian Computer Journal 11: 151–153.
• Seidel, Raimund; Aragon, Cecilia R. (1996), "Randomized Search Trees", Algorithmica 16 (4/5): 464–497, doi:10.1007/s004539900061.
http://mathhelpforum.com/calculus/7291-cylindrical-shells-help.html | # Math Help - Cylindrical Shells Help
1. ## Cylindrical Shells Help
R is bounded below by the $x$-axis and above by the curve $y = 2\cos x,\; 0 \leq x \leq \frac{\pi}{2}$. Find the volume of the solid generated by revolving R around the $y$-axis by the method of cylindrical shells.
2. Hello, Yogi!
$R$ is bounded below by the $x$-axis and above by the curve $y = 2\cos x,\;0 \leq x \leq \frac{\pi}{2}$.
Find the volume of the solid generated by revolving $R$ around the $y$-axis
by the method of cylindrical shells.
The shells formula is: . $V \:=\:2\pi\int^b_axy\,dx$
We have: . $V \;= \;2\pi\int^{\frac{\pi}{2}}_0x\cdot2\cos x\,dx \;=\;4\pi\int^{\frac{\pi}{2}}_0x\cos x\,dx$
Integrate by parts:
. . $\begin{array}{cc}u = x & dv = \cos x\,dx \\ du = dx & v = \sin x\end{array}$
We have: . $V \:=\:4\pi\left[x\sin x - \int\sin x\,dx\right] \:=\:4\pi\bigg[x\sin x + \cos x \bigg]^{\frac{\pi}{2}}_0$
. . $= \;4\pi\left[\left(\frac{\pi}{2}\!\cdot\!\sin\frac{\pi}{2} + \cos\frac{\pi}{2}\right) - \left(0\!\cdot\!\sin0 + \cos0\right)\right]$
. . $= \;4\pi\left[\left(\frac{\pi}{2} + 0\right) - \left(0 + 1\right)\right] \;=\;4\pi\left(\frac{\pi}{2} - 1\right)\;=\;2\pi(\pi - 2)$
3. We can also check by doing it the 'other' way: washers.
${\pi}\int_{0}^{2}(\cos^{-1}(\frac{y}{2}))^{2}dy=2{\pi}({\pi}-2)$
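Both computations can be double-checked with SymPy (variable names below are ours): the shell integral evaluates symbolically, and the washer integral can be evaluated numerically.

```python
from sympy import symbols, cos, acos, pi, integrate, Integral, simplify

x, y = symbols("x y")

# Shells: V = 2*pi * integral of x * 2*cos(x) from 0 to pi/2
shells = 2 * pi * integrate(x * 2 * cos(x), (x, 0, pi / 2))
print(simplify(shells - 2 * pi * (pi - 2)))  # 0, so V = 2*pi*(pi - 2)

# Washers: V = pi * integral of (acos(y/2))**2 from 0 to 2, numerically
washers = (pi * Integral(acos(y / 2) ** 2, (y, 0, 2))).evalf()
print(abs(float(washers) - float(2 * pi * (pi - 2))) < 1e-6)  # True
```

Both routes agree on $2\pi(\pi-2) \approx 7.17$.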
http://master.bioconductor.org/packages/devel/bioc/vignettes/powerTCR/inst/doc/powerTCR.html | # 1 Introduction
The powerTCR package allows users to implement the model-based methods discussed in our Koch et al. (2018) paper in PLoS Computational Biology. Specifically, the clone size distribution of the T cell receptor (TCR) repertoire exhibits imperfect power law behavior; powerTCR supports a model that keeps this fact in mind. Additionally, powerTCR contains tools to fit another power law model for the TCR repertoire detailed in Desponds et al. (2016). Given a collection of sampled TCR repertoires, powerTCR equips the user with tools for comparative analysis of the samples, using one of two model-based approaches. This leads to hierarchical clustering of the samples to determine their relatedness based on the clone size distribution alone.
## 1.1 Summary of features
• Read in and parse TCR sequencing files stored in various formats into the necessary format for powerTCR
• Fit two power law-based models to the clone size distribution of the TCR repertoire
1. The discrete gamma-GPD spliced threshold model of Koch et al. (2018)
2. The type-I Pareto model of Desponds et al. (2016)
• Compare TCR repertoire samples based on their model fits, using hierarchical clustering
• Simulate data from the gamma-GPD spliced threshold distribution
# 2 Fitting a model
In order to fit a model with powerTCR, you only need to be able to supply a vector of counts (that is, a vector of clone sizes). If your data are in a format supported by parseFile or parseFolder, you can simply read in your file using one of those functions, specify whether or not you want to use only in-frame sequences, and powerTCR will automatically give you a sorted vector of clone sizes for each sample. This functionality is a wrapper for parsing functions found in the tcR package.
powerTCR contains a toy data set, called repertoires, with two TCR repertoire samples, which we will use throughout this vignette. (In practice, you may have any number of samples.) You can load powerTCR and this data set by typing:
# BiocManager::install("powerTCR")
library(powerTCR)
repertoires is a list with 2 elements in it, each corresponding to a sample repertoire. Have a look:
str(repertoires)
## List of 2
##  $ samp1: num [1:1000] 1445 451 309 269 250 ...
##  $ samp2: num [1:800] 2781 450 447 206 157 ...
These samples are smaller than one might expect a TCR repertoire to be in practice, but for the sake of exploring powerTCR, they permit much faster computation.
## 2.1 The discrete gamma-GPD spliced threshold model
The main model that powerTCR focuses on is the discrete gamma-GPD spliced threshold model. This distribution has probability mass function
$f(x) = \begin{cases} (1-\phi)\frac{h(x|\boldsymbol{\theta}_b)} {H(u-1|\boldsymbol{\theta}_b)} & \text{for } x \leq u-1 \\ \phi g(x|\boldsymbol{\theta}_t, u) & \text{for } x \geq u \end{cases},$
where $$h$$ and $$H$$ are the density and distribution function of a gamma distribution, and $$g$$ is the density of a generalized Pareto distribution, or GPD. The gamma distribution has density
$h(x) = \frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x}$
and the GPD has density
$g(x) = \frac{1}{\sigma}\big(1+\xi \frac{x-u}{\sigma}\big)^{-(1/\xi +1)}.$
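As a sanity check on the formulas above, both densities can be transcribed directly into R; this is a minimal continuous sketch (illustrative only — the package itself works with discretized versions of these densities for count data):

```r
# Continuous gamma and GPD densities, transcribed from the formulas above
dgamma_manual <- function(x, alpha, beta) {
  beta^alpha / gamma(alpha) * x^(alpha - 1) * exp(-beta * x)
}
dgpd_manual <- function(x, u, sigma, xi) {
  (1 / sigma) * (1 + xi * (x - u) / sigma)^(-(1 / xi + 1))
}

# Agrees with R's built-in dgamma; at the threshold, g(u) = 1/sigma
dgamma_manual(2, alpha = 3, beta = 0.15) - dgamma(2, shape = 3, rate = 0.15)
dgpd_manual(25, u = 25, sigma = 10, xi = 0.5)
```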
We can fit the model to each of the samples in repertoires using the function fdiscgammagpd. This function takes a few arguments. The most important are as follows:
First, fdiscgammagpd needs to be passed a sample TCR repertoire as a vector of counts. Second, you need to specify a grid of possible thresholds (that is, the parameter $$u$$) that you are interested in considering. One easy way to do this might be to specify a series of quantiles of the vector of counts. Finally, you also need to specify the shift, which for each sample is ideally the smallest count (at least for TCR repertoire samples). The shift is the minimum value in the support of the distribution, and for clone sizes, should never be smaller than 1.
Let’s try fitting the model to the data in repertoires.
# This will loop through our list of sample repertoires,
# and store a fit in each
fits <- list()
for(i in seq_along(repertoires)){
# Choose a sequence of possible u for your model fit
# Ideally, you want to search a lot of thresholds, but for quick
# computation, we are only going to test 4
thresholds <- unique(round(quantile(repertoires[[i]], c(.75,.8,.85,.9))))
fits[[i]] <- fdiscgammagpd(repertoires[[i]], useq = thresholds,
shift = min(repertoires[[i]]))
}
names(fits) <- names(repertoires)
The output for a fit looks like this:
# You could also look at the first sample by typing fits[[1]]
fits$samp1

## $x
## [1] 1445 451 309 269 250 220 207 194 181 181 177 150 148 142
## [15] 141 138 116 115 110 102 99 97 91 89 87 87 86 84
## [29] 82 80 79 74 73 71 71 69 68 68 68 67 66 64
## [43] 64 63 62 62 62 61 61 61 61 60 59 58 58 57
## [57] 57 56 56 55 54 54 54 54 54 53 53 53 52 52
## [71] 52 52 51 51 50 49 49 49 48 47 46 46 46 46
## [85] 46 46 46 46 45 45 45 44 44 44 44 44 44 44
## [99] 44 44 43 43 42 42 42 42 42 42 42 42 41 41
## [113] 41 41 40 40 40 40 40 39 39 38 38 38 38 38
## [127] 37 37 37 37 37 36 36 36 36 35 35 35 35 35
## [141] 35 35 35 35 35 35 35 35 34 34 34 34 34 34
## [155] 34 34 34 34 34 33 33 33 33 33 33 33 33 33
## [169] 33 33 33 33 32 32 32 32 32 32 32 32 32 32
## [183] 32 32 32 32 32 32 32 31 31 31 31 31 31 31
## [197] 31 31 31 31 31 31 31 31 31 31 31 31 31 31
## [211] 30 30 30 30 30 30 30 30 30 30 30 30 30 30
## [225] 30 29 29 29 29 29 29 29 29 29 29 29 29 28
## [239] 28 28 28 28 28 28 28 28 28 28 28 28 28 28
## [253] 28 28 28 28 27 27 27 27 27 27 27 27 27 27
## [267] 27 27 27 26 26 26 26 26 26 26 26 26 26 26
## [281] 26 26 26 26 26 26 26 26 26 25 25 25 25 25
## [295] 25 25 25 25 25 25 25 25 25 25 25 25 25 25
## [309] 25 25 25 25 25 25 25 25 24 24 24 24 24 24
## [323] 24 24 24 24 24 24 24 24 24 24 24 24 24 24
## [337] 24 24 24 24 24 24 24 24 24 24 24 24 24 24
## [351] 23 23 23 23 23 23 23 23 23 23 23 23 23 23
## [365] 22 22 22 22 22 22 22 22 22 22 22 22 22 22
## [379] 22 22 22 22 22 22 22 22 22 22 22 22 22 22
## [393] 22 22 22 22 22 22 22 22 22 22 22 21 21 21
## [407] 21 21 21 21 21 21 21 21 21 21 21 21 21 21
## [421] 21 21 21 21 21 21 21 21 21 21 21 21 21 21
## [435] 21 21 21 21 21 21 21 20 20 20 20 20 20 20
## [449] 20 20 20 20 20 20 20 20 20 20 20 20 20 20
## [463] 20 20 20 20 20 20 20 20 20 20 20 19 19 19
## [477] 19 19 19 19 19 19 19 19 19 19 19 19 19 19
## [491] 19 19 19 19 19 19 19 19 19 19 19 19 19 19
## [505] 19 19 18 18 18 18 18 18 18 18 18 18 18 18
## [519] 18 18 18 18 18 18 18 18 18 18 18 18 18 18
## [533] 18 18 18 18 18 18 18 18 18 18 18 18 17 17
## [547] 17 17 17 17 17 17 17 17 17 17 17 17 17 17
## [561] 17 17 17 17 17 17 17 17 17 17 17 17 16 16
## [575] 16 16 16 16 16 16 16 16 16 16 16 16 16 16
## [589] 16 16 16 16 16 16 16 16 16 16 16 16 16 16
## [603] 16 16 16 16 16 16 16 16 16 16 16 16 16 16
## [617] 16 16 15 15 15 15 15 15 15 15 15 15 15 15
## [631] 15 15 15 15 15 15 15 15 15 15 15 15 15 15
## [645] 15 15 15 15 15 15 15 15 15 15 14 14 14 14
## [659] 14 14 14 14 14 14 14 14 14 14 14 14 14 14
## [673] 14 14 14 14 14 14 14 14 14 14 14 14 14 14
## [687] 14 14 14 14 14 13 13 13 13 13 13 13 13 13
## [701] 13 13 13 13 13 13 13 13 13 13 13 13 13 13
## [715] 13 13 13 13 13 13 13 13 13 13 13 13 13 13
## [729] 13 12 12 12 12 12 12 12 12 12 12 12 12 12
## [743] 12 12 12 12 12 12 12 12 12 12 12 12 12 12
## [757] 12 12 12 12 12 12 12 12 12 11 11 11 11 11
## [771] 11 11 11 11 11 11 11 11 11 11 11 11 11 11
## [785] 11 11 11 11 11 11 11 11 11 11 11 11 11 11
## [799] 11 11 11 11 11 10 10 10 10 10 10 10 10 10
## [813] 10 10 10 10 10 10 10 10 10 10 10 10 10 10
## [827] 10 10 10 10 10 10 10 10 10 10 9 9 9 9
## [841] 9 9 9 9 9 9 9 9 9 9 9 9 9 9
## [855] 9 9 9 9 9 9 9 9 9 9 9 9 9 9
## [869] 9 9 9 9 9 9 9 9 9 9 9 9 9 9
## [883] 8 8 8 8 8 8 8 8 8 8 8 8 8 8
## [897] 8 8 8 8 8 8 8 8 8 8 8 8 8 7
## [911] 7 7 7 7 7 7 7 7 7 7 7 7 7 7
## [925] 7 7 7 7 7 7 7 6 6 6 6 6 6 6
## [939] 6 6 6 6 6 6 6 6 6 6 6 6 6 6
## [953] 6 6 6 6 6 6 6 5 5 5 5 5 5 5
## [967] 5 5 5 5 5 5 5 5 4 4 4 4 4 4
## [981] 4 4 4 4 4 4 4 4 4 4 3 3 3 3
## [995] 3 3 3 3 2 1
##
## $shift
## [1] 1
##
## $init
## [1] 1.64606214 0.06261648 43.10000000 16.22720566 0.82244628
##
## $useq
## [1] 28 31 34 43
##
## $nllhuseq
## [1] 3987.444 3985.850 3986.335 3987.031
##
## $optim
## $optim$bulk
## $optim$bulk$par
## [1] 1.119930 -1.818451
##
## $optim$bulk$value
## [1] 2782.958
##
## $optim$bulk$counts
## function gradient 
##       71       NA 
##
## $optim$bulk$convergence
## [1] 0
##
## $optim$bulk$message
## NULL
##
## $optim$bulk$hessian
##           [,1]       [,2]
## [1,]  2184.494 -1396.5212
## [2,] -1396.521   992.1183
##
##
## $optim$tail
## $optim$tail$par
## [1] 2.4244657 0.7427503
##
## $optim$tail$value
## [1] 1202.892
##
## $optim$tail$counts
## function gradient 
##       47       NA 
##
## $optim$tail$convergence
## [1] 0
##
## $optim$tail$message
## NULL
##
## $optim$tail$hessian
##          [,1]     [,2]
## [1,] 82.54533 50.97956
## [2,] 50.97956 93.60818
##
##
##
## $nllh
## [1] 3985.85
##
## $mle
##        phi      shape       rate     thresh      sigma         xi 
##  0.2100000  3.0646385  0.1622769 31.0000000 11.2961918  0.7427503 
##
## $fisherInformation
## [,1] [,2] [,3] [,4]
## [1,] 0.004571856 0.006435416 0.000000000 0.000000000
## [2,] 0.006435416 0.010066536 0.000000000 0.000000000
## [3,] 0.000000000 0.000000000 0.018254313 -0.009941404
## [4,] 0.000000000 0.000000000 -0.009941404 0.016096973
Each value of the output is described in the fdiscgammagpd help file, but the most important are
• nllh: the negative log likelihood of the most likely fit, given the thresholds you’ve checked
• mle: the maximum likelihood estimates for:
$\phi, \alpha, \beta, u, \sigma,\text{ and } \xi\text{ respectively}$
These two important items can be grabbed using convenient accessors contained in powerTCR, called get_mle and get_nllh.
# Grab mles of fits:
get_mle(fits)
## $samp1
##        phi      shape       rate     thresh      sigma         xi 
##  0.2100000  3.0646385  0.1622769 31.0000000 11.2961918  0.7427503 
##
## $samp2
## phi shape rate thresh sigma xi
## 0.26750000 2.72806240 0.08797711 30.00000000 5.56272962 0.84360364
# Grab negative log likelihoods of fits
get_nllh(fits)
## $samp1
## [1] 3985.85
##
## $samp2
## [1] 3052.027
You can also view the likelihoods for every other threshold you checked (in nllhuseq) as well as the output from optim for the “bulk” (truncated gamma) and “tail” (GPD) parts of the distribution.
## 2.2 The Type-I Pareto model
For reproducibility purposes, powerTCR also provides a means to fit the model of Desponds et al. (2016). This model is investigated and discussed in Koch et al. (2018). The model follows a type-I Pareto distribution, with density:
$f(x) = \frac{\alpha u^\alpha}{x^{\alpha+1}}.$
For a given threshold $$u$$, the estimate for parameter $$\alpha$$ is computed directly as
$\alpha=n\bigg[\sum_{i=1}^n\text{log}\frac{x_i}{u}\bigg]^{-1}+1$
where $$n$$ is the number of clones with size larger than $$u$$. This value is computed for every possible threshold $$u$$, and then the parameters that minimize the KS-statistic between empirical and theoretical distributions are chosen.
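For a single fixed threshold $$u$$, the estimator above is straightforward to compute by hand. The sketch below (a hypothetical helper, not part of powerTCR) shows the computation that fdesponds repeats over every candidate threshold before selecting the best fit by KS statistic:

```r
# MLE of alpha for a fixed threshold u, following the formula above
alpha_hat <- function(x, u) {
  tail_x <- x[x > u]   # clones with size larger than u
  length(tail_x) / sum(log(tail_x / u)) + 1
}

alpha_hat(c(20, 40, 80), u = 20)
```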
Let’s fit this model to the repertoires data, and have a look at the output for the first sample.
desponds_fits <- list()
for(i in seq_along(repertoires)){
desponds_fits[[i]] <- fdesponds(repertoires[[i]])
}
names(desponds_fits) <- names(repertoires)
desponds_fits$samp1
## min.KS Cmin powerlaw.exponent pareto.alpha
## 0.04428183 18.00000000 2.75600901 1.75600901
Here, min.KS is the minimum KS-statistic of all possible fits. Cmin is the threshold $$u$$ that corresponds to the best fit. powerlaw.exponent and pareto.alpha are effectively the same – pareto.alpha = powerlaw.exponent-1. This is just user preference; for the Pareto density given above, pareto.alpha corresponds to the $$\alpha$$ shown there. However, if the user is more familiar with a “power law” distribution, then powerlaw.exponent is the parameter they should look at.
# 3 Density, distribution, and quantile functions, plus simulating data
powerTCR provides standard functions to compute the density, distribution, and quantile functions of the discrete gamma-GPD spliced threshold model, as well as a function to simulate data. These can be very useful for tasks such as visualizing model fit and conducting a simulation study. The functions behave exactly like commonly used functions such as dnorm, pnorm, qnorm, and rnorm. In order to use these functions, you need to specify all of the model parameters. The one exception is $$\phi$$, which can go unspecified – details about how $$\phi$$ defaults are set are in the help file for ddiscgammagpd.
Here, we will use qdiscgammagpd to compute quantiles from the two theoretical distributions we fit above. Note that we pass qdiscgammagpd the quantiles and the fits we obtained. This is just something convenient powerTCR can do if you don’t want to manually specify shift, shape, rate, u, sigma, xi, and phi. You don’t have to pass a fit, however, as you will see when we simulate data.
# The number of clones in each sample
n1 <- length(repertoires[[1]])
n2 <- length(repertoires[[2]])
# Grids of quantiles to check
# (you want the same number of points as were observed in the sample)
q1 <- seq(n1/(n1+1), 1/(n1+1), length.out = n1)
q2 <- seq(n2/(n2+1), 1/(n2+1), length.out = n2)
# Compute the value of fitted distributions at grid of quantiles
theor1 <- qdiscgammagpd(q1, fits[[1]])
theor2 <- qdiscgammagpd(q2, fits[[2]])
Now, let’s visualize the fitted and empirical distributions by plotting them together. Here, the black represents the original data, with the quantiles of the theoretical distributions plotted on top in color.
plot(log(repertoires[[1]]), log(seq_len(n1)), pch = 16, cex = 2,
xlab = "log clone size", ylab = "log rank", main = "samp1")
points(log(theor1), log(seq_len(n1)), pch = 'x', col = "darkcyan")
plot(log(repertoires[[2]]), log(seq_len(n2)), pch = 16, cex = 2,
xlab = "log clone size", ylab = "log rank", main = "samp2")
points(log(theor2), log(seq_len(n2)), pch = 'x', col = "chocolate")
The fits look pretty good!
Let’s also try simulating data.
# Simulate 3 sampled repertoires
set.seed(123)
s1 <- rdiscgammagpd(1000, shape = 3, rate = .15, u = 25, sigma = 15,
xi = .5, shift = 1)
s2 <- rdiscgammagpd(1000, shape = 3.1, rate = .14, u = 26, sigma = 15,
xi = .6, shift = 1)
s3 <- rdiscgammagpd(1000, shape = 10, rate = .3, u = 45, sigma = 20,
xi = .7, shift = 1)
NB: it is possible to simulate data according to a distribution that is totally unrealistic. For example, what if you chose a very light-tailed gamma distribution and a comparatively very high threshold, but insisted (using $$\phi$$) that data be observed above the threshold? Here is what happens:
bad <- rdiscgammagpd(1000, shape = 1, rate = 2, u = 25, sigma = 10,
xi = .5, shift = 1, phi = .2)
plot(log(sort(bad, decreasing = TRUE)), log(seq_len(1000)), pch = 16,
xlab = "log clone size", ylab = "log rank", main = "bad simulation")
Fun, but not too realistic for a clone size distribution. There are several ways to go about finding reasonable parameters to simulate. One intuitive and easy technique is to let real data speak for itself – use parameters similar to those obtained by fitting a distribution to true TCR repertoire data sets.
# 4 Doing comparative analysis
Following the work in Koch et al. (2018), powerTCR provides the tools needed to perform hierarchical clustering of TCR repertoire samples according to their Jensen-Shannon distance. We can test this out on the 3 TCR repertoires we just simulated. First, we need to fit a model to them. For computational efficiency, let’s just supply the true thresholds. Then, we can use JS_dist to compute the Jensen-Shannon divergence between each pair of theoretical distributions corresponding to each of the TCR samples.
JS_dist needs to be supplied two model fits from fdiscgammagpd as well as a grid. The grid is important: it is the range over which each distribution gets evaluated. If you are comparing a group of TCR repertoires, the minimum value of your grid should be the smallest clone size across all samples. The upper bound of the grid should be something very large, say 100,000 or more. If you don’t select a value large enough, you will not be examining the tail of your fitted distributions sufficiently, and the tail is important! The grid should also contain every integer between its minimum and maximum. For computational efficiency, here the upper bound on our grid is only 10,000.
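For intuition, the distance itself can be sketched in a few lines. This hypothetical helper takes two probability vectors already evaluated on a common grid (JS_spliced additionally handles evaluating the fitted model densities; the square-root "distance" form of the Jensen-Shannon divergence used here is one common convention):

```r
# Jensen-Shannon distance between two discrete distributions given as
# (possibly unnormalized) probability vectors on the same grid
js_distance <- function(p, q) {
  p <- p / sum(p)
  q <- q / sum(q)
  m <- (p + q) / 2
  kl <- function(a, b) sum(ifelse(a > 0, a * log(a / b), 0))
  sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))
}

js_distance(c(1, 1, 0), c(0, 1, 1))  # identical vectors would give 0
```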
We’ve wrapped JS_dist up into a convenient function get_distances, which will create a symmetric matrix of distances (with 0 on the diagonal) for your use.
# Fit model to the data at the true thresholds
sim_fits <- list("s1" = fdiscgammagpd(s1, useq = 25),
"s2" = fdiscgammagpd(s2, useq = 26),
"s3" = fdiscgammagpd(s3, useq = 45))
# Compute the pairwise JS distance between 3 fitted models
grid <- min(c(s1,s2,s3)):10000
distances <- get_distances(sim_fits, grid, modelType="Spliced")
Let’s have a look at the distance matrix we just computed:
distances
## s1 s2 s3
## s1 0.00000000 0.06839429 0.4488141
## s2 0.06839429 0.00000000 0.4101173
## s3 0.44881406 0.41011734 0.0000000
Note that get_distances is just calling JS_dist for every pair, and JS_dist is simply a wrapper for two functions called JS_spliced and JS_desponds. If you want to do comparative analysis for data fit using fdesponds, then the argument “modelType” must be changed to “Desponds”.
We can use this distance matrix to perform hierarchical clustering. This is done easily with the clusterPlot function. clusterPlot is just a wrapper for hclust in R’s stats package, and takes a matrix of Jensen-Shannon distances like the one we just made, plus a type of linkage. All possible types of linkage are listed in the help file, but we recommend using Ward’s method or complete linkage.
clusterPlot(distances, method = "ward.D")
The clustering result is exactly what we might expect. Indeed, we simulated s1 and s2 using very similar parameter settings, so we should expect them to be more closely related to each other than to s3. That is exactly what the dendrogram displays.
# 5 Extracting diversity estimators
In Koch et al. (2018), we introduced new measures of diversity, alongside comparisons to often-used estimators of sample diversity borrowed from the ecology literature. Given a list of model fits, powerTCR can easily compute sample richness, Shannon entropy, clonality, and our introduced measure – the proportion of highly stimulated clones. This is done with the function get_diversity. Let’s demonstrate this on our fits from simulated data:
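As a point of reference, empirical (sample-based) versions of two of these summaries can be computed directly from a vector of clone sizes. These are hypothetical helpers using one common definition of clonality; get_diversity itself reports model-based estimates:

```r
# Empirical Shannon entropy and clonality from observed clone sizes
shannon_entropy <- function(counts) {
  p <- counts / sum(counts)
  -sum(p * log(p))
}
clonality <- function(counts) {
  # 1 - (entropy / maximum possible entropy for this richness)
  1 - shannon_entropy(counts) / log(length(counts))
}

clonality(c(100, 10, 5, 1))  # more dominated repertoires give higher clonality
```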
get_diversity(sim_fits)
## richness shannon clonality prop_stim
## s1 1000 6.522356 0.05579228 0.6044665
## s2 1000 6.346489 0.08125162 0.6600013
## s3 1000 6.500144 0.05900772 0.3920100
# 6 Bootstrapping model fits
Finally, powerTCR allows you to run a parametric bootstrapping procedure on the discrete gamma-GPD spliced threshold model, which can be executed in parallel if the processors are available to you. Just supply the fits you’d like to bootstrap and the number of resamples you’d like to do (and the number of cores you want to use, if running in parallel). If you want to speed up the bootstrapping by choosing a reduced number of thresholds, you can do so by changing gridStyle. Setting gridStyle to the default “copy” will just copy the original useq parameter in the fit. More details are in the package documentation. For each fit being bootstrapped, this returns a list of fits of length resamples. So, for example, if you are bootstrapping 5 fits with 1,000 resamples each, the function will return a list of length 5. Each of those lists will have length 1,000.
Here, we set the number of resamples to something very small to save computational time, but we recommend 1,000 resamples to get reasonable confidence bands.
boot <- get_bootstraps(fits, resamples = 5, cores = 1, gridStyle = "copy")
Finally, you can get confidence intervals based on your resampling. For example, this is how you can get the 95% confidence interval around the estimate for $$\xi$$ from the first fit.
library(purrr)  # provides map() and the %>% pipe used below

mles <- get_mle(boot[[1]])
xi_CI <- map(mles, 'xi') %>%
unlist %>%
quantile(c(.025,.5,.975))
xi_CI
## 2.5% 50% 97.5%
## 0.7056353 0.9194535 1.1049098
# 7 References
Desponds, J., Mora, T., & Walczak, A. M. (2016). Fluctuating fitness shapes the clone-size distribution of immune repertoires. Proceedings of the National Academy of Sciences, 113(2), 274-279.
#### 2018
##### Photorealistic Video Super Resolution
Workshop and Challenge on Perceptual Image Restoration and Manipulation (PIRM) at the 15th European Conference on Computer Vision (ECCV), 2018 (poster)
##### Dissecting the synapse- and frequency-dependent network mechanisms of in vivo hippocampal sharp wave-ripples
Ramirez-Villegas, J. F., Willeke, K. F., Logothetis, N. K., Besserve, M.
Neuron, 100(5):1224-1240, 2018 (article)
##### Retinal image quality of the human eye across the visual field
14th Biannual Conference of the German Society for Cognitive Science (KOGWIS 2018), 2018 (poster)
##### In-Hand Object Stabilization by Independent Finger Control
Veiga, F. F., Edin, B. B., Peters, J.
IEEE Transactions on Robotics, 2018 (article) Submitted
##### Visualizing and understanding Sum-Product Networks
Vergari, A., Di Mauro, N., Esposito, F.
Machine Learning, 2018 (article)
##### Large sample analysis of the median heuristic
2018 (misc) In preparation
##### Boosting for Comparison-Based Learning
2018, arXiv preprint (arXiv:1810.13333) (article)
##### Object Scene Flow
Menze, M., Heipke, C., Geiger, A.
ISPRS Journal of Photogrammetry and Remote Sensing, 2018 (article)
Abstract
This work investigates the estimation of dense three-dimensional motion fields, commonly referred to as scene flow. While great progress has been made in recent years, large displacements and adverse imaging conditions as observed in natural outdoor environments are still very challenging for current approaches to reconstruction and motion estimation. In this paper, we propose a unified random field model which reasons jointly about 3D scene flow as well as the location, shape and motion of vehicles in the observed scene. We formulate the problem as the task of decomposing the scene into a small number of rigidly moving objects sharing the same motion parameters. Thus, our formulation effectively introduces long-range spatial dependencies which commonly employed local rigidity priors are lacking. Our inference algorithm then estimates the association of image segments and object hypotheses together with their three-dimensional shape and motion. We demonstrate the potential of the proposed approach by introducing a novel challenging scene flow benchmark which allows for a thorough comparison of the proposed scene flow approach with respect to various baseline models. In contrast to previous benchmarks, our evaluation is the first to provide stereo and optical flow ground truth for dynamic real-world urban scenes at large scale. Our experiments reveal that rigid motion segmentation can be utilized as an effective regularizer for the scene flow problem, improving upon existing two-frame scene flow methods. At the same time, our method yields plausible object segmentations without requiring an explicitly trained recognition model for a specific object class.
##### Controllable switching between planar and helical flagellar swimming of a soft robotic sperm
Khalil, I. S. M., Tabak, A. F., Seif, M. A., Klingner, A., Sitti, M.
PloS One, 13(11):e0206456, 2018 (article)
##### Kinetics of orbitally shaken particles constrained to two dimensions
Ipparthi, D., Hageman, T. A. G., Cambier, N., Sitti, M., Dorigo, M., Abelmann, L., Mastrangeli, M.
Physical Review E, 98(4):042137, 2018 (article)
##### Seed-mediated synthesis of plasmonic gold nanoribbons using cancer cells for hyperthermia applications
Singh, A. V., Alapan, Y., Jahnke, T., Laux, P., Luch, A., Aghakhani, A., Kharratian, S., Onbasli, M. C., Bill, J., Sitti, M.
Journal of Materials Chemistry B, 6(46):7573-7581, 2018 (article)
##### Geckos Race across Water using Multiple Mechanisms
Nirody, J., Jinn, J., Libby, T., Lee, T., Jusufi, A., Hu, D., Full, R.
Current Biology, 2018 (article)
##### Learning a Structured Neural Network Policy for a Hopping Task.
IEEE Robotics and Automation Letters, 3(4):4092-4099, October 2018 (article)
##### Non-Equilibrium Relations for Bounded Rational Decision-Making in Changing Environments
Grau-Moya, J., Krüger, M., Braun, D. A.
Entropy, 20(1:1):1-28, January 2018 (article)
Abstract
Living organisms from single cells to humans need to adapt continuously to respond to changes in their environment. The process of behavioural adaptation can be thought of as improving decision-making performance according to some utility function. Here, we consider an abstract model of organisms as decision-makers with limited information-processing resources that trade off between maximization of utility and computational costs measured by a relative entropy, in a similar fashion to thermodynamic systems undergoing isothermal transformations. Such systems minimize the free energy to reach equilibrium states that balance internal energy and entropic cost. When there is a fast change in the environment, these systems evolve in a non-equilibrium fashion because they are unable to follow the path of equilibrium distributions. Here, we apply concepts from non-equilibrium thermodynamics to characterize decision-makers that adapt to changing environments under the assumption that the temporal evolution of the utility function is externally driven and does not depend on the decision-maker’s action. This allows one to quantify performance loss due to imperfect adaptation in a general manner and, additionally, to find relations for decision-making similar to Crooks’ fluctuation theorem and Jarzynski’s equality. We provide simulations of several exemplary decision and inference problems in the discrete and continuous domains to illustrate the new relations.
##### Thick permalloy films for the imaging of spin texture dynamics in perpendicularly magnetized systems
Finizio, S., Wintz, S., Bracher, D., Kirk, E., Semisalova, A. S., Förster, J., Zeissler, K., Weßels, T., Weigand, M., Lenz, K., Kleibert, A., Raabe, J.
Physical Review B, 98(10), American Physical Society, Woodbury, NY, 2018 (article)
##### Dynamic Janus metasurfaces in the visible spectral region
Yu, P., Li, J., Zhang, S., Jin, Z., Schütz, G., Qiu, C., Hirscher, M., Liu, N.
Nano Letters, 18(7):4584-4589, American Chemical Society, Washington, DC, 2018 (article)
##### Review of ultrafast demagnetization after femtosecond laser pulses: A complex interaction of light with quantum matter
Fähnle, M., Haag, M., Illg, C., Müller, B. Y., Weng, W., Tsatsoulis, T., Huang, H., Briones Paz, J. Z., Teeny, N., Zhang, L., Kuhn, T.
American Journal of Modern Physics, 7(2):68-74, Science Publishing Group, New York, NY, 2018 (article)
##### Direct observation of Zhang-Li torque expansion of magnetic droplet solitons
Chung, S., Tuan Le, Q., Ahlberg, M., Awad, A. A., Weigand, M., Bykova, I., Khymyn, R., Dvornik, M., Mazraati, H., Houshang, A., Jiang, S., Nguyen, T. N. A., Goering, E., Schütz, G., Gräfe, J., Åkerman, J.
Physical Review Letters, 120(21), American Physical Society, Woodbury, N.Y., 2018 (article)
##### Current-induced skyrmion generation through morphological thermal transitions in chiral ferromagnetic heterostructures
Lemesh, I., Litzius, K., Böttcher, M., Bassirian, P., Kerber, N., Heinze, D., Zázvorka, J., Büttner, F., Caretta, L., Mann, M., Weigand, M., Finizio, S., Raabe, J., Im, M., Stoll, H., Schütz, G., Dupé, B., Kläui, M., Beach, G. S. D.
Advanced Materials, 30(49), Wiley-VCH, Weinheim, 2018 (article)
##### Emission and propagation of multi-dimensional spin waves in anisotropic spin textures
Sluka, V., Schneider, T., Gallardo, R. A., Kakay, A., Weigand, M., Warnatz, T., Mattheis, R., Roldan-Molina, A., Landeros, P., Tiberkevich, V., Slavin, A., Schütz, G., Erbe, A., Deac, A., Lindner, J., Raabe, J., Fassbender, J., Wintz, S.
2018 (misc)
##### 3d nanofabrication of high-resolution multilayer Fresnel zone plates
Sanli, U. T., Jiao, C., Baluktsian, M., Grévent, C., Hahn, K., Wang, Y., Srot, V., Richter, G., Bykova, I., Weigand, M., Schütz, G., Keskinbora, K.
Advanced Science, 5(9), Wiley-VCH, Weinheim, 2018 (article)
##### Photocatalytic CO2 reduction by Cr-substituted Ba2(In2-xCrx)O5·(H2O)δ (0.04 ≤ x ≤ 0.60)
Yoon, S., Gaul, M., Sharma, S., Son, K., Hagemann, H., Ziegenbalg, D., Schwingenschlögl, U., Widenmeyer, M., Weidenkaff, A.
Solid State Sciences, 78, pages: 22-29, Elsevier Masson SAS, Paris, 2018 (article)
##### The Impact of Robotics and Automation on Working Conditions and Employment [Ethical, Legal, and Societal Issues]
Pham, Q., Madhavan, R., Righetti, L., Smart, W., Chatila, R.
IEEE Robotics and Automation Magazine, 25(2):126-128, June 2018 (article)
##### Correction of axial position uncertainty and systematic detector errors in ptychographic diffraction imaging
Loetgering, L., Rose, M., Keskinbora, K., Baluktsian, M., Dogan, G., Sanli, U., Bykova, I., Weigand, M., Schütz, G., Wilhein, T.
Optical Engineering, 57(8), The Society, Redondo Beach, Calif., 2018 (article)
##### The role of surface oxides on hydrogen sorption kinetics in titanium thin films
Hadjixenophontos, E., Michalek, L., Roussel, M., Hirscher, M., Schmitz, G.
Applied Surface Science, 441, pages: 324-330, Elsevier B.V., Amsterdam, 2018 (article)
##### Ferromagnetism in nitrogen and fluorine substituted BaTiO3
Yoon, S., Son, K., Ebbinghaus, S. G., Widenmeyer, M., Weidenkaff, A.
Journal of Alloys and Compounds, 749, pages: 628-633, Elsevier B.V., Lausanne, Switzerland, 2018 (article)
##### New concepts for 3d optics in x-ray microscopy
Sanli, U., Ceylan, H., Jiao, C., Baluktsian, M., Grevent, C., Hahn, K., Wang, Y., Srot, V., Richter, G., Bykova, I., Weigand, M., Sitti, M., Schütz, G., Keskinbora, K.
Microscopy and Microanalysis, 24(Suppl 2):288-289, Cambridge University Press, New York, NY, 2018 (article)
##### Thermal skyrmion diffusion applied in probabilistic computing
Zázvorka, J., Jakobs, F., Heinze, D., Keil, N., Kromin, S., Jaiswal, S., Litzius, K., Jakob, G., Virnau, P., Pinna, D., Everschor-Sitte, K., Donges, A., Nowak, U., Kläui, M.
2018 (misc)
##### Lethal Autonomous Weapon Systems [Ethical, Legal, and Societal Issues]
Righetti, L., Pham, Q., Madhavan, R., Chatila, R.
IEEE Robotics & Automation Magazine, 25(1):123-126, March 2018 (article)
Abstract
The topic of lethal autonomous weapon systems has recently caught public attention due to extensive news coverage and apocalyptic declarations from famous scientists and technologists. Weapon systems with increasing autonomy are being developed due to fast improvements in machine learning, robotics, and automation in general. These developments raise important and complex security, legal, ethical, societal, and technological issues that are being extensively discussed by scholars, nongovernmental organizations (NGOs), militaries, governments, and the international community. Unfortunately, the robotics community has stayed out of the debate, for the most part, despite being the main provider of autonomous technologies. In this column, we review the main issues raised by the increase of autonomy in weapon systems and the state of the international discussion. We argue that the robotics community has a fundamental role to play in these discussions, for its own sake, to provide the often-missing technical expertise necessary to frame the debate and promote technological development in line with the IEEE Robotics and Automation Society (RAS) objective of advancing technology to benefit humanity.
##### Spin-wave interference in magnetic vortex stacks
Behncke, C., Adolff, C. F., Lenzing, N., Hänze, M., Schulte, B., Weigand, M., Schütz, G., Meier, G.
Communications Physics, 1, Nature Publishing Group, London, 2018 (article)
##### High-throughput synthesis of modified Fresnel zone plate arrays via ion beam lithography
Keskinbora, K., Sanli, U. T., Baluktsian, M., Grévent, C., Weigand, M., Schütz, G.
Beilstein Journal of Nanotechnology, 9, pages: 2049-2056, Beilstein-Institut, Frankfurt am Main, 2018 (article)
##### Deterministic creation and deletion of a single magnetic skyrmion observed by direct time-resolved X-ray microscopy
Woo, S., Song, K. M., Zhang, X., Ezawa, M., Zhou, Y., Liu, X., Weigand, M., Finizio, S., Raabe, J., Park, M.-C., Lee, K.-Y., Choi, J. W., Min, B.-C., Koo, H. C., Chang, J.
Nature Electronics, 1(5):288-296, Springer Nature, London, 2018 (article)
##### Magnetic skyrmion as a nonlinear resistive element: A potential building block for reservoir computing
Prychynenko, D., Sitte, M., Litzius, K., Krüger, B., Bourianoff, G., Kläui, M., Sinova, J., Everschor-Sitte, K.
Physical Review Applied, 9(1), American Physical Society, College Park, MD, 2018 (article)
##### Tunable geometrical frustration in magnonic vortex crystals
Behncke, C., Adolff, C. F., Wintz, S., Hänze, M., Schulte, B., Weigand, M., Finizio, S., Raabe, J., Meier, G.
Scientific Reports, 8, Nature Publishing Group, London, UK, 2018 (article)
#### 2010
##### Similarities in resting state and feature-driven activity: Non-parametric evaluation of human fMRI
Shelton, J., Blaschko, M., Gretton, A., Müller, J., Fischer, E., Bartels, A.
NIPS Workshop on Learning and Planning from Batch Time Series Data, December 2010 (poster)
##### Causal relationships between frequency bands of extracellular signals in visual cortex revealed by an information theoretic analysis
Besserve, M., Schölkopf, B., Logothetis, N., Panzeri, S.
Journal of Computational Neuroscience, 29(3):547-566, December 2010 (article)
##### Tackling Box-Constrained Optimization via a New Projected Quasi-Newton Approach
Kim, D., Sra, S., Dhillon, I.
SIAM Journal on Scientific Computing, 32(6):3548-3563, December 2010 (article)
Abstract
Numerous scientific applications across a variety of fields depend on box-constrained convex optimization. Box-constrained problems therefore continue to attract research interest. We address box-constrained (strictly convex) problems by deriving two new quasi-Newton algorithms. Our algorithms are positioned between the projected-gradient [J. B. Rosen, J. SIAM, 8 (1960), pp. 181–217] and projected-Newton [D. P. Bertsekas, SIAM J. Control Optim., 20 (1982), pp. 221–246] methods. We also prove their convergence under a simple Armijo step-size rule. We provide experimental results for two particular box-constrained problems: nonnegative least squares (NNLS), and nonnegative Kullback–Leibler (NNKL) minimization. For both NNLS and NNKL our algorithms perform competitively as compared to well-established methods on medium-sized problems; for larger problems our approach frequently outperforms the competition.
##### Algorithmen zum Automatischen Erlernen von Motorfähigkeiten [Algorithms for the Automatic Learning of Motor Skills]
at - Automatisierungstechnik, 58(12):688-694, December 2010 (article)
Abstract
Robot learning methods which allow autonomous robots to adapt to novel situations have been a long standing vision of robotics, artificial intelligence, and cognitive sciences. However, to date, learning techniques have yet to fulfill this promise, as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics. Where scaling was achieved at all, it was usually only in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach to policy learning, with the goal of an application to motor skill refinement, in order to get one step closer towards human-like performance. For doing so, we study two major components of such an approach: firstly, policy learning algorithms which can be applied in the general setting of motor skill learning, and, secondly, a theoretically well-founded general approach to representing the required control structures for task representation and execution.
##### PAC-Bayesian Analysis of Co-clustering and Beyond
Seldin, Y., Tishby, N.
Journal of Machine Learning Research, 11, pages: 3595-3646, December 2010 (article)
##### Augmentation of fMRI Data Analysis using Resting State Activity and Semi-supervised Canonical Correlation Analysis
Shelton, JA., Blaschko, MB., Bartels, A.
NIPS Women in Machine Learning Workshop (WiML), December 2010 (poster)
Abstract
Resting state activity is brain activation that arises in the absence of any task, and is usually measured in awake subjects during prolonged fMRI scanning sessions where the only instruction given is to close the eyes and do nothing. It has been recognized in recent years that resting state activity is implicated in a wide variety of brain function. While certain networks of brain areas have different levels of activation at rest and during a task, there is nevertheless significant similarity between activations in the two cases. This suggests that recordings of resting state activity can be used as a source of unlabeled data to augment kernel canonical correlation analysis (KCCA) in a semisupervised setting. We evaluate this setting empirically yielding three main results: (i) KCCA tends to be improved by the use of Laplacian regularization even when no additional unlabeled data are available, (ii) resting state data seem to have a similar marginal distribution to that recorded during the execution of a visual processing task implying largely similar types of activation, and (iii) this source of information can be broadly exploited to improve the robustness of empirical inference in fMRI studies, an inherently data poor domain.
##### Gaussian Processes for Machine Learning (GPML) Toolbox
Rasmussen, C., Nickisch, H.
Journal of Machine Learning Research, 11, pages: 3011-3015, November 2010 (article)
Abstract
The GPML toolbox provides a wide range of functionality for Gaussian process (GP) inference and prediction. GPs are specified by mean and covariance functions; we offer a library of simple mean and covariance functions and mechanisms to compose more complex ones. Several likelihood functions are supported including Gaussian and heavy-tailed for regression as well as others suitable for classification. Finally, a range of inference methods is provided, including exact and variational inference, Expectation Propagation, and Laplace's method dealing with non-Gaussian likelihoods and FITC for dealing with large regression tasks.
##### Cryo-EM structure and rRNA model of a translating eukaryotic 80S ribosome at 5.5-Å resolution
Armache, J-P., Jarasch, A., Anger, AM., Villa, E., Becker, T., Bhushan, S., Jossinet, F., Habeck, M., Dindar, G., Franckenberg, S., Marquez, V., Mielke, T., Thomm, M., Berninghausen, O., Beatrix, B., Söding, J., Westhof, E., Wilson, DN., Beckmann, R.
Proceedings of the National Academy of Sciences of the United States of America, 107(46):19748-19753, November 2010 (article)
Abstract
Protein biosynthesis, the translation of the genetic code into polypeptides, occurs on ribonucleoprotein particles called ribosomes. Although X-ray structures of bacterial ribosomes are available, high-resolution structures of eukaryotic 80S ribosomes are lacking. Using cryoelectron microscopy and single-particle reconstruction, we have determined the structure of a translating plant (Triticum aestivum) 80S ribosome at 5.5-Å resolution. This map, together with a 6.1-Å map of a Saccharomyces cerevisiae 80S ribosome, has enabled us to model ∼98% of the rRNA. Accurate assignment of the rRNA expansion segments (ES) and variable regions has revealed unique ES–ES and r-protein–ES interactions, providing insight into the structure and evolution of the eukaryotic ribosome.
##### Policy Gradient Methods
Scholarpedia, 5(11):3698, November 2010 (article)
Abstract
Policy gradient methods are a type of reinforcement learning techniques that rely upon optimizing parametrized policies with respect to the expected return (long-term cumulative reward) by gradient descent. They do not suffer from many of the problems that have been marring traditional reinforcement learning approaches such as the lack of guarantees of a value function, the intractability problem resulting from uncertain state information and the complexity arising from continuous states & actions.
##### High frequency phase-spike synchronization of extracellular signals modulates causal interactions in monkey primary visual cortex
Besserve, M., Murayama, Y., Schölkopf, B., Logothetis, N., Panzeri, S.
40(616.2), 40th Annual Meeting of the Society for Neuroscience (Neuroscience), November 2010 (poster)
Abstract
Rhythms in the gamma band (30-100 Hz) are observed in the mammalian brain with a large variety of functional correlates. Nevertheless, their functional role is still debated. One way to disentangle this issue is to go beyond the usual correlation analysis and apply causality measures that quantify the directed interactions between the gamma rhythms and other aspects of neural activity. These measures can be further compared with other aspects of neurophysiological signals to find markers of neural interactions. In a recent study, we analyzed extracellular recordings in the primary visual cortex of 4 anesthetized macaques during the presentation of movie stimuli using a causality measure named Transfer Entropy. We found causal interactions between high frequency gamma rhythms (60-100 Hz) recorded in different electrodes, involving in particular their phase, and between the gamma phase and spiking activity quantified by the instantaneous envelope of the MUA band (1-3 kHz). Here, we further investigate in the same dataset the meaning of these phase-MUA and phase-phase causal interactions by studying the distribution of phases at multiple recording sites at lags around the occurrence of spiking events. First, we found a sharpening of the gamma phase distribution at one electrode when spikes occur at another recording site. This phenomenon appeared as a form of phase-spike synchronization and was quantified by an information theoretic measure. We found this measure correlates significantly with phase-MUA causal interactions. Additionally, we quantified in a similar way the interplay between spiking and the phase difference between two recording sites (reflecting the well-known concept of phase synchronization). We found that, depending on the pair of recording sites, spiking can correlate either with a phase synchronization or with a desynchronization with respect to the baseline.
This effect correlates very well with the phase-phase causality measure. These results provide evidence for high frequency phase-spike synchronization reflecting communication between distant neural populations in V1. Conversely, both phase synchronization and desynchronization may favor neural communication between recording sites. This new result, which contrasts with current hypotheses on the role of phase synchronization, could be interpreted as the presence of inhibitory interactions that are suppressed by desynchronization. Finally, our findings give new insights into the role of gamma rhythms in regulating local computation in the visual cortex.
##### Localization of eukaryote-specific ribosomal proteins in a 5.5-Å cryo-EM map of the 80S eukaryotic ribosome
Armache, J-P., Jarasch, A., Anger, AM., Villa, E., Becker, T., Bhushan, S., Jossinet, F., Habeck, M., Dindar, G., Franckenberg, S., Marquez, V., Mielke, T., Thomm, M., Berninghausen, O., Beatrix, B., Söding, J., Westhof, E., Wilson, DN., Beckmann, R.
Proceedings of the National Academy of Sciences of the United States of America, 107(46):19754-19759, November 2010 (article)
Abstract
Protein synthesis in all living organisms occurs on ribonucleoprotein particles, called ribosomes. Despite the universality of this process, eukaryotic ribosomes are significantly larger in size than their bacterial counterparts due in part to the presence of 80 r proteins rather than 54 in bacteria. Using cryoelectron microscopy reconstructions of a translating plant (Triticum aestivum) 80S ribosome at 5.5-Å resolution, together with a 6.1-Å map of a translating Saccharomyces cerevisiae 80S ribosome, we have localized and modeled 74/80 (92.5%) of the ribosomal proteins, encompassing 12 archaeal/eukaryote-specific small subunit proteins as well as the complete complement of the ribosomal proteins of the eukaryotic large subunit. Near-complete atomic models of the 80S ribosome provide insights into the structure, function, and evolution of the eukaryotic translational apparatus.
##### Attenuation Correction for Whole Body PET/MR: Quantitative Evaluation and Lung Attenuation Estimation with Consistency Information
Bezrukov, I., Hofmann, M., Aschoff, P., Beyer, T., Mantlik, F., Pichler, B., Schölkopf, B.
2010(M13-122), 2010 Nuclear Science Symposium and Medical Imaging Conference (NSS-MIC), November 2010 (poster)
##### PET/MRI: Observation of Non-Isotropic Positron Distribution in High Magnetic Fields and Its Diagnostic Impact
Kolb, A., Hofmann, M., Sauter, A., Liu, C., Schölkopf, B., Pichler, B.
2010 Nuclear Science Symposium and Medical Imaging Conference, 2010(M18-119):1, November 2010 (poster)
##### Spatio-Spectral Remote Sensing Image Classification With Graph Kernels
IEEE Geoscience and Remote Sensing Letters, 7(4):741-745, October 2010 (article)
Abstract
This letter presents a graph kernel for spatio-spectral remote sensing image classification with support vector machines (SVMs). The method considers higher order relations in the neighborhood (beyond pairwise spatial relations) to iteratively compute a kernel matrix for SVM learning. The proposed kernel is easy to compute and constitutes a powerful alternative to existing approaches. The capabilities of the method are illustrated in several multi- and hyperspectral remote sensing images acquired over both urban and agricultural areas.
##### Causal Inference Using the Algorithmic Markov Condition
IEEE Transactions on Information Theory, 56(10):5168-5194, October 2010 (article)
Abstract
Inferring the causal structure that links $n$ observables is usually based upon detecting statistical dependences and choosing simple graphs that make the joint measure Markovian. Here we argue why causal inference is also possible when the sample size is one. We develop a theory how to generate causal graphs explaining similarities between single objects. To this end, we replace the notion of conditional stochastic independence in the causal Markov condition with the vanishing of conditional algorithmic mutual information and describe the corresponding causal inference rules. We explain why a consistent reformulation of causal inference in terms of algorithmic complexity implies a new inference principle that takes into account also the complexity of conditional probability densities, making it possible to select among Markov equivalent causal graphs. This insight provides a theoretical foundation of a heuristic principle proposed in earlier work. We also sketch some ideas on how to replace Kolmogorov complexity with decidable complexity criteria. This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests that are based on implicit or explicit assumptions on the underlying distribution.
http://mathhelpforum.com/calculus/86021-integral-calculus-question.html

# Math Help - Integral calculus question.
1. ## Integral calculus question.
Let S be the graph of $f(x,y)=2xy-y$ over the square $[0,2] \times [0,2]$, i.e. $S:=\{(x,y,2xy-y)| 0 \leq x \leq 2, 0 \leq y \leq 2 \}$.
Fair enough!
a). Find an equation of the tangent plane to S at $(1,1,-1).$
$\bigtriangledown f|_{(1,1,-1)}= (2yi+(2x-1)j) |_{(1,1,-1)}=2i+j$
Hence the tangent plane is $2(x-1)+(y-1)=0 \Rightarrow \ 2x+y=3$
Well that was nice! The rest of the question is tricky though!
b). Write down a normal to S at $(x,y,2xy-y)$.
This is where I start to have trouble.
The answer I have so far is $x_1=x+2t$, $y_1=y+t$ and $z_1=2xy-y$.
Where $t \in \mathbb{R}$ and $x_1,y_1$ and $z_1$ are the new x,y and z values respectively.
I've been trying to get $z_1$ in terms of $x_1$ and $y_1$, but I can't eliminate t!
c). At which point is the tangent plane parallel to the x-y plane?
Call the tangent plane $g(x,y)=2x+y-3$.
Hence $\bigtriangledown g(x,y)=2i+j$.
This is where I start to get a little confused. I was hoping to set $\bigtriangledown g(x,y)$ to 0 since the x-y plane has a gradient of 0. However, $\bigtriangledown g(x,y)$ is a constant function so can never equal 0!
d). Evaluate $\int \int_S \frac{z}{\sqrt{1+2x^2+2y^2-2x}}~ dA$
The problem here is that I appear to have three variables: $x,y$ and $z$. This seems to contradict the fact that this is a double integral.
Is z a variable or a constant? If z is a variable then is $z=2xy-y$?
I would appreciate help on any of the parts to this question.
If you have any ideas or explanations to any parts, feel free to post!!
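For part (d), one concrete reading can be tested numerically. This is an editorial sketch, not part of the original thread: it assumes $z$ stands for the surface value $2xy-y$ and $dA$ for the surface-area element $\sqrt{1+f_x^2+f_y^2}\,dx\,dy$; the function names are mine.

```python
import math

def f(x, y):
    # The surface z = f(x, y) = 2xy - y
    return 2 * x * y - y

def fx(x, y):
    return 2 * y          # partial derivative f_x

def fy(x, y):
    return 2 * x - 1      # partial derivative f_y

def surface_integral(n=200):
    """Midpoint-rule approximation of the part (d) integral over [0,2]x[0,2],
    reading z as f(x, y) and dA as the surface-area element."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            area = math.sqrt(1 + fx(x, y) ** 2 + fy(x, y) ** 2)
            total += f(x, y) / math.sqrt(1 + 2 * x * x + 2 * y * y - 2 * x) * area
    return total * h * h

print(surface_integral())
```

Under this reading, $1+f_x^2+f_y^2 = 2(1+2x^2+2y^2-2x)$, so the two radicals differ only by a factor $\sqrt{2}$ and the integral reduces to $\sqrt{2}\int_0^2\!\int_0^2 (2xy-y)\,dx\,dy = 4\sqrt{2} \approx 5.657$.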
2. Originally Posted by Showcase_22
If $z = f(x,y)$ then the normal of the tangent plane is given by ${\bf n} = <f_x, f_y, -1>$ where the derivatives are evaluated at some point $P$.
3. ${\bf n} = <f_x, f_y, -1>$
What does this mean?
4. Originally Posted by Showcase_22
You mean $< \;\cdot \; , \;\cdot \;, \;\cdot \;>$ - It's a vector.
5. Ah, I thought it spanned something (that's my excuse anyway.......).
Here $P=(x,y,2xy-y)$.
So ${\bf n}=< f_x,f_y,-1>=$? (obtained by making $x=t$ and rearranging $2x+y=3$).
What are the precise meanings of $f_x$ and $f_y$?
6. Originally Posted by Showcase_22
They are partial derivatives. So ${\bf n}=< f_x,f_y,-1>=<2y,2x-1,-1>$ and at $P(1,1,-1)$ ${\bf n}=<2,1,-1>$ so the tangent plane is
$2(x-1) + (y-1) - (z+1) = 0$
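The normal computed above can be sanity-checked numerically (an editorial sketch in plain Python, not part of the original thread; names are mine): central finite differences approximate the partials of $f(x,y)=2xy-y$ at $(1,1)$, to be compared with $<2y, 2x-1, -1>$.

```python
def f(x, y):
    # The surface z = f(x, y) = 2xy - y
    return 2 * x * y - y

def grad(x, y, h=1e-6):
    # Central finite differences for f_x and f_y.
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

fx, fy = grad(1.0, 1.0)
normal = (fx, fy, -1.0)     # <f_x, f_y, -1> evaluated at the point
print(normal)               # approximately (2.0, 1.0, -1.0)

# The plane 2(x-1) + (y-1) - (z+1) = 0 rearranges to 2x + y - z = 4:
for (x, y, z) in [(1, 1, -1), (0, 0, -4), (2, 0, 0)]:
    assert abs(2 * (x - 1) + (y - 1) - (z + 1) - (2 * x + y - z - 4)) < 1e-12
```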
7. $2(x-1) + (y-1) - (z+1) = 0$
$2x+y-z=4$
For c), I need to find when this is parallel to the x-y plane.
The x-y plane has an equation $z=0$.
$z=2x+y-4 \Rightarrow \ x=2, y=0$ so the point where the tangent plane is parallel to the x-y plane is $(2,0,0)$.
Is this right?
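For part (c), one condition worth checking is that the tangent plane is horizontal exactly where both partial derivatives of $f$ vanish, i.e. $f_x = 2y = 0$ and $f_y = 2x-1 = 0$ (an editorial sketch in plain Python, not part of the original thread; the reader may want to compare the resulting point with the answer proposed above).

```python
def f(x, y):
    # The surface z = f(x, y) = 2xy - y
    return 2 * x * y - y

# Horizontal tangent plane <=> f_x = 2y = 0 and f_y = 2x - 1 = 0.
x0, y0 = 0.5, 0.0           # the unique solution of the two equations
z0 = f(x0, y0)
print((x0, y0, z0))         # the candidate point on the surface

# Cross-check with central finite differences at (x0, y0):
h = 1e-6
fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
assert abs(fx) < 1e-9 and abs(fy) < 1e-9
```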
https://medium.com/quantum-physics/bertrands-paradox-9a6789dcf02e

## Solving the "hard problem"
Joseph Louis François Bertrand (1822–1900) was a French mathematician who worked in the fields of number theory, differential geometry, probability theory, economics and thermodynamics.
In his influential book on probability theory, Calcul des probabilités (1889), he introduced a problem challenging the classical interpretation of probability theory, today known as Bertrand's paradox. In Bertrand's own words, the problem is the following:
> We draw at random a chord onto a circle. What is the probability that it is longer than the side of the inscribed equilateral triangle?
Bertrand gave three different answers to his question, yielding three different values for the sought probability. Since all three answers appear to rest on valid reasoning, Bertrand's problem (and similar problems) was subsequently qualified by Poincaré as a paradox.
### The first solution
The first proposed answer consists in choosing an arbitrary point on the circle, considered as one of the vertices of an inscribed equilateral triangle. This point, which is one of the two points of intersection of the chord with the circle, is kept fixed, whereas the second point is varied (so that the chord swings like a sort of pendulum). One then observes that, as the second point runs over the circle, the chord rotates through a total angle of 180°, but that only the chords lying within the arc subtended by the 60° angle at the vertex satisfy the condition of being longer than the side of the inscribed equilateral triangle. Thus, one finds that the probability is:
P = 60° divided by 180° = 1/3.
### The second solution
The second proposed solution consists in first choosing an arbitrary direction, and then considering chords which are all parallel to that direction. Then, moving the chords across the circle, one observes that those intersecting the perpendicular diameter within its central segment, whose length is half the diameter of the circle, satisfy the condition of being longer than the side of the inscribed equilateral triangle. Thus, one finds this time that the probability is:
P = half-diameter divided by diameter = 1/2.
### The third solution
The third solution proposed by Bertrand consists in choosing an arbitrary point inside the circle, considered as the middle point of the chord. Then, moving this point within the entire area of the circle, one observes that all the chords having their middle point within an internal smaller circle, whose radius is one half the radius of the big circle (and which is the incircle of the equilateral triangle), satisfy the condition of being longer than the side of the inscribed equilateral triangle. Since the area of the internal circle is one fourth of the area of the big circle, this time the probability is:
P = area-small-circle divided by area-big-circle = 1/4.
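The three randomization procedures above are easy to compare numerically. Here is a minimal Monte Carlo sketch (mine, not from the article; function names are illustrative), where a chord counts as "long" when it exceeds the side $\sqrt{3}\,R$ of the inscribed equilateral triangle:

```python
import math
import random

R = 1.0                       # circle radius
SIDE = math.sqrt(3) * R       # side of the inscribed equilateral triangle

def chord_random_endpoints():
    """Method 1: pick two endpoints uniformly on the circumference."""
    a = random.uniform(0, 2 * math.pi)
    b = random.uniform(0, 2 * math.pi)
    return 2 * R * abs(math.sin((a - b) / 2))

def chord_random_radius():
    """Method 2: pick a uniform distance from the center along a fixed
    radius; the chord is perpendicular to that radius."""
    d = random.uniform(0, R)
    return 2 * math.sqrt(R * R - d * d)

def chord_random_midpoint():
    """Method 3: pick the chord's midpoint uniformly inside the disk
    (rejection sampling on the bounding square)."""
    while True:
        x = random.uniform(-R, R)
        y = random.uniform(-R, R)
        if x * x + y * y <= R * R:
            return 2 * math.sqrt(R * R - (x * x + y * y))

def estimate(chord, n=100_000):
    """Fraction of sampled chords longer than the triangle's side."""
    return sum(chord() > SIDE for _ in range(n)) / n

for name, sampler in [("endpoints", chord_random_endpoints),
                      ("radius", chord_random_radius),
                      ("midpoint", chord_random_midpoint)]:
    print(name, round(estimate(sampler), 2))   # tends to 1/3, 1/2, 1/4
```

The three estimates converge to the three incompatible answers, which is precisely the point of the paradox: "uniformly random chord" is underspecified until the sampling procedure is fixed.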
According to a recent analysis of philosopher Nicholas Shackel, the current situation is that after more than a century, the paradox remains unresolved, and continues to stand in refutation of the so-called principle of indifference [N. Shackel, "Bertrand's Paradox and the Principle of Indifference," Philosophy of Science, 74, April 2007, pp. 150–175].
Even more pessimistically, philosopher Darrell P. Rowbottom recently affirmed that Bertrand’s proposed solutions to his own question, which generate his chord paradox, are all inapplicable, so that there is no solace for the defenders of the principle of indifference, as it emerges that the paradox is much harder to solve than previously anticipated. [D. P. Rowbottom, “Bertrand’s Paradox Revisited: Why Bertrand’s ‘Solutions’ Are All Inapplicable,” Philosophia Mathematica (III) Vol. 21 No. 1, 2012].
Let me recall that the principle of indifference, originally formulated by Jakob Bernoulli as the principle of insufficient reason, and later on by John Maynard Keynes (who strenuously opposed the principle, and devoted an entire chapter of his book to an attempt to refute it), tells us that [Keynes, John Maynard, A Treatise on Probability, London: Macmillan ([1921] 1963)]:
> If there is no known reason for predicating of our subject one rather than another of several alternatives, then relatively to such knowledge the assertions of each of these alternatives have an equal probability.
This principle is usually assumed to incorporate a necessary truth about the relation between “possibilities” and “probabilities”:
> Possibilities of which we have equal (objective) ignorance have equal probabilities.
And it is generally assumed that its application is sufficient for solving probability problems and finding unique solutions for them!
But this belief is precisely what has been undermined by Bertrand, with his three different “solutions”, all three apparently based on that fundamental principle.
Now, recently, together with Diederik Aerts, we proposed what we think is a convincing solution to this old and important problem, lying at the foundations of probability theory. The solution came to us as a consequence of a mathematical problem we were able to solve with respect to the measurement problem of quantum theory. In fact, Bertrand's paradox stood in the way of the solution we sought, so that by solving the latter we also obtained what we think is a convincing solution to the former.
This, however, should not come as a surprise. The intimate connection between fundamental problems of probability theory, like Bertrand’s paradox, and of quantum mechanics, like the measurement problem, is in fact not coincidental. Both disciplines deal with the description of systems subjected to specific experimental actions, according to protocols which incorporate the presence of irreducible fluctuations, so that the outcomes of these actions cannot be predicted in advance with certainty, not even in principle.
In that sense, we can certainly affirm that the founding fathers of probability theory, without knowing it, were actually quantum physicists ante litteram!
### Solving the hard problem of Bertrand's paradox
by Diederik Aerts and Massimiliano Sassoli de Bianchi
J. Math. Phys. 55, 083503 (2014); http://dx.doi.org/10.1063/1.4890291
Abstract: Bertrand’s paradox is a famous problem of probability theory, pointing to a possible inconsistency in Laplace’s principle of insufficient reason. In this article we show that Bertrand’s paradox contains two different problems: an “easy” problem and a “hard” problem. The easy problem can be solved by formulating Bertrand’s question in sufficiently precise terms, so allowing for a non ambiguous modelization of the entity subjected to the randomization. We then show that once the easy problem is settled, also the hard problem becomes solvable, provided Laplace’s principle of insufficient reason is applied not to the outcomes of the experiment, but to the different possible “ways of selecting” an interaction between the entity under investigation and that producing the randomization. This consists in evaluating a huge average over all possible “ways of selecting” an interaction, which we call a universal average. Following a strategy similar to that used in the definition of the Wiener measure, we calculate such universal average and therefore solve the hard problem of Bertrand’s paradox. The link between Bertrand’s problem of probability theory and the measurement problem of quantum mechanics is also briefly discussed. 
http://export.arxiv.org/list/hep-th/9709?show=320

# High Energy Physics - Theory
## Authors and titles for Sep 1997
[ total of 320 entries: 1-320 ]
[1]
Title: Seiberg-Witten Theory, Integrable Systems and D-branes
Authors: A.Marshakov
Comments: LaTeX, 11pp, 3 figs in tex-format requiring emlines2.sty; Based on the talks given at NATO Advanced Research Workshop on Theoretical Physics "New Developments in Quantum Field Theory", Zakopane, 14-20 June 1997 and IV International Conference "Conformal Field Theories and Integrable Models", Chernogolovka, 23-27 June 1997
Subjects: High Energy Physics - Theory (hep-th)
[2]
Title: The Manifestly Sl(2;Z)-covariant Superstring
Journal-ref: JHEP 9709:003,1997
Subjects: High Energy Physics - Theory (hep-th)
[3]
Title: Geometry of the BFV Theorem
Authors: K. Bering
Comments: 16 pages, LaTeX2e. Some signs corrected. To appear in J.Math.Phys
Journal-ref: J.Math.Phys. 39 (1998) 2507-2519
Subjects: High Energy Physics - Theory (hep-th)
[4]
Title: String Amplitudes and N=2, d=4 Prepotential in Heterotic K3 x T^2 Compactifications
Comments: 28 TeX pages, uses harvmac, Final Version to appear in NP B
Journal-ref: Nucl.Phys.B514:135-160,1998
Subjects: High Energy Physics - Theory (hep-th)
[5]
Title: Space-time Uncertainty Principle from Breakdown of Topological Symmetry
Authors: I.Oda
Journal-ref: Mod.Phys.Lett. A13 (1998) 203-210
Subjects: High Energy Physics - Theory (hep-th)
[6]
Title: Membrane, Four-Brane and Dual Coordinates in the M(atrix) Theory Compactified on Tori
Authors: Shijong Ryang
Comments: 9 pages, latex, no figures
Journal-ref: Mod.Phys.Lett.A13:1463-1472,1998
Subjects: High Energy Physics - Theory (hep-th)
[7]
Title: Fine Structure of Matrix Darboux-Toda Integrable Mapping
Journal-ref: Phys.Lett. A242 (1998) 31-35
Subjects: High Energy Physics - Theory (hep-th); Exactly Solvable and Integrable Systems (nlin.SI)
[8]
Title: BPS Solitons and Killing Spinors in Three Dimensional N=2 Supergravity
Comments: 7 pages, LaTeX. Short talk given at the meeting ``Trends in Theoretical Physics'', April 28th-May 6th, 1997, La Plata, Argentina
Subjects: High Energy Physics - Theory (hep-th)
[9]
Title: Can (noncommutative) geometry accommodate leptoquarks?
Authors: Mario Paschke, Florian Scheck, Andrzej Sitarz (Univ. Mainz)
Comments: LaTeX2e, uses amsmath, amsthm, amsfonts
Journal-ref: Phys. Rev. D 59, 035003 (1999)
Subjects: High Energy Physics - Theory (hep-th)
[10]
Title: On the M-Theory Approach to (Compactified) 5D Field Theories
Journal-ref: Phys.Lett.B415:127-134,1997
Subjects: High Energy Physics - Theory (hep-th)
[11]
Title: Interacting Quantum Fields on a Curved Background
Comments: 6 pages, latex, contribution to the Proceedings of the ICMP Brisbane 1997
Subjects: High Energy Physics - Theory (hep-th)
[12]
Title: Anomalies of the SO(32) five-brane and their cancellation
Journal-ref: Nucl.Phys. B512 (1998) 199-208
Subjects: High Energy Physics - Theory (hep-th)
[13]
Title: Exceptional groups from open strings
Journal-ref: Nucl.Phys.B518:151-172,1998
Subjects: High Energy Physics - Theory (hep-th)
[14]
Title: The Self-Dual String Soliton
Comments: 21 pages phyzzx. The discussion of the supersymmetry transformations and the fivebrane equations of motion has been significantly extended. Some references have been added. The results are the same as the previous version
Journal-ref: Nucl.Phys. B515 (1998) 203-216
Subjects: High Energy Physics - Theory (hep-th)
[15]
Title: E_8 x E_8 Small Instantons in Matrix Theory
Authors: David A. Lowe
Comments: 17 pages, TeX, harvmac, final version to appear Nucl. Phys. B
Journal-ref: Nucl.Phys. B519 (1998) 180-194
Subjects: High Energy Physics - Theory (hep-th)
[16]
Title: Free boson representation of DY_{\hbar}(gl_N)_k
Journal-ref: J.Math.Phys.39:2273-2289,1998
Subjects: High Energy Physics - Theory (hep-th); Quantum Algebra (math.QA)
[17]
Title: An SL(2, Z) Multiplet of Black Holes in $D = 4$ Type II Superstring Theory
Journal-ref: Phys.Lett.B421:185-195,1998
Subjects: High Energy Physics - Theory (hep-th)
[18]
Title: Observing Quantum Tunneling in Perturbation Series
Authors: Hiroshi Suzuki, Hirofumi Yasuta (Ibaraki University)
Comments: 5 pages, LaTeX with espcrc2 macro. Talk given at the 7th Asia Pacific Physics Conference, August 19--23, 1997, Peking, China
Subjects: High Energy Physics - Theory (hep-th)
[19]
Title: Multiloop $\Phi^3$ Amplitudes from Bosonic String Theory
Comments: 1+26 pages (Latex), 4 figures included (PicTex)
Journal-ref: Nucl.Phys. B515 (1998) 488-508
Subjects: High Energy Physics - Theory (hep-th)
[20]
Title: Faddeev-Jackiw Analysis of Topological Mass Generating Action
Authors: Chang-Yeong Lee (Sejong Univ.), Dong Won Lee (Kon-kuk Univ.)
Subjects: High Energy Physics - Theory (hep-th)
[21]
Title: Nonpolynomial gauge invariant interactions of 1-form and 2-form gauge potentials
Comments: 3 pages, revtex, no figures; title amended, comments on related works modified/added, ref. added; to appear in the Proceedings of the 31st International Symposium Ahrenshoop on the Theory of Elementary Particles, Buckow (Germany), September 2 - 6, 1997
Subjects: High Energy Physics - Theory (hep-th)
[22]
Title: Particle production in string cosmology models
Comments: 20 pages, no figures, latex, RevTex
Journal-ref: Phys.Rev. D57 (1998) 725-740
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
[23]
Title: Supersymmetry breaking in M-theory
Comments: 10 pages, latex + espcrc2.sty, no figures. Based on talks given at the 5th International Conference on Supersymmetries in Physics, SUSY 97, May 27-31, 1997, University of Pennsylvania, Philadelphia, PA; and, International Europhysics Conference on High Energy Physics, HEP 97, 19-26 August, Jerusalem, Israel
Journal-ref: Nucl.Phys.Proc.Suppl. 62 (1998) 312-320
Subjects: High Energy Physics - Theory (hep-th)
[24]
Title: Negative Dimensional Integration: "Lab Testing" at Two Loops
Comments: 10 pages, LaTeX2e, uses style jhep.cls (included)
Journal-ref: JHEP 9709 (1997) 002
Subjects: High Energy Physics - Theory (hep-th)
[25]
Title: Eleven Dimensional Superstring with New Supersymmetry and D=10 type 2A Green-Schwarz Superstring
Authors: A.A. Deriglazov
Comments: 11 pages, LaTex file, Subm. to Phys.Lett.B (1997)
Subjects: High Energy Physics - Theory (hep-th)
[26]
Title: Topological contents of 3D Seiberg-Witten theory
Authors: Boguslaw Broda (University of Lodz)
Comments: 8 pages, 7 Postscript figures, uses plenum and epsfig, talk at the NATO Advanced Research Workshop: "Recent Developments in Quantum Field Theory", June 14-20, 1997, Zakopane, Poland
Subjects: High Energy Physics - Theory (hep-th)
[27]
Title: Born-Infeld particles and Dirichlet p-branes
Authors: G. W. Gibbons
Comments: 40 pages Latex file, no figures
Journal-ref: Nucl.Phys.B514:603-639,1998
Subjects: High Energy Physics - Theory (hep-th)
[28]
Title: Non-integrable aspects of the multi-frequency Sine-Gordon model
Comments: 39 pages, latex, 10 figures
Journal-ref: Nucl.Phys. B516 (1998) 675-703
Subjects: High Energy Physics - Theory (hep-th); Condensed Matter (cond-mat)
[29]
Title: Gauge fields and interactions in matrix string theory
Authors: Thomas Wynter
Journal-ref: Phys.Lett.B415:349-357,1997
Subjects: High Energy Physics - Theory (hep-th)
[30]
Title: U-duality Covariant M-theory Cosmology
Comments: 18 pages, LATEX, 1 Postscript figure included
Journal-ref: Phys.Lett. B437 (1998) 291-302
Subjects: High Energy Physics - Theory (hep-th)
[31]
Title: Potential Topography and Mass Generation
Comments: 15 pages, latex2e, three figures, uses plenum.sty. Invited talk by P. Orland at the NATO Advanced Workshop ``New Developments in Quantum Field Theory'', Zakopane, Poland, June 14-19, 1997, proceedings to be published by Plenum Press
Subjects: High Energy Physics - Theory (hep-th)
[32]
Title: Universal Fluctuations in Dirac Spectra
Comments: 30 pages, 6 figures, Latex, Invited talk at the "Nato Advanced Research Workshop on Theoretical Physics: New Development in Quantum Field Theory", Zakopane, 1997
Subjects: High Energy Physics - Theory (hep-th)
[33]
Title: Wrapped Branes and Supersymmetry
Comments: 18 pages, Latex with Revtex, minor corrections and references added, version to appear in Nuclear Physics B
Journal-ref: Nucl.Phys.B519:141-158,1998
Subjects: High Energy Physics - Theory (hep-th)
[34]
Title: Expectation values of local fields in Bullough-Dodd model and integrable perturbed conformal field theories
Comments: 27 pages, harvmac.tex, one epsf figure
Journal-ref: Nucl.Phys. B516 (1998) 652-674
Subjects: High Energy Physics - Theory (hep-th); Condensed Matter (cond-mat); Quantum Algebra (math.QA)
[35]
Title: Gauge Invariance and Effective Actions in D=3 at Finite Temperature
Journal-ref: Phys.Rev. D57 (1998) 1171-1177
Subjects: High Energy Physics - Theory (hep-th)
[36]
Title: Representation of Quantum Mechanical Resonances in the Lax-Phillips Hilbert Space
Comments: Plain TeX, 26 pages. Minor revisions
Journal-ref: J.Math.Phys. 41 (2000) 8050-8071
Subjects: High Energy Physics - Theory (hep-th)
[37]
Title: Regularised Supermembrane Theory and Static Configurations of M-Theory
Comments: Revised version to appear in Euro. Phys. Jour. C
Journal-ref: Eur.Phys.J.C8:507-511,1999
Subjects: High Energy Physics - Theory (hep-th)
[38]
Title: Testing M(atrix)-Theory at Two Loops
Authors: Katrin Becker
Comments: 7 pages, latex, 1 figure. Lecture given at STRINGS97
Journal-ref: Nucl.Phys.Proc.Suppl. 68 (1998) 165-171
Subjects: High Energy Physics - Theory (hep-th)
[39]
Title: Equivalence of the sine-Gordon and massive Thirring models at finite temperature
Journal-ref: Phys.Lett. B419 (1998) 296-302
Subjects: High Energy Physics - Theory (hep-th)
[40]
Title: New Type of Vector Gauge Theory from Noncommutative Geometry
Authors: Chang-Yeong Lee (Sejong Univ.)
Comments: 12 pages, LaTeX file, to appear in Phys. Lett. B
Journal-ref: Phys.Lett. B427 (1998) 77-84
Subjects: High Energy Physics - Theory (hep-th)
[41]
Title: Entropy of very low energy localized states
Authors: Ken D. Olum
Comments: 27 pages, RevTeX, 7 postscript figures with epsf
Journal-ref: Phys.Rev. D57 (1998) 2486-2499
Subjects: High Energy Physics - Theory (hep-th)
[42]
Title: Matrix Membranes and Integrability
Comments: 14 pages, Latex, uses lamuphys.sty; talk by 1st author at the UIC "Supersymmetry and Integrable Systems" Workshop, Chicago, June 12-14, 1997; proceedings in Springer Lecture Notes in Physics, H Aratyn et al (eds)
Subjects: High Energy Physics - Theory (hep-th)
[43]
Title: Supersymmetry breaking in M-theory and quantization rules
Authors: Emilian Dudas
Comments: 16 pages, LaTex, no figures
Journal-ref: Phys.Lett. B416 (1998) 309-318
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[44]
Title: SO(9,1) invariant matrix formulation of supermembrane
Journal-ref: Nucl.Phys. B510 (1998) 175-198
Subjects: High Energy Physics - Theory (hep-th)
[45]
Title: High-energy quark-quark scattering and the eikonal approximation
Comments: Talk given at the High Energy Conference on ``Quantum Chromodynamics'', Montpellier (France), July 3rd-9th 1997 (QCD 97); 6 pages, LaTeX file, uses espcrc2.sty
Journal-ref: Nucl.Phys.Proc.Suppl. 64 (1998) 191-196
Subjects: High Energy Physics - Theory (hep-th)
[46]
Title: Schwinger-Dyson Equation, Area Law and Chiral Symmetry in QCD
Authors: G. M. Prosperi (Dipartimento di Fisica dell'Universita, Milano INFN)
Comments: 29 pages, RevTex file, Phys. Rev. macro
Subjects: High Energy Physics - Theory (hep-th)
[47]
Title: Deformation of Super Virasoro Algebra in Noncommutative Quantum Superspace
Journal-ref: Phys.Lett. B415 (1997) 170-174
Subjects: High Energy Physics - Theory (hep-th)
[48]
Title: T-duality and HKT manifolds
Authors: A. Opfermann
Comments: 14 pages, latex2e, acknowledgement added, version to appear in Phys. Lett. B
Journal-ref: Phys.Lett.B416:101-107,1998
Subjects: High Energy Physics - Theory (hep-th)
[49]
Title: M-Theory Model-Building and Proton Stability
Journal-ref: Phys.Lett.B419:123-131,1998
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[50]
Title: Properties of Naked Black Holes
Comments: 23 pages, no figures, latex
Journal-ref: Phys.Rev. D57 (1998) 1098-1107
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
[51]
Title: Physical fields and Clifford algebras
Subjects: High Energy Physics - Theory (hep-th)
[52]
Title: Path Integral for Relativistic Dionium System
Authors: De-Hone Lin
Comments: 14 pages, standard LaTeX, some descriptions are presented with more accuracy
Subjects: High Energy Physics - Theory (hep-th)
[53]
Title: Calogero-Moser Systems in SU(N) Seiberg-Witten Theory
Comments: 45 pages, Tex, no figures
Journal-ref: Nucl.Phys. B513 (1998) 405-444
Subjects: High Energy Physics - Theory (hep-th)
[54]
Title: Mixed Boundary Conditions and Brane-String Bound States
Journal-ref: Nucl.Phys.B526:278-294,1998
Subjects: High Energy Physics - Theory (hep-th)
[55]
Title: Two-Dimensional Reduced Theory and General Static Solution for Uncharged Black p-Branes
Authors: Marco Cavaglia
Comments: 11 pages, plain LaTex, accepted for publication in Phys. Lett. B
Journal-ref: Phys.Lett. B413 (1997) 287-292
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
[56]
Title: BPS States in F-Theory
Subjects: High Energy Physics - Theory (hep-th)
[57]
Title: The Second Virial Coefficient of Spin-1/2 Anyon Systems
Comments: 12 pages, 3 figures included
Subjects: High Energy Physics - Theory (hep-th)
[58]
Title: Heterotic/Type-I Duality in D<10 Dimensions, Threshold Corrections and D-Instantons
Authors: E. Kiritsis, N.A. Obers (CERN)
Comments: Latex, 67 pages, 1 figure
Journal-ref: JHEP 9710:004,1997
Subjects: High Energy Physics - Theory (hep-th)
[59]
Title: Relativistic Particle in the Liouville Field
Comments: 17 pages, Latex, no figures
Journal-ref: Theor.Math.Phys. 118 (1999) 183-196; Teor.Mat.Fiz. 118 (1999) 229-247
Subjects: High Energy Physics - Theory (hep-th)
[60]
Title: Renormalization Group with Condensate
Comments: Talk presented at the Eotvos Conference on "Strong and Electroweak Matter '97", Eger, Hungary, 20 pages, LaTex
Subjects: High Energy Physics - Theory (hep-th)
[61]
Title: BPS States in N=3 Superstrings
Comments: 29 pages, LaTeX, uses fleqn.sty and cite.sty
Journal-ref: Nucl.Phys. B511 (1998) 216-242
Subjects: High Energy Physics - Theory (hep-th)
[62]
Title: Introduction to Superstring Theory
Authors: E. Kiritsis (CERN)
Comments: 244 pages, Latex, 22 figures, uuencoded gzipped size of source file+figures ~ .6 Mb. Minor errors and misprints corrected. Version to be published in book form by Leuven University Press
Subjects: High Energy Physics - Theory (hep-th)
[63]
Title: A note on supersymmetric D-brane dynamics
Comments: LaTex file, 12 pages, no figures, some corrections in last section and references added; version to appear in Physics Letters B
Journal-ref: Phys.Lett. B417 (1998) 233-239
Subjects: High Energy Physics - Theory (hep-th)
[64]
Title: Universality of Quantum Entropy for Extreme Black Holes
Comments: 18 pages, latex, no figures, minor changes, to appear in Nucl. Phys. B
Journal-ref: Nucl.Phys. B523 (1998) 293-307
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
[65]
Title: Realization of D4-Branes at Angles in Super Yang-Mills Theory
Comments: 13 pages, Latex, discussion of matrix model energy is modified and references added
Journal-ref: Phys.Lett. B418 (1998) 70-76
Subjects: High Energy Physics - Theory (hep-th)
[66]
Title: Models for Chronology Selection
Journal-ref: Phys.Rev. D57 (1998) 2372-2380
Subjects: High Energy Physics - Theory (hep-th)
[67]
Title: Constraining differential renormalization in abelian gauge theories
Comments: 13 pages, LaTeX. Some equations corrected and a reference added. Complete ps paper also available at this http URL or this ftp URL
Journal-ref: Phys.Lett. B419 (1998) 263-271
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[68]
Title: Duality in osp(1|2) Conformal Field Theory and link invariants
Journal-ref: Int.J.Mod.Phys. A13 (1998) 2931-2978
Subjects: High Energy Physics - Theory (hep-th)
[69]
Title: Worldvolume Supersymmetry
Authors: Renata Kallosh
Journal-ref: Phys.Rev.D57:3214-3218,1998
Subjects: High Energy Physics - Theory (hep-th)
[70]
Title: Nahm's equations and root systems
Journal-ref: Czech.J.Phys. 47 (1997) 1101-1106
Subjects: High Energy Physics - Theory (hep-th); Exactly Solvable and Integrable Systems (nlin.SI)
[71]
Title: Non-Abelian Duality Based on Non-Semi-Simple Isometry Groups
Authors: Noureddine Mohammedi (Tours University, France)
Comments: 13 pages, Latex file, to appear in Phys. Lett. B
Journal-ref: Phys.Lett. B414 (1997) 104-110
Subjects: High Energy Physics - Theory (hep-th)
[72]
Title: Supersymmetry and the Multi-Instanton Measure II: From N=4 to N=0
Authors: N. Dorey (Swansea), T.J. Hollowood (Swansea), V.V. Khoze (Durham), M.P. Mattis (Los Alamos)
Journal-ref: Nucl.Phys. B519 (1998) 470-482
Subjects: High Energy Physics - Theory (hep-th)
[73]
Title: Gauge Symmetry and Integrable Models
Comments: 28 pages, no figures, latex
Subjects: High Energy Physics - Theory (hep-th); Exactly Solvable and Integrable Systems (nlin.SI)
[74]
Title: Finite Temperature Nonlocal Effective Action for Scalar Fields
Comments: 9 pages, LaTeX (title is changed)
Journal-ref: Class.Quant.Grav.15:L13-L19,1998
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph)
[75]
Title: Chiral solitons from dimensional reduction of Chern-Simons gauged non-linear Schrödinger model of FQHE: classical and quantum aspects
Comments: 39 page, RevTeX, 6 figures
Journal-ref: Nucl.Phys. B516 (1998) 467-498
Subjects: High Energy Physics - Theory (hep-th)
[76]
Title: Conjectured Z_2-Orbifold Constructions of Self-Dual Conformal Field Theories at Central Charge 24 - the Neighborhood Graph
Authors: P.S. Montague
Journal-ref: Lett.Math.Phys. 44 (1998) 105-120
Subjects: High Energy Physics - Theory (hep-th)
[77]
Title: Dynamical symmetry breaking in the external gravitational and constant magnetic fields
Comments: 23 pages, Latex, epic.sty and eepic.sty are used
Journal-ref: Int.J.Mod.Phys.A14:481-504,1999
Subjects: High Energy Physics - Theory (hep-th)
[78]
Title: Fractional-Spin Integrals of Motion for the Boundary Sine-Gordon Model at the Free Fermion Point
Comments: 19 pages, LaTeX, no figures
Journal-ref: Int.J.Mod.Phys. A13 (1998) 2747-2764
Subjects: High Energy Physics - Theory (hep-th)
[79]
Title: On the Chern-Simons topological term at finite temperature
Journal-ref: Phys.Lett. B417 (1998) 114-118
Subjects: High Energy Physics - Theory (hep-th)
[80]
Title: Two Massive and One Massless Sp(4) Monopoles
Comments: 35 pages, five figures, Latex with revtex
Journal-ref: Phys. Rev. D 57, 5260 (1998)
Subjects: High Energy Physics - Theory (hep-th)
[81]
Title: Duality, Phases, Spinors and Monopoles in SO(N) and Spin(N) Gauge Theories
Journal-ref: JHEP 9809:017,1998
Subjects: High Energy Physics - Theory (hep-th)
[82]
Title: The Implicit Metric on a Deformation of the Atiyah-Hitchin Manifold
Authors: Gordon Chalmers
Comments: 15 pages, latex, appendix corrected
Journal-ref: Phys.Rev. D58 (1998) 125011
Subjects: High Energy Physics - Theory (hep-th)
[83]
Title: Regularization of superstring amplitudes and a cancellation of divergences in superstring theory
Authors: G. S. Danilov
Subjects: High Energy Physics - Theory (hep-th)
[84]
Title: Path integral quantization of electrodynamics in dielectric media
Authors: M. Bordag, K. Kirsten (Leipzig), D.V. Vassilevich (St.Petersburg)
Comments: 10 pages, Latex, submitted to J.Phys.A, revised (a misprint in the bibliography)
Journal-ref: J.Phys.A31:2381-2389,1998
Subjects: High Energy Physics - Theory (hep-th)
[85]
Title: Field theory approach to one-dimensional electronic systems
Authors: Carlos M. Naon
Comments: 25 pages, latex, no figures. Talk delivered at Trends in Theoretical Physics - CERN - Santiago de Compostela - La Plata Meeting, La Plata, April-May 1997, to be published. Abstract modified
Subjects: High Energy Physics - Theory (hep-th)
[86]
Title: Scattering of scalar and Dirac particles by a magnetic tube of finite radius
Journal-ref: J.Phys.A30:7603-7620,1997
Subjects: High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph)
[87]
Title: Long-distance interactions of branes: correspondence between supergravity and super Yang-Mills descriptions
Comments: 43 pages, latex. Few misprints corrected, to appear in Nuclear Physics
Journal-ref: Nucl.Phys.B515:73-113,1998
Subjects: High Energy Physics - Theory (hep-th)
[88]
Title: Self-interacting vector-tensor multiplet
Authors: Norbert Dragon, Sergei M. Kuzenko (Institut fuer Theoretische Physik, Universitaet Hannover)
Journal-ref: Phys.Lett. B420 (1998) 64-68
Subjects: High Energy Physics - Theory (hep-th)
[89]
Title: Duality and Light Cone Symmetries of the Equations of Motion
Comments: 13 pages, uses harvmac. An expanded and clarified version
Journal-ref: Phys.Lett. B432 (1998) 83-89
Subjects: High Energy Physics - Theory (hep-th)
[90]
Title: Evolution of (Ward-)Takahashi Relations and How I Used Them
Authors: R. Jackiw
Comments: 11 pages, 1 figure, BoxedEPS, REVTeX; email to jackiw@mitlns.mit.edu
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph); History and Philosophy of Physics (physics.hist-ph); Quantum Physics (quant-ph)
[91]
Title: Schwarzschild Black Holes from Matrix Theory
Comments: 9 pages, latex; minor typos corrected
Journal-ref: Phys.Rev.Lett.80:226-229,1998
Subjects: High Energy Physics - Theory (hep-th)
[92]
Title: The O(N) Nonlinear Sigma Model in the Functional Schrödinger Picture
Comments: 13 pages, no figures, Latex file
Journal-ref: J.Phys.A31:6029-6036,1998
Subjects: High Energy Physics - Theory (hep-th); Condensed Matter (cond-mat)
[93]
Title: A Remark on Dilaton Stabilization
Journal-ref: Phys.Lett. B417 (1998) 50-52
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[94]
Title: Schrödinger Wave Function for a Free Falling Particle in the Schwarzschild Black Hole
Comments: 13 pages, Latex format, no figures
Subjects: High Energy Physics - Theory (hep-th)
[95]
Title: QED processes beyond the Aharonov-Bohm effect
Comments: LaTeX file (RevTeX), 7 pages, to appear in the special issue of Foundations of Physics in the memory of the late Prof. A.O. Barut
Journal-ref: Found.Phys.28:777-788,1998
Subjects: High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph)
[96]
Title: M Theory Fivebrane Interpretation for Strong Coupling Dynamics of SO(N_c) Gauge Theories
Comments: 15 pages, Latex, English corrected, Version to Appear in Physics Letters B
Journal-ref: Phys.Lett.B416:75-84,1998
Subjects: High Energy Physics - Theory (hep-th)
[97]
Title: The string tension in massive $QCD_2$
Journal-ref: Phys.Rev.Lett. 80 (1998) 430-433
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[98]
Title: D-brane interaction in the type IIB Matrix Model
Comments: Revtex, 18 pages, No figure
Journal-ref: Phys.Lett. B419 (1998) 62-72
Subjects: High Energy Physics - Theory (hep-th)
[99]
Title: Branes probing black holes
Comments: latex, 2 figures. Expanded version of my talk at STRINGS'97, v3: more typos corrected
Journal-ref: Nucl.Phys.Proc.Suppl. 68 (1998) 17-27
Subjects: High Energy Physics - Theory (hep-th)
[100]
Title: New Developments in the Continuous Renormalization Group
Authors: Tim R. Morris
Comments: Invited key talk at NATO Advanced Research Workshop on Theoretical Physics: New Developments in Quantum Field Theory, Zakopane, Poland, 14-20 Jun 1997. 12 pages latex, includes plenum.sty
Subjects: High Energy Physics - Theory (hep-th); Condensed Matter (cond-mat); High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph)
[101]
Title: Two-Dimensional QCD in the Wu-Mandelstam-Leibbrandt Prescription
Journal-ref: Phys.Rev. D57 (1998) 2456-2459
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[102]
Title: Chern-Simons Field Theory and Generalizations of Anyons
Authors: Stefan Mashkevich (ITP, Kiev)
Comments: 6 pages, LATEX. Contributed paper at the International Europhysics Conference on High Energy Physics HEP-97 (Jerusalem, Israel, 19--26 August 1997)
Subjects: High Energy Physics - Theory (hep-th)
[103]
Title: Fusion rules for admissible representations of affine algebras: the case of $A_2^{(1)}$
Comments: containing two TEX files: main file using input files harvmac.tex, amssym.def, amssym.tex, 19p.; file with figures using XY-pic package, 6p. Correction in the definition of general shifted weight diagram
Journal-ref: Nucl.Phys. B518 (1998) 645-668
Subjects: High Energy Physics - Theory (hep-th)
[104]
Title: Ectoplasm Has No Topology: The Prelude
Comments: 13 pages, UMDEPP 98-13, presentation at the International Seminar ``Supersymmetries and Quantum Symmetries'', Dubna, Russia, July 22-26, 1997, LaTeX, 1 figure
Subjects: High Energy Physics - Theory (hep-th)
[105]
Title: Bosonisation and Soldering of Dual Symmetries in Two and Three Dimensions
Comments: 21 pages, LaTex file, Ref.(14) has been corrected
Subjects: High Energy Physics - Theory (hep-th)
[106]
Title: Corrections to the Emergent Canonical Commutation Relations Arising in the Statistical Mechanics of Matrix Models
Comments: 32 pages, plain TeX, no figures
Journal-ref: J.Math.Phys. 39 (1998) 5083-5097
Subjects: High Energy Physics - Theory (hep-th)
[107]
Title: Notes on Matrix and Micro Strings
Comments: 27 pages, latex with espcrc2, 3 figures. References added. Some corrections at the end of section 10. Based on lectures given by H.V. at the APCTP Winter School held in Sokcho, Korea (Feb 1997) and joint lectures at Cargese Summer School (June 1997), as well as on talks given by H.V. at SUSY'97 (May 1997), and by R.D. and E.V. at STRINGS'97 (June 1997)
Journal-ref: Nucl.Phys.Proc.Suppl. 62 (1998) 348-362
Subjects: High Energy Physics - Theory (hep-th)
[108]
Title: Schwarzschild Black Holes in Various Dimensions from Matrix Theory
Comments: 7 pages, latex; a typo corrected (version to appear in Physics Letters B)
Journal-ref: Phys.Lett.B416:62-66,1998
Subjects: High Energy Physics - Theory (hep-th)
[109]
Title: Abelian-Projected Effective Gauge Theory of QCD with Asymptotic Freedom and Quark Confinement
Authors: Kei-ichi Kondo (Chiba Univ.)
Comments: 39 pages, Latex, no figures, (2.2, 4.1, 4.3 are modified; 4.4, Appendices A,B,C and references are added. No change in conclusion)
Journal-ref: Phys.Rev. D57 (1998) 7467-7487
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph)
[110]
Title: Unstable Systems in Relativistic Quantum Field Theory
Authors: L. Maiani, M. Testa
Journal-ref: Annals Phys. 263 (1998) 353-367
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[111]
Title: On the effective interactions of a light gravitino with matter fermions
Comments: 12 pages, 1 figure, plain LaTeX. An important proof added in section 5. Final version to be published in JHEP
Journal-ref: JHEP 9711:001,1997
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[112]
Title: Modulo 2 periodicity of complex Clifford algebras and electromagnetic field
Subjects: High Energy Physics - Theory (hep-th)
[113]
Title: Duality, Central Charges and Entropy of Extremal BPS Black Holes
Comments: 12 pages, LaTeX; Talk given by S. Ferrara at the STRINGS'97 Conference, 16-21 June 1997, Amsterdam, The Netherlands
Journal-ref: Nucl.Phys.Proc.Suppl. 68 (1998) 185-196
Subjects: High Energy Physics - Theory (hep-th)
[114]
Title: Black Holes in Matrix Theory
Authors: M. Li, E. Martinec
Comments: 7 pages, latex; (uses espcrc2.sty). Talk by the second author, presented at STRINGS97 (Amsterdam, June 16-20, 1997).
Journal-ref: Nucl.Phys.Proc.Suppl. 68 (1998) 329-335
Subjects: High Energy Physics - Theory (hep-th)
[115]
Title: Effective potential for the order parameter of the SU(2) Yang-Mills deconfinement transition
Authors: Michael Engelhardt, Hugo Reinhardt (Tuebingen Univ.)
Comments: 5 pages latex, 1 ps figure
Journal-ref: Phys.Lett. B430 (1998) 161-167
Subjects: High Energy Physics - Theory (hep-th)
[116]
Title: Construction of $R^4$ Terms in N=2 D=8 Superspace
Authors: Nathan Berkovits (IFT/UNESP, Sao Paulo)
Comments: 15 pages harvmac tex (reference 2 is corrected and some details are added to the conclusion)
Journal-ref: Nucl.Phys. B514 (1998) 191-203
Subjects: High Energy Physics - Theory (hep-th)
[117]
Title: A Semiclassical Approach to Level Crossing in Supersymmetric Quantum Mechanics
Authors: J. F. Beacom (Univ. of Wisconsin and Caltech), A. B. Balantekin (Univ. of Wisconsin)
Comments: 15 pages, Latex with lamuphys and psfig macros. Talk by first Author at the UIC "Supersymmetry and Integrable Models Workshop", Chicago, June 12-14, 1997; proceedings to be published in Springer Lecture Notes in Physics, H. Aratyn et al., eds. This paper also available at this http URL
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[118]
Title: Matrix Description of (1,0) Theories in Six Dimensions
Journal-ref: Phys.Lett.B420:55-63,1998
Subjects: High Energy Physics - Theory (hep-th)
[119]
Title: The Conformal Anomaly in General Rank 1 Symmetric Spaces and Associated Operator Product
Journal-ref: Mod.Phys.Lett. A13 (1998) 99-108
Subjects: High Energy Physics - Theory (hep-th)
[120]
Title: Gauge Transformations For Self/Anti-Self Charge Conjugate States
Authors: Valeri V. Dvoeglazov (Escuela de Fisica, UAZ)
Comments: ReVTeX file, 7pp., accepted in "Acta Physica Polonica B"
Journal-ref: Acta Phys.Polon. B29 (1998) 619-627
Subjects: High Energy Physics - Theory (hep-th)
[121]
Title: Chiral Interactions of Massive Particles in the (1/2,0)+(0,1/2) Representation
Authors: Valeri V. Dvoeglazov (Escuela de Fisica, UAZ)
Comments: CRCKAPB.STY used, 7pp. Presented at the Second Vigier Symposium, August 25-29, 1997, York University, Toronto, Canada
Journal-ref: Causality and Locality in Modern Physics -- Proceedings of a Symposium in honour of Jean-Pierre Vigier (Eds. G. Hunter, S. Jeffers, J.-P. Vigier), Kluwer Academic, Dordrecht, pp. 269-276
Subjects: High Energy Physics - Theory (hep-th)
[122]
Title: Born-Infeld Actions from Matrix Theory
Authors: Esko Keski-Vakkuri, Per Kraus (Caltech)
Journal-ref: Nucl.Phys. B518 (1998) 212-236
Subjects: High Energy Physics - Theory (hep-th)
[123]
Title: Interactions Between Branes and Matrix Theories
Authors: A.A. Tseytlin
Comments: 12 pages, latex. Contribution to Proceedings of STRINGS'97 (misprints corrected)
Journal-ref: Nucl.Phys.Proc.Suppl.68:99-110,1998
Subjects: High Energy Physics - Theory (hep-th)
[124]
Title: String Solitons and Black Hole Thermodynamics
Authors: Ramzi R. Khuri
Journal-ref: Proc. 19th MRST Meeting, Syracuse, 1997 p.44-51
Subjects: High Energy Physics - Theory (hep-th)
[125]
Title: Is Quantization of QCD Unique at the Non-Perturbative Level ?
Comments: 5 pages, Previous version updated and an example in 1+1 dimensions supplied
Journal-ref: Phys.Rept. 398 (2004) 245-252
Subjects: High Energy Physics - Theory (hep-th)
[126]
Title: Geometrising the closed string field theory action
Authors: Sabbir Rahman
Subjects: High Energy Physics - Theory (hep-th)
[127]
Title: Is the Planck Momentum Attainable?
Comments: 39 pages, 2 figures, latex2e, epsf
Subjects: High Energy Physics - Theory (hep-th)
[128]
Title: Energy Radiation from a Moving Mirror with Finite Mass
Comments: 13 pages, no figures, RevTex
Subjects: High Energy Physics - Theory (hep-th)
[129]
Title: The low-energy effective action for perturbative heterotic strings on $K_3 \times T^2$ and the d=4 N=2 vector-tensor multiplet
Comments: 36 pages, Latex 209, no figures, v2: minor textual changes
Journal-ref: Nucl.Phys. B524 (1998) 86-128
Subjects: High Energy Physics - Theory (hep-th)
[130]
Title: On the higher-loop effective action in NJL model
Authors: S.A.Garnov
Subjects: High Energy Physics - Theory (hep-th)
[131]
Title: The reduced covariant phase space quantization of the three dimensional Nambu-Goto string
Authors: Eduardo Ramos
Comments: LaTeX file (elsart macros), 22 pages
Journal-ref: Nucl.Phys. B519 (1998) 435-452
Subjects: High Energy Physics - Theory (hep-th)
[132]
Title: On the Construction of Zero Energy States in Supersymmetric Matrix Models
Authors: Jens Hoppe
Comments: By accident, the wrong file was submitted
Subjects: High Energy Physics - Theory (hep-th)
[133]
Title: Bremsstrahlung of relativistic electrons in the Aharonov-Bohm potential
Journal-ref: Phys.Rev. D53 (1996) 2178
Subjects: High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph)
[134]
Title: Electron-positron pair production in the Aharonov-Bohm potential
Journal-ref: Phys.Rev. D53 (1996) 2190-2200
Subjects: High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph)
[135]
Title: Topology and physics-a historical essay
Authors: C. Nash
Comments: Plain TeX, 60 pages, postscript figures included. v2: Spelling of K\"onigsberg corrected, thank you to all those who told me of this infelicity. v3: Some extra material added. I am much obliged to the numerous people who sent me emails about this article. v4: Some final additions. I am again much obliged to the numerous people who sent me emails
Subjects: High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG); Differential Geometry (math.DG); Quantum Algebra (math.QA)
[136]
Title: Random Bond Potts Model: the Test of the Replica Symmetry Breaking
Comments: 50 pages, Latex, 2 eps figures
Journal-ref: Nucl.Phys. B520 (1998) 633-674
Subjects: High Energy Physics - Theory (hep-th); Condensed Matter (cond-mat); High Energy Physics - Lattice (hep-lat)
[137]
Title: Degenerate Domain Wall Solutions in Supersymmetric Theories
Comments: 21 pages, LaTeX, 3 figures using epsf.sty
Journal-ref: Phys.Rev.D57:2590-2598,1998
Subjects: High Energy Physics - Theory (hep-th)
[138]
Title: Seiberg-Witten Solution from Matrix Theory
Authors: S.Gukov (Princeton U.)
Comments: 19 pages, LaTeX, no figures
Subjects: High Energy Physics - Theory (hep-th)
[139]
Title: On The M(atrix)-Model For M-Theory On $T^6$
Authors: Ori J. Ganor
Journal-ref: Nucl.Phys. B528 (1998) 133-155
Subjects: High Energy Physics - Theory (hep-th)
[140]
Title: Renormalization of number density in nonequilibrium quantum-field theory and absence of pinch singularities
Authors: A. Niégawa
Journal-ref: Phys.Lett. B416 (1998) 137-143
Subjects: High Energy Physics - Theory (hep-th)
[141]
Title: Stability and mass of point particles
Authors: J.W. van Holten
Journal-ref: Nucl.Phys. B529 (1998) 525-543
Subjects: High Energy Physics - Theory (hep-th)
[142]
Title: Chiral supersymmetric pp-wave solutions of IIA supergravity
Journal-ref: Phys.Lett. B415 (1997) 54-62
Subjects: High Energy Physics - Theory (hep-th)
[143]
Title: Conformally Invariant Path Integral Formulation of the Wess-Zumino-Witten $\to$ Liouville Reduction
Comments: Plain TeX file; 28 Pages
Journal-ref: Nucl.Phys. B520 (1998) 513-532
Subjects: High Energy Physics - Theory (hep-th)
[144]
Title: Two-loop self-energy diagrams worked out with NDIM
Comments: LaTeX, 10 pages, 2 figures, styles included
Journal-ref: Eur.Phys.J.C5:175-179,1998
Subjects: High Energy Physics - Theory (hep-th)
[145]
Title: The standard model in noncommutative geometry and fermion doubling
Authors: J. M. Gracia-Bondia, B.Iochum, T. Schucker (Marseille)
Journal-ref: Phys.Lett.B416:123-128,1998
Subjects: High Energy Physics - Theory (hep-th)
[146]
Title: Fayet-Iliopoulos Potentials from Four-Folds
Authors: Wolfgang Lerche
Journal-ref: JHEP 9711 (1997) 004
Subjects: High Energy Physics - Theory (hep-th)
[147]
Title: The semiclassical approximation for the Chern-Simons partition function
Comments: 14 pages, LaTeX (a typo corrected), to appear in Phys.Lett.B
Journal-ref: Phys.Lett. B417 (1998) 53-60
Subjects: High Energy Physics - Theory (hep-th); Differential Geometry (math.DG); Quantum Algebra (math.QA)
[148]
Title: First Massive State of the Superstring in Superspace
Authors: N. Berkovits, M.M. Leite (IFT/UNESP, Sao Paulo)
Journal-ref: Phys.Lett. B415 (1997) 144-148
Subjects: High Energy Physics - Theory (hep-th)
[149]
Title: Field configurations with half-integer angular momentum in purely bosonic theories without topological charge
Authors: Tanmay Vachaspati (CWRU)
Comments: LaTeX, 3 pages. New title, significant revisions
Journal-ref: Phys. Lett. B427, 323 (1998)
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[150]
Title: Parity-Preserving Pauli-Villars Regularization in (2+1)-Dimensional Gauge Models
Journal-ref: Yad.Fiz. 58 (1995) 1718-1720; Phys.Atom.Nucl. 58 (1995) 1619-1621
Subjects: High Energy Physics - Theory (hep-th)
[151]
Title: Initial Conditions for Semiclassical Field Theory
Comments: 20 pages, Plain TeX, one postscript figure
Journal-ref: Theor.Math.Phys. 114 (1998) 184-197; Teor.Mat.Fiz. 114 (1998) 233-249
Subjects: High Energy Physics - Theory (hep-th)
[152]
Title: Quantum Cohomology and Free Field Representation
Journal-ref: Nucl.Phys. B510 (1998) 608-622
Subjects: High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG)
[153]
Title: General Aspects of Symmetry Breaking
Authors: Martin Haft
Subjects: High Energy Physics - Theory (hep-th)
[154]
Title: The Renormalization of the Electroweak Standard Model to All Orders
Authors: Elisabeth Kraus
Comments: 107 pages, latex2e, to be published in Ann. Phys. (NY) 1997
Journal-ref: Annals Phys.262:155-259,1998
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[155]
Title: Fixed point analysis of a scalar theory with an external field
Authors: A.Bonanno, D.Zappala'
Journal-ref: Phys.Rev. D56 (1997) 3759-3762
Subjects: High Energy Physics - Theory (hep-th); Statistical Mechanics (cond-mat.stat-mech); High Energy Physics - Phenomenology (hep-ph)
[156]
Title: The Role of Renormalization Group in Fundamental Theoretical Physics
Comments: 6 pages, LaTeX. The text of the talk presented at the Conference "RG-96" (Dubna, Aug. '96). To appear in the proceedings
Journal-ref: Int.J.Mod.Phys.B12:1247-1253,1998
Subjects: High Energy Physics - Theory (hep-th)
[157]
Title: Duality and the cosmological constant
Comments: To appear in International Journal of Theoretical Physics, Vol 36, No. 9, (1997)
Journal-ref: Int.J.Theor.Phys. 36 (1997) 2035-2042
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
[158]
Title: Hadamard renormalization, conformal anomaly and cosmological event horizons
Comments: To appear in Phys. Rev. D 56, 1 (1997)
Journal-ref: Phys.Rev.D56:4633-4639,1997; Erratum-ibid.D57:5311,1998
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
[159]
Title: Orientifold Limit of F-theory Vacua
Authors: Ashoke Sen
Comments: LaTeX file, 14 pages, 3 figures, Talk at STRINGS'97 and Trieste conference on Duality Symmetries
Journal-ref: Nucl.Phys.Proc.Suppl.68:92-98,1998; Nucl.Phys.Proc.Suppl.67:81-87,1998
Subjects: High Energy Physics - Theory (hep-th)
[160]
Title: D-branes and Creation of Strings
Authors: Igor R. Klebanov
Comments: 13 pages, latex with espcrc2. Based on talks at Strings'97 and SUSY'97. References added
Journal-ref: Nucl.Phys.Proc.Suppl.68:140-152,1998
Subjects: High Energy Physics - Theory (hep-th)
[161]
Title: 't Hooft Conditions in Supersymmetric Dual Theories
Authors: Gustavo Dotti
Journal-ref: Phys.Lett. B417 (1998) 275-280
Subjects: High Energy Physics - Theory (hep-th)
[162]
Title: Monodromy Properties of the Energy Momentum Tensor on General Algebraic Curves
Comments: 21 pages, plain TeX + harvmac
Journal-ref: J.Geom.Phys.29:161-176,1999
Subjects: High Energy Physics - Theory (hep-th)
[163]
Title: Bosonization in d=2 from finite chiral determinants with a Gauss decomposition
Journal-ref: Phys.Rev. D56 (1997) 6706-6709
Subjects: High Energy Physics - Theory (hep-th)
[164]
Title: Brackets in the jet-bundle approach to field theory
Authors: Glenn Barnich
Comments: 15 pages, AMS-LaTeX file, talk given at the conference "Secondary Calculus and Cohomological Physics", August 24-31, 1997, Moscow, Russia
Subjects: High Energy Physics - Theory (hep-th)
[165]
Title: The Continuum Version of φ^4_{1+1}-theory in Light-Front Quantization
Authors: Pierre Grangé (Laboratoire de Physique Mathématique et Théorique, Université Montpellier II), Peter Ullrich, Ernst Werner (Institut für Theoretische Physik, Universität Regensburg)
Comments: 19 pages, 6 eps figures included, in Latex 2.09, uses psfig
Journal-ref: Phys.Rev. D57 (1998) 4981-4989
Subjects: High Energy Physics - Theory (hep-th)
[166]
Title: Lattice Black Holes
Comments: 26 pages, LaTeX, 2 figures included with psfig. Several improvements in the presentation. One figure added. Final version to appear in Phys.Rev.D
Journal-ref: Phys. Rev. D 57, 6269 (1998)
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Lattice (hep-lat)
[167]
Title: Negative Dimensional Integration for Massive Four Point Functions--II: New Solutions
Comments: 16 pages, LaTeX using elsart.cls(included), 5 figures gzip compacted
Subjects: High Energy Physics - Theory (hep-th)
[168]
Title: Zero curvature representation for classical lattice sine-Gordon equation via quantum R-matrix
Authors: A.Zabrodin
Comments: 10 pages, LaTeX (misprints are corrected)
Journal-ref: JETP Lett. 66 (1997) 653-659
Subjects: High Energy Physics - Theory (hep-th)
[169]
Title: Vacuum-Bounded States and the Entropy of Black Hole Evaporation
Authors: Ken D. Olum
Comments: MIT thesis. 79 pages. LaTex with MIT thesis style (included). 11 figures with epsf. Most of this material (but not chapter 2) has previously appeared in somewhat different form in hep-th/9710086 and hep-th/9709041
Subjects: High Energy Physics - Theory (hep-th)
[170]
Title: Global Anomalies in Canonical Gravity
Comments: 17 pages, LaTeX, uses packages amstex, amssymb. One reference added, one sign error corrected. Conclusions unchanged. To appear in Nucl. Phys. B
Journal-ref: Nucl.Phys. B523 (1998) 391-402
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
[171]
Title: Creation of Strings in D-particle Quantum Mechanics
Authors: Ulf H. Danielsson, Gabriele Ferretti (Uppsala University)
Comments: 6 pages, 2 figures. Uses espcrc2.sty. Talk given at STRINGS'97
Journal-ref: Nucl.Phys.Proc.Suppl. 68 (1998) 78-83
Subjects: High Energy Physics - Theory (hep-th)
[172]
Title: Black Hole Thermodynamics in Semi-Classical and Superstring Theory
Authors: Sascha Vongehr
Subjects: High Energy Physics - Theory (hep-th)
[173]
Title: On the Equivalence between 2D Induced Gravity and a WZNW system
Journal-ref: Mod.Phys.Lett. A13 (1998) 911-920
Subjects: High Energy Physics - Theory (hep-th)
[174]
Title: Finite Energy Solutions in Three-Dimensional Heterotic String Theory
Journal-ref: Nucl.Phys. B522 (1998) 137-157
Subjects: High Energy Physics - Theory (hep-th)
[175]
Title: On the Symmetry of Real-Space Renormalisation
Authors: D.C. Brody, A. Ritz
Comments: 16 pages, RevTeX, 3 postscript figures
Journal-ref: Nucl.Phys. B522 (1998) 588-604
Subjects: High Energy Physics - Theory (hep-th)
[176]
Title: Gauge Fixing in the Partition Function for Generalized Quantum Dynamics
Comments: 13 pages, plain TeX, no figures
Journal-ref: J.Math.Phys. 39 (1998) 1723-1729
Subjects: High Energy Physics - Theory (hep-th)
[177]
Title: Analyzing Chiral Symmetry Breaking in Supersymmetric Gauge Theories
Authors: Thomas Appelquist (1), Andreas Nyffeler (1 and 2), Stephen B. Selipsky (1 and 3) ((1) Yale University, (2) DESY-Zeuthen, (3) Washington University, St. Louis)
Comments: LaTex, 14 pages, including 1 figure in EPS format. Revised to correct gluino anomalous dimension, with minor accompanying text changes
Journal-ref: Phys.Lett. B425 (1998) 300-308
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[178]
Title: Supersymmetry and gauge theory on Calabi-Yau 3-folds
Journal-ref: Phys.Lett.B419:167-174,1998
Subjects: High Energy Physics - Theory (hep-th)
[179]
Title: The Gross-Neveu Model with Chemical Potential; An Effective Theory for Solitonic-Metallic Phase Transition in Polyacetylene?
Comments: 14 pages, revtex, no figure
Subjects: High Energy Physics - Theory (hep-th); Condensed Matter (cond-mat)
[180]
Title: Duality in Quantum Field Theory (and String Theory)
Comments: 38 pages, 7 figures, LaTeX, revtex, two references added. Based on lectures delivered by L. A.-G. at the Workshop on "Fundamental Particles and Interactions", held at Vanderbilt University, and the CERN-La Plata-Santiago de Compostela School of Physics, both in May 1997
Subjects: High Energy Physics - Theory (hep-th)
[181]
Title: Internal structure of non-Abelian black holes and nature of singularity
Authors: D. V. Gal'tsov (Moscow State University), E. E. Donets (JINR, Dubna), M. Yu. Zotov (INP, Moscow State University)
Comments: LaTeX 2.09, 26 pp., 6 EPS figures. Talk given at the International Workshop on the Internal Structure of Black Holes and Spacetime Singularities, Haifa, Israel, June 29 - July 3, 1997. Published in "Haifa 1997: Internal structure of black holes and spacetime singularities", 142-162
Subjects: High Energy Physics - Theory (hep-th)
[182]
Title: The Faddeev-Popov trick in the presence of boundaries
Authors: D.V. Vassilevich
Journal-ref: Phys.Lett. B421 (1998) 93-98
Subjects: High Energy Physics - Theory (hep-th)
[183]
Title: Sine-Gordon Solitons and Black Holes
Comments: 11 pages, Latex, uses ccgrra.sty; to appear in Proc. 7th Canadian Conference on General Relativity and Relativistic Astrophysics
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc); Exactly Solvable and Integrable Systems (nlin.SI)
[184]
Title: General Solution of Quantum Master Equation in Finite-Dimensional Case
Authors: I.A.Batalin, I.V.Tyutin (Theor. Phys. Dept., Lebedev Institute)
Comments: 19 pages, LaTeX, minor misprints corrected
Journal-ref: Theor.Math.Phys. 114 (1998) 198-214; Teor.Mat.Fiz. 114 (1998) 250-270
Subjects: High Energy Physics - Theory (hep-th)
[185]
Title: On the ERG approach in $3 - d$ Well Developed Turbulence
Comments: 25 pages, LaTeX2e file with 2 .eps figures. Submitted to Phys. Rev. E
Journal-ref: Phys.Lett. B411 (1997) 117-126
Subjects: High Energy Physics - Theory (hep-th); Chaotic Dynamics (nlin.CD); Fluid Dynamics (physics.flu-dyn)
[186]
Title: Duality and Superconvergence Relation in Supersymmetric Gauge Theories
Authors: Motoi Tachibana (Kobe Univ.)
Journal-ref: Phys.Rev. D58 (1998) 045015
Subjects: High Energy Physics - Theory (hep-th)
[187]
Title: Skyrmions and domain walls in (2+1) dimensions
Comments: plain tex : 15 pages, 21 Postscript figures, uses epsf.tex
Journal-ref: Nonlinearity11:783-795,1998
Subjects: High Energy Physics - Theory (hep-th)
[188]
Title: On Graceful Exit in String Cosmology with Pre-Big Bang Phase
Authors: Aram A. Saharian
Comments: 24 pages, Latex, 2 figures included
Subjects: High Energy Physics - Theory (hep-th)
[189]
Title: Static solutions in the U(1) gauged Skyrme model
Journal-ref: Phys.Rev. D62 (2000) 025020
Subjects: High Energy Physics - Theory (hep-th)
[190]
Title: Symmetry Properties of Self-Dual Fields
Authors: Dmitri Sorokin
Comments: LaTeX file, 4 pages. Talk given at the Fifth International Wigner Symposium (Vienna, August 25-29, 1997)
Subjects: High Energy Physics - Theory (hep-th)
[191]
Title: On T-duality in dilatonic gravity
Authors: Pablo M. Llatas
Journal-ref: Phys.Lett.B422:82-87,1998
Subjects: High Energy Physics - Theory (hep-th)
[192]
Title: Lectures in Topological Quantum Field Theory
Comments: 62 pages, latex, epsf, 5 figures, Lectures given by J.M.F.L. at the CERN-La Plata-Santiago de Compostela Workshop on Trends in Theoretical Physics held at La Plata in May 1997
Subjects: High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG); Differential Geometry (math.DG)
[193]
Title: Integration over the u-plane in Donaldson theory
Subjects: High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG); Differential Geometry (math.DG)
[194]
Title: Fermion zero-modes of a new constrained instanton in Yang-Mills-Higgs theory
Comments: 20 pages, LaTeX. Final version, to be published; changes in last two sections
Journal-ref: Nucl.Phys. B517 (1998) 142-160
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[195]
Title: An Approximate Large $N$ Method for Lattice Chiral Models
Authors: Stuart Samuel (MPI, Columbia University and City College of New York)
Journal-ref: Phys.Rev. D56 (1997) 1470-1474
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Lattice (hep-lat)
[196]
Title: Gravitational Analogues of Non-linear Born Electrodynamics
Authors: James A. Feigenbaum, Peter G.O. Freund, Mircea Pigli (University of Chicago)
Comments: 20 pages, 2 figures, included a detailed discussion of "non-trace" field equations
Journal-ref: Phys.Rev. D57 (1998) 4738-4744
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
[197]
Title: The TBA, the Gross-Neveu Model, and Polyacetylene
Authors: Alan Chodos (Yale Univ.), Hisakazu Minakata (Tokyo Metropolitan Univ.)
Comments: 9 pages (lamuphys); reference change
Subjects: High Energy Physics - Theory (hep-th)
[198]
Title: Electric-magnetic Duality in Noncommutative Geometry
Comments: 13 pages LaTeX, no figures
Journal-ref: Phys.Lett. B417 (1998) 303-311
Subjects: High Energy Physics - Theory (hep-th)
[199]
Title: Spin-Orbit Interaction from Matrix Theory
Authors: Per Kraus
Comments: 9 pages, LaTex. Some typos and a few formulas on p. 7 corrected
Journal-ref: Phys.Lett. B419 (1998) 73-78
Subjects: High Energy Physics - Theory (hep-th)
[200]
Title: Quantum Hamilton-Jacobi equation
Authors: Vipul Periwal
Journal-ref: Phys.Rev.Lett. 80 (1998) 4366-4369
Subjects: High Energy Physics - Theory (hep-th)
[201]
Title: One-loop effective multi-gluon Lagrangian in arbitrary dimensions
Authors: E Rodulfo, R Delbourgo (University of Tasmania)
Journal-ref: Int.J.Mod.Phys.A14:4457-4472,1999
Subjects: High Energy Physics - Theory (hep-th)
[202]
Title: Quantization of p-branes, D-p-branes and M-branes
Authors: Renata Kallosh
Comments: 9 pages, Talk at STRINGS'97 Meeting, Amsterdam, The Netherlands, 16-21 Jun 1997
Journal-ref: Nucl.Phys.Proc.Suppl. 68 (1998) 197-205
Subjects: High Energy Physics - Theory (hep-th)
[203]
Title: Reducible Connections in Massless Topological QCD and 4-manifolds
Authors: A.Sako
Comments: 23 pages, LaTeX. Some mistakes and typographical errors corrected; in particular, results in section 4 are changed
Journal-ref: Nucl.Phys. B522 (1998) 373-395
Subjects: High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG)
[204]
Title: Normal Ordering in the Theory of Correlation Functions of Exactly Solvable Models
Authors: V. E. Korepin (State University of New York, Stony Brook, USA), N. A. Slavnov (Steklov Mathematical Institute, Moscow, Russia)
Comments: 16 pages, Latex, no figures
Journal-ref: J.Phys.A30:8623-8633,1997
Subjects: High Energy Physics - Theory (hep-th); Condensed Matter (cond-mat); Quantum Algebra (math.QA); Exactly Solvable and Integrable Systems (nlin.SI)
[205]
Title: Aspects of classical and quantum motion on a flux cone
Authors: E. S. Moreira, Jnr
Comments: LaTeX file, 21 pages, 8 figures
Journal-ref: Phys.Rev. A58 (1998) 1678
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc); Quantum Physics (quant-ph)
[206]
Title: D-brane decay and Hawking Radiation
Authors: Sumit R. Das
Comments: 9 pages latex with espcrc. Based on talk given at STRINGS'97 held at Amsterdam, June, 1997. References added
Journal-ref: Nucl.Phys.Proc.Suppl.68:119-127,1998
Subjects: High Energy Physics - Theory (hep-th)
[207]
Title: Classical limit of the Knizhnik-Zamolodchikov-Bernard equations as hierarchy of isomonodromic deformations. Free fields approach
Comments: 43 pages, Latex, solution to the Schlesinger equations by the projection method is added, typos corrected
Subjects: High Energy Physics - Theory (hep-th); Exactly Solvable and Integrable Systems (nlin.SI)
[208]
Title: From Reissner-Nordström quantum states to charged black holes mass evaporation
Authors: P. V. Moniz (DAMTP, U. Cambridge)
Comments: 3 pages, LaTeX, requires mprocl.sty available at this http URL, talk given in the Quantum Gravity session at the MG8 conference, minor correction in the abstract
Subjects: High Energy Physics - Theory (hep-th)
[209]
Title: Closing the Generation Gap
Authors: Eva Silverstein
Comments: 8 pages, harvmac big, preprint number corrected (Talk presented at STRINGS'97, Amsterdam)
Journal-ref: Nucl.Phys.Proc.Suppl. 68 (1998) 274-278
Subjects: High Energy Physics - Theory (hep-th)
[210]
Title: Degeneracy Structure of the Calogero-Sutherland Model: an Algebraic Approach
Journal-ref: Mod.Phys.Lett. A13 (1998) 339-346
Subjects: High Energy Physics - Theory (hep-th)
[211]
Title: M Theory Fivebrane and SQCD
Authors: Hirosi Ooguri (UC Berkeley/LBNL)
Comments: 8 pages, 4 figures, latex. Talk presented at STRINGS'97, Amsterdam
Journal-ref: Nucl.Phys.Proc.Suppl.68:84-91,1998
Subjects: High Energy Physics - Theory (hep-th)
[212]
Title: What is quantum field theory and why have some physicists abandoned it?
Authors: R. Jackiw
Comments: Email correspondence to jackiw@mitlns.mit.edu ; 4 pages, LaTeX
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph); History and Philosophy of Physics (physics.hist-ph); Quantum Physics (quant-ph)
[213]
Title: Euclidean and Canonical Formulations of Statistical Mechanics in the Presence of Killing Horizons
Authors: Dmitri Fursaev
Journal-ref: Nucl.Phys. B524 (1998) 447-468
Subjects: High Energy Physics - Theory (hep-th)
[214]
Title: Soliton Solutions of M-theory on an Orbifold
Authors: Zygmunt Lalak (ITP Warsaw), Andre' Lukas (UPenn), Burt A. Ovrut (UPenn, HUB, IASSNS)
Comments: 19 pages, 1 figure, Latex, uses epsf.sty
Journal-ref: Phys.Lett. B425 (1998) 59-70
Subjects: High Energy Physics - Theory (hep-th)
[215]
Title: The path towards manifest background independence
Authors: Sabbir Rahman
Subjects: High Energy Physics - Theory (hep-th)
[216]
Title: A geometrical angle on Feynman integrals
Comments: 47 pages, including 42 pages of the text (in plain Latex) and 5 pages with the figures (in a separate Latex file, requires axodraw.sty) a note and three references added, minor problem with notation fixed
Journal-ref: J.Math.Phys.39:4299-4334,1998
Subjects: High Energy Physics - Theory (hep-th); High Energy Physics - Phenomenology (hep-ph)
[217]
Title: On the Construction of Zero Energy States in Supersymmetric Matrix Models II
Authors: Jens Hoppe
Subjects: High Energy Physics - Theory (hep-th)
[218]
Title: Duality in Landau-Zener-Stueckelberg potential curve crossing
Authors: Kazuo Fujikawa (Univ. of Tokyo), Hiroshi Suzuki (Ibaraki Univ.)
Comments: 7 pages, 2 figures. LaTeX with espcrc2.sty and epsbox.sty. Invited talk presented at 7th Asia Pacific Physics Conference, August 19-23, 1997, Beijing, China (to be published in the Proceedings)
Subjects: High Energy Physics - Theory (hep-th)
[219]
Title: Anomaly Inflow on Orientifold Planes
Comments: harvmac, 12 pages (b); Some changes in wording, in order to convey more accurately what is being done; no changes in computations or formulae
Journal-ref: JHEP 9803 (1998) 004
Subjects: High Energy Physics - Theory (hep-th)
[220]
Title: D0 Branes on T^n and Matrix Theory
Authors: Ashoke Sen
Comments: LaTeX file, 10 pages, typos corrected
Subjects: High Energy Physics - Theory (hep-th)
[221]
Title: A Skyrme lattice with hexagonal symmetry
Comments: 12 pages, 1 figure. To appear in Phys. Lett. B
Journal-ref: Phys.Lett. B416 (1998) 385-391
Subjects: High Energy Physics - Theory (hep-th)
[222]
Title: N=1 Dual String Pairs and their Modular Superpotentials
Authors: Dieter Lust
Comments: 12 pages, latex, no figures. Talk presented at STRINGS'97, Amsterdam
Journal-ref: Nucl.Phys.Proc.Suppl.68:66-77,1998
Subjects: High Energy Physics - Theory (hep-th)
[223]
Title: Non-Equilibrium Quantum Electrodynamics
Comments: 28 pages, 2 figures. Substantially revised, one important mistake corrected; discussion on decoherence upgraded, section 4 essentially rewritten
Journal-ref: Phys.Rev. D58 (1998) 105006
Subjects: High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph)
[224]
Title: (Anti-)Evaporation of Schwarzschild-de Sitter Black Holes
Authors: Raphael Bousso, Stephen Hawking (DAMTP, Cambridge)
Comments: 16 pages, LaTeX2e; submitted to Phys. Rev. D
Journal-ref: Phys.Rev. D57 (1998) 2436-2442
Subjects: High Energy Physics - Theory (hep-th); Astrophysics (astro-ph); General Relativity and Quantum Cosmology (gr-qc)
[225]
Title: Six Dimensional Schwarzschild Black Holes in M(atrix) Theory
Authors: Edi Halyo
Subjects: High Energy Physics - Theory (hep-th)
[226]
Title: Spontaneous Compactification to Robertson-Walker Universe Due To Dynamical Torsion
Authors: Viktoria Malyshenko, Domingo Marin Ricoy (Moscow State University, Russia)
Comments: 13 pages in LaTeX including 3 Encapsulated Postscript figures
Subjects: High Energy Physics - Theory (hep-th)
[227]
Title: New Canonical Variables for d=11 Supergravity
Journal-ref: Phys.Lett. B416 (1998) 91-100
Subjects: High Energy Physics - Theory (hep-th)
[228]
Title: Two Dimensional Mirror Symmetry From M-theory
Authors: John Brodie (Princeton University)
Comments: 21 pages, 3 figures, uses harvmac and epsf
Journal-ref: Nucl.Phys. B517 (1998) 36-52
Subjects: High Energy Physics - Theory (hep-th)
[229]
Title: Canonical Structure of Classical Field Theory in the Polymomentum Phase Space
Authors: I.V. Kanatchikov
Comments: 45 pages, LaTeX2e, to appear in Reports on Mathematical Physics v. 41 No. 1 (1998)
Journal-ref: Rept.Math.Phys. 41 (1998) 49-90
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc); Mathematical Physics (math-ph); Differential Geometry (math.DG)
[230] arXiv:alg-geom/9709029 (cross-list from alg-geom) [pdf, ps, other]
Title: Vector Bundles over Elliptic Fibrations
Comments: 101 pages, AMS-TeX, amsppt style
Subjects: Algebraic Geometry (math.AG); High Energy Physics - Theory (hep-th)
[231] arXiv:chao-dyn/9709005 (cross-list from chao-dyn) [pdf, ps, other]
Title: Intermittent dissipation of a passive scalar in turbulence
Comments: 4 pages, RevTeX 3.0, Submitted to Phys. Rev. Lett
Subjects: Chaotic Dynamics (nlin.CD); Condensed Matter (cond-mat); High Energy Physics - Theory (hep-th)
[232] arXiv:cond-mat/9709053 (cross-list from cond-mat.stat-mech) [pdf, ps, other]
Title: Berezin Integrals and Poisson Processes
Subjects: Statistical Mechanics (cond-mat.stat-mech); High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph)
[233] arXiv:cond-mat/9709102 (cross-list from cond-mat.str-el) [pdf, ps, other]
Title: The Hubbard quantum wire
Authors: You-Quan Li (IPT-EPFL/ZIMP,Zhejiang Univ.), Christian Gruber (IPT-EPFL)
Comments: 4 pages, 0 figures; previous version superseded. Phys. Rev. Lett. Vol. 80, No. 4 (1998)
Journal-ref: Phys.Rev.Lett.80:1034-1037,1997
Subjects: Strongly Correlated Electrons (cond-mat.str-el); High Energy Physics - Theory (hep-th)
[234] arXiv:cond-mat/9709109 (cross-list from cond-mat.str-el) [pdf, ps, other]
Title: $U(1)\times SU(2)$ gauge theory of underdoped cuprate superconductors
Comments: 8 pages, REVTEX, no figures
Subjects: Strongly Correlated Electrons (cond-mat.str-el); High Energy Physics - Theory (hep-th)
[235] arXiv:cond-mat/9709163 (cross-list from cond-mat.stat-mech) [pdf, ps, other]
Title: The su(N) XX model
Authors: Z. Maassarani, P. Mathieu (Laval university)
Comments: 16 pages, TeX and harvmac (option b). Minor corrections, accepted for publication in Nuclear Physics B
Journal-ref: Nucl. Phys. B 517 (1998) 395-408
Subjects: Statistical Mechanics (cond-mat.stat-mech); High Energy Physics - Theory (hep-th); Quantum Algebra (math.QA); Exactly Solvable and Integrable Systems (nlin.SI)
[236] arXiv:cond-mat/9709171 (cross-list from cond-mat.stat-mech) [pdf, ps, other]
Title: Stability of Relativistic Matter With Magnetic Fields
Comments: This is an announcement of the work in cond-mat/9610195 (LaTeX)
Journal-ref: Phys.Rev.Lett.79:1785-1788,1997
Subjects: Statistical Mechanics (cond-mat.stat-mech); High Energy Physics - Theory (hep-th)
[237] arXiv:cond-mat/9709172 (cross-list from cond-mat.soft) [pdf, ps, other]
Title: Director configuration of planar solitons in nematic liquid crystals
Authors: Henryk Arodz, Joachim Stelzer (Uniwersytet Jagiellonski, Krakow, Poland)
Comments: 12 pages, Revtex, 8 postscript figures
Subjects: Soft Condensed Matter (cond-mat.soft); High Energy Physics - Theory (hep-th); Pattern Formation and Solitons (nlin.PS)
[238] arXiv:cond-mat/9709244 (cross-list from cond-mat.stat-mech) [pdf, ps, other]
Title: Gauge Theory Description of Spin Chains and Ladders
Authors: Yutaka Hosotani
Comments: 4 pages. To appear in the Proceedings of SOLITONS, a CRM-Fields-CAP Summer Workshop in Theoretical Physics, July 20 - July 26, 1997, Kingston, Ontario, Canada. (Springer-Verlag)
Subjects: Statistical Mechanics (cond-mat.stat-mech); High Energy Physics - Theory (hep-th)
[239] arXiv:cond-mat/9709245 (cross-list from cond-mat) [pdf, ps, other]
Title: Non-perturbative Results on Universal Quantities of Statistical Mechanics Models
Authors: G. Mussardo
Comments: 10 pages, Latex, 6 figures
Subjects: Condensed Matter (cond-mat); High Energy Physics - Theory (hep-th)
[240] arXiv:cond-mat/9709252 (cross-list from cond-mat.stat-mech) [pdf, ps, other]
Title: The su(N) Hubbard model
Authors: Z. Maassarani (Laval university)
Comments: 5 pages, LaTeX. Two equations added to clarify the integrability proof and minor modifications. Accepted for publication in Physics Letters A
Journal-ref: Phys. Lett. A 239 (1998) 187-190.
Subjects: Statistical Mechanics (cond-mat.stat-mech); High Energy Physics - Theory (hep-th); Exactly Solvable and Integrable Systems (nlin.SI)
[241] arXiv:cond-mat/9709285 (cross-list from cond-mat.dis-nn) [pdf, ps, other]
Title: Exact results at the 2-D percolation point
Authors: P. Kleban (1), R. M. Ziff (2) ((1) Laboratory for Surface Science and Technology & Department of Physics and Astronomy, University of Maine, Orono, ME, (2) Department of Chemical Engineering, University of Michigan, Ann Arbor, MI)
Comments: 12 pages, 2 figures, LaTeX, submitted to Physical Review Letters
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th)
[242] arXiv:cond-mat/9709298 (cross-list from cond-mat) [pdf, ps, other]
Title: Effective Field Theory Approach to Ferromagnets and Antiferromagnets in Crystaline Solids
Authors: J. M. Roman, J. Soto (Univ. de Barcelona and IFAE)
Comments: LaTeX file, 37 pages. Published version
Journal-ref: Int. J. Mod. Phys. B 13 (1999) 755
Subjects: Condensed Matter (cond-mat); High Energy Physics - Theory (hep-th)
[243] arXiv:cond-mat/9709309 (cross-list from cond-mat.stat-mech) [pdf, ps, other]
Title: Two-band random matrices
Journal-ref: Physical Review E 57, 6604 (1998)
Subjects: Statistical Mechanics (cond-mat.stat-mech); Disordered Systems and Neural Networks (cond-mat.dis-nn); High Energy Physics - Theory (hep-th); Chaotic Dynamics (nlin.CD)
[244] arXiv:dg-ga/9709005 (cross-list from dg-ga) [pdf, ps, other]
Title: Higher-Order Lagrangian Formalism on Grassmann Manifolds
Subjects: Differential Geometry (math.DG); High Energy Physics - Theory (hep-th)
[245] arXiv:dg-ga/9709012 (cross-list from dg-ga) [pdf, ps, other]
Title: From a Relativistic Phenomenology of Anyons to a Model of Unification of Forces via the Spencer Theory of Lie Structures
Authors: Jacques L. Rubin
Comments: Only changes in formula (6) chapter 3.1 and in the conclusion
Subjects: Differential Geometry (math.DG); Condensed Matter (cond-mat); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[246] arXiv:dg-ga/9709022 (cross-list from dg-ga) [pdf, ps, other]
Title: PU(2) monopoles and relations between four-manifold invariants
Comments: LaTeX 2e, 35 pages. Slightly revised version to appear in Topology and its Applications, (Proceedings of the Georgia Topology Conference, Atlanta, GA, June 1996). Physics reference and comment added
Journal-ref: Topology and its Applications 88 (1998), 111-145
Subjects: Differential Geometry (math.DG); High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG)
[247] arXiv:funct-an/9709006 (cross-list from funct-an) [pdf, ps, other]
Title: Operator space structures and the split property II
Authors: Francesco Fidaleo (Dipartimento di Matematica II Universita' di Roma Tor Vergata)
Comments: 25 pages, LaTex, Some changes in the macroes
Subjects: Functional Analysis (math.FA); High Energy Physics - Theory (hep-th); Operator Algebras (math.OA)
[248] arXiv:gr-qc/9709002 (cross-list from gr-qc) [pdf, ps, other]
Title: Gravity on Fuzzy Space-Time
Comments: ESI Preprint 478, 30 pages, Latex
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[249] arXiv:gr-qc/9709004 (cross-list from gr-qc) [pdf, ps, other]
Title: Massive string modes and non-singular pre-big-bang cosmology
Authors: Michele Maggiore
Comments: 25 pages, Latex, 3 figures. Conceptual revisions. To be published in Nucl. Phys. B
Journal-ref: Nucl.Phys. B525 (1998) 413-431
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[250] arXiv:gr-qc/9709013 (cross-list from gr-qc) [pdf, ps, other]
Title: Gravitational collapse to toroidal, cylindrical and planar black holes
Comments: 6 pages, Revtex, modifications in the title and in the interpretation of some results, to appear in Physical Review D
Journal-ref: Phys.Rev. D57 (1998) 4600-4605
Subjects: General Relativity and Quantum Cosmology (gr-qc); Astrophysics (astro-ph); High Energy Physics - Theory (hep-th)
[251] arXiv:gr-qc/9709019 (cross-list from gr-qc) [pdf, ps, other]
Title: A KMS-like state of Hadamard type on Robertson-Walker spacetimes and its time evolution
Authors: Mathias Trucks
Journal-ref: Commun.Math.Phys. 197 (1998) 387-404
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[252] arXiv:gr-qc/9709028 (cross-list from gr-qc) [pdf, ps, other]
Title: Relativistic spin networks and quantum gravity
Comments: 10 pages, amstex, some errors corrected, more references
Journal-ref: J.Math.Phys. 39 (1998) 3296-3302
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[253] arXiv:gr-qc/9709029 (cross-list from gr-qc) [pdf, ps, other]
Title: Global monopoles in dilaton gravity
Comments: 15 pages, 3 figures, version to appear in Class Quant Grav
Journal-ref: Class.Quant.Grav. 15 (1998) 985-995
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[254] arXiv:gr-qc/9709043 (cross-list from gr-qc) [pdf, ps, other]
Title: Hamiltonian Thermodynamics of Black Holes in Generic 2-D Dilaton Gravity
Authors: G. Kunstatter, R. Petryk, S. Shelemy (U. of Winnipeg)
Comments: 25 pages Revtex including 7 (eps) figures
Journal-ref: Phys.Rev.D57:3537-3547,1998
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[255] arXiv:gr-qc/9709045 (cross-list from gr-qc) [pdf, ps, other]
Title: Cross Section of a Resonant-Mass Detector for Scalar Gravitational Waves
Journal-ref: Phys.Rev.D57:4525-4534,1998
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[256] arXiv:gr-qc/9709048 (cross-list from gr-qc) [pdf, ps, other]
Title: Comment on "Accelerated Detectors and Temperature in (Anti) de Sitter Spaces"
Authors: Ted Jacobson
Journal-ref: Class.Quant.Grav. 15 (1998) 251-253
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[257] arXiv:gr-qc/9709050 (cross-list from gr-qc) [pdf, ps, other]
Title: Classical and Quantum Anisotropic Wormholes in Pure General Relativity
Authors: Hongsu Kim
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[258] arXiv:gr-qc/9709052 (cross-list from gr-qc) [pdf, ps, other]
Title: Spin Foam Models
Authors: John C. Baez
Comments: 41 pages LaTeX, some small corrections
Journal-ref: Class.Quant.Grav. 15 (1998) 1827-1858
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Quantum Algebra (math.QA)
[259] arXiv:gr-qc/9709055 (cross-list from gr-qc) [pdf, ps, other]
Title: Non-Abelian Black Holes in Brans-Dicke Theory
Comments: 31 pages, revtex, 21 figures
Journal-ref: Phys.Rev.D57:4870-4884,1998
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[260] arXiv:gr-qc/9709057 (cross-list from gr-qc) [pdf, ps, other]
Title: Topology change from Kaluza-Klein dimensions
Authors: Radu Ionicioiu (DAMTP, University of Cambridge, UK)
Comments: 5 pages, LaTeX, no figures, uses epsf
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[261] arXiv:gr-qc/9709068 (cross-list from gr-qc) [pdf, ps, other]
Title: Darboux coordinates for (first order) tetrad gravity
Comments: 12 pages, Latex. Minor presentation changes and some references added. Version to appear in Classical and Quantum Gravity
Journal-ref: Class.Quant.Grav. 15 (1998) 1527-1534
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[262] arXiv:gr-qc/9709080 (cross-list from gr-qc) [pdf, ps, other]
Title: Wave Function for the Reissner-Nordström Black-Hole
Authors: P. V. Moniz (DAMTP, U. Cambridge)
Journal-ref: Mod.Phys.Lett. A12 (1997) 1491-1505
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[263] arXiv:hep-lat/9709007 (cross-list from hep-lat) [pdf, ps, other]
Title: Singular Structure in 4D Simplicial Gravity
Journal-ref: Phys.Lett. B416 (1998) 274-280
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th)
[264] arXiv:hep-lat/9709014 (cross-list from hep-lat) [pdf, ps, other]
Title: The Overlap Formalism and Topological Susceptibility on the Lattice
Comments: 3 pages, LaTeX, 3 figures. This paper is based on a talk given by R. Singleton at Lattice '97, held in Edinburgh, Scotland
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 555-557
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[265] arXiv:hep-lat/9709026 (cross-list from hep-lat) [pdf, ps, other]
Title: Remarks on the realization of the Atiyah-Singer index theorem in lattice gauge theory
Comments: Talk given at LATTICE97, 3 pages, 1 figure
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 498-500
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th)
[266] arXiv:hep-lat/9709038 (cross-list from hep-lat) [pdf, ps, other]
Title: Abelian monopoles in lattice gluodynamics as physical objects
Comments: 3 pages, 2 figures, LaTeX using espcrc2.sty and epsfig.sty; Talk given by M.I. Polikarpov at the International Symposium on Lattice Field Theory, 22-26 July 1997, Edinburgh, Scotland
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 486-488
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[267] arXiv:hep-lat/9709039 (cross-list from hep-lat) [pdf, ps, other]
Title: Electric and magnetic currents in SU(2) lattice gauge theory
Comments: 3 pages, 1 figure, LaTeX using espcrc2.sty and epsfig.sty; Talk given by F.V. Gubarev at the International Symposium on Lattice Field Theory, 22-26 July 1997, Edinburgh, Scotland
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 516-518
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[268] arXiv:hep-lat/9709042 (cross-list from hep-lat) [pdf, ps, other]
Title: Z(2) vortices and the string tension in SU(2) gauge theory
Authors: Tamas G. Kovacs (University of Colorado), E.T. Tomboulis (UCLA)
Comments: 3 pages, LaTeX, 4 figures, uses espcrc2.sty, Talk given at LATTICE97
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 534-536
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th)
[269] arXiv:hep-lat/9709062 (cross-list from hep-lat) [pdf, ps, other]
Title: A non-trivial spectrum for the trivial φ^4 theory
Authors: F. Gliozzi
Comments: 3 pages, talk given at LATTICE'97, Edinburgh
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 634-636
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th)
[270] arXiv:hep-lat/9709066 (cross-list from hep-lat) [pdf, ps, other]
Title: The QCD vacuum
Authors: Pierre van Baal
Comments: 12p with 7 figs. Review presented at Lattice'97, Edinburgh, 22-26 July, 1997
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 126-137
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th)
[271] arXiv:hep-lat/9709079 (cross-list from hep-lat) [pdf, ps, other]
Title: A lattice determination of QCD field strength correlators
Authors: Gunnar S. Bali (U Southampton), Nora Brambilla (INFN Milano & U Heidelberg), Antonio Vairo (U Heidelberg)
Comments: 13 pages LaTeX (elsart.sty) with 5 eps figures, typos corrected, Figure 2 replaced and some changes in results section for enhanced clarity of presentation
Journal-ref: Phys.Lett.B421:265-272,1998
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[272] arXiv:hep-lat/9709087 (cross-list from hep-lat) [pdf, ps, other]
Title: Spectrum of the gauge Ising model in three dimensions
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 616-618
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th)
[273] arXiv:hep-lat/9709089 (cross-list from hep-lat) [pdf, ps, other]
Title: Universal Amplitude Ratios in the 3D Ising Model
Comments: 3 pages, talk given at LATTICE97
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 613-615
Subjects: High Energy Physics - Lattice (hep-lat); Condensed Matter (cond-mat); High Energy Physics - Theory (hep-th)
[274] arXiv:hep-lat/9709092 (cross-list from hep-lat) [pdf, ps, other]
Title: Various representations of infrared effective lattice QCD
Comments: 3 pages, LaTeX, 2 figures; talk presented at LATTICE97
Journal-ref: Nucl.Phys.Proc.Suppl.63:471-473,1998
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[275] arXiv:hep-lat/9709097 (cross-list from hep-lat) [pdf, ps, other]
Title: A Guide to Precision Calculations in Dyson's Hierarchical Scalar Field Theory
Authors: J. J. Godina (1 and 3), Y. Meurice (1 and 2), M. B. Oktay (1 and 2), S. Niermann (1) ((1) Univ. of Iowa, (2) CERN, (3) CINVESTAV-IPN)
Comments: Uses revtex with psfig, 31 pages including 15 figures
Journal-ref: Phys.Rev. D57 (1998) 6326-6336
Subjects: High Energy Physics - Lattice (hep-lat); Statistical Mechanics (cond-mat.stat-mech); High Energy Physics - Theory (hep-th)
[276] arXiv:hep-lat/9709101 (cross-list from hep-lat) [pdf, ps, other]
Title: The role of diffeomorphisms in the integration over a finite dimensional space of geometries
Authors: Pietro Menotti (Department of Physics, University of Pisa, Italy)
Comments: 3 pages, LaTeX, Talk given at LATTICE'97, Edinburgh
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 760-762
Subjects: High Energy Physics - Lattice (hep-lat); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[277] arXiv:hep-lat/9709113 (cross-list from hep-lat) [pdf, ps, other]
Title: Gauge-Fixing Approach to Lattice Chiral Gauge Theories
Comments: 6 pages, 2 figures, LaTeX, plenary talk at LATTICE'97, Edinburgh
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 147-152
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th)
[278] arXiv:hep-lat/9709115 (cross-list from hep-lat) [pdf, ps, other]
Title: Gauge-Fixing Approach to Lattice Chiral Gauge Theories, Part II
Comments: 6 pages, 5 figures, LaTeX, contribution to LATTICE'97, Edinburgh
Journal-ref: Nucl.Phys.Proc.Suppl. 63 (1998) 581-586
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th)
[279] arXiv:hep-lat/9709145 (cross-list from hep-lat) [pdf, ps, other]
Title: Scalar-gauge dynamics in (2+1) dimensions at small and large scalar couplings
Comments: 36 pages, LaTeX, 13 postscript files, to be included with epsf; improved presentation, updated references, conclusions unchanged; version to appear in Nucl. Phys. B
Journal-ref: Nucl.Phys. B528 (1998) 379-407
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[280] arXiv:hep-lat/9709154 (cross-list from hep-lat) [pdf, ps, other]
Title: Lattice Chiral Fermions Through Gauge Fixing
Authors: Wolfgang Bock (Humboldt University), Maarten Golterman (Washington University), Yigal Shamir (Tel Aviv University)
Comments: 4 pages, 3 figures (postscript), version to appear in Phys. Rev. Lett
Journal-ref: Phys.Rev.Lett. 80 (1998) 3444-3447
Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[281] arXiv:hep-ph/9709212 (cross-list from hep-ph) [pdf, ps, other]
Title: Perspectives in High Energy Physics
Authors: G. Rajasekaran
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[282] arXiv:hep-ph/9709250 (cross-list from hep-ph) [pdf, ps, other]
Title: Supergravity Radiative Effects on Soft Terms and the $μ$ Term
Journal-ref: Phys.Rev.Lett. 80 (1998) 3686-3689
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[283] arXiv:hep-ph/9709285 (cross-list from hep-ph) [pdf, ps, other]
Title: Quantum Fluctuations of Axions
Comments: Revtex, 15 pages including epsf figures, final version to appear in Phys. Rev. D: now contains a detailed discussion taking into account the time dependence of the axion mass; conclusions unchanged
Journal-ref: Phys.Rev. D58 (1998) 105004
Subjects: High Energy Physics - Phenomenology (hep-ph); Astrophysics (astro-ph); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[284] arXiv:hep-ph/9709287 (cross-list from hep-ph) [pdf, ps, other]
Title: Microphysics of Gauge Vortices and Baryogenesis
Authors: Mark Trodden (Case Western Reserve University, MIT LNS)
Comments: 6 pages, RevTeX, Invited talk at "Solitons", Kingston, Ontario, Canada, July 20-25, 1997. To appear in the proceedings
Subjects: High Energy Physics - Phenomenology (hep-ph); Astrophysics (astro-ph); High Energy Physics - Theory (hep-th)
[285] arXiv:hep-ph/9709296 (cross-list from hep-ph) [pdf, ps, other]
Title: Fixed points and vacuum energy of dynamically broken gauge theories
Comments: 17 pages, uuencoded latex file, 3 figures, uses epsf and epsfig. Submitted to Mod. Phys. Lett. A
Journal-ref: Mod.Phys.Lett. A12 (1997) 2511-2522
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[286] arXiv:hep-ph/9709329 (cross-list from hep-ph) [pdf, ps, other]
Title: Charge and Color Breaking in Supersymmetry and Superstrings
Authors: Carlos Munoz
Comments: Based on talks given at "Beyond the desert: accelerator and non-accelerator approaches", Castle Ringberg, Tegernsee (Germany), June 1997; "8th Miniworshop on Particle and Astroparticle Physics", Pusan (South Korea), May 1997. Uuencoded LaTex file. 11 pages + macro iopconf1.sty + 1 Postscript figure, including the macro psfig.tex
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[287] arXiv:hep-ph/9709356 (cross-list from hep-ph) [pdf, ps, other]
Title: A Supersymmetry Primer
Comments: 160 pages. Version 7 (January 2016) contains many updates and improvements. Errata, source files, and a version with larger type (12 pt, 179 pages) can be found at this http URL
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Experiment (hep-ex); High Energy Physics - Theory (hep-th)
[288] arXiv:hep-ph/9709364 (cross-list from hep-ph) [pdf, ps, other]
Title: The Gaugino β-Function
Authors: I. Jack, D.R.T. Jones
Comments: 11 pages, tex, no figures. Uses harvmac. Minor error in derivation of Eq. (14) corrected
Journal-ref: Phys.Lett. B415 (1997) 383-389
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[289] arXiv:hep-ph/9709371 (cross-list from hep-ph) [pdf, ps, other]
Title: Selection rules at the quark-antiquark vertex of the QCD Pomeron
Authors: H. Navelet, R. Peschanski (SPhT,Saclay)
Comments: 12 pages, latex with tcilatex, no figure
Journal-ref: Nucl.Phys.B515:269-278,1998
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[290] arXiv:hep-ph/9709379 (cross-list from hep-ph) [pdf, ps, other]
Title: Predictions from conformal algebra for the deeply virtual Compton scattering
Comments: 14 pages, LaTeX; reference to the paper of Mankiewicz et al. added, typos fixed
Journal-ref: Phys.Lett.B417:129-140,1998
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[291] arXiv:hep-ph/9709383 (cross-list from hep-ph) [pdf, ps, other]
Title: New Models of Gauge Mediated Dynamical Supersymmetry Breaking
Authors: Yuri Shirman
Journal-ref: Phys.Lett. B417 (1998) 281-286
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[292] arXiv:hep-ph/9709397 (cross-list from hep-ph) [pdf, ps, other]
Title: Renormalizations in Softly Broken SUSY Gauge Theories
Journal-ref: Nucl.Phys. B510 (1998) 289-312
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[293] arXiv:hep-ph/9709423 (cross-list from hep-ph) [pdf, ps, other]
Title: Feynman Diagrams and Cutting Rules
Authors: J.S. Rozowsky
Comments: Latex, 22 pages, 6 figures
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[294] arXiv:hep-ph/9709437 (cross-list from hep-ph) [pdf, ps, other]
Title: Effective Lagrangian Models for Gauge Theories of Fundamental Interactions
Authors: Francesco Sannino (Yale Univ. USA)
Comments: PhD Thesis at Syracuse Univ. USA, 159 pages (LaTeX), 30 PostScript Figures are included as tar.Z compressed file
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[295] arXiv:hep-ph/9709462 (cross-list from hep-ph) [pdf, ps, other]
Title: Magnetic Defects Signal Failure of Abelian Projection Gauges in QCD
Authors: Harald W. Griesshammer (U Erlangen, U of Washington)
Comments: 20 pages, LaTeX2e, uses package fontenc
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
[296] arXiv:hep-ph/9709463 (cross-list from hep-ph) [pdf, ps, other]
Title: Light-Front Hamiltonian Approach to the Bound-State Problem in Quantum Electrodynamics
Authors: Billy D. Jones (TRIUMF)
Comments: Ph.D. Dissertation at The Ohio State University, 149 pages (LaTeX2e)
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th); Nuclear Theory (nucl-th)
[297] arXiv:hep-ph/9709492 (cross-list from hep-ph) [pdf, ps, other]
Title: Supersymmetric Q-balls as dark matter
Comments: 16 pages, 3 figures (epsf); replaced with a final version, to appear in Phys. Lett. B (references added)
Journal-ref: Phys.Lett.B418:46-54,1998
Subjects: High Energy Physics - Phenomenology (hep-ph); Astrophysics (astro-ph); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[298] arXiv:nucl-th/9709052 (cross-list from nucl-th) [pdf, ps, other]
Title: Variational treatment of quenched QED using the worldline technique
Authors: A. W. Schreiber (Adelaide U.), C. Alexandrou (Cyprus U.), R. Rosenfelder (PSI)
Comments: 5 pages LaTeX, using espcrc1.sty, no figures. Talk presented at the 15th Int. Conf. on Few-Body Problems in Physics, Groningen, The Netherlands, 22-26 July, 1997
Journal-ref: Nucl.Phys. A631 (1998) 635c-639c
Subjects: Nuclear Theory (nucl-th); High Energy Physics - Theory (hep-th)
[299] arXiv:patt-sol/9709003 (cross-list from patt-sol) [pdf, ps, other]
Title: Uses of Envelopes for Global and Asymptotic Analysis; geometrical meaning of renormalization group equation
Authors: Teiji Kunihiro
Comments: Talk presented at RIMS (Kyoto) Workshop on Geometrical Methods for Asymptotic Analysis 1997.5.20 -- 5.23. LaTex, 15 pages
Subjects: Pattern Formation and Solitons (nlin.PS); High Energy Physics - Theory (hep-th)
[300] arXiv:physics/9709009 (cross-list from math-ph) [pdf, ps, other]
Title: A New Family of Solvable Self-Dual Lie Algebras
Authors: Oskar Pelc
Journal-ref: J.Math.Phys. 38 (1997) 3832-3840
Subjects: Mathematical Physics (math-ph); High Energy Physics - Theory (hep-th)
[301] arXiv:physics/9709011 (cross-list from math-ph) [pdf, ps, other]
Title: Quantum Harmonic Analysis and Geometric Invariants
Authors: Arthur Jaffe (Harvard University)
Subjects: Mathematical Physics (math-ph); High Energy Physics - Theory (hep-th)
[302] arXiv:physics/9709016 (cross-list from math-ph) [pdf, ps, other]
Title: The Method of Geodesic Expansions and its Application to the Semiclassical Sum over Immersed Manifolds
Authors: Wolfgang Mueck
Comments: nearly completely rewritten, application added 17 pages, LaTeX(2e) with amsmath, amsfonts, graphicx, 1 figure
Subjects: Mathematical Physics (math-ph); High Energy Physics - Theory (hep-th)
[303] arXiv:physics/9709033 (cross-list from math-ph) [pdf, ps, other]
Title: Eigenvalues of Casimir operators for $gl(m/\infty)$
Journal-ref: J.Phys.A32:391-399,1999
Subjects: Mathematical Physics (math-ph); High Energy Physics - Theory (hep-th)
[304] arXiv:physics/9709043 (cross-list from math-ph) [pdf, ps, other]
Title: Do Quasi-Exactly Solvable Systems Always Correspond to Orthogonal Polynomials?
Comments: Revtex, 7 pages, No figure
Journal-ref: Phys.Lett. A239 (1998) 197-200
Subjects: Mathematical Physics (math-ph); High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph)
[305] arXiv:physics/9709045 (cross-list from math-ph) [pdf, ps, other]
Title: An Introduction to Noncommutative Geometry
Comments: 18 pages, LaTeX, updated bibliography (only). The full document is now published in the EMS Series of Lectures in Mathematics
Subjects: Mathematical Physics (math-ph); High Energy Physics - Theory (hep-th); Differential Geometry (math.DG); Quantum Algebra (math.QA)
[306] arXiv:q-alg/9709003 (cross-list from q-alg) [pdf, ps, other]
Title: A q-deformation of the parastatistics and an alternative to the Chevalley description of $U_q[osp(2n+1/2m)]$
Authors: T.D. Palev
Comments: 14 pages, TeX, minor misprints corrected. To be published in Comm. Math. Phys
Subjects: Quantum Algebra (math.QA); High Energy Physics - Theory (hep-th)
[307] arXiv:q-alg/9709004 (cross-list from q-alg) [pdf, ps, other]
Title: Highest weight irreducible representations of the quantum algebra $U_h(A_\infty)$
Subjects: Quantum Algebra (math.QA); High Energy Physics - Theory (hep-th)
[308] arXiv:q-alg/9709011 (cross-list from q-alg) [pdf, ps, other]
Title: Asymptotics of Jack polynomials as the number of variables goes to infinity
Journal-ref: Intern. Math. Research Notices 1998, no. 13, 641-682
Subjects: Quantum Algebra (math.QA); Condensed Matter (cond-mat); High Energy Physics - Theory (hep-th); Exactly Solvable and Integrable Systems (nlin.SI)
[309] arXiv:q-alg/9709013 (cross-list from q-alg) [pdf, ps, other]
Title: An Elliptic Algebra $U_{q,p}(\hat{sl_2})$ and the Fusion RSOS Model
Authors: Hitoshi Konno
Subjects: Quantum Algebra (math.QA); High Energy Physics - Theory (hep-th)
[310] arXiv:q-alg/9709023 (cross-list from q-alg) [pdf, ps, other]
Title: Three Short Distance Structures from Quantum Algebras
Authors: A. Kempf (D.A.M.T.P., Cambridge)
Comments: 8 pages LaTeX2e, Proceedings 6th Coll. Quantum Groups and Integrable Systems, Prague 19-21 June '97
Subjects: Quantum Algebra (math.QA); High Energy Physics - Theory (hep-th)
[311] arXiv:q-alg/9709024 (cross-list from q-alg) [pdf, ps, other]
Title: Dynamically twisted algebra $A_{q,p;\hat{π}}(\hat{gl_2})$ as current algebra generalizing screening currents of q-deformed Virasoro algebra
Authors: B.Y.Hou, W.L.Yang
Comments: 24 pages, Latex file 66K
Subjects: Quantum Algebra (math.QA); High Energy Physics - Theory (hep-th)
[312] arXiv:q-alg/9709032 (cross-list from q-alg) [pdf, ps, other]
Title: Quantum Orthogonal Planes: ISO_{q,r}(N) and SO_{q,r}(N) -- Bicovariant Calculi and Differential Geometry on Quantum Minkowski Space
Comments: LaTeX, 36 pages. Considered more real forms, added some explicit formulas, used simpler definition of hermitean momenta. To be published in European Phys. Jou. C
Journal-ref: Eur.Phys.J.C7:159-175,1999
Subjects: Quantum Algebra (math.QA); High Energy Physics - Theory (hep-th)
[313] arXiv:q-alg/9709036 (cross-list from q-alg) [pdf, ps, other]
Title: q-deformed algebras $U_q(so_n)$ and their representations
Authors: A. M. Gavrilik, N. Z. Iorgov (ITP, Kiev)
Comments: LaTeX, 14 pages. Minor corrections. Final version as published in Methods of Functional Analysis and Topology
Journal-ref: Methods Func.Anal.Topol. 3, no.4, 51-63 (1997)
Subjects: Quantum Algebra (math.QA); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[314] arXiv:q-alg/9709039 (cross-list from q-alg) [pdf, ps, other]
Title: On Fusion Algebras and Modular Matrices
Journal-ref: Commun.Math.Phys.206:1-22,1999
Subjects: Quantum Algebra (math.QA); High Energy Physics - Theory (hep-th)
[315] arXiv:q-alg/9709040 (cross-list from q-alg) [pdf, ps, other]
Title: Deformation quantization of Poisson manifolds, I
Authors: Maxim Kontsevich
Comments: plain TeX and epsf.tex, 46 pages, 24 figures
Journal-ref: Lett.Math.Phys.66:157-216,2003
Subjects: Quantum Algebra (math.QA); High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG)
[316] arXiv:quant-ph/9709021 (cross-list from quant-ph) [pdf, ps, other]
Title: Supersymmetric Construction of Exactly Solvable Potentials and Non-Linear Algebras
Comments: LaTeX, 11 pages, 3 figures, figures added, minor misprints corrected. to appear in Russian Journal of Nuclear Physics (Yadernaya Fizika)
Subjects: Quantum Physics (quant-ph); High Energy Physics - Theory (hep-th); Quantum Algebra (math.QA); Exactly Solvable and Integrable Systems (nlin.SI)
[317] arXiv:quant-ph/9709032 (cross-list from quant-ph) [pdf, ps, other]
Title: The Interpretation of Quantum Mechanics: Many Worlds or Many Words?
Authors: Max Tegmark (IAS)
Comments: 6 pages. More details and links at this http URL (faster from the US), from this http URL (faster from Europe) or from max@ias.edu
Journal-ref: Fortsch.Phys.46:855-862,1998
Subjects: Quantum Physics (quant-ph); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[318] arXiv:quant-ph/9709034 (cross-list from quant-ph) [pdf, ps, other]
Title: A Comment on "Semiquantum Chaos"
Comments: 4 pages (latex), 1 figure (postscript)
Journal-ref: Phys.Rev.Lett.81:240,1998
Subjects: Quantum Physics (quant-ph); High Energy Physics - Theory (hep-th)
[319] arXiv:quant-ph/9709050 (cross-list from quant-ph) [pdf, ps, other]
Title: Exact Evolution Operator on Non-compact Group Manifolds
Authors: Nurit Krausz, M. S. Marinov (Technion, Israel)
Comments: 32 pages, 5 postscript figures, LaTex
Journal-ref: J.Math.Phys. 41 (2000) 5180-5208
Subjects: Quantum Physics (quant-ph); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[320] arXiv:solv-int/9709004 (cross-list from solv-int) [pdf, ps, other]
Title: Solitons from Dressing in an Algebraic Approach to the Constrained KP Hierarchy | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7672226428985596, "perplexity": 18805.088068186484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256163.40/warc/CC-MAIN-20190520222102-20190521004102-00059.warc.gz"} |
https://bibbase.org/network/publication/mesleh-badarneh-younis-almehmadi-howsignificantistheassumptionoftheuniformchannelphasedistributionontheperformanceofspatialmultiplexingmimosystem-2016 | How significant is the assumption of the uniform channel phase distribution on the performance of spatial multiplexing MIMO system?. Mesleh, R., Badarneh, O. S, Younis, A., & Almehmadi, F. S Wireless Networks, 2016.
@article{mesleh_how_2016,
title = {How significant is the assumption of the uniform channel phase distribution on the performance of spatial multiplexing {MIMO} system?},
journal = {Wireless Networks},
author = {Mesleh, Raed and Badarneh, Osamah S and Younis, Abdelhamid and Almehmadi, Fares S},
year = {2016},
pages = {1--8}
} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7975302934646606, "perplexity": 14880.111472187276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038076819.36/warc/CC-MAIN-20210414034544-20210414064544-00534.warc.gz"} |
https://www.vcalc.com/wiki/MichaelBartmess/What%27s+Your+Sign%3F | # What's Your Sign?
Equation / Last modified by KurtHeckman on 2017/06/06 16:57
"Zodiac Sign" =
| Variable | Instructions | Datatype |
| --- | --- | --- |
| Calendar Month of Birth Date | "Select the month of your birthday" | Text |
| Calendar Day of Birth Date | "Select the day of your birthday" | Integer |
Astrology has been around since ancient times and was created by peoples who had no idea what a star or a planet really was; at that time they anthropomorphically ascribed literal human characters to constellations and planets. Despite this basis in antiquity and mysticism, astrology has persisted into the present day, and this equation lets you ascertain the most central data element of astrology: your Zodiac sign (the signs are named for key star constellations), as determined by your birth date.
So, for instance, if you were born between March 21st and April 20th, this equation will tell you that you are associated with the first sign of the Zodiac, Aries.
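The date-range lookup behind such a calculator can be sketched in a few lines. This is an illustrative sketch only: boundary dates for the tropical zodiac vary by a day or so between sources, and the cutoffs below are assumptions chosen to match the Aries range quoted above (March 21st through April 20th).

```python
# Approximate tropical-zodiac cutoffs: (month, last day of the sign
# falling in that month, sign name). Boundary dates are illustrative.
CUTOFFS = [
    (1, 20, "Capricorn"), (2, 18, "Aquarius"), (3, 20, "Pisces"),
    (4, 20, "Aries"), (5, 20, "Taurus"), (6, 20, "Gemini"),
    (7, 22, "Cancer"), (8, 22, "Leo"), (9, 22, "Virgo"),
    (10, 22, "Libra"), (11, 21, "Scorpio"), (12, 21, "Sagittarius"),
    (12, 31, "Capricorn"),  # Dec 22-31 wraps back to Capricorn
]

def zodiac_sign(month: int, day: int) -> str:
    """Return the Zodiac sign for a birth month and day."""
    for m, last_day, sign in CUTOFFS:
        # Tuple comparison orders first by month, then by day.
        if (month, day) <= (m, last_day):
            return sign
    raise ValueError("invalid month/day")
```

For example, `zodiac_sign(3, 21)` and `zodiac_sign(4, 20)` both return `"Aries"`, matching the range given above.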
# The Zodiac
"In both astrology and [ancient] historical astronomy, the zodiac (Greek: ζῳδιακός, zōidiakos) is a circle of twelve 30° divisions of celestial longitude that are centered upon the ecliptic, the apparent path of the Sun across the celestial sphere over the course of the year. The paths of the Moon and visible planets also remain close to the ecliptic, within the belt of the zodiac, which extends 8-9° north or south of the ecliptic, as measured in celestial latitude. Because the divisions are regular, they do not correspond exactly to the twelve constellations after which they are named.
Historically, these twelve divisions are called signs. Essentially, the zodiac is a celestial coordinate system, or more specifically an ecliptic coordinate system, which takes the ecliptic as the origin of latitude, and the position of the Sun at vernal equinox as the origin of longitude."2
Zodiac signs are defined by their associated star constellations and each zodiac sign represents a creature.
The 12 zodiac signs can be divided into groups defining personality characteristics of the individual who lives under that sign. These groups are characterized by human traits or characterization attributes:
• masculine versus feminine
• positive versus negative
• active versus passive
• and several other token attributes, such as the cardinal, fixed and mutable signs
# Realistic Interpretation
### The Origin of the Zodiac
Obviously the creators of the Zodiac were looking for ways to connect themselves to the cosmos, to find commonalities between people and to find guiding principles upon which to divine the life choices which were so much more important in an earlier age. Stars in the night sky were something seen commonly and compellingly. Stars and planets were then associated with mystical powers and gods and other divinities.
Today, it is a rarity for people not living in remote areas of the US to even know what the Milky Way looks like or to have a clear view of any of the constellations except the ones with the brightest of stars. We do not see, as a whole, anything like the same sky that was seen in ancient times.
People of the ancient world were looking for explanations for many things for which we commonly know very precise scientific explanations today. The grade school student knows stellar trivia that the scholars of ancient Rome did not. Life exists today immersed in a physical reality where many more elements of what we experience day-to-day have concrete explanation and interpretation. This was not so when the Romans took Babylonian astronomy into their considerations to form a means to explain how life's complex path is decided.
### Stars And the Night Sky in our Modern Life
Growing up in the Midwest, I did not have a clue what the sky looked like. There is literally no place in the whole of the Midwest or central East Coast of the US today where you can really see stars. I mean stars against a background so deep that the inky blackness appears itself to be translucent, where the density of the Milky Way is a tangible swath across the center of the night sky, where planets can be seen with the naked eye to actually be spheres, small marbles that look as if you could pluck them from the heavens.
That IS NOT the common experience today. Most people reading this think they have seen the starry sky but have not.
That is the sky that compelled the earliest astrologers and astronomers to imagine a physical connection to those heavenly bodies they saw oh-so-much-clearer than we do today.
The Zodiac was created at a time when fairly little was known about the stars, galaxies and planets. Even as the use of the Zodiac matured into a pseudo-science of its time, astronomy had not yet learned enough to know the distances and the physics that characterize our galaxy and our solar system.
And yet the Zodiac has persisted.
Why?
### A Traceable Explanation
I believe that most interpretations of the universe that persist do so because there is a regularity and a perceptible connection to known reality. Even something as mystical in its basis as astrology contains kernels of this physical reality. Unfortunately for those adamant astrology practitioners, it has nothing, not the slightest thing, to do with the physical influence of the stars and the planets on human beings. We know for quite calculable fact that the gravitational, electromagnetic, light-intensity and any other known "affecting" physical phenomena are so infinitesimally minute in their effect on us that they cannot have anything whatsoever to do with people, their life choices, their personalities and their interactions with other human beings.
But, before the astrology buffs in the crowd get their hackles up: if you consider all we know about physiology, early human development, and the amazing intricacies of the human brain, there is a reasonable possibility that the seasons, which tie directly to the Zodiac signs, could have tangible effects on our lives.
Here is my postulation of a possible interpretation of the sometimes close correlations of personalities and personal love-interactions translated from individuals' Zodiac signs.
### Early Developmental Influences
We have an ever-growing body of scientific knowledge that shows that environment has a tangible effect on early human development. Even during pregnancy there is a lot of developmental growth being affected by the activity level and surrounding environment the fetus experiences in the womb. We know that an infant is affected in many ways by the interactions with its parents and the world around his/her expanding intelligence. Involved parents spend significant effort surrounding a newborn with color and light and tactile sensation and conversation and music and imagery of increasing complexity. And the scientific literature is chock-full of studies exemplifying the positive and negative effects that can be conveyed to an individual at even this early a period in their development.
So, it is not a stretch to think that there are commonalities imposed on newborns by their environment and that that environment has certain characteristics which have seasonal traits.
Take an infant born under the sign of Aries. It is Springtime in North America and there is a vital change in the activity level of parents. They are already excited to get out and show off their new baby, and the Springtime brings an exhilaration with it that causes them to be much more likely to take their infant out in public, to expose the infant to the smells of flowering Spring and to sunlight, to take the stroller on a turn around the park, and to encounter many enthusiastic faces of people who are themselves in a great mood for being out and about on a beautiful Spring day.
Isn't it more likely that the Aries infant is exposed to a larger number of faces and to a distinctly different atmosphere altogether throughout some formative months than, let's say, an Aquarius baby who is wrapped tightly against Winter's harsher cold? Isn't it likely that parents are more cautious in taking the Aquarius baby out into the cold, and thus on average the Aquarius baby begins his experience of the world in a seasonally characteristic atmosphere?
And since birthdays in western civilization are a distinctly important time for young children to interact and absorb confidence and nurturing, each year that same Aries child has a birthday in a Spring-like atmosphere, where it is much more likely that other children's parents are happy to bring their child out into the sunshine at a back-yard party where flowers are blooming in the neighborhood and the sun is making the day a special event. Those kinds of experiences, spread across the population, could very well have a significant effect on the maturation process of a child's personality, affecting all that happens later in their lives.
And obviously I am painting my examples based on very homogeneous expectations of what life holds for a typical child in western civilization. Note that the Zodiac applies to a sky and constellations that would not be the same in the southern hemisphere. But there are many parallels you could probably find were you to perform a detailed study of the interconnections of ascribed personality traits to seasonal-based environmental influence on earliest infant and adolescent development.
And the moon supports this as the counter-argument. The moon is close enough, massive enough and reflective enough to have a known physical effect on moods and on tangible attributes of our environment. The stars, at many millions or billions of times the distance to the moon, are simply too far away to have any effect at all. The Zodiac, however, was set up (as the base description in the section titled The Zodiac above states) as a system based on 30-degree increments of the celestial sky. This was really an unknowing attempt by the Zodiac's creators to link seasonal change to human personality, and it thus supports fairly conclusively the theory that any traceable effect of a Zodiac sign on a person's life choices, stemming from personality traits, is MUCH more likely to be a seasonal characteristic than anything historically attributed to the stars themselves.
### A Fundamental Agreement
And so, in essence, I believe I am in a sense in agreement with those people who put great stock in the predictions made from the astrological signs of the Zodiac. I think there is a basis in reality for the uncanny coincidences that come from the somewhat nebulous predictions made about people's personalities, their affinities for personalities under signs other than their own, and the implications for life choices taken in a personality-based context.
The nearest star has no effect on you or me. The balance sheet of all the possible effects it could have (leaving out mysticism) combines to something so imperceptible as to not be conceivably useful to even consider. Science doesn't rule out an effect of the seasons, but it does rule out an effect of the stars.
http://mathhelpforum.com/pre-calculus/44661-describe-difference.html | # Math Help - Describe the Difference
1. ## Describe the Difference
In simple words, can anyone tell me what is the basic difference between Descartes' Rule of Signs and the Rational Zero Theorem? Is there a connection between the two?
Thanks
P.S. Who is Descartes anyway?
2. Hello,
P.S. Who is Descartes anyway?
^^
René Descartes - Wikipedia, the free encyclopedia
3. Originally Posted by magentarita
In simple words, can anyone tell me what is the basic difference between Descartes' Rule of Signs and the Rational Zero Theorem? Is there a connection between the two?
Thanks
P.S. Who is Descartes anyway?
Here is a link to the Rational Zeros Theorem. It basically gives the set of all possible rational numbers that could satisfy $f(x)=0$.

Descartes' Rule of Signs simply bounds the possible number of positive and negative real solutions.
Rene Descartes was a French philosopher and mathematician best known for his work on the Cartesian coordinate system; his analytic geometry ideas provided the framework for Newton's and Leibniz's calculus.
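Both results can be sketched in a few lines of Python. The polynomial 2x^3 - 3x^2 - 8x - 3 is my own illustrative example, not one from the thread.

```python
from fractions import Fraction

def sign_changes(coeffs):
    """Descartes' Rule of Signs: the number of positive real roots of a
    polynomial (coefficients in descending order) equals this count of
    sign changes, or is less than it by an even number."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_zero_candidates(coeffs):
    """Rational Zero Theorem: every rational root p/q (in lowest terms) of
    an integer-coefficient polynomial has p dividing the constant term and
    q dividing the leading coefficient."""
    ps = divisors(abs(coeffs[-1]))
    qs = divisors(abs(coeffs[0]))
    pos = {Fraction(p, q) for p in ps for q in qs}
    return sorted(pos | {-c for c in pos})

# f(x) = 2x^3 - 3x^2 - 8x - 3: one sign change, so exactly one positive
# real root; the candidate list contains x = 3, which is in fact a root.
coeffs = [2, -3, -8, -3]
print(sign_changes(coeffs))               # 1
print(rational_zero_candidates(coeffs))   # the candidates ±1, ±1/2, ±3, ±3/2
```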
4. ## Great...
I thank you both. I will check out the provided links when time allows.
http://ompf2.com/viewtopic.php?f=3&t=2135 | ## Geometry factor for parallel light?
Practical and theoretical implementation discussion.
dawelter
Posts: 47
Joined: Sun Oct 29, 2017 3:15 pm
Location: Germany
### Geometry factor for parallel light?
Say we have a source of parallel light, described by a delta distribution like Le(x,w) = dirac_delta(w-w0)*f(x). This could be a "laser-like" source like I indicated in the pic below or an infinitely distant light.
(attached figure: parallel_light_handling.png)
Question is, when to add the 1/r^2 factor in the geometry term?
For illustration, my Gedankenexperiment:
*Case on lhs: Source projects a parallel beam. Its cross section is fixed. Even after going through the mirror. Thus the power received by the target in the bottom is independent of how far we take it away from the mirror. So I would omit the r-factor.
*Case on rhs: Light spreads out the further it goes away from the light source and/or the mirror. Decreasing power density must be accounted for by the r^-2 term.
So, tracing a path from a parallel source, I would omit the r^-2 term until the path hits a non-specular surface.
I wonder if I have the wrong idea in mind because I don't recall reading anything about propagating a "parallel beam flag".
shocker_0x15
Posts: 75
Joined: Sun Aug 19, 2012 3:24 pm
Contact:
### Re: Geometry factor for parallel light?
I'm not sure if I understand your question, but I think 1 / r^2 term should always be considered and is implicitly considered if you trace a path from the light source even if it is parallel light.
Light path construction is performed as follows:
1. sample a point y0 on the light with pdf p_A(y0) and get emittance (spatial component of emission) L_e^0(y0) [W/m^2]
Monte Carlo estimate:
L_e^0(y0) / p_A(y0)
2. sample a direction along which a ray emits with pdf p_w(y0->y1) and get a directional component of emission L_e^1(y0->y1) [1/sr]
Cumulative Monte Carlo estimate:
L_e^0(y0) * L_e^1(y0->y1) * |dot(n0, y0->y1)| / (p_A(y0) * p_w(y0->y1)) =
L_e(y0->y1) * |dot(n0, y0->y1)| / (p_A(y0) * p_w(y0->y1))
The above is the estimate for incoming flux at y1. It doesn't contain 1/ r^2 term because it is flux, not radiance nor intensity.
However we notice that it implicitly contains 1/r^2 if it is written with respect to surface area:
L_e(y0->y1) * |dot(n0, y0->y1)| / (p_A(y0) * p_w(y0->y1)) =
L_e(y0->y1) * G(y0<->y1) / (p_A(y0) * p_w(y0->y1) / |dot(n0, y0->y1)| * G(y0<->y1)) =
L_e(y0->y1) * G(y0<->y1) / (p_A(y0) * p_A(y1))
The numerator is the measurement contribution function from y0 to y1.
The function contains 1 / r^2 term but it is cancelled by corresponding G term for the pdf.
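The cancellation can be checked numerically. The values below are made-up illustrative numbers, not output of a real renderer; the point is only that writing the estimate against the area pdf introduces a G term whose 1/r^2 cancels the one in the measurement contribution.

```python
import math

def geometry_term(r, cos0, cos1):
    """G(y0<->y1) = |cos at y0| * |cos at y1| / r^2"""
    return abs(cos0) * abs(cos1) / (r * r)

L_e = 5.0                        # emitted radiance L_e(y0->y1) (made up)
p_A_y0 = 0.25                    # area pdf of the light sample y0
p_w = 1.0 / math.pi              # solid-angle pdf of the emitted direction
r, cos0, cos1 = 3.0, 0.8, 0.6    # sampled segment y0 -> y1

# Estimate written against the solid-angle pdf (no explicit 1/r^2):
est_solid_angle = L_e * abs(cos0) / (p_A_y0 * p_w)

# Same estimate with the area pdf p_A(y1) = p_w * |cos1| / r^2 and the
# full measurement contribution L_e * G -- the two r^2 factors cancel:
p_A_y1 = p_w * abs(cos1) / (r * r)
est_area = L_e * geometry_term(r, cos0, cos1) / (p_A_y0 * p_A_y1)

assert math.isclose(est_solid_angle, est_area)
```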
dawelter
### Re: Geometry factor for parallel light?
Yeah, I wasn't clear. I was mostly thinking of the path densities used to compute MIS weights. Eq 10.9 in Veach's thesis.
So this is really a question about the density of the end points in the left hand side case. I suppose the density does not change no matter how much I would move the target surface up or down. So I would omit the r^2 term in the density.
I tried to implement that, but to my surprise I didn't see a difference from the baseline version. It is probably bugged, but the renderings look all right.
https://wikivisually.com/wiki/Hexagonal_prism | # Hexagonal prism
Uniform hexagonal prism
Type: Prismatic uniform polyhedron
Elements: F = 8, E = 18, V = 12 (χ = 2)
Faces by sides: 6{4} + 2{6}
Schläfli symbol: t{2,6} or {6}×{}
Wythoff symbol: 2 6 | 2 or 2 2 3 |
Symmetry: D6h, [6,2], (*622), order 24
Rotation group: D6, [6,2]+, (622), order 12
References: U76(d)
Dual: Hexagonal dipyramid
Properties: convex, zonohedron
Vertex figure: 4.4.6
In geometry, the hexagonal prism is a prism with hexagonal base. This polyhedron has 8 faces, 18 edges, and 12 vertices.[1]
Since it has eight faces, it is an octahedron. However, the term octahedron primarily refers to the regular octahedron, which has eight triangular faces; because of this ambiguity, and the dissimilarity of the various eight-sided figures, the term is rarely used for the hexagonal prism without clarification.
Before sharpening, many pencils take the shape of a long hexagonal prism.[2]
## As a semiregular (or uniform) polyhedron
If its faces are all regular, the hexagonal prism is a semiregular polyhedron (more generally, a uniform polyhedron) and the fourth in an infinite set of prisms formed by square sides and two regular polygon caps. It can be seen as a truncated hexagonal hosohedron, represented by the Schläfli symbol t{2,6}. Alternatively, it can be seen as the Cartesian product of a regular hexagon and a line segment, represented by the product {6}×{}. The dual of a hexagonal prism is a hexagonal bipyramid.
The symmetry group of a right hexagonal prism is D6h of order 24. The rotation group is D6 of order 12.
## Volume
As with most prisms, the volume is found by taking the area of the regular hexagonal base, with side length ${\displaystyle a}$, and multiplying it by the height ${\displaystyle h}$, giving the formula:[3]
${\displaystyle V={\frac {3{\sqrt {3}}}{2}}a^{2}\times h}$
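As a quick check, the formula agrees with treating the base as six equilateral triangles of area (√3/4)·a². A minimal sketch:

```python
import math

def hexagonal_prism_volume(a: float, h: float) -> float:
    """V = (3*sqrt(3)/2) * a^2 * h for a right prism over a regular
    hexagon with side length a and height h."""
    return 1.5 * math.sqrt(3) * a * a * h

# Cross-check: a regular hexagon is six equilateral triangles, so the
# base area is 6 * (sqrt(3)/4) * a^2 and the volume is base area * height.
a, h = 2.0, 5.0
base_area = 6 * (math.sqrt(3) / 4) * a * a
assert math.isclose(hexagonal_prism_volume(a, h), base_area * h)
```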
## Symmetry
The topology of a uniform hexagonal prism can have geometric variations of lower symmetry, including:
• D6h, [2,6], (*622)
• C6v, [6], (*66)
• D3h, [2,3], (*322)
• D3d, [2+,6], (2*3)

with constructions {6}×{}, t{3}×{} and s2{2,6}.
## As part of spatial tesselations
It exists as cells of four prismatic uniform convex honeycombs in 3 dimensions:
It also exists as cells of a number of four-dimensional uniform 4-polytopes, including:
## Related polyhedra and tilings
This polyhedron can be considered a member of a sequence of uniform patterns with vertex figure (4.6.2p) and a corresponding Coxeter-Dynkin diagram. For p < 6, the members of the sequence are omnitruncated polyhedra (zonohedra), shown below as spherical tilings. For p > 6, they are tilings of the hyperbolic plane, starting with the truncated triheptagonal tiling.
https://en.wikipedia.org/wiki/Debits_and_credits | # Debits and credits
In double entry bookkeeping, debits and credits (abbreviated Dr and Cr, respectively) are entries made in account ledgers to record changes in value resulting from business transactions. Generally speaking, the source account for the transaction is credited (that is, an entry is made on the right side of the account's ledger) and the destination account is debited (that is, an entry is made on the left side). Total debits must equal total credits for each transaction; individual transactions may require multiple debit and credit entries to record.[1][2]
The difference between the total debits and total credits in a single account is the account's balance. If debits exceed credits, the account has a debit balance; if credits exceed debits, the account has a credit balance.[3] For the company as a whole, the totals of debit balances and credit balances must be equal as shown in the trial balance report, otherwise an error has occurred.
Accountants use the trial balance to prepare financial statements (such as the balance sheet and income statement) which communicate information about the company's financial activities in a generally accepted standard format.
## History
The first known recorded use of the terms is Venetian Luca Pacioli's 1494 work, Summa de Arithmetica, Geometria, Proportioni et Proportionalita (translated: Everything That Is Known About Arithmetic, Geometry, Proportions and Proportionality). Pacioli devoted one section of his book to documenting and describing the double-entry bookkeeping system in use during the Renaissance by Venetian merchants, traders and bankers. This system is still the fundamental system in use by modern bookkeepers.[4]
One theory is that in its original Latin, Pacioli's Summa used the Latin words debere (to owe) and credere (to entrust) to describe the two sides of a closed accounting transaction. Assets were owed to the owner and the owners' equity was entrusted to the company. At the time negative numbers were not in use. When his work was translated, the Latin words debere and credere became the English debit and credit. Under this theory, the abbreviations Dr (for debit) and Cr (for credit) derive from the original Latin.[5] However, Sherman[6] casts doubt on this idea because Pacioli uses Per (Latin for "from") for the debtor and A (Latin for "to") for the creditor in the Journal entries. Sherman goes on to say that the earliest text he found that actually uses "Dr." as an abbreviation in this context was an English text, the third edition (1633) of Ralph Handson's book Analysis or Resolution of Merchant Accompts[7] and that Handson uses Dr. as an abbreviation for the English word "debtor." (Sherman could not locate a first edition, but speculates that it too used Dr. for debtor.) The words actually used by Pacioli for the left and right sides of the Ledger are "in dare" and "in havere" (give and receive).[8] Geijsbeek the translator suggests in the preface: 'if we today would abolish the use of the words debit and credit in the ledger and substitute the ancient terms of "shall give" and "shall have" or "shall receive," the personification of accounts in the proper way would not be difficult and, with it, bookkeeping would become more intelligent to the proprietor, the layman and the student.'[9]
Jackson[10] notes that "debtor" need not be a person, but can be an abstract operator (cf. "divisor" in math) "...it became the practice to extend the meanings of the terms ... beyond their original personal connotation and apply them to inanimate objects and abstract conceptions...".
## Aspects of transactions
To determine whether one must debit or credit a specific account we use either the accounting equation approach which consists of five accounting rules[11] or the traditional approach based on three rules (for Real accounts, Personal accounts, and Nominal accounts) to determine whether to debit or to credit an account.[12]
• Real accounts are the assets of a firm, which may be tangible (machinery, buildings etc.) or intangible (goodwill, patents etc.)
• Personal accounts relate to individuals, companies, creditors, banks etc.
• Nominal accounts relate to expenses, losses, incomes or gains.[13]
Whether a debit increases or decreases an account depends on what kind of account it is. An increase to an asset account is a debit. An increase to a liability or to an equity account is a credit.
| Kind of account | Debit | Credit |
|---|---|---|
| Asset | Increase | Decrease |
| Liability | Decrease | Increase |
| Income/Revenue | Decrease | Increase |
| Expense | Increase | Decrease |
| Equity/Capital | Decrease | Increase |
Conversely, a decrease to an asset account is a credit. A decrease to a liability or equity account is a debit.
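These sign rules can be written as a small lookup; a sketch under my own naming, not a standard accounting API.

```python
# Whether a debit increases (+1) or decreases (-1) each kind of account;
# a credit always has the opposite effect. Key names are my own choices.
DEBIT_EFFECT = {
    "asset": +1, "expense": +1,                    # debit increases
    "liability": -1, "income": -1, "equity": -1,   # debit decreases
}

def apply_entry(balance, kind, amount, side):
    """Apply a debit ('dr') or credit ('cr') of `amount` to an account
    of the given kind and return the new balance."""
    sign = DEBIT_EFFECT[kind] if side == "dr" else -DEBIT_EFFECT[kind]
    return balance + sign * amount

print(apply_entry(0, "asset", 100, "dr"))      # 100: a debit raises an asset
print(apply_entry(0, "liability", 100, "cr"))  # 100: a credit raises a liability
```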
Debits and credits occur simultaneously in every financial transaction in double-entry bookkeeping. In the accounting equation—Assets = Liabilities + Equity—if an asset account increases (a debit), then either another asset account must decrease (a credit), or a liability or equity account must increase (a credit).
For example, when the customer of a bank deposits money into his bank account two things change: the customer's cash-in-hand (asset) decreases and the customer's bank account balance increases. The decrease in the cash-in-hand asset is a credit while the increase in the bank account balance is a debit of equal magnitude.
The bank views the transaction from a different perspective but follows the same rules: the bank's vault cash (asset) increases, which is a debit; the increase in the customer's account balance (liability from the bank's perspective) is a credit. A customer's periodic bank statement generally shows transactions from the bank's perspective, with bank deposits characterized as credits and withdrawals as debits.
Debits are traditionally entered on the left-hand side of a ledger and credits on the right-hand side.
## Commercial understanding
All accounts must first be classified as one of the five types of accounts (accounting elements). To determine how to classify an account into one of the five elements, the definitions of the five account types must be fully understood. For example, the definition of an asset according to IFRS is as follows: "An asset is a resource controlled by the entity as a result of past events from which future economic benefits are expected to flow to the entity".[14] To understand this definition we can break it down into its constituent parts with an example:
Example: Classify what type of account the business "Bank account" is.
The bank account of a business is "a resource controlled by the entity", as it belongs to the business; it exists "as a result of past events", such as the opening of the business; and "future economic benefits are expected to flow to the entity", since a business such as a grocer can expect to make money from the sale of its goods. This basic analysis can be applied to any asset account.
All of the five accounting elements have their own definitions (discussed in other articles see: asset, liability, equity, income and expense) that must be fully understood in order to classify an account correctly.
A business will most often have more than one asset account. An essential asset account in any business is the business's bank account (see "Accounts pertaining to the five accounting elements" below for more examples). The same applies to liability accounts: if I have borrowed money from two sources (called creditors or payables), then I must open two accounts to represent this liability, called 'Creditor/Payable A' and 'Creditor/Payable B'. In this manner I may have multiple, different accounts. However, all these accounts are classified as one of the five types, so my entire business can be described in terms of its assets, expenses, liabilities, income and equity/capital (see the extended accounting equation). This is the extent of "my" business in relation to accounts, regardless of the business's practices (the business may be a retail franchise, furniture shop, restaurant, etc.). With respect to my business, each of the five accounting elements will have a monetary value, and this can be used to assess the financial position of my business at any time (my success, failure, or any other attributes that I might need to know).
Traditionally, transactions are recorded in two separate columns of numbers (known as a ledger or "T-account"): debit transactions in the left hand column and credit transactions in the right hand column. Keeping the debits and credits in separate columns allows each column to be recorded and totaled independently. Accounts within the general ledger are known colloquially as "T-accounts" due to the "T" shape that the table resembles. Each column of a ledger account lists transactions affecting that account.
## Terminology
The words debit and credit are both used differently depending on whether they are used in a bookkeeping (accounting) sense, or non-accounting sense.
In a non-accounting sense, "debit" is:
• a sum of money taken from a bank account.
In a non-accounting sense, "credit" is
• a sum of money placed into a bank account.
• money available to spend.
• money available to borrow.
The reason why individuals see debits and credits in the above manner, is that the bank statement presented by the bank to the customer is the bank's view of the account. The bank views money in a chequing account as money the bank owes to the customer, i.e. a liability, and in the rules of accounting, an increase to a liability account is a credit. Likewise, when a bank lends money to a customer and places the money into the customer's chequing account, the bank has increased its obligation to pay that money, which is a liability, and this increase is a credit and appears in the credit column of a bank statement.
When recording numbers in accounting, a debit value is placed on the left side of a ledger for a debited account and a credit value is placed on the right side of a ledger for a credited account. A debit or a credit either increases or decreases the total balance in each account, depending on what kind of accounts they are.
Each transaction (say, of value £100) is recorded by a debit entry of £100 in one account and a credit entry of £100 in another account. When people say, "debits must equal credits" they do not mean that the two columns of any ledger account must be equal. If that were the case, every account would have a zero balance (no difference between the columns) which is often not the case. The rule that total debits equal the total credits applies when all accounts are totaled.
More than two accounts may be affected by the same transaction. A transaction for £100 can be recorded as a £100 debit in one account and as multiple credits that total £100 in other accounts.
Example:
I owe creditors A and B £100 each. Thus my liability account for Creditor A has a credit balance of £100 and the same for Creditor B.
Cr: Creditor A (100)
Cr: Creditor B (100)
I pay them off from my bank chequing account, which from my point of view is an asset. I withdraw £200 from my bank account and split it to pay off the two liabilities. In my records, "Creditor A" is one account, "Creditor B" is another account, and "Bank" is a third account. The following transaction affects all three ledger accounts:
Dr: Creditor A (100)
Dr: Creditor B (100)
Cr: Bank (200)
When I write two £100 cheques for a total of £200, the balance in my bank account is reduced by £200. Under the rules of accounting, a decrease in a cash asset is recorded as a credit. Thus, in my records, the "Bank" ledger account (an asset) is reduced by a credit of £200, while the amounts owed to the two creditors are liabilities, which are reduced by the two debits totalling £200.
Therefore, for this transaction, the total amount debited = 200 and the total amount credited = 200. When all three accounts are totaled, the total debits equal the total credits.
At the end of any financial period (say, the end of a quarter or year), the total debits and total credits for each account may differ; the difference between the two sides is called the balance. If the sum of the debit side is greater than the sum of the credit side, the account has a "debit balance". If the sum of the credit side is greater, the account has a "credit balance". If the two sides happen to be equal (a coincidence, not a consequence of the laws of accounting), the account has a "zero balance".
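The balancing rule can be sketched in a few lines of Python. This is an illustrative toy ledger (not a standard accounting package), replaying the creditor example above: each posting carries a debit or credit amount, total debits across all accounts must equal total credits, and an individual account's balance is the difference between its two columns.

```python
from collections import defaultdict

# Toy ledger postings for the example above: (account, debit, credit).
postings = [
    ("Creditor A", 100, 0),    # debit reduces the liability owed to A
    ("Creditor B", 100, 0),    # debit reduces the liability owed to B
    ("Bank",         0, 200),  # credit reduces the cash asset
]

columns = defaultdict(lambda: [0, 0])  # account -> [debit total, credit total]
for account, dr, cr in postings:
    columns[account][0] += dr
    columns[account][1] += cr

# "Debits must equal credits" holds across ALL accounts taken together...
total_dr = sum(dr for _, dr, _ in postings)
total_cr = sum(cr for _, _, cr in postings)
assert total_dr == total_cr == 200

# ...but within one account the two columns generally differ; that
# difference is the account's balance.
for account, (dr, cr) in columns.items():
    print(account, "balance:", dr - cr)
```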
### Debit cards and credit cards
Debit cards and credit cards are creative terms used by the banking industry to market and identify each card.[15] From the cardholder's point of view, a credit card account normally contains a credit balance, while a debit card account normally contains a debit balance. A debit card is used to make a purchase with one's own money. A credit card is used to make a purchase by borrowing money.[16]
From the bank's point of view, when a debit card is used to pay a merchant, the payment causes a decrease in the amount of money the bank owes to the cardholder. From the bank's point of view, your debit card account is the bank's liability. A decrease to the bank's liability account is a debit. From the bank's point of view, when a credit card is used to pay a merchant, the payment causes an increase in the amount of money the bank is owed by the cardholder. From the bank's point of view, your credit card account is the bank's asset. An increase to the bank's asset account is a debit. Hence, using a debit card or credit card causes a debit to the cardholder's account in either situation when viewed from the bank's perspective.
### General ledgers
General ledger is the term for the comprehensive collection of T-accounts (so called because there was a pre-printed vertical line in the middle of each ledger page and a horizontal line at the top of each ledger page, like a large letter T). Before the advent of computerised accounting, manual accounting procedure used a book (known as a ledger) for each T-account. The collection of all these books was called the general ledger.
"Day Books" or journals are used to list every single transaction that took place during the day, and the list is totalled at the end of the day. These daybooks are not part of the double-entry bookkeeping system. The information recorded in these daybooks is then transferred to the general ledgers. Modern computer software now allows for the instant update of each ledger account – for example, when recording a cash receipt in a cash receipts journal a debit is posted to a cash ledger account with a corresponding credit in the ledger account for which the cash was received. Not every single transaction need be entered into a T-account. Usually only the sum of the book transactions (a batch total) for the day is entered in the general ledger.
## The five accounting elements
There are five fundamental elements[11] within accounting. These elements are as follows: Assets, Liabilities, Equity (or Capital), Income (or Revenue) and Expenses. Each of the five elements can be affected in either a positive or negative way. A credit does not always indicate a positive value or an increase, and similarly a debit does not always indicate a negative value or a decrease. An asset account is often referred to as a "debit account" because the account's standard increasing attribute is on the debit side. When an asset (e.g. an espresso machine) has been acquired in a business, the transaction will affect the debit side of that asset account, as illustrated below:
Asset
| Debits (Dr) | Credits (Cr) |
| --- | --- |
| X |  |
The "X" in the debit column denotes the increasing effect of a transaction on the asset account balance (total debits less total credits), because a debit to an asset account is an increase. The asset account above has been added to by a debit value X, i.e. the balance has increased by £X or \$X. Likewise, in the liability account below, the X in the credit column denotes the increasing effect on the liability account balance (total credits less total debits), because a credit to a liability account is an increase.
All "mini-ledgers" in this section show standard increasing attributes for the five elements of accounting.
Liability
| Debits (Dr) | Credits (Cr) |
| --- | --- |
|  | X |

Income
| Debits (Dr) | Credits (Cr) |
| --- | --- |
|  | X |

Expenses
| Debits (Dr) | Credits (Cr) |
| --- | --- |
| X |  |

Equity
| Debits (Dr) | Credits (Cr) |
| --- | --- |
|  | X |
Summary table of standard increasing and decreasing attributes for the five accounting elements:
| Account type | Debit | Credit |
| --- | --- | --- |
| Asset | + | − |
| Liability | − | + |
| Income | − | + |
| Expense | + | − |
| Equity | − | + |
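The summary table lends itself to a small data-driven sketch. The mapping and helper below are hypothetical (illustrative names, not from any accounting library); they encode each element's increasing side and apply a posting accordingly:

```python
# Increasing side for each of the five elements, per the summary table.
INCREASES_ON = {
    "asset": "debit",
    "expense": "debit",
    "liability": "credit",
    "income": "credit",
    "equity": "credit",
}

def apply(balance, account_type, side, amount):
    """Return the new balance after posting `amount` to the given side."""
    if INCREASES_ON[account_type] == side:
        return balance + amount
    return balance - amount

# Buying an espresso machine for 500 debits the asset account:
assert apply(0, "asset", "debit", 500) == 500
# Crediting an asset (e.g. paying cash out) decreases it:
assert apply(500, "asset", "credit", 200) == 300
# Crediting a liability increases it:
assert apply(0, "liability", "credit", 500) == 500
```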
## Principle
Each transaction that takes place within the business will consist of at least one debit to a specific account and at least one credit to another specific account. A debit to one account can be balanced by more than one credit to other accounts, and vice versa. For all transactions, the total debits must be equal to the total credits and therefore balance.
The general accounting equation is as follows:
$$\text{Assets} = \text{Equity} + \text{Liabilities}$$ [17]

$$A = E + L$$
The equation thus becomes A − L − E = 0 (zero). When the total debits equal the total credits for each account, the equation balances.
The extended accounting equation is as follows:
$$\text{Assets} + \text{Expenses} = \text{Equity/Capital} + \text{Liabilities} + \text{Income}$$

$$A + Ex = E + L + I$$
In this form, increases to the amount of accounts on the left-hand side of the equation are recorded as debits, and decreases as credits. Conversely for accounts on the right-hand side, increases to the amount of accounts are recorded as credits to the account, and decreases as debits.
This can also be rewritten in the equivalent form:
$$\text{Assets} = \text{Liabilities} + \text{Equity/Capital} + (\text{Income} - \text{Expenses})$$

$$A = L + E + (I - Ex)$$
where the relationship of the Income and Expenses accounts to Equity and profit is a bit clearer.[18] Here Income and Expenses are regarded as temporary or nominal accounts which pertain only to the current accounting period whereas Asset, Liabilities and Equity accounts are permanent or real accounts pertaining to the lifetime of the business.[19] The temporary accounts are closed to the Equity account at the end of the accounting period to record profit/loss for the period. Both sides of these equations must be equal (balance).
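With purely illustrative figures (not taken from the article), both forms of the extended equation can be checked numerically:

```python
# Hypothetical period-end figures, chosen only so the equation balances.
assets, liabilities, equity = 1500, 700, 600
income, expenses = 400, 200

# Extended form: A + Ex = E + L + I
assert assets + expenses == equity + liabilities + income        # 1700 == 1700

# Equivalent form: A = L + E + (I - Ex)
assert assets == liabilities + equity + (income - expenses)      # 1500 == 1500

print("both forms of the extended accounting equation balance")
```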
Each transaction is recorded in a ledger or "T" account, e.g. a ledger account named "Bank" that can be changed with either a debit or credit transaction.
In accounting it is acceptable to draw up a ledger account in the following manner for representation purposes:
Bank
| Debits (Dr) | Credits (Cr) |
| --- | --- |
|  |  |
### Accounts pertaining to the five accounting elements
Accounts are created/opened when the need arises for whatever purpose or situation the entity may have. For example, a business that is an airline company will have to purchase airplanes, so even if an account is not listed below, a bookkeeper or accountant can create an account for a specific item, such as an asset account for airplanes. To classify an account into one of the five elements, a good understanding of the definitions of these accounts is required. Below are examples of some of the more common accounts that pertain to the five accounting elements:
#### Asset accounts
Asset accounts are economic resources which benefit the business/entity and will continue to do so.[20] Cash, bank, accounts receivable, inventory, land, buildings/plant, machinery, furniture, equipment, supplies, vehicles, trademarks and patents, goodwill, prepaid expenses, prepaid insurance, debtors (people who owe us money, due within one year), VAT input etc.
#### Liability accounts
Liability accounts record debts or future obligations the business/entity owes to others.[21] Accounts payable, salaries and wages payable, income taxes, bank overdrafts, trust accounts, accrued expenses, sales taxes, advance payments (unearned revenue), debt and accrued interest on debt, customer deposits, VAT output etc.
#### Equity accounts
Equity accounts record the claims of the owners of the business/entity to the assets of that business/entity.[22] Capital, retained earnings, drawings, common stock, accumulated funds, etc.
#### Income/Revenue accounts
Income accounts record all increases in Equity other than that contributed by the owner/s of the business/entity.[23] Services rendered, sales, interest income, membership fees, rent income, interest from investments, recurring receivables, donations, etc.
#### Expense accounts
Expense accounts record all decreases in the owners' equity which occur from using the assets or increasing liabilities in delivering goods or services to a customer, i.e. the costs of doing business.[24] Telephone, water, electricity, repairs, salaries, wages, depreciation, bad debts, stationery, entertainment, honorarium, rent, fuel, utilities, interest, etc.
### Example
Quick Services business purchases a computer for £500, on credit, from ABC Computers. Recognize the following transaction for Quick Services in a ledger account (T-account):
Quick Services has acquired a new computer which is classified as an asset within the business. According to the accrual basis of accounting, even though the computer has been purchased on credit, the computer is already the property of Quick Services and must be recognised as such. Therefore, the equipment account of Quick Services increases and is debited:
Equipment (Asset)
| (Dr) | (Cr) |
| --- | --- |
| 500 |  |
As the transaction for the new computer is made on credit, the payable "ABC Computers" has not yet been paid. As a result, a liability is created within the entity's records. Therefore, to balance the accounting equation the corresponding liability account is credited:
Payable ABC Computers (Liability)
| (Dr) | (Cr) |
| --- | --- |
|  | 500 |
The above example can be written in journal form:
| Account | Dr | Cr |
| --- | --- | --- |
| Equipment | 500 |  |
| ABC Computers (Payable) |  | 500 |
The journal entry "ABC Computers" is indented to indicate that this is the credit transaction. It is accepted accounting practice to indent credit transactions recorded within a journal.
In the accounting equation form:
A = E + L
500 = 0 + 500 (The accounting equation is therefore balanced)
### Further examples
1. A business pays rent with cash: You increase rent (expense) by recording a debit transaction, and decrease cash (asset) by recording a credit transaction.
2. A business receives cash for a sale: You increase cash (asset) by recording a debit transaction, and increase sales (income) by recording a credit transaction.
3. A business buys equipment with cash: You increase equipment (asset) by recording a debit transaction, and decrease cash (asset) by recording a credit transaction.
4. A business borrows with a cash loan: You increase cash (asset) by recording a debit transaction, and increase loan (liability) by recording a credit transaction.
5. A business pays salaries with cash: You increase salary (expenses) by recording a debit transaction, and decrease cash (asset) by recording a credit transaction.
6. The totals show the net effect on the accounting equation and the double-entry principle, where the transactions are balanced.
| # | Account | Debit (Dr) | Credit (Cr) |
| --- | --- | --- | --- |
| 1 | Rent | 100 |  |
|  | Bank |  | 100 |
| 2 | Bank | 50 |  |
|  | Sales |  | 50 |
| 3 | Equipment | 5200 |  |
|  | Bank |  | 5200 |
| 4 | Bank | 11000 |  |
|  | Loan |  | 11000 |
| 5 | Salary | 5000 |  |
|  | Bank |  | 5000 |
| 6 | Total | \$21350 | \$21350 |
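The five transactions and their totals can be replayed in code. This is an illustrative sketch using the amounts from the table; it verifies that every entry balances individually and that the grand totals match:

```python
# Each journal entry: a list of (account, debit, credit) postings.
journal = [
    [("Rent", 100, 0), ("Bank", 0, 100)],          # 1. pay rent with cash
    [("Bank", 50, 0), ("Sales", 0, 50)],           # 2. cash sale
    [("Equipment", 5200, 0), ("Bank", 0, 5200)],   # 3. buy equipment with cash
    [("Bank", 11000, 0), ("Loan", 0, 11000)],      # 4. borrow cash
    [("Salary", 5000, 0), ("Bank", 0, 5000)],      # 5. pay salaries with cash
]

# Every entry balances on its own...
for entry in journal:
    assert sum(dr for _, dr, _ in entry) == sum(cr for _, _, cr in entry)

# ...and so do the grand totals across the period.
total_dr = sum(dr for entry in journal for _, dr, _ in entry)
total_cr = sum(cr for entry in journal for _, _, cr in entry)
assert total_dr == total_cr == 21350
print("total debits:", total_dr, "total credits:", total_cr)
```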
## T-accounts
The process of using debits and credits creates a ledger format that resembles the letter "T".[25] The term "T-account" is accounting jargon for a "ledger account" and is often used when discussing bookkeeping.[26] The reason that a ledger account is often referred to as a T-account is due to the way the account is physically drawn on paper (representing a "T"): the left side (column) of the "T" is for Debit (Dr) transactions, and the right side (column) for Credit (Cr) transactions.
| Debits (Dr) | Credits (Cr) |
| --- | --- |
|  |  |
## Contra account
All accounts have corresponding contra accounts depending on what transaction has taken place e.g., when a vehicle is purchased using cash, the asset account "Vehicles" is debited as the vehicle account increases, and simultaneously the asset account "Bank" is credited due to the payment for the vehicle using cash. Some balance sheet items have corresponding contra accounts, with negative balances, that offset them. Examples are accumulated depreciation against equipment, and allowance for bad debts (also known as allowance for doubtful accounts) against accounts receivable.[27] United States GAAP utilizes the term contra for specific accounts only and doesn't recognize the second half of a transaction as a contra, thus the term is restricted to accounts that are related. For example, sales returns and allowance and sales discounts are contra revenues with respect to sales, as the balance of each contra (a debit) is the opposite of sales (a credit). To understand the actual value of sales, one must net the contras against sales, which gives rise to the term net sales (meaning net of the contras).[28]
A more specific definition in common use is an account with a balance that is the opposite of the normal balance (Dr/Cr) for that section of the general ledger.[28] An example is an office coffee fund: Expense "Coffee" (Dr) may be immediately followed by "Coffee - employee contributions" (Cr).[29] Such an account is used for clarity rather than being a necessary part of GAAP (generally accepted accounting principles).[28]
## Real, personal, and nominal accounts
Real accounts are assets. Personal accounts are liabilities and owners' equity and represent people and entities that have invested in the business. Nominal accounts are revenue, expenses, gains, and losses. Accountants close nominal accounts at the end of each accounting period.[30] This method is used in the United Kingdom, where it is simply known as the Traditional approach.[12]
Transactions are recorded by a debit to one account and a credit to another account using these three "golden rules of accounting":
1. Real account: debit what comes in and credit what goes out.
2. Personal account: debit the receiver and credit the giver.
3. Nominal account: debit all expenses and losses and credit all incomes and gains.
| Account type | Debit | Credit |
| --- | --- | --- |
| Real (assets) | Increase | Decrease |
| Personal (liability) | Decrease | Increase |
| Personal (owner's equity) | Decrease | Increase |
| Nominal (revenue) | Decrease | Increase |
| Nominal (expenses) | Increase | Decrease |
| Nominal (gain) | Decrease | Increase |
| Nominal (loss) | Increase | Decrease |
## References
1. ^ "Debit Credit Rules". Accounting Explained. AccountingExplained.com. Retrieved 4 August 2011.
2. ^ "Making sense of Debits and Credits in Accounting". Archived from the original on 10 July 2013. Retrieved 5 May 2013.
3. ^ Larson, Kermit; Jensen, Tilly (2005). Fundamental Accounting Principles. McGraw-Hill Ryerson. ISBN 0-07-091649-7.
4. ^ "Peachtree For Dummies, 2nd Ed." (PDF). Retrieved 6 Feb 2011.
5. ^ "Basic Accounting Concepts 2 – Debits and Credits". Retrieved 6 Feb 2011.
6. ^ "Where's the 'R' in Debit?" by W. Richard Sherman, published in The Accounting Historians Journal, Vol. 13, No. 2 (Fall 1986), pp. 137-143.
7. ^
8. ^ "For each one of all the entries that you have made in the Journal you will have to make two in the Ledger. That is, one in the debit (in dare) and one in the credit (in havere). In the Journal the debtor is indicated by per, the creditor by a, as we have said...The debitor entry must be at the left, the creditor one at the right." Geijsbeek, John B (1914). Ancient Double-entry Bookkeeping. Retrieved Jul 31, 2016. A facsimile of the original Italian is given on the facing page to the translation.
9. ^ Geijsbeek, John B (1914). Ancient Double-entry Bookkeeping. p. 15. Retrieved Jul 31, 2016.
10. ^ Jackson, J.G.C., "The History of Methods of Exposition of Double-Entry Bookkeeping in England." Studies in the History of Accounting, A. C. Littleton and Basil S. Yamey (eds.). Homewood, III.: Richard D. Irwin, 1956. p. 295
11. ^ a b Pieters, A. Dempsey, H. N. (2009). Introduction to financial accounting (7th ed.). Durban: Lexisnexis. ISBN 978-0-409-10580-3.
12. ^ a b Accountancy: Higher Secondary First Year (PDF) (First ed.). Tamil Nadu Textbooks Corporation. 2004. pp. 28–34. Retrieved 12 July 2011.
13. ^ "What are the Three Type of Accounts?". Accounting Capital. Retrieved Jul 30, 2016.
14. ^ IFRS for SMEs. 1st Floor, 30 Cannon Street, London EC4M 6XH, United Kingdom: IASB (International Accounting Standards Board). 2009. p. 14. ISBN 978-0-409-04813-1.
15. ^ Difference between Credit Card and Debit Card. Diffbetween.org (2012-02-08). Retrieved on 2012-05-04.
16. ^ "Accounting made easy 4 – Debits and Credits". Retrieved 13 March 2011.
17. ^ Financial Accounting 5th Ed,p 47, Horngren, Harrison, Bamber, Best, Fraser, Willet, Pearson/PrenticeHall, 2006
18. ^ Financial Accounting 5th Ed,p 14-15, Horngren, Harrison, Bamber, Best, Fraser, Willet, Pearson/PrenticeHall, 2006
19. ^ Financial Accounting 5th Ed,p 145, Horngren, Harrison, Bamber, Best, Fraser, Willet, Pearson/PrenticeHall, 2006
20. ^ Financial Accounting, Horngren, Harrison, Bamber, Best, Fraser Willet, pp13,44, Pearson/PrenticeHall 2006,
21. ^ Financial Accounting, Horngren, Harrison, Bamber, Best, Fraser Willet, pp14,45, Pearson/PrenticeHall 2006,
22. ^ Financial Accounting, Horngren, Harrison, Bamber, Best, Fraser Willet, pp 14,46, Pearson/PrenticeHall 2006,
23. ^ Financial Accounting, Horngren, Harrison, Bamber, Best, Fraser Willet, p14, Pearson/PrenticeHAll 2006,
24. ^ Financial Accounting, Horngren, Harrison, Bamber, Best, Fraser Willet, p15, Pearson/PrenticeHall 2006,
25. ^ Weygandt, Jerry J. (2009). Financial Accounting. John Wiley and Sons. p. 53. ISBN 978-0-470-47715-1.
26. ^ Cusimano, David. "Accounting Abbreviations – Helping You Understand Accounting Jargon". Loughborough. Retrieved 18 August 2011.
27. ^ "Normal balances in the accounting double entry system". The Accounting Adventurista. Retrieved March 3, 2014.
28. ^ a b c "Contra account definition". Accounting Coach. Retrieved March 3, 2014.
29. ^ "Q&A: What is a contra expense account?". Accounting Coach. Retrieved March 3, 2014.
30. ^ "Account Types or Kinds of Accounts :: Personal, Real, Nominal". Retrieved 2011-04-08. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2588697075843811, "perplexity": 5628.935010082527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00242-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://www.clutchprep.com/chemistry/practice-problems/42833/the-160-equilibrium-constant-for-the-reaction-160-160-160-160-160-160-160-160-16 | # Problem: The equilibrium constant for the reaction 2 H 2 (g) + CO (g) ⇌ CH 3OH (g) is 1.6 x 10-8 at a certain temperature. If there are 1.17 moles of H 2 and 3.46 moles of CH3OH at equilibrium in a 5.60 L flask, how many moles of CO are present at equilibrium?
###### Problem Details
The equilibrium constant for the reaction
2 H₂(g) + CO(g) ⇌ CH₃OH(g)
is 1.6 × 10⁻⁸ at a certain temperature. If there are 1.17 moles of H₂ and 3.46 moles of CH₃OH at equilibrium in a 5.60 L flask, how many moles of CO are present at equilibrium?
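One way to work it, assuming the standard concentration-based equilibrium expression Kc = [CH3OH] / ([H2]² [CO]): convert moles to molarities, solve for [CO], then convert back to moles. The sketch below just carries out that arithmetic:

```python
Kc = 1.6e-8
V = 5.60                 # flask volume in litres
H2 = 1.17 / V            # mol/L at equilibrium
CH3OH = 3.46 / V         # mol/L at equilibrium

# Kc = [CH3OH] / ([H2]**2 * [CO])  =>  [CO] = [CH3OH] / (Kc * [H2]**2)
CO = CH3OH / (Kc * H2 ** 2)       # mol/L
moles_CO = CO * V
print(f"moles of CO at equilibrium: {moles_CO:.2e}")
# roughly 5e9 mol -- the tiny Kc makes the answer formally enormous
```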
https://brilliant.org/problems/population-double-enough-trouble/ | # Population Double! Enough Trouble
Algebra Level 3
In England, with respect to the initial population each year, the death rate is $\frac{1}{46}$ and the birth rate is $\frac{1}{33}.$
If there were no emigration, how many years would it take for the population to double? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9833436608314514, "perplexity": 765.8713408577532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316783.70/warc/CC-MAIN-20190822042502-20190822064502-00528.warc.gz"} |
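Treating the rates as annual fractions of the starting population, the net growth rate is r = 1/33 − 1/46 = 13/1518 per year, and the doubling time n satisfies (1 + r)^n = 2. A quick check of the arithmetic:

```python
import math

r = 1 / 33 - 1 / 46           # net annual growth rate = 13/1518
n = math.log(2) / math.log(1 + r)
print(f"doubling time: {n:.1f} years")   # about 81.3 years
```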
https://keio.pure.elsevier.com/en/publications/topological-defects-and-nano-hz-gravitational-waves-in-aligned-ax | # Topological defects and nano-Hz gravitational waves in aligned axion models
Tetsutaro Higaki, Kwang Sik Jeong, Naoya Kitajima, Toyokazu Sekiguchi, Fuminobu Takahashi
Research output: Contribution to journal › Article › peer-review
21 Citations (Scopus)
## Abstract
Abstract: We study the formation and evolution of topological defects in an aligned axion model with multiple Peccei-Quinn scalars, where the QCD axion is realized by a certain combination of the axions with decay constants much smaller than the conventional Peccei-Quinn breaking scale. When the underlying U(1) symmetries are spontaneously broken, the aligned structure in the axion field space exhibits itself as a complicated string-wall network in the real space. We find that the string-wall network likely survives until the QCD phase transition if the number of the Peccei-Quinn scalars is greater than two. The string-wall system collapses during the QCD phase transition, producing a significant amount of gravitational waves in the nano-Hz range at present. The typical decay constant is constrained to be below O(100) TeV by the pulsar timing observations, and the constraint will be improved by a factor of 2 in the future SKA observations.
Original language: English
Article number: 44
Journal: Journal of High Energy Physics
Volume: 2016
Issue number: 8
DOI: https://doi.org/10.1007/JHEP08(2016)044
Publication status: Published - 2016 Aug 1
## Keywords
• Beyond Standard Model
• Cosmology of Theories beyond the SM
## ASJC Scopus subject areas
• Nuclear and High Energy Physics
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kruskal.html | # scipy.stats.kruskal¶
scipy.stats.kruskal(*args, **kwargs)[source]
Compute the Kruskal-Wallis H-test for independent samples
The Kruskal-Wallis H-test tests the null hypothesis that the population medians of all of the groups are equal. It is a non-parametric version of ANOVA. The test works on 2 or more independent samples, which may have different sizes. Note that rejecting the null hypothesis does not indicate which of the groups differs. Post-hoc comparisons between groups are required to determine which groups are different.
Parameters:
- sample1, sample2, ... : array_like
  Two or more arrays with the sample measurements can be given as arguments.
- nan_policy : {'propagate', 'raise', 'omit'}, optional
  Defines how to handle when input contains nan. 'propagate' returns nan, 'raise' throws an error, 'omit' performs the calculations ignoring nan values. Default is 'propagate'.

Returns:
- statistic : float
  The Kruskal-Wallis H statistic, corrected for ties.
- pvalue : float
  The p-value for the test using the assumption that H has a chi square distribution.
See also:
- f_oneway : 1-way ANOVA
- mannwhitneyu : Mann-Whitney rank test on two samples.
- friedmanchisquare : Friedman test for repeated measurements
Notes
Due to the assumption that H has a chi square distribution, the number of samples in each group must not be too small. A typical rule is that each sample must have at least 5 measurements.
References
[R599] W. H. Kruskal & W. W. Wallis, “Use of Ranks in One-Criterion Variance Analysis”, Journal of the American Statistical Association, Vol. 47, Issue 260, pp. 583-621, 1952.
Examples
>>> from scipy import stats
>>> x = [1, 3, 5, 7, 9]
>>> y = [2, 4, 6, 8, 10]
>>> stats.kruskal(x, y)
KruskalResult(statistic=0.27272727272727337, pvalue=0.60150813444058948)
>>> x = [1, 1, 1]
>>> y = [2, 2, 2]
>>> z = [2, 2]
>>> stats.kruskal(x, y, z)
KruskalResult(statistic=7.0, pvalue=0.030197383422318501)
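The nan_policy parameter documented above can be exercised the same way; with 'omit', missing values are dropped before ranking (a short illustrative snippet):

```python
import numpy as np
from scipy import stats

x = [1, 3, 5, 7, 9]
y = [2, 4, 6, 8, np.nan]

# With the default nan_policy='propagate' the result would be nan;
# 'omit' computes the test on the non-missing values only.
res = stats.kruskal(x, y, nan_policy='omit')
print(res.statistic, res.pvalue)
```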
https://www.overleaf.com/learn/how-to/How_can_I_upload_files_from_Google_Drive%3F | ## Introduction
Once uploaded from Google Drive, the corresponding Overleaf project file(s) can be refreshed at any time to resync them with the latest version stored in Google Drive.
## Notes and caveats on changes to Google-based services
File-hosting services, such as Google Drive, can, at any time, make changes to their services; in particular, modifying the structure of link-sharing URLs. For example, in 2021 Google announced a Google Drive security update which added the resourcekey parameter to their file-sharing URLs. However, due to the nature of this update it seems that not all users (or all their files) will be immediately affected by this change: some Google Drive users will see the resourcekey parameter, others might not.
Due to such unpredictable changes, it’s impractical for Overleaf to provide consistently up-to-date and fully documented URL-conversion processes for Google Drive or similar services—such guidance could quickly become out-of-date and thus misleading. However, in this page we provide one way to convert Google Drive file-sharing URLs containing the resourcekey parameter to a direct download URL you can use with Overleaf. Our team has tested this procedure and it worked for us but we cannot guarantee it be relevant to, or work for, everyone. In case of difficulty, readers should, in the first instance, try to find and consult the most current documentation provided by the hosting service(s) they use, or seek out up-to-date articles or YouTube videos.
## Sharing a file in Google Drive
Start by identifying a file stored in Google Drive that you want to upload into your Overleaf project:
Next, you need to share that file, to make it accessible outside of Google Drive. The following screenshot, taken on a desktop PC, shows how to access the file-sharing and file-link options within Google Drive:
After you select the option to share the file, a Share with people and groups dialog box appears (see below). From here you can choose to share that file with particular individuals/groups or make it available to anyone who has the appropriate Google Drive link.
Here, we’ll share the file with anyone who has the Google Drive link by changing the link-sharing setting from Restricted to Anyone with the link, as shown in the following screenshot taken from a desktop device (Windows laptop):
After selecting the preferred link-sharing option, we need to obtain a copy of the Google Drive link (URL) to our shared file. There are several ways to do this:
• choose Copy link from the dialog box above (if still visible on your screen), or
• right-click on the file and choose Get link (see screenshot below) then select Copy link from the pop-up box, or
• select the link option displayed to the right (see screenshot below) then select Copy link from the pop-up box.
After pasting the Google Drive link (URL) into a text editor you should see it has the following structure:
https://drive.google.com/file/d/FILE_ID/view?usp=sharing&resourcekey=RESOURCE_KEY
• FILE_ID
• resourcekey=RESOURCE_KEY
Here is an image highlighting the FILE_ID and RESOURCE_KEY components:
FILE_ID and RESOURCE_KEY are lengthy alpha-numeric character sequences so be careful when you copy them to avoid missing/dropping any characters.
Having extracted the FILE_ID and RESOURCE_KEY, use the following template to construct the download URL for Overleaf:
https://drive.google.com/uc?export=download&id=FILE_ID&resourcekey=RESOURCE_KEY
Here is an image highlighting use of the FILE_ID and RESOURCE_KEY components within the constructed download URL:
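If you need to do this conversion often, it can be scripted. Here is a small Python sketch based on the URL structure shown above; real Google Drive URLs may deviate from this pattern, so treat it as illustrative rather than definitive:

```python
import re

def to_download_url(share_url):
    """Convert a Google Drive file-sharing URL of the form shown above
    into a direct-download URL. Returns None if the URL doesn't match."""
    m = re.search(r"/file/d/([^/]+)/view", share_url)
    if not m:
        return None
    file_id = m.group(1)
    key = re.search(r"[?&]resourcekey=([^&]+)", share_url)
    url = "https://drive.google.com/uc?export=download&id=" + file_id
    if key:
        url += "&resourcekey=" + key.group(1)
    return url

# Using the placeholder FILE_ID / RESOURCE_KEY values from the template above:
share = "https://drive.google.com/file/d/FILE_ID/view?usp=sharing&resourcekey=RESOURCE_KEY"
print(to_download_url(share))
# https://drive.google.com/uc?export=download&id=FILE_ID&resourcekey=RESOURCE_KEY
```

If a URL has no resourcekey parameter (older files are not affected by the security update), the sketch simply omits it from the result.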
http://www.orbifold.net/default/categories-monads-fsharp/

• is a computational expression the same as a workflow, is it a monad?
• are the methods defined by F# as part of a computation expression dictated by the idea of a monad? How to translate the concepts from categories to functional programming and back?
• why should I care about monads, they never appear in C# or JavaScript…!?
• how can the Haskell or OCaml stuff be translated to F#? Though Haskell has a whole lot of info regarding monads, it’s not always easy to see how it maps to the F# syntax or way of thinking.
Let me say straight away that understanding monads will not directly solve your marketing challenges, nor will it make writing software easier, nor is it a magical recipe to simplify the data access layer of your website. However, it does
• make you a better programmer in the same way that learning design patterns (anti-pattern etc.) teaches you to recognize structures and solutions on a higher level of abstraction
• shed light on the relationship between domains which seem unrelated (exception handling, state management, graphs and networks, asynchronous calls, domain specific languages and so on)
• prompt you to rethink software structures and how typical programming challenges are solved
• open up a large field of ideas in the overlap of software development and maths which are fun to explore.
I don’t expect to see monadic stuff appear in Silverlight animations or in CSS style sheets but for sure you’ll see it appear in
• the NoSQL movement and graph analysis algorithms
• the financial and insurance business where F# is at this moment a big hit
• DSL developments and meta-programming
and who knows what the future brings?
## A simple idea
We are surrounded by data collections and structures, sets of interrelated data and hierarchies:
• a family and its members where relationships are defined by concepts like father, mother, sister, uncle and so on
• company hierarchies with employees and relationships defined by team structures, divisions and so on
• modules of code and calling/instantiation relations where exceptions are bubbling from down the stack to the upper (UI level if not properly handled) parts
• algorithmic financial transactions (flows) based on some intricate logic
and on a more scientific level even more where one notices certain similarities and intriguing analogies:
• genes and memes: the formation of genetic structures vs. the formation of thoughts and ideas
• the spreading of viral diseases vs. the spreading of computer viruses
• the six degrees of separation on the internet vs. the network of actors in the IMDb database
In a lot of cases similarities are just accidental (it’s not because a banana and a lemon are both yellow that there is a deep connection beyond the shared pigment) but there are equally well amazing symmetries which can be made precise. Category theory is a theory which makes this idea of ‘being similar’ exact and at the heart of it is really a very simple notion:
• you have collection of objects
• a collection has a certain structure
• you can map objects from one collection to another and the relationship can also be mapped.
Example
Take the members of a family (with their relationships; mother, father etc.) and take the employees in a company with its hierarchy. Can you create a mapping from one collection to the other? If your thinking is more in terms of pictures and visuals you’ll probably start drawing a graph and map the nodes and edges from one graph to the other. This is no accident: graphs and mappings from one graph onto another are a good playground to study categories.
Example
Take some business logic and imagine that at some point an exception is thrown; it propagates through the logic to end up being caught (or not). Take, on the other hand, the stateful (call it a state machine if you prefer) flow of an online ordering system where you are guided through a wizard to check out and buy some goods (like you’d do on Amazon). Is there some underlying structure governing these two systems? Probably this would take you somewhat longer than the previous example and it also requires a bit more abstraction, but the answer is ‘yes’: there is a mapping (i.e. a mathematically exact symmetry) between the two. State machines (automata) and the way exceptions or infinities can be handled in code have a common mathematical underpinning.
From this we take a bold step and define a (pedestrian) category as follows (more precise definition below):
Definition
A category is a collection of objects with some relationships. The objects in the collection are called (duh!) objects and the relationships are called morphisms. A mapping between two categories is called a functor.
Maybe it should be called at this point a pedestrian category or something because this is of course not very precise; this definition would turn pretty much everything into categories and functors! At this point I only want to help you understand where the real maths and the hard F# stuff is coming from and make it more acceptable.
A morphism is something which morphs one thing into another, something which transfers one thing into another. You will also encounter in the literature similar concepts like
• homomorphism: it emphasizes even more the idea of morphing. The prefix ‘homo’ meaning ‘the same’ and ‘morph’ means ‘changing’. Technically speaking the term homomorphism comes from group theory and morphism refers to the more general idea which can be applied outside group theory and Lie groups.
• endomorphism: ‘endo’ means ‘in itself’ and this emphasizes that a morphism is not going somewhere else but stays in the same package. So, an endomorphism of a family would map one person onto another inside the same family. A morphism would in general morph one family onto another.
• epimorphism: here the emphasis is on the fact that the morphism reaches the whole target collection.
• isomorphism: in this case the morphism is mapping elements one-to-one and onto the target. The notion is a very important one and appears pretty much everywhere in maths and science. In essence it says that if two things are isomorphic they are indistinguishable from one another and everything you say about one is valid for the other.
## Some more simple examples
In order to move away from the heuristics above towards some more serious definitions I need to highlight more (simple) examples.
Example
Take the collection of sets and the mappings between sets. This is a category where the mappings are the morphisms (morphings) and it’s often denoted with Set (in bold). An endofunctor F here is, for example, a mapping into itself which maps a set, say S, to the set of subsets: $F: S \mapsto 2^S$
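As a quick, language-agnostic sketch (in Python) of the object part of this powerset endofunctor — the morphism part, which would map a function to its image map on subsets, is omitted for brevity:

```python
from itertools import combinations

def powerset(s):
    """The object part of the powerset endofunctor on Set: S -> 2^S.
    Returned as a list of sets, since Python sets are not hashable."""
    items = list(s)
    return [set(c) for r in range(len(items) + 1)
                   for c in combinations(items, r)]

print(len(powerset({1, 2, 3})))  # 8, i.e. 2^3 subsets
```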
Example
Take the real number (floats, if you prefer) with an operation called ‘addition’. Take the same real number with an operation called ‘multiplication’. We call them objects in the category of ‘real numbers with an operation’. A morphism f between the two objects is
$f: (\mathbb{R},+) \rightarrow (\mathbb{R},\times), \quad x \mapsto e^x.$
This morphism has the amazing property that:
$f(x+y) = f(x).f(y).$
and this property is shared with many other types of collections. In addition, you should also notice that zero is mapped to one and that zero is the identity (null element) for the addition while the number one is the identity for the multiplication.
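Both observations are easy to check numerically; a quick Python sanity check (the tolerance in isclose only absorbs floating-point rounding):

```python
import math

f = math.exp  # the morphism from (R, +) to (R, *)

x, y = 1.7, 2.3
# addition in the source is carried to multiplication in the target:
assert math.isclose(f(x + y), f(x) * f(y))
# and the additive identity 0 is mapped to the multiplicative identity 1:
assert f(0.0) == 1.0
print("f(x + y) == f(x) * f(y) holds")
```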
Example
Take the collection of lists of strings (List&lt;string&gt; in C# and string list in F#) and consider the morphism called ‘concatenation’ between lists. Take the collection of lists of integers on the other hand, with the same concatenation of lists as with the string collection. A functor here is the List.map operation of F# and you should note that it doesn’t matter whether you concatenate things first and then apply the map or first apply the map and then concatenate things. In a mathematical fashion you can write that if F is the List.map action and f, g are morphisms then
$F(f.g) = F(f).F(g)$
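Both facts, the compatibility with concatenation and the functor law, can be checked directly. A small Python transcription (Python rather than F#, purely to keep the check compact and runnable):

```python
def fmap(f):
    """List.map as a plain function on lists (the morphism part of the functor)."""
    return lambda xs: [f(x) for x in xs]

xs, ys = ["ab", "cde"], ["f"]

# mapping after concatenating == concatenating after mapping:
assert fmap(len)(xs + ys) == fmap(len)(xs) + fmap(len)(ys)

# the functor law F(f.g) = F(f).F(g):
f, g = len, str.upper
assert fmap(lambda s: f(g(s)))(xs) == fmap(f)(fmap(g)(xs))

print(fmap(len)(xs + ys))  # [2, 3, 1]
```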
Example
The collection of data types in .Net is a category, as is the collection of data types in Haskell (usually denoted by Hask). Let’s call this category Net. There is a useful lifting procedure or amplifier inside the collection; take any type and map it to a collection of this same type: $Amp: T \mapsto List<T>$
Note that you can iterate this, even though in practice you won’t often see a List&lt;List&lt;List&lt;T&gt;&gt;&gt; type defined in an application. The thing is, this is an endofunctor of the category Net into Net.
## Some simple list manipulations
There are some well-know examples of categories and monads like the so-called maybe monad, the identity monad and the state monad but I think the easiest way to explore or check the ideas is by means of the concat monad over the category of lists of integers, called from here on List (in bold). It consists of the following:
• lists and nested lists of integers: [], [1], [1;2;3], [[1]; [5;6]] … are examples of objects in this category
• the return operation which returns a singleton list from any given element
$return: x \mapsto [x]$
• the identity operator which returns whatever is given:
$id: x \mapsto x \newline id: [x]\mapsto [x]$
• the concatenation operator which concatenates lists:
$concat: [[a];[b]] \mapsto [a;b]$
Now, you can play with these operations and discover some interesting properties like the fact that:
Property
$concat . concat = concat . (List.map)(concat)$
Indeed if you take an arbitrary nested list, say [[[a];[b]];[[c;d];[f]]], and apply the left hand side you get:
$concat(concat([[[a];[b]];[[c;d];[f]]]))\newline = concat( [[a];[b];[c;d];[f]])\newline = [a;b;c;d;f]$
The other side of the equation tells you that:
$concat . (List.map)(concat)([[[a];[b]];[[c;d];[f]]])\newline = concat( [[a;b];[c;d;f]])\newline = [a;b;c;d;f]$
In a similar fashion you can show that
Property
$concat. return = id= concat.(List.map)(return)$
Now, let’s introduce a *-operator which acts on morphisms f in the List category:
$f^* = concat.(List.map)(f)$
This star-operator has also some interesting features which immediately generalize to arbitrary monads later on, namely:
Property
$return^*= id\newline f^* . return = f\newline g^*.f^*=(g^*.f)^*$
All these things are really easy to show inside this List category but have a deeper meaning on a general level.
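If you want to check these identities mechanically, here is a quick transcription in Python (chosen only to keep the check compact and runnable; the operations are exactly the ones defined above, with `ret` standing in for return, which is a Python keyword):

```python
def ret(x):
    """return: x -> [x]"""
    return [x]

def concat(xss):
    """concat: [[a];[b]] -> [a;b]"""
    return [x for xs in xss for x in xs]

def fmap(f):
    """List.map"""
    return lambda xs: [f(x) for x in xs]

def star(f):
    """the *-operator: f* = concat . (List.map) f"""
    return lambda xs: concat(fmap(f)(xs))

# concat . concat = concat . (List.map) concat
nested = [[[1], [2]], [[3, 4], [5]]]
assert concat(concat(nested)) == concat(fmap(concat)(nested))

# concat . return = id = concat . (List.map) return
xs = [1, 2, 3]
assert concat(ret(xs)) == xs == concat(fmap(ret)(xs))

# the *-operator laws: return* = id, f* . return = f, g* . f* = (g* . f)*
f = lambda x: [x, x]     # two example morphisms into the List category
g = lambda x: [x + 1]
assert star(ret)(xs) == xs
assert star(f)(ret(2)) == f(2)
assert star(g)(star(f)(xs)) == star(lambda x: star(g)(f(x)))(xs)
print("all the identities hold on these examples")
```

Of course a handful of worked examples is not a proof; the point is only that the laws are concrete, checkable equations between functions on lists.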
## Some simple maths
At this point you are ready to receive the general definition of a category and a monad.
Definition
A category consists of three things (aka triplet):
• a collection of objects
• a collection of morphisms f between the objects
• a binary operation (which turns the structure into a so-called monoid) inside the collection such that
$f.(g.h) = (f.g).h$
and an identity which keeps things identical
$id.f = f.id = f$
The only difference here with the pedestrian definition above is that morphisms need to satisfy the associativity constraint and that an identity must exist.
Definition
A functor is a map F from one category to another which maps both objects and morphisms in such a way that
$F(id) = id\newline F(f.g) = F(f).F(g)$
At this point you should see how this abstract definition corresponds to the various examples above, in particular the List category in the previous section. In the literature you will also see that the collection of morphisms between two objects A and B is denoted by Hom(A,B), an abbreviation of ‘homomorphisms’.
In the previous section you saw that the List category of integers has some interesting features when you combine the (List.map), the return, the id and the concat operations. The general definition of a monad simply expresses the fact that many structures have the same set of properties. Let me first rephrase it in this specific category:
The List category has
• an operation (actually an endofunctor) (List.map) which maps lists onto lists
• the return operation creates a singleton from an element
• the concat operation together with the return operation has the following properties:
$concat. return = id= concat.(List.map)(return) \newline concat . concat = concat . (List.map)(concat)$
and this turns the triple (List.map, return, concat) into a monad.
In general, a monad is a triple which satisfies the same set of properties and you’ll see instead of concat and return the following:
Definition
A monad is a category C together with
• an endofunctor T from C to C
• a transformation $\eta: Id \rightarrow T$
• a transformation $\mu: T^2 \rightarrow T$
satisfying
$\mu . \mu = \mu . T (\mu)\newline \mu . \eta = id = \mu . T(\eta)$
Don’t be scared away by all this, it’s just an abstract reformulation of the simple concatenation and mappings of lists above! I do need however to push things a bit further because the way a monad is defined in maths is in general not the way you’ll recognize it in F# or Haskell. Usually you’ll see the equivalent formulation by Heinrich Kleisli, who introduced the Kleisli triple, which turned out to be an equivalent definition of a monad. That is, it can be shown that there is a one-to-one correspondence between Kleisli triples and monads.
Definition
In this equivalent formulation a monad (or Kleisli triple if you prefer) is defined as a triple over a category consisting of
• an endofunctor T
• a return operation $\eta: Id \rightarrow T$
• a *-operator on morphisms which lifts a morphism
satisfying:
$\eta^* = id\newline f^* .\eta = f\newline g^* .f^* = (g^* . f)^*$
and you should immediately recognize the properties we mentioned above in the context of the List category. The proof that the two definitions lead to the same structure is not difficult but it wouldn’t be useful at this point (unless you are math oriented and you need to see it to understand it).
## Some simple F#
The prototypical example in the context of F# workflows is the ‘dividing by zero’ problem; one cannot divide by zero. If you have a computation where at some point an exception is raised (either explicitly or by the runtime) it doesn’t make sense to continue a computation but maybe you want to handle the exception.
The way to handle this in F# is by using a label attached to a number; if the label says ‘failure’ it doesn’t matter what the value is, if the label says ‘success’ it makes sense to read the value. It’s a way to include infinity in the constrained range of values the runtime can handle:
type Result = Success of float | Infinite
Next, instead of the usual division you define a divide function like so
let divide x y =
    match y with
    | 0.0 -> Infinite
    | _ -> Success (x / y)
and plug it into a class with two members:
type SaveDivision() =
    member this.Bind (x: Result, rest: float -> Result) =
        match x with
        | Success v -> rest v
        | Infinite -> Infinite
    member this.Return x = x
At this point you can look at this as a class with two methods but what you really should see is that
• the map from a float number to the Result type is an amplifier or lift map as we described in the example above; it’s the endofunctor on the Net category of data types
• the Bind operation corresponds to the *-operation of a Kleisli triple
• the return corresponds to the eta-mapping of the Kleisli triple
and it turns this triple into a monad! You should also reflect on the fact that the Bind member corresponds to the concat operation on the List category (and similarly the Return member corresponds to the return operation).
The way you use the SaveDivision monad in a concrete computation is by instantiating it and doing something inside it, as follows:
let safe = SaveDivision()
let someCalculation x1 x2 =
    safe {
        let! x = divide 1.0 x1
        let! y = divide 1.0 x2
        return divide x y
    }
and as soon as you try this on zero you get Infinite as an answer. Without this construction you would get an exception, obviously. The ‘bang’ or exclamation mark after the let keyword is actually a way to tell F# that the Bind operation should be used instead of the normal assignment (or type inference).
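To see the plumbing without F#’s syntactic sugar, here is a rough Python transcription of the same workflow (a sketch, not the article’s code: plain floats play the role of Success values and a sentinel object plays Infinite, and each let! is written out as an explicit bind):

```python
INFINITE = object()  # stands in for the Infinite case of the Result type

def divide(x, y):
    return INFINITE if y == 0.0 else x / y

def bind(result, rest):
    """The Bind member: short-circuit on Infinite, otherwise pass the
    value on to the rest of the computation."""
    return INFINITE if result is INFINITE else rest(result)

def some_calculation(x1, x2):
    # let! x = divide 1.0 x1
    # let! y = divide 1.0 x2
    # return divide x y
    return bind(divide(1.0, x1),
                lambda x: bind(divide(1.0, x2),
                               lambda y: divide(x, y)))

print(some_calculation(2.0, 4.0))              # 2.0
print(some_calculation(0.0, 4.0) is INFINITE)  # True: the chain short-circuits
```

The nested lambdas make visible what the computation expression hides: every subsequent step is the ‘rest’ function handed to Bind.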
So, this very simple example highlights the following:
• monads in F# allow you to do ‘stuff’ under the hood and to redefine how things are being assigned or returned
• a monad in F# is really a Kleisli triple but because it’s mathematically equivalent to a monad there is no need to differentiate. It does, however, confuse when you try to compare the mathematical textbooks with the functional programming textbooks
• monads are often called workflows because they allow you to embed a whole logic or workflow inside the instantiated class (monad) and redefine how things like the assignment (let!) should function. You will also see that F# has in fact extended the notion of monad to include other members, in this fashion allowing you to redefine how for-next, try, yield and other statements act inside the monad.
• monads are sometimes called computation expressions since they redefine how assignments and expressions act on (category) objects
In Haskell you will not encounter the alternative names and the term monad is used uniformly.
The suspicious mind should by now worry about a lack of consistency and feel the need for more formal proofs that, for example, the constraints (or defining rules) of a Kleisli triple (aka monad) are satisfied. Indeed, does the SaveDivision monad with the specified operations satisfy the constraints? Well, yes, it does. But I will refrain from reproducing the proofs and am happy to forward you to someone who tediously wrote it all down; see Paul Abraham’s blog and this document in particular, wherein you’ll find the F# proofs.
## A simple dictionary
In Haskell a monad is defined via the Monad type class:
class Monad m where
    return :: a → m a
    (>>=) :: m a → (a → m b) → m b
You can immediately recognize the return and the bind methods but the symbol ‘>>=’ is usually used instead of ‘bind’. Purists will tell you that the discussion around monads in Haskell is not complete without the topic of ‘comonads’ and ‘cofunctors’ but it will take a while before these discussions appear in the F# community. See this article in case you want to know more about this.
In order to use monads in F# you only need to implement an (implicit) class which implements the Bind and Return members. The language extends the purely mathematical definition with a series of extra members:
| Member | Description |
| --- | --- |
| member Bind : M<'a> * ('a -> M<'b>) -> M<'b> | Required member. Converts let! and do! within the monad. |
| member Return : 'a -> M<'a> | Required member. Converts the return within the monad. |
| member Delay : (unit -> M<'a>) -> M<'a> | Optional. Ensures that side effects within the monad are performed when expected. |
| member Yield : 'a -> M<'a> | Optional. Converts the yield within the monad. |
| member For : seq<'a> * ('a -> M<'b>) -> M<'b> | Optional. Converts the for ... do ... within the monad. M<'b> can optionally be M<unit>. |
| member While : (unit -> bool) * M<'a> -> M<'a> | Optional. Converts the while ... do ... within the monad. M<'a> can optionally be M<unit>. |
| member Using : 'a * ('a -> M<'b>) -> M<'b> when 'a :> IDisposable | Optional. Converts the use bindings within the monad. |
| member Combine : M<'a> -> M<'a> -> M<'a> | Optional. Used to convert sequencing within the monad. The first M<'a> can optionally be M<unit>. |
| member Zero : unit -> M<'a> | Optional. Converts the empty else branches of if/then in the monad. |
| member TryWith : M<'a> -> M<'a> -> M<'a> | Optional. Converts the try/with bindings of the monad. |
| member TryFinally : M<'a> -> M<'a> -> M<'a> | Optional. Converts the try/finally bindings of the monad. |
Finally, the following table should be helpful in case you wish to move back and forth between F#, Haskell and the math literature.
| Haskell | F# | Mathematics |
| --- | --- | --- |
| return | return | $\eta$ |
| >>= | bind | the *-operator |
| instantiate the Monad type class | implement at least the Return and Bind members | NA |
| NA | NA | $\mu$ |
https://stats.stackexchange.com/questions/100238/what-is-a-relevant-level-for-percentage-deviance-explained-in-a-glm

# What is a relevant level for percentage deviance explained in a GLM?
I am trying to sort out the unique effect of various environmental predictors on species occurrence (presence/absence data). I have been running glm models in R with family=binomial. Most of my variables have very highly significant P values, but I have a large data set (~8000 data points), so this is maybe not so surprising. I have also been calculating the percentage deviance explained using the BiodiversityR package. Some of my very significant variables based on the P value have very low explained deviance (1% or so).
Is there a rule of thumb for saying a predictor is not significant based on the deviance explained? Any other tests I should be including? I wanted to look at each variable on its own, and then test out specific interactions, but only if the variable was "important" in a model where it was the only predictor.
You are confusing two things:
Statistical significance and practical importance (effect size). As you note, significance is partly an effect of sample size.
However, % of deviance explained is an effect size measure. Such a measure is not significant or non-significant; it may be important or unimportant. Unfortunately, any guideline as to how big a % counts as important is going to be context dependent.
e.g.
If you reduce the rate of airplane crashes by 1 in 100,000 flights, that would be huge. If you reduce the rate of acne by 1 in 100,000 faces, that would be preposterously trivial.
Similarly, in some areas of physics, R of .9 is disappointingly small while in some areas of social science, R of .9 is so large as to lead one to think something is wrong.
The same will occur with any effect size measure, including % of deviance explained.
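To make the measure concrete: for a binomial GLM, percentage deviance explained is usually computed as 1 − (residual deviance / null deviance). A self-contained Python sketch, with made-up fitted probabilities standing in for a real model fit:

```python
import math

def binomial_deviance(y, p):
    """-2 * log-likelihood of presence/absence data y under probabilities p."""
    return -2.0 * sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                      for yi, pi in zip(y, p))

def deviance_explained(y, p):
    """Fraction of the null deviance explained by the fitted probabilities."""
    p_null = sum(y) / len(y)                      # intercept-only model
    d_null = binomial_deviance(y, [p_null] * len(y))
    d_model = binomial_deviance(y, p)
    return 1.0 - d_model / d_null

y = [1, 1, 0, 0, 1, 0]
fitted = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3]           # hypothetical fitted values
print(deviance_explained(y, fitted))
```

With a large sample, a predictor can shrink the deviance by a statistically significant but practically tiny fraction, which is exactly the 1%-yet-significant situation described in the question.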
• Thanks for the confirmation! I am trying to compare the importance, and the possible interaction, of environmental variables to explain species distribution, using several species as the response variables (you could think of these as my reps). The species all have different numbers of presences, and different relationships with the predictive variables. Could you suggest the "best" metrics of effect size to do this? Area under the ROC curve, % deviance explained? None of these models are very complex; I am only considering two variables at a time, with possibly cubic terms and interactions. – Frieda May 28 '14 at 19:35
https://2021.help.altair.com/2021.1.2/newfasant/topics/newfasant/command_line/command_reference/ogive_command.htm

# ogive
This command creates a tangent ogive in the geometry.
## Inline mode usage
• ogive-h: Displays the help file that summarizes the parameters for this command.
• ogive -n <name> -p <center_x> <center_y> <center_z> <radius_bottom> <height> <radius_top>: Creates a tangent ogive named name whose base is centered at (center_x, center_y, center_z) and has a radius of radius_bottom, and the given height and radius_top at the peak. The tangent ogive is open both at the base and the peak.
• ogive -c -n <name> -p <center_x> <center_y> <center_z> <radius_bottom> <height> <radius_top>: Creates a spherically blunted tangent ogive named name whose base is centered at (center_x, center_y, center_z) and has a radius of radius_bottom, and the given height and radius_top at the peak. The tangent ogive is closed both at the base and the peak.
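As an illustration, the two inline forms might be invoked as follows; the ogive names and all numeric values here are made-up placeholders, not taken from the product documentation:

```
ogive -n noseCone -p 0 0 0 1.5 6.0 0.0
ogive -c -n bluntNose -p 0 0 0 1.5 6.0 0.3
```

The first would create an open tangent ogive with a base of radius 1.5 centered at the origin and a height of 6.0; the second a closed, spherically blunted one with a top radius of 0.3.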
## Interactive mode usage
Invocation
ogive
Parameters
• -c: Creates a spherically blunted tangent ogive that is closed both at the base and the peak.
• no_param: Creates a tangent ogive that is open both at the base and peak.
• [-rightTurned | no_param]: The following parameters are required:
• Step 1: Base center, given by its x, y and z coordinates.
• Step 2: Radius at the bottom.
• Step 3: Height of the tangent ogive.
• Step 4: Radius at the top of the tangent ogive.
## Examples
• A spherically blunted tangent ogive is shown in the next figure.
• A tangent ogive with the same parameters as the spherically blunted one is shown in the figure below. Note that the height of the generated ogive only agrees with the specified one when the top radius is set to 0 (tangent ogive); otherwise the ogive is shorter (spherically blunted tangent ogive).
https://www.universetoday.com/tag/solar-disruption-theory/

## Solar Disruption Theory
Solar disruption theory was one of several theories that emerged before the 18th century concerning the formation of the solar system. It states that the collision of the Sun with another star caused debris to be ejected from its mass, and that this debris eventually became the planets. The theory was later discarded in favor of the nebular theory of solar system formation; however, some scientists propose that it has some merit.
The big question up until the 18th century was how the solar system was born. There were many explanations for how this happened, but many were really only conjecture given the tools available to astronomers at the time. The real question was what would be a probable origin under the known laws of physics. The advent of classical mechanics established the nebular theory as the likely explanation for the creation of the solar system. The reason was that most other theories could not explain how the planets formed without giving in to the Sun’s gravity and falling in.
A new argument has emerged for a different form of solar disruption theory; this version answers the question in a more roundabout way. We know that the formation of the solar system itself was volatile, but did the Sun and its planets really form in relative isolation from other stars emerging in the nebula? This new theory, which emerged in 2004, proposed that the influence of other stars may have shaped the formation of planets in the solar system.
Meanwhile the main theory stands. We know from the nebular theory that stars are formed from spinning nebulas of gas and cosmic dust. Over time the masses clump together to the point where gravity can initiate fusion. The planets are formed from the clumps of debris in the nebular disk that did not fall into the Sun, and these eventually collided with each other, forming planets. Any theory that suggests interference from the gravity fields of other star systems has not been tested yet. It may have merit, but we don’t have the technology to test theories on such large scales.
We have written many articles about solar disruption theory for Universe Today. Here are some interesting facts about the Solar System, and here’s an article about the model of the Solar System.
If you’d like more info on the Solar System, check out NASA’s Solar System exploration page, and here’s a link to NASA’s Solar System Simulator.
We’ve also recorded a series of episodes of Astronomy Cast about every planet in the Solar System. Start here, Episode 49: Mercury. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8327596783638, "perplexity": 570.929975120582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250597230.18/warc/CC-MAIN-20200120023523-20200120051523-00376.warc.gz"} |
https://aviation.stackexchange.com/questions/12125/do-small-helicopters-like-the-r44-require-an-faa-type-rating | # Do small helicopters like the R44 require an FAA type rating?
From what I read in 14 CFR 61.31 it never mentions type rating requirements for rotorcraft or helicopters. I Googled online but most of the results showed up for Australia, Canada and the UK.
I am starting to believe that the US does not require a pilot to have a type rating for a Robinson R22 or R44 helicopter.
Am I correct that unless the FAA has made a type certificate for that helicopter it does not require a type rating? Would it just require a checkout from wherever you are going to rent?
• Keep in mind that many pilots will spend their very first rotorcraft hours in a R22... – UnrecognizedFallingObject Jan 29 '15 at 1:46
• From Wikipedia's article on the R22: "Due to the issues relating to a low inertia rotor-system and a teetering main rotor, operation by any pilot in the United States of the Robinson R22 or Robinson R44 requires a special endorsement by a certified flight instructor." – Mark Jan 29 '15 at 3:05
• Pls correct me i am confused. Type certificate is for aircraft to operate legally. Type rating is for airman to legally operate the aircraft that required one. See type certificate of R22 – vasin1987 Jan 29 '15 at 3:34
• @vasin1987 You are correct. A type certificate is required for an aircraft type to be legally operated regardless of whether its pilots are required to have a type rating to operate it. – reirab Jan 29 '15 at 21:04
SFAR 73 is Robinson R-22/R-44 Special Training and Experience Requirements and lists the specific training requirements for that model. Here's one relevant piece:
(1) No person may act as pilot in command of a Robinson model R-22 unless that person:
(i) Has had at least 200 flight hours in helicopters, at least 50 flight hours of which were in the Robinson R-22; or
(ii) Has had at least 10 hours dual instruction in the Robinson R-22 and has received an endorsement from a certified flight instructor authorized under paragraph (b)(5) of this section that the individual has been given the training required by this paragraph and is proficient to act as pilot in command of an R-22. Beginning 12 calendar months after the date of the endorsement, the individual may not act as pilot in command unless the individual has completed a flight review in an R-22 within the preceding 12 calendar months and obtained an endorsement for that flight review. The dual instruction must include at least the following abnormal and emergency procedures flight training: [...]
SFAR 108 has similar requirements for operating the Mitsubishi MU-2B.
Finally, note that the type certificate is for the aircraft, and has nothing to do with whether an individual pilot is qualified to fly it or not.
• When I first read this, I missed the 'or' and the end of (1)(i) and I was really confused as to how you were supposed to get 50 hours in an R-22 if only 10 of it was dual instruction and you weren't allowed to act as PIC. - haha – reirab Jan 29 '15 at 21:08
• It's important to note that SFAR73 is modeled after an aircraft type rating, and behaves as such. – rbp Jan 30 '15 at 5:26
• Thanks. In summary: 10 hours of dual instruction to get an endorsement that lasts 12 months. And if you don't have 50 hours after 12 months, you need to do a flight review before continuing to fly it. – Annerajb Jan 31 '15 at 1:12
The FAA does indeed require model-specific type ratings for many aircraft (a list of aircraft which the FAA recognizes type ratings for is here). FAR 61.31 says that type ratings are required for
(1) Large aircraft (except lighter-than-air).
(2) Turbojet-powered airplanes.
(3) Other aircraft specified by the Administrator through aircraft type certificate procedures.
In this context, according to Advisory Circular 61-89E, "large" means it has a gross weight of 12,500 lbs or greater. In addition, based on (2) above, you need a type rating in the US to fly any jet, even small, single-engine jets or those designed specifically for single-pilot operation.
• Purely for argument's sake, would a propfan-powered aircraft count as a "turbojet" for the purposes of FAR 61.31? What about a turboprop? – Vikki - formerly Sean Mar 3 '19 at 4:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26663047075271606, "perplexity": 2255.3842899868505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358064.34/warc/CC-MAIN-20210227024823-20210227054823-00433.warc.gz"} |
https://zenodo.org/record/5847677/export/xd | Journal article Open Access
# Efficiency of Probabilistic Network Model for Assessment in E-Learning System
Rohit B Kaliwal; Santosh L Deshpande
### Dublin Core Export
<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:contributor>Blue Eyes Intelligence Engineering and Sciences Publication(BEIESP)</dc:contributor>
<dc:creator>Rohit B Kaliwal</dc:creator>
<dc:creator>Santosh L Deshpande</dc:creator>
<dc:date>2020-09-30</dc:date>
<dc:description>Knowledge acquisition by the learner is a major task of an e-learning framework. Evaluation is required in order to adapt knowledge resources and tasks to learner ability. Assessment provides learners an approach to evaluate the skills gained through the e-learning domain they are accessing. Different methods can be used to assess knowledge acquisition, such as the probabilistic Bayesian Network model. A Bayesian Network is a graphical representation of the probabilistic relationships of a complex system. This network can be used for reasoning with uncertainty. Learner assessment is among the most challenging tasks in an e-learning system, as learner evaluation models involve an element of uncertainty. In this paper, the proposed scheme is built on a Bayesian Network to deduce the level of knowledge possessed by the learner. It also proposes a type of assessment to identify what the learner knows. Assessment can be performed by two approaches, namely Sequential and Random. In the Sequential approach, questions are displayed on the learner's machine in sequential order; in the Random approach, questions are displayed in random order. However, both have inherent limitations: questions that the learner could answer easily may also be presented, which is not desirable. This system is based on a Bayesian Network model and an algorithm for inference about the learner's knowledge. The Bayesian Network model was implemented for three levels of learner, called Higher Learners (HL), Regular Learners (RL), and Irregular Learners (IL), and achieved 81.1% probability in learner assessment.</dc:description>
<dc:identifier>https://zenodo.org/record/5847677</dc:identifier>
<dc:identifier>10.35940/ijrte.C4635.099320</dc:identifier>
<dc:identifier>oai:zenodo.org:5847677</dc:identifier>
<dc:language>eng</dc:language>
<dc:relation>issn:2277-3878</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:source>International Journal of Recent Technology and Engineering (IJRTE) 9(3) 562-566</dc:source>
<dc:subject>Assessment, Knowledge design, Bayesian Network (BN), Evaluation, E-Learning, Intelligent Tutoring System (ITS)</dc:subject>
<dc:subject>ISSN</dc:subject>
<dc:subject>Retrieval Number</dc:subject>
<dc:title>Efficiency of Probabilistic Network Model for Assessment in E-Learning System</dc:title>
<dc:type>info:eu-repo/semantics/article</dc:type>
<dc:type>publication-article</dc:type>
</oai_dc:dc>
https://themaximalist.org/tag/linear-system/ | ## Solve Ax=b in Maxima, part 2
In a previous post, I included my little coding project to implement a general backsolve() function to use with the built-in Maxima matrix function echelon(), producing an easy-to-call matrix solver matsolve(A,b). The result is meant to solve a general matrix vector equation $Ax=b$ , including cases where $A$ is non-square and/or non-invertible.
Here’s a quicker approach — convert the matrix into an explicit system of equations using a vector of dummy variables, feed the result into the built-in Maxima function linsolve(), and then extract the right hand sides of the resulting solutions and put them into a column vector.
The two methods often behave identically, but here’s an example that breaks the linsolve() method, where the backsolve() method gives a correct solution:
*Note: I’ve found that the symbol rhs is a very popular name for users to give their problem-specific vectors or functions. Maxima’s “all symbols are global” bug/feature generally wouldn’t cause a problem with a function call to rhs(), but the function map(rhs, list of equations) ignores that rhs() is a function and uses the user-defined rhs instead. For that reason I protect that name in the block declarations so that rhs() works as expected in the map() line at the bottom. I think I could have done the same thing with a quote: map('rhs, list of equations).
matsolve2(A,b):=block(
[rhs,inp,sol,Ax,m,n,vars],
[m,n]:[length(A),length(transpose(A))],
vars:makelist(xx[i],i,1,n,1),
Ax:A.vars,
/* one equation per row of A, so the list runs to m (not n) */
inp:makelist(part(Ax,i,1)=b[i],i,1,m,1),
sol:linsolve(inp,vars),
expand(transpose(matrix(map(rhs,sol))))
);
## Solving the matrix vector equation Ax=b in Maxima
*Update: I’ve implemented a Maxima matrix-vector equation solver with a simpler Maxima-specific algorithm in a later post. That method is based on the built-in function linsolve(). In that post I show an example that breaks linsolve() but is handled correctly by the backsolve() method.
Is there really not a solver in Maxima that takes matrix A and vector b and returns the solution of $Ax=b$ ? Of course we could do invert(A).b, but that ignores consistent systems where $A$ isn’t invertible…or even isn’t square.
Here’s a little function matsolve(A,b) that solves $Ax=b$ for general $A$ using the built-in Gaussian Elimination routine echelon(), with the addition of a homemade backsolve() function. The function in turn relies on a little pivot column detector pivot() and my matrix dimension utility matsize(). This should include the possibilities of non-square $A$, non-invertible $A$, and treats the case of non-unique solutions in a more or less systematic way.
matsolve(A,b):=block(
[AugU],
/* row-reduce the augmented matrix [A|b]; addcol appends b as a new column */
AugU:echelon(addcol(A,b)),
backsolve(AugU)
);
backsolve(augU):=block(
[i,j,m,n,b,x,klist,k,np,nosoln:false],
[m,n]:matsize(augU),
b:col(augU,n),
klist:makelist(concat('%k,i),i,1,n-1),
k:0,
x:transpose(matrix(klist)),
for i:m thru 1 step -1 do (
np:pivot(row(augU,i)),
if is(equal(np,n)) then
(nosoln:true,return())
else if not(is(equal(np,0))) then
(x[np]:b[i],
for j:np+1 thru n-1 do
x[np]:x[np]-augU[i,j]*x[j])
),
if nosoln then
return([])
else
return(expand(x))
)$
matsize(A):=[length(A),length(transpose(A))]$
pivot(rr):=block([i,rlen,p], /* declare p locally so it doesn't leak globally */
p:0,
rlen:length(transpose(rr)),
for i:1 thru rlen do(
if is(equal(part(rr,1,i),1)) then (p:i,return())),
return(p)
)$
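For comparison, here is a rough Python translation of the same echelon-then-backsolve idea (my own sketch, not the Maxima code above). It uses exact Fraction arithmetic and, to keep it short, handles only systems with a unique solution, rather than the free-parameter and no-solution cases the Maxima version covers.

```python
# Sketch of matsolve: reduce the augmented matrix [A|b], then back-substitute.
# Uses exact rational arithmetic; assumes a unique solution exists.
from fractions import Fraction

def matsolve(A, b):
    m, n = len(A), len(A[0])
    # build the augmented matrix [A|b] with exact rational entries
    aug = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
           for i in range(m)]
    # forward elimination to row-echelon form
    row = 0
    for col in range(n):
        piv = next((r for r in range(row, m) if aug[r][col] != 0), None)
        if piv is None:
            continue
        aug[row], aug[piv] = aug[piv], aug[row]
        for r in range(row + 1, m):
            f = aug[r][col] / aug[row][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[row][c]
        row += 1
    # back substitution, from the last pivot row upward
    x = [Fraction(0)] * n
    for i in range(min(m, n) - 1, -1, -1):
        s = aug[i][n] - sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / aug[i][i]
    return x

print(matsolve([[2, 1], [1, 3]], [5, 10]))  # exact Fractions equal to 1 and 3
```

Because Fractions never round, the solution is exact, just as in Maxima; an overdetermined but consistent system (more rows than columns) also works, since the extra rows eliminate to zero.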
http://www.quantumdiaries.org/2015/05/18/dy-resummation/ | ## View Blog | Read Bio
### Drell-Yan, Drell-Yan with Jets, Drell-Yan with all the Jets
All those super low energy jets that the LHC cannot see? LHC can still see them.
Hi Folks,
Particle colliders like the Large Hadron Collider (LHC) are, in a sense, very powerful microscopes. The higher the collision energy, the smaller distances we can study. Using less than 0.01% of the total LHC energy (13 TeV), we see that the proton is really just a bag of smaller objects called quarks and gluons.
This means that when two protons collide things are sprayed about and get very messy.
One of the most important processes that occurs in proton collisions is the Drell-Yan process. When a quark, e.g., a down quark d, from one proton and an antiquark, e.g., a down antiquark d, from an oncoming proton collide, they can annihilate into a virtual photon (γ) or Z boson if the net electric charge is zero (or a W boson if the net electric charge is one). After briefly propagating, the photon/Z can split into a lepton and its antiparticle partner, for example into a muon and antimuon or an electron-positron pair! In pictures, quark-antiquark annihilation into a lepton-antilepton pair (the Drell-Yan process) looks like this
By the conservation of momentum, the sum of the muon and antimuon momenta will add up to the photon/Z boson momentum. In experiments like ATLAS and CMS, this gives a very cool-looking distribution
Plotted is the invariant mass distribution for any muon-antimuon pair produced in proton collisions at the 7 TeV LHC. The rightmost peak at about 90 GeV (about 90 times the proton’s mass!) corresponds to the production of Z bosons. The other peaks represent the production of similarly well-known particles in the particle zoo that have decayed into a muon-antimuon pair. The clarity of each peak, and the fact that this plot uses only about 0.2% of the total data collected during the first LHC data collection period (Run I), means that the Drell-Yan process is very useful for calibrating the experiments. If the experiments are able to see the Z boson, the rho meson, etc., at their correct energies, then we have confidence that the experiments are working well enough to study nature at energies never before explored in a laboratory.
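The reconstruction behind that plot is straightforward to sketch: add the four-momenta of the two muons and compute the invariant mass of the sum. The toy calculation below (my own illustration, not experiment code) shows a Z boson at rest decaying to two back-to-back muons reconstructing to the Z mass.

```python
# Toy reconstruction: M^2 = (E1+E2)^2 - |p1+p2|^2 for a dimuon pair (GeV units).
import math

def invariant_mass(p1, p2):
    """p = (E, px, py, pz) in GeV; returns the pair's invariant mass."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

mu_mass = 0.1057                      # muon mass in GeV
E = 91.19 / 2                         # each muon carries half the Z's energy
p = math.sqrt(E**2 - mu_mass**2)      # |momentum| from E^2 = |p|^2 + m^2
mu_plus  = (E, 0.0, 0.0,  p)
mu_minus = (E, 0.0, 0.0, -p)
print(invariant_mass(mu_plus, mu_minus))   # ≈ 91.19, the Z boson mass
```

Real events have muons at arbitrary angles and energies, but the same formula applies, which is why Z events pile up in a sharp peak at the Z mass.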
However, in real life, the Drell-Yan process is not as simple as drawn above. Real collisions include the remnants of the scattered protons. Remember: the proton is bag filled with lots of quarks and gluons.
Gluons are what holds quarks together to make protons; they mediate the strong nuclear force, also known as quantum chromodynamics (QCD). The strong force is accordingly named because it requires a lot of energy and effort to overcome. Before annihilating, the quark and antiquark pair that participate in the Drell-Yan process will have radiated lots of gluons. It is very easy for objects that experience the strong force to radiate gluons. In fact, the antiquark in the Drell-Yan process originates from an energetic gluon that split into a quark-antiquark pair. Though less common, every once in a while two or even three energetic quarks or gluons (collectively called jets) will be produced alongside a Z boson.
Here is a real life Drell-Yan (Z boson) event with three very energetic jets. The blue lines are the muons. The red, orange and green “sprays” of particles are jets.
As likely or unlikely as it may be for a Drell-Yan process to occur with additional energetic jets, the frequency at which they do occur appears to match very well with our theoretical predictions. The plot below shows the likelihood (“production cross section“) of a W or Z boson with at least 0, 1, 2, 3, or 4(!) very energetic jets. The blue bars are the theoretical predictions and the red circles are data. Producing a W or Z boson with more energetic jets is less likely than having fewer jets. The more jets identified, the smaller the production rate (“cross section”).
How about low energy jets? These are difficult to observe because experiments have high thresholds for any part of a collision to be recorded. The ATLAS and CMS experiments, for example, are insensitive to very low energy objects, so not every piece of an LHC proton collision will be recorded. In short: sometimes a jet or a photon is too “dim” for us to detect it. But unlike high energy jets, it is very, very easy for Drell-Yan processes to be accompanied with low energy jets.
There is a subtlety here. Our standard tools and tricks for calculating the probability of something happening in a proton collision (perturbation theory) assume that we are studying objects with much higher energies than the proton at rest. Radiation of very low energy gluons is a special situation where our usual calculation methods do not work. The solution is rather cool.
As we said, the Z boson produced in the quark-antiquark annihilation has much more energy than any of the low energy gluons that are radiated, so emitting a low energy gluon should not affect the system much. This is like a massive freight train pulling coal and dropping one or two pieces of coal. The train carries so much momentum and the coal is so light that dropping even a dozen pieces of coal will have only a negligible effect on the train’s motion. (Dropping all the coal, on the other hand, would not only drastically change the train’s motion but likely also be a terrible environmental hazard.) We can now make certain approximations in our calculation of radiating a low energy gluon, called “soft gluon factorization“. The result is remarkably simple, so simple that we can generalize it to an arbitrary number of gluon emissions. This process is called “soft gluon resummation” and was formulated in 1985 by Collins, Soper, and Sterman.
Low energy gluons, even if they cannot be individually identified, still have an effect. They carry away energy, and by momentum conservation this will slightly push and kick the system in different directions.
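The kick can be illustrated with a toy momentum budget (all numbers below are made up for illustration): the Z boson's transverse momentum is minus the vector sum of the transverse momenta carried off by the soft gluons.

```python
# Toy recoil sketch: each soft gluon carries away a small transverse momentum,
# and by conservation the Z boson picks up the opposite of their vector sum.
gluon_pts = [(0.8, 0.3), (-0.5, 0.6), (0.2, -0.9), (-0.1, 0.4)]  # (px, py) in GeV

recoil_px = -sum(px for px, _ in gluon_pts)
recoil_py = -sum(py for _, py in gluon_pts)
z_pt = (recoil_px**2 + recoil_py**2) ** 0.5

print((recoil_px, recoil_py))   # the Z boson's transverse kick
print(z_pt)                     # its magnitude, a fraction of a GeV here
```

Even though each individual gluon is too "dim" to detect, their summed recoil shows up as a nonzero Z transverse momentum, which is exactly what the resummed predictions describe.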
If we look at Z bosons with low momentum from the CDF and DZero experiments, we see that the data and theory agree very well! In fact, in the DZero (lower) plot, the “pQCD” (perturbative QCD) prediction curve, which does not include resummation, disagrees with data. Thus, soft gluon resummation, which accounts for the emission of an arbitrary number of low energy radiations, is important and observable.
In summary, Drell-Yan processes are very important at high energy proton colliders like the Large Hadron Collider. They serve as a standard candle for experiments as well as a test of high precision predictions. The LHC Run II program has just begun, and you can count on lots of rich physics in need of studying.
Happy Colliding,
Richard (@bravelittlemuon) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8917866349220276, "perplexity": 970.3204826172499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863259.12/warc/CC-MAIN-20180619232009-20180620012009-00359.warc.gz"} |
https://www.themathdoctors.org/place-value-whole-numbers/ | # Place Value: Whole Numbers
We’ll be looking at various aspects of place value, starting here with the basic concepts. As soon as you learn to write numbers beyond 10, you have to start understanding this concept; so we have to begin at a concrete level and move gradually to something more abstract.
## Place value for children
Our first question comes from a mother in 2007:
Introducing Place Value to Children
I'm wondering how to teach my son about place value. How can I show him that units are up to 9 then when it comes to 10 is tens and so on? I really get confused myself how to explain it to him.
1,2,3,4,5,6,7,8,9 are ones 10,11,12,13,14,15,16, up to 99 is tens and so on
So, what does it mean to talk about tens, and how can we make it meaningful to young children?
I answered by referring to my own experience with my homeschooled kids, since I am not an elementary teacher:
Hi, Auria.
I introduced place value to my children with several different models.
One was beans, cups, and trays: I would put 10 beans in each cup, and 10 cups on a tray, while counting beans. Thus up to 9 would just go on the table, individually; when there were 10 they would fill a cup, and then I'd keep putting more on the table until there were another 10. Eventually I might have 1 tray of 10 cups (100 beans), and 3 additional cups (30), and 7 single beans, for a total of 137. The hundreds place represents the number of trays (1), the tens place the number of cups (3), and the ones place the number of single beans (7). This makes it clear that any numeral represents a number we can count, and any number of beans can be expressed this way. You can also do addition and subtraction with this system, and see how the digits work.
Here is my representation of 137 beans:
To count beans, we put them individually on the counter (!), and as soon as we have 10 of them, we put them in a cup so we have “1 cup and 0 [extra] beans”, or “10”. Then we keep placing more beans until we get 2 cups (“20”), and so on. The end result can be called “one hundred, three tens, and 7 ones (or units)”; this is what “137” means. This is about as concrete and tangible as you can get in math.
The nice thing here is that, unlike some other representations, we are simultaneously representing the numeral “137”, and showing 137 individual beans!
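The bean count translates directly into arithmetic: dividing by 100 counts the trays, and dividing the remainder by 10 counts the cups. For a programming-minded parent, here is a tiny Python sketch of that idea (my illustration, not from the original answer):

```python
# Split a count of beans into trays (hundreds), cups (tens), and loose beans (ones).
def decompose(n):
    hundreds, rest = divmod(n, 100)  # how many full trays of 100?
    tens, ones = divmod(rest, 10)    # how many full cups of 10 in what's left?
    return hundreds, tens, ones

print(decompose(137))  # -> (1, 3, 7): 1 tray, 3 cups, 7 loose beans
```

The three returned numbers are exactly the digits of the numeral, which is the whole point of place value.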
There are also manipulatives you can buy (base ten blocks) that represent different numbers of little cubes (“units”), rows of ten (“rods” or “longs”), and squares of a hundred (“flats”), and even big cubes of a thousand (“cubes” or “blocks”):
These are only slightly less “real”, because they are stuck together, not made of separate bits.
Another model was play money, which is a little more abstract so it should be used after the beans are fully understood. A $10 bill represents a stack of ten $1 bills; a $100 bill can be exchanged for a stack of ten $10 bills, or a hundred $1 bills. By always having no more than 9 of any one kind of bill, we represent a number's places, and can do addition and subtraction as with the beans. Our number 137 would be one $100, three $10's, and seven $1's. To add $84 to that, I would add 4 more $1's, but since I have 11 now and that's too many, I change 10 of them for a $10 and keep only 1 $1 in the pile. Now I have 3, plus 8, plus the new 1, making 12 $10's; again I change 10 of those for a $100, leaving me with a total of 2 $100's, 2 $10's, and 1 $1.
Now we don’t actually have 137 dollar bills in front of us, but something that represents it:
The restriction to no more than 9 per place is just a matter of using as few bills as possible.
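For anyone who wants to see the bill-exchanging procedure spelled out mechanically, here is a small Python sketch (my own illustration): digits are added column by column, and whenever a pile reaches ten we trade ten of them for one of the next denomination, which is the familiar "carry".

```python
# Digit-by-digit addition with carrying, mirroring the bill-exchange model.
def add_with_carry(a_digits, b_digits):
    """Digits given least-significant first, e.g. 137 -> [7, 3, 1]."""
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        total = carry
        total += a_digits[i] if i < len(a_digits) else 0
        total += b_digits[i] if i < len(b_digits) else 0
        carry, digit = divmod(total, 10)  # trade every ten for one carry
        result.append(digit)
    if carry:
        result.append(carry)
    return result

print(add_with_carry([7, 3, 1], [4, 8]))  # 137 + 84 -> [1, 2, 2], i.e. 221
```

Each `divmod(total, 10)` is exactly the moment of exchanging ten bills of one kind for one bill of the next.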
(By the way, to introduce base 5 numerals, you can use pennies, nickels, and quarters.)
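The coin analogy can be sketched the same way; here quarters, nickels, and pennies play the roles of the base-5 places 25, 5, and 1 (again, just an illustration of mine):

```python
# Represent an amount in cents using quarters, nickels, and pennies --
# place values 25, 5, 1, i.e. the first three base-5 digit positions.
def base5_coins(cents):
    quarters, rest = divmod(cents, 25)
    nickels, pennies = divmod(rest, 5)
    return quarters, nickels, pennies

print(base5_coins(87))  # -> (3, 2, 2): 3*25 + 2*5 + 2*1 = 87
```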
Another thing that was of interest was a mechanical counter, either one that works like an old odometer (if you can find anything like that any more), or the plastic counters they used to have where you could click any digit to add one. I don't know if anything like this can still be found easily!
On the right is what I described, actually used for keeping track of money; on the left is a hand tally counter you can still get:
The physical action can make the idea of counting, especially moving from 9 to 0 and “carrying” to the next digit, more concrete, even though the representation is now entirely abstract.
## Why stop at 9?
A 2002 question touched on that very thing: Why can’t a digit be more than 9?
Place Value and the Number Nine
I am asking this for my 10-year-old grandson. Why is 9 the largest number you can put into any place value spot?
Hi, Jacques.
The base-ten system is built around the idea that you can name every number by counting tens. Once you have ten tens, you give it a new name (hundreds) and write it using the hundreds place. When you get ten hundreds, you give that a new name (thousands) and use the thousands place. You never need to write a digit larger than 9, because ten of anything gets a new name.
Suppose we did use ten as a digit. We could talk about the number "tenty," meaning ten tens, and write it perhaps as X0, using X as the symbol for the digit ten. That would really be the same as one hundred; since we already have a name for it, it would be redundant. So we don't need an extra symbol for ten; X is written as 10, and X0 as 100.
We want to give every number a unique name, so we don’t need a symbol (like “X”) for ten, because “10” handles it. The counter might be a particularly good way to illustrate this.
But occasionally you will see such ideas used humorously. In _The Lord of the Rings_, the story begins with Bilbo Baggins' "eleventy-first birthday." That would mean eleven tens and one, or 110+1, which is really 111. Again, we have a way to say it without using "eleventy," so we don't need to use eleven as a digit.
The fun part is that we can actually understand what this would mean, even though we don’t use it!
It is not only children who need help understanding all this; adults can be helped by concrete explanations as well as abstract considerations. For this question, from 2000, I used the former:
Explanation of Place Values
I guess this is a bit philosophical, but here it goes. It is written that as you progress from the units place to the tens place to the hundreds place, the value is increasing ten times the numeral on its right. For example, in 589 the 8 is ten times greater than the 9 and the 5 is ten times greater than the 8. How does this increase by ten times work out? If you count from 1 to 10 I imagine that you are ten greater than nothing or 0 (which is a place holder). Then you count 10 ten times to reach 100 but you must also count from the number 10 up to 100. What that means is that 100, which is 10 times the number 10, also must include the 10 as the beginning of the counting. Can you explain to me on a deeper level what is going on?
It wasn’t quite clear how Enzo was thinking. I answered by first relating place value to units of measure:
Hi, Enzo.
What increases at each digit is the granularity of the count - the size of the unit we are counting with. It's similar to measuring a distance as, say, 5 miles, 8 feet, and 9 inches. As you go several miles, each mile includes every foot you have gone along the way, EXCEPT for the last few, which were not enough to make a full mile. In the final measurement you give, the 8 feet are not part of the 5 miles you counted, but are the leftover feet that were not part of any full mile.
This is much like those 7 leftover beans that were not in a cup (not part of the tens).
I next, in effect, used base ten blocks to illustrate, but expressing it more like cups and trays, counting individual objects:
Similarly, each digit counts a different size group. If I say I have 589 objects, it means I can count by 100's until I have 5 of them, with less than a hundred left; then count by 10's until I have 8 tens, with less than ten left; and then count what's left by 1's, finding 9 of them. We can picture this as putting them into packages of decreasing sizes; there are ten tens in a hundred, and ten units in a ten, package inside package; when there are not enough to fill another package of one size, we start using smaller packages to use up the remainder:
500
+-----------------------------------------+
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
+-----------------------------------------+
+-----------------------------------------+
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
+-----------------------------------------+
+-----------------------------------------+
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
+-----------------------------------------+
+-----------------------------------------+
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
+-----------------------------------------+
+-----------------------------------------+
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| |*| |*| |*| |*| |*| |*| |*| |*| |*| |*| |
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
+-----------------------------------------+
80
+-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+
|*| |*| |*| |*| |*| |*| |*| |*|
|*| |*| |*| |*| |*| |*| |*| |*|
|*| |*| |*| |*| |*| |*| |*| |*|
|*| |*| |*| |*| |*| |*| |*| |*|
|*| |*| |*| |*| |*| |*| |*| |*|
|*| |*| |*| |*| |*| |*| |*| |*|
|*| |*| |*| |*| |*| |*| |*| |*|
|*| |*| |*| |*| |*| |*| |*| |*|
|*| |*| |*| |*| |*| |*| |*| |*|
|*| |*| |*| |*| |*| |*| |*| |*|
+-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+
9
*
*
*
*
*
*
*
*
*
Here, starting from the given number 589, I counted in decreasing sizes, using up digits from left to right, resulting in the static image of the final result. This is the opposite of the process of counting, which he used in his own example, so I described that more fully:
When we count by ones up to 589, as you discussed, we have to constantly change smaller groups for larger ones. Each time we reach a multiple of ten we repackage them as a single ten, with no individual units left. This is what happens when we change from 9 to 10 or from 289 to 290, setting the units place back to 0 and adding one to the tens place. Each time we reach a multiple of 100, we package our last ten 10's as a single hundred, clearing the other digits, as when we go from 99 to 100 or from 299 to 300.
Each place thus incorporates whatever was counted before, taking over what had been in the lower digits. This process can be seen in an odometer, where each numbered wheel counts the number of times the wheel to its right passed 9 and returned to 0.
The odometer, again, is a mechanical representation of the result, but considerably more abstract.
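The odometer’s carrying rule is easy to state as code. Here is a small sketch (my illustration, not part of the original answer):

```python
def tick(wheels):
    """Advance an odometer reading by one. `wheels` lists digits, leftmost
    first; a wheel that rolls from 9 back to 0 carries into its left neighbour."""
    wheels = wheels[:]              # don't mutate the caller's list
    i = len(wheels) - 1
    while i >= 0:
        if wheels[i] < 9:
            wheels[i] += 1
            return wheels
        wheels[i] = 0               # roll over and carry left
        i -= 1
    return [1] + wheels             # every wheel rolled over: new leading digit

print(tick([5, 8, 9]))   # [5, 9, 0]
print(tick([2, 9, 9]))   # [3, 0, 0]
```

Each carry is exactly the “repackaging” step described above: ten of one unit become one of the next larger unit.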
So yes, the 5, which represents 5 hundreds, includes the first ten you counted; it starts at zero, not after the first ten. Yet the 500 does not include the 80 or the 9. When you count starting with the larger units as I did above, you see more clearly that the smaller units are OUTSIDE the larger units, but it really works the same either way.
I hope this clarifies what place value is all about. Talking about counting and packaging doesn't sound like a "deeper level," but I think it shows what we mean more clearly than deeper math would. Let me know if you want a more advanced perspective on this.
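The counting-and-packaging procedure above can also be mechanized. This short Python sketch (my addition, not part of the original exchange) breaks a number into packages of decreasing sizes:

```python
def package(n, base=10):
    """Break n into (package size, count) pairs, largest packages first --
    e.g. 589 -> 5 hundreds, 8 tens, 9 ones."""
    if n == 0:
        return [(1, 0)]
    place = 1
    while place * base <= n:          # find the largest package size that fits
        place *= base
    packages = []
    while place >= 1:
        count, n = divmod(n, place)   # how many full packages of this size?
        packages.append((place, count))
        place //= base
    return packages

print(package(589))   # [(100, 5), (10, 8), (1, 9)]
```

The counts it returns are precisely the digits of the number, read left to right.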
Next, we’ll move on to what might be considered that deeper, abstract level, looking at a written number and seeing what it means without counting anything.
## Breaking a number down
Here is a question from a 12-year-old in 2003:
Decoding Place Value
What does it mean to be in base 10? What does the 1 represent? What does the 0 represent?
I think I understand the 1 to represent one 10, but I'm a little confused about the 0. Does it mean that there are no units left over from base 10?
Doctor Ian answered, using a more interesting example:
Hi Samantha,
When we write a number like
2034
each digit is multiplied by some power of 10, and the results are added up. That is,
2034 = 2*1000 + 0*100 + 3*10 + 4*1
^ ^ ^ ^
| | | |
2 groups 0 groups 3 groups 4 groups
of 1000 of 100 of 10 of 1
| | | |
| | | |
2000 <-- | | |
000 <------------- | |
30 <------------------------ |
+ 4 <-----------------------------------
------
2034
Does this help?
This is called expanded notation, and is another way to say “2 thousands, 0 hundreds, 3 tens, and 4 units”. But now it’s become an abstract process of writing a sum of multiples of powers of ten.
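Expanded notation can be generated mechanically; here is a small sketch (my addition):

```python
def expanded(n):
    """Write n as a sum of digit * power-of-ten terms,
    e.g. 2034 -> '2*1000 + 0*100 + 3*10 + 4*1'."""
    digits = str(n)
    k = len(digits)
    return " + ".join(f"{d}*{10 ** (k - 1 - i)}" for i, d in enumerate(digits))

print(expanded(2034))   # 2*1000 + 0*100 + 3*10 + 4*1
```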
## Naming a place using powers of ten
This 2003 question is from another 12-year-old, who would be ready for a bigger picture, even though the question in itself doesn’t require it.
Which Place?
In which place is the digit 6 in the number 3164297 ?
In my book there is an answer that says it is in the 100000, and when I asked my mom she said 10000. I don't know which one is correct!
The quick answer would be, “You and your mom are right, and the book is wrong.” But that raises the question, “How do you know you’re right?” Doctor Ian answered:
Hi Kelly,
One way to solve a problem like this is to write down all the possible place values. We do that by starting with 1, and multiplying by 10.
10,000 1000 100 10 1
Those are the place values, in order (which is why he started at the right). The fifth, fourth, third, second, and first places from the right are, respectively, the
ten-thousands … thousands … hundreds … tens … ones (units)
places. To handle big numbers, he introduced the idea of exponents, in case Kelly wasn’t familiar with them, and then continued:
Why am I telling you all this? Because now we can find the place value of each digit in a number like 3164297 by using exponents:
10^6  10^5  10^4  10^3  10^2  10^1  10^0   <- place values
   3     1     6     4     2     9     7   <- digits
So to find the place value of a digit, we count over from the right, starting at 0:
3164297 <- digits
      ^ place 0
     ^  place 1
    ^   place 2
   ^    place 3
  ^     place 4  <- counting from zero
When we get to the digit 6, we're at place 4; so the place value of 6 in this number is 10^4, which is 1 with 4 zeros, or 10,000.
So you and your mom are correct.
To put it a little differently, to find the place value of the 6, you can write down a 1 (for the 6 itself) and a 0 for each digit following it, and we get 10000, which is the place value, ten thousand:
3164297
|
v
10000 --> 10,000's place
Using powers, our number can be expanded this way: $$31\underline{6}4297 = 3\cdot 10^6 +1\cdot 10^5 +\underline{6\cdot 10^4} +4\cdot 10^3 +2\cdot 10^2 +9\cdot 10^1 +7\cdot 10^0$$
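Doctor Ian’s count-from-the-right rule translates directly into code. This sketch is my addition; for simplicity it looks at the digit’s first (leftmost) occurrence:

```python
def place_value(number, digit):
    """Value of the place occupied by the given digit, counting places from
    the right starting at 0 (place k is worth 10**k)."""
    s = str(number)
    place = len(s) - 1 - s.index(str(digit))   # position counted from the right
    return 10 ** place

print(place_value(3164297, 6))   # 10000
```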
Here’s a question from 1997 asking the same question in reverse:
Place Value - Units Digit
## Notes on terminology: What is place value?
I want to add one more answer here, from 2003, that was not put in the archive for publication:
I am a sixth grade mathematics teacher. My wonderful college professor/mentor told me that there is no such thing as place value, and that books/teachers are using this term in error. He taught that a number has its place and its value, but that the term place value is incorrect. Yet, every book I find talks about it as place value, but I see his thinking and understand. I even think I agree, but I cannot find anyone or any book that explains this. What does your panel say about this?
I replied, sort of agreeing:
I wouldn't say there is no such thing, but I'm not comfortable with the way teachers are taught to use the phrase. Perhaps my thoughts are similar to yours, if I understand you correctly.
In my mind, "place value" is a concept, not a number: the idea [also called "positional notation"] of writing numbers so that the contribution of each digit to the number's value depends on its place. Of course the term _is_ commonly used in a more concrete sense, as the actual value of the place; and I see no reason to deny the existence of such a thing. What bothers me is that it is so hard to express all the relevant "values" in a way that is not confusing to children (or at least to me). For example, the other day I answered a question from someone who was using these terms, which seem to be common:
face value: the value of the digit, e.g. 7 (for the 7 in 8765)
place value: the value of the place itself, e.g. 100 (hundreds)
value: the value of the digit in its place, e.g. 700 (7 hundreds)
I've seen various permutations of these words and others to cover the three values, but none seem to be really standard, and all can be easily mixed up. Why should the "place value" of a digit not be the value of the digit in that place, 700, rather than the value of the place itself? Why should the "value" of the digit not be the value of the digit itself? The worst usage is when people say (and this is a quote from one of the pages I found on a quick search) "the largest digit each place value can have is a nine", confusing "place value" with "place".
If the three terms listed there were standard, I could live with that. But even in talking to that other student, I struggled a bit with the fact that the “face value” of money is the number on the bill, which is not 7 but 700! Any terms you use are arbitrary, and therefore not really meaningful without general agreement.
I myself prefer to use descriptive phrases rather than brief names: the value of the digit itself, the value (or meaning) of the place, and the value of the digit in its place, or the value the digit contributes to the number. (By the way, did you use the word "number" to mean "digit"? That's another of my pet peeves...)
The trouble, of course, is that nobody bothers to talk about these things except educators, and they assume what they are saying (from whatever book they use) is correct. If "correct" just means what is used most often, then in that sense they are right; but that doesn't mean it's the best terminology they could use. I suspect many mathematicians might feel as I do, but the issue is below their radar, since we have no occasion to discuss it.
I have generally had the sense that math educators are far less careful about definitions, and about general agreement, than mathematicians! And it’s in that sense (as a term used by mathematicians in particular) that the “place value” of a digit does not exist.
https://www.bionicturtle.com/forum/threads/computation-of-the-standard-error-of-a-coherent-risk-measure.9891/
# Computation of the standard error of a coherent risk measure
#### Michael Mayer
##### New Member
Dear all,
studying the computation of se(q) for the confidence interval of a coherent risk measure (here VaR) in the GARP books, I noticed two inconsistencies.
1. f(q) is indicated as "= 1-0.9446-0.0450" while I believe it would only make sense to compute it as "f(q)=1-(0.9446-0.0450)", i.e. "f(q)=1-0.9446+0.0450" in order to get the probability to be in the tails of the distribution.
2. In the computation of se(q), p = 0.0450 for both the upper and the lower bounds of VaR. I believe it should be p=0.9446 for the lower bound of this distribution, since it refers to the probability of the lower bound of the interval (1.6) rather than to the probability of the upper bound of the interval (1.7). Taking twice the same p (for upper and lower bound) seems very counter-intuitive.
Any explanation for these two inconsistencies / questions?
#### David Harper CFA FRM
Staff member
Subscriber
Hi @Michael Mayer Since we don't have an XLS for this (Dowd's 3.5.1 Standard Errors of Quantile Estimators), I just started one here at https://www.dropbox.com/s/6yulo5zsnibrfub/dowd-3-5-se-quantiles.xlsx?dl=0
Re your first question, his approach looks correct to me. He appears to be solving for f(q) which is the probability of 1.595 < z < 1.695; i.e., it's a small slice, not tails. So it could be given by =NORM.S.DIST(1.695, TRUE) - NORM.S.DIST(1.595, TRUE) = 0.010321; his 0.0104 is rounded due to the imprecision of 0.045. This is Pr(1.595 < z < 1.695) = Pr(z < 1.695) - Pr (z < 1.595) = 0.9549 - 0.9446. What he's showing is the equivalent because Pr(z < 1.695) = 1 - Pr(z > 1.695) = 1 - 0.0451, so Pr(1.595 < z < 1.695) = Pr(z < 1.695) - Pr (z < 1.595) = [1 - Pr(z > 1.695)] - Pr (z < 1.595) = 1 - 0.0451 - 0.9446. I don't quite understand your second point, sorry ....
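Those bin-edge probabilities are easy to reproduce with the standard normal CDF. The sketch below is mine, not from the thread; it uses only Python's `math.erf`, with `Phi(z)` playing the role of Excel's `NORM.S.DIST(z, TRUE)`:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF -- the equivalent of Excel's NORM.S.DIST(z, TRUE)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

h = 0.10                                  # Dowd's bin width around the 95% quantile
lo, hi = 1.645 - h / 2, 1.645 + h / 2     # 1.595 and 1.695

p_below = Phi(lo)                # Pr(Z < 1.595), about 0.9446
p_above = 1 - Phi(hi)            # Pr(Z > 1.695), about 0.0451
slice_prob = Phi(hi) - Phi(lo)   # Pr(1.595 < Z < 1.695), about 0.0103

# The book's "1 - 0.9446 - 0.0450" is this same slice, because
# Pr(lo < Z < hi) = 1 - Pr(Z < lo) - Pr(Z > hi):
assert abs(slice_prob - (1 - p_below - p_above)) < 1e-12
```

The book's 0.0450 versus the exact 0.0451 is just rounding, which is also why its slice comes out as 0.0104 rather than 0.0103.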
#### SalinaMiao
##### New Member
In Dowd's example under evaluating estimators of risk measures in chapter 3, in the first bullet point, I don't understand how he came up with "the probability of a loss exceeding 1.695 is 4.5%", and also "probability of a profit or loss less than 1.595 is 94.46%"?
Could someone explain? Thank you
#### David Harper CFA FRM
Staff member
Subscriber
Hi @SalinaMiao I moved your question to this thread, see above. The assumptions to which you refer are simply the normal probabilities associated with quantiles; just like Prob(Z > 1.645) = 5.0%, except Dowd is retrieving this for bin edges at 1.645 +/- (h/2). He defined bin width h = 0.10 (top of page 70), around 1.645, so we have 1.645 - 0.10/2 = 1.595... and 1.645 + 0.10/2 = 1.695. Then NORM.S.DIST(1.595, TRUE) = 94.46%; i.e. Pr(Z < 1.645 - 0.10/2) = 94.46%, and also, 1 - NORM.S.DIST(1.695, TRUE) = 1 - 95.49% = 4.51%; i.e. Pr(Z > 1.645 + 0.10/2) = 4.51%. I hope that clarifies.
https://brilliant.org/problems/this-is-just-trailing-zeroes/

# This is just trailing zeroes
Number Theory Level 2
$\large \dfrac{2002!}{(1001!)^2}$
How many trailing zeroes does the number above have?
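One standard approach (this sketch is mine, not the site's posted solution) uses Legendre's formula for the exponent of a prime in a factorial. Since the quotient is the central binomial coefficient C(2002, 1001), its trailing zeros are the smaller of the net exponents of 2 and 5:

```python
def prime_exponent_in_factorial(n, p):
    """Legendre's formula: exponent of prime p in n! is sum of floor(n / p^k)."""
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total

def trailing_zeros_of_quotient(n, m):
    """Trailing zeros of n! / (m!)^2 -- the smaller net exponent of 2 and 5."""
    return min(
        prime_exponent_in_factorial(n, p) - 2 * prime_exponent_in_factorial(m, p)
        for p in (2, 5)
    )

print(trailing_zeros_of_quotient(2002, 1001))   # 1
```

Here the exponent of 5 is 499 - 2*249 = 1, while the exponent of 2 is much larger, so there is exactly one trailing zero.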
https://nrich.maths.org/1002
# Wonky Watches
##### Stage: 2
Mandeep's watch loses two minutes every hour.
Adam's watch gains one minute every hour.
They both set their watches from the radio at 6:00 a.m. then start their journeys to the airport. When they arrive (at the same time) their watches are $10$ minutes apart.
At what time (the real time) did they arrive at the airport?
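For anyone who wants to check their answer, the reasoning mechanizes neatly (this solution sketch is mine, not part of the NRICH page): the watches drift apart by 2 + 1 = 3 minutes for every real hour.

```python
from datetime import datetime, timedelta

gap_rate = 3            # minutes of divergence per real hour (2 lost + 1 gained)
observed_gap = 10       # minutes apart on arrival

real_hours = observed_gap / gap_rate            # 10/3 h = 3 h 20 min of real time
departure = datetime(2000, 1, 1, 6, 0)          # any date; only the time matters
arrival = departure + timedelta(minutes=round(real_hours * 60))

print(arrival.strftime("%H:%M"))   # 09:20
```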
https://www.jiskha.com/questions/1601324/limit-x-approches-1-sqrt-1-2x-sqrt3x-sqrt-3-x-2sqrt-x

# Maths
Limit as x approaches 1 of (sqrt(1+2x) - sqrt(3x))/(sqrt(3+x) - 2sqrt(x))
1. lim (x->1) (√(1+2x)-√(3x))/(√(3+x)-2√x)
The fraction at x=1 is 0/0, so we have to try to resolve that. Multiplying by the conjugate of the denominator we have
(√(1+2x)-√(3x))/(√(3+x)-2√x)) * (√(3+x)+2√x)/(√(3+x)+2√x)
(√(1+2x)-√(3x))*(√(3+x)+2√x) / (3+x - 4x)
as x->1 that becomes
((√3-√3)(√3+2))/(3+1-4)
Hmmm. STill 0/0. I guess we have to try l'Hospital's Rule. Taking derivatives top and bottom we get
1/√(1+2x) - √3/(2√x)
-------------------------
1/(2√(3+x) - 1/√x
-> (1/√3 - √3/2)/(1/2√4 - 1/√1)
= 2/(3√3)
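The l'Hospital result can be sanity-checked numerically; this sketch is my addition, not part of the original answer:

```python
from math import sqrt

def f(x):
    """The expression whose limit at x = 1 the worked answer evaluates."""
    return (sqrt(1 + 2 * x) - sqrt(3 * x)) / (sqrt(3 + x) - 2 * sqrt(x))

limit = 2 / (3 * sqrt(3))   # the claimed answer, about 0.3849

# Approaching x = 1 from both sides, f(x) settles near the claimed value.
for h in (1e-4, 1e-5, 1e-6):
    assert abs(f(1 + h) - limit) < 1e-3
    assert abs(f(1 - h) - limit) < 1e-3
print("limit ~", round(limit, 4))   # limit ~ 0.3849
```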
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=JHHHB@_2015_v18n4_143
Title & Authors
Phase Formation Behavior and Charge-discharge Properties of Carbon-coated Li2MnSiO4 Cathode Materials for Lithium Rechargeable Batteries
Sun, Ho-Jung; Chae, Suman; Shim, Joongpyo;
Abstract
Carbon-coated $Li_2MnSiO_4$ powders as the active materials for the cathode were synthesized by planetary ball milling and solid-state reaction, and their phase formation behavior and charge-discharge properties were investigated. Calcination temperature and atmosphere were controlled in order to obtain the $\beta$-$Li_2MnSiO_4$ phase, which is electrochemically active, and carbon-coated $Li_2MnSiO_4$ active material powders with a near single phase of $\beta$-$Li_2MnSiO_4$ could be fabricated. The particles of the synthesized powders were secondary particles composed of primary ones of about 100 nm size. The carbon incorporation was essential to enable the Li ions to be inserted and extracted from the $Li_2MnSiO_4$ active materials, and an initial capacity of 192 mAh/g could be obtained in the $Li_2MnSiO_4$ active materials with 4.8 wt% of carbon.
Keywords
$Li_2MnSiO_4$; planetary ball mill; solid-state reaction; carbon-coating; cathode; lithium rechargeable battery
Language
Korean
http://mathlesstraveled.com/?cpage=1

MaBloWriMo 28: Equivalence relations are partitions
Today we’ll take a brief break from group theory to prove a fact about equivalence relations, namely, that they are the same as partitions. A partition is a pretty intuitive concept: you take a big set, and cut it up into smaller sets so that every element of the big set belongs to exactly one of the small sets. That is, no elements are left out, and none of the small sets overlap with each other at all.
More formally, a partition of a set $S$ is a collection of nonempty sets $P_1, \dots, P_n$ (finitely many here for concreteness, though a partition may also have infinitely many parts) such that
1. The union of all the $P_i$ is the original set: $\bigcup_i P_i = S$.
2. The $P_i$ do not overlap, i.e. they are pairwise disjoint: if $i \neq j$ then $P_i \cap P_j = \varnothing$.
So how does this relate to equivalence relations? Suppose we have a set $X$ and an equivalence relation $\sim$ on $X$. Then the equivalence class of an element $x \in X$ is defined as the set of all elements which are equivalent to $x$. We write
$[x] = \{ y \mid y \in X, x \sim y \}$
to denote the equivalence class of $x$. It turns out that if $\sim$ is an equivalence relation, the equivalence classes form a partition of $X$. Intuitively, each equivalence class consists of a bunch of things which are all equivalent to each other, and these necessarily include everything (since everything is at least equivalent to itself) and don’t overlap (because if two equivalence classes did overlap, they would collapse into one big class due to transitivity). Let’s prove it.
1. First we’ll prove a small lemma: if $y \sim z$ then $[y] = [z]$. As is usual, to prove two sets are equal, we’ll prove that each is a subset of the other. First, suppose $y'$ is any element in $[y]$. Then by definition $y \sim y'$. By symmetry, $y' \sim y$, and then by transitivity (since we assumed $y \sim z$), $y' \sim z$. By symmetry again $z \sim y'$, so by definition $y' \in [z]$. So every element of $[y]$ is also an element of $[z]$. An entirely analogous argument shows that $[z]$ is a subset of $[y]$ as well. Thus $[y] = [z]$.
2. Now we can prove that the set of distinct equivalence classes forms a partition of $X$. First, every $x \in X$ is at the very least included in its own equivalence class $[x]$, which must contain $x$ since $x \sim x$ by reflexivity. So the equivalence classes don’t miss any elements, that is, the union of all the equivalence classes is the entire set $X$.
3. We also have to prove that distinct equivalence classes don’t overlap. Let $[y]$ and $[z]$ be two different equivalence classes. We want to show they have no elements in common. Suppose, on the contrary, that there is some $x$ which is in both $[y]$ and $[z]$. Then by definition $y \sim x$ and $z \sim x$. So by symmetry $x \sim z$, and then by transitivity $y \sim z$. By our lemma above, this means $[y] = [z]$, contradicting our assumption that they are different.
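For a concrete sanity check, here is a short Python sketch (an illustration, not part of the proof) that builds the equivalence classes of "congruent mod 3" on the set {0, …, 8} and verifies the two partition properties:

```python
# Check that equivalence classes partition a set, using the example
# relation "congruent mod 3" on S = {0, ..., 8}.
S = set(range(9))
equiv = lambda x, y: x % 3 == y % 3  # an equivalence relation on S

# Build the equivalence class [x] for each x, keeping only distinct classes.
classes = {frozenset(y for y in S if equiv(x, y)) for x in S}

# Property 1: the union of all the classes is the whole set S.
assert set().union(*classes) == S
# Property 2: distinct classes are pairwise disjoint.
cs = list(classes)
assert all(cs[i].isdisjoint(cs[j])
           for i in range(len(cs)) for j in range(i + 1, len(cs)))

print(sorted(sorted(c) for c in classes))
# → [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```

Swap in any other equivalence relation for `equiv` and the same two assertions should still pass.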
MaBloWriMo 27: From subgroups to equivalence relations
Again, let $G$ be a group and $H$ a subgroup of $G$. Then we can define a binary relation on elements of $G$, called $\sim_H$, as follows:
$x \sim_H y$ if and only if there is some $h \in H$ such that $x = yh$.
That is, for any two elements $x, y \in G$, either $x \sim_H y$, or not: yes, if you can get from $y$ to $x$ by combining (on the right) with some element in $H$, and otherwise, no. Note that given any two elements $x, y \in G$, it is always possible to get from $y$ to $x$ by combining with some element of $G$: in particular, $x = y(y^{-1}x)$. But this might not be an element of $H$.
Now, an equivalence relation on a set $X$ is a relation $x \sim y$ with the following three properties:
1. Reflexivity: every $x \in X$ is related to itself, that is, $x \sim x$.
2. Symmetry: If $x \sim y$, then also $y \sim x$.
3. Transitivity: If $x \sim y$ and $y \sim z$, then $x \sim z$.
The usual equality relation satisfies these properties: things are always equal to themselves; if $x = y$ then $y = x$; and if $x = y$ and $y = z$ then $x = z$. The notion of an equivalence relation is a way to talk about more general kinds of equality.
Let’s prove that $\sim_H$ is an equivalence relation. This is really cool because it turns out that the three properties of an equivalence relation each follow from one of the three properties of a group!
1. Reflexivity. We have to show that any $x \in G$ is related to itself, that is, $x \sim_H x$. By definition this means there is some $h \in H$ such that $x = xh$. Well, that’s easy: since $H$ is a group, it has to contain the identity element $e$, and $x = xe$.
2. Symmetry. Suppose $x \sim_H y$, that is, $x = yh$ for some $h \in H$. Then we have to show $y \sim_H x$, that is, there is some $h' \in H$ (which could be different from $h$) such that $y = xh'$. Well, since $H$ is a group, it has to contain inverses. We can combine both sides of $x = yh$ with $h^{-1}$ to obtain $xh^{-1} = y$—so the $h'$ we are looking for is precisely $h^{-1}$.
3. Transitivity. Suppose $x \sim_H y$ and $y \sim_H z$. That means there are $h_1, h_2 \in H$ for which $x = y h_1$ and $y = z h_2$. We want to show that $x \sim_H z$, that is, $x = zh$ for some $h \in H$. Substituting for $y$, we find that $x = y h_1 = (z h_2) h_1 = z (h_2 h_1)$ (note how we used the third property of a group, associativity). So the $h$ we are looking for is just $h_2 h_1$, which has to be in $H$ since $H$ is closed under the binary operation.
So for a given subgroup $H \leq G$, this relation defines a sort of “equality with respect to $H$” on the elements of $G$ (whatever that means!). As for an example—consider again the subgroup $\{0,4\} \leq \mathbb{Z}_8$. Which elements of $\mathbb{Z}_8$ are related to each other under $\sim_{\{0,4\}}$? What do you notice?
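If you'd like to explore that question computationally, here is a quick Python sketch (a concrete illustration, not part of the proof) of $\sim_{\{0,4\}}$ on $\mathbb{Z}_8$:

```python
# The relation x ~_H y on Z_8 with H = {0, 4}:
# x ~_H y iff x = y +_8 h for some h in H.
G = list(range(8))
H = {0, 4}
related = lambda x, y: any(x == (y + h) % 8 for h in H)

for x in G:
    print(x, "~", [y for y in G if related(x, y)])
# Each element turns out to be related exactly to itself and to itself + 4 (mod 8).
```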
MaBloWriMo 26: Left cosets
Let $G$ be a group and $H$ a subgroup of $G$. Then for each element $a \in G$ we can define a left coset of $H$ by
$aH = \{ ah \mid h \in H \}$.
That is, $aH$ is the set we get by combining $a$ (on the left) with every element of $H$. For example, given the subgroup $\{0,4\} \leq \mathbb{Z}_8$ (this was the other subgroup of $\mathbb{Z}_8$—did you find it?), the left coset corresponding to $1 \in \mathbb{Z}_8$ is $1 +_8 \{0,4\} = \{1 +_8 0, 1 +_8 4\} = \{1,5\}$. A few observations:
• The cosets corresponding to different elements of $G$ might be the same. For example, the left coset of $\{0,4\}$ corresponding to $5 \in \mathbb{Z}_8$ is $\{5 +_8 0, 5 +_8 4\} = \{5, 1\} = \{1, 5\}$, just like the coset for $1$.
• Can you find the other possible (left) cosets of $\{0,4\}$ in $\mathbb{Z}_8$? What do you notice?
• As you may guess, there are also things called right cosets, denoted $Ha$, where we combine with an element on the right. $\mathbb{Z}_8$ is not such a good example anymore, since in $\mathbb{Z}_8$ the binary operation is commutative, that is, $a +_8 b = b +_8 a$, so left and right cosets are the same thing. In general, though, the binary operation of a group does not have to be commutative.
• There is nothing special about left cosets as opposed to right cosets. In our proof of Lagrange’s Theorem we will use left cosets, but we could equally well replace all the left cosets by right cosets (and flip a few other things around) to get a different but equally valid proof.
• As an interesting aside, when the left and right cosets of a subgroup coincide, we say that the subgroup is normal. (Hence every subgroup of a group with a commutative binary operation is normal; but this can also happen even when the binary operation is not commutative.) It turns out that these normal subgroups are very important. Normal subgroups of a group are kind of like the divisors of an integer; you can “divide” a group by one of its normal subgroups to get a “quotient group”. And yes, there are special groups called simple groups which don’t have any normal subgroups, and are kind of like prime numbers—there is a suitable sense in which every finite group can be uniquely decomposed into a “product” of simple groups, just like integers can be uniquely decomposed into a product of prime factors. But this is getting way off on a tangent! (I told you this proof would hint at some very cool, deeper group theory.)
Just one more observation for today. For any $a \in G$, consider the function $f_a(b) = a \odot b$ which combines its input with $a$ (on the left). This function is injective, that is, one-to-one: if $f_a(b) = f_a(c)$, then by definition $ab = ac$, and combining both sides with $a^{-1}$ on the left, $a^{-1}ab = a^{-1}ac$, hence $b = c$. (In a group we can always cancel things from both sides of an equation—though only from the end! For example, from $abc = xby$ it does not follow that $ac = xy$.) Conversely, this means that if $b \neq c$, then $ab \neq ac$.
When we form the left coset $aH$, we are applying the function $f_a$ to every element of $H$. The fact that this function is injective means it can’t “collapse” multiple elements of $H$ into the same element in the result. This shows that the coset $aH$ has to have the same size as $H$: there is exactly one element in $aH$ for each element of $H$, and they all have to be different.
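To make the observations above concrete, here is a small Python sketch that computes every left coset of $\{0,4\}$ in $\mathbb{Z}_8$ and checks the two facts we just argued: every coset has the same size as $H$, and together the cosets cover the group.

```python
# All left cosets a + H of H = {0, 4} in Z_8 (addition mod 8).
H = {0, 4}
cosets = {frozenset((a + h) % 8 for h in H) for a in range(8)}

print(sorted(sorted(c) for c in cosets))
# → [[0, 4], [1, 5], [2, 6], [3, 7]]

# Injectivity of f_a means no coset can be smaller than H,
# and every a lands in its own coset a + H, so the cosets cover Z_8.
assert all(len(c) == len(H) for c in cosets)
assert set().union(*cosets) == set(range(8))
```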
MaBloWriMo 25: Subgroups
So in the remainder of the month, we’ll prove that in any group $G$, the order of each element $g \in G$ must evenly divide the order (size) of the group. I said in an earlier post that this is called Lagrange’s Theorem; actually, my memory was a tad off and it turns out that Lagrange’s Theorem is slightly more general than this, but implies it as a fairly straightforward corollary.
Today, we need to define the concept of a subgroup. If $G$ is a group, then $H$ is a subgroup of $G$ (written $H \leq G$) when
1. The set of elements of $H$ is a subset of the set of elements of $G$, and
2. $H$ is also a group, under the same binary operation as $G$.
If $H \leq G$ then you can think of $H$ as a group “hiding inside” a bigger group $G$.
Given some subset $H$ of the elements of $G$, to see whether $H$ forms a subgroup we need to check just three things:1
1. $H$ is nonempty.
2. $H$ is closed under the binary operation. This is a rather special property: if you pick any old subset of the elements of $G$, chances are that combining two elements from your subset might result in something outside the subset (though of course it will still be in $G$).
3. The inverse of any element of $H$ is also in $H$. This is a special property too, for the same reason.
We don’t need to recheck associativity, since $H$ uses the same binary operation as $G$. Note also that we don’t need to check whether $H$ has an identity element, since this is already implied by (1), (2) and (3): since $H$ is nonempty by (1), choose some $a \in H$. Then by (3), $a^{-1} \in H$ too, then by (2), $a a^{-1} = e$ is also in $H$. (Previously I’ve been using $\odot$ to denote the binary operation of a group, but this is going to get tedious fast: from now on I will just omit writing an explicit symbol at all, and write $ab$ instead of $a \odot b$. This is standard group theory notation.)
Let’s see some examples. Remember the example group $\mathbb{Z}_8$, which consists of the numbers $0$ through $7$, with a binary operation of addition $\pmod 8$. Let’s first consider the subset $\{0,1,2,3\}$. Is this a subgroup of $\mathbb{Z}_8$? No, it isn’t: it’s not closed under the binary operation. For example, $2 +_8 3 = 5$, which is not in the subset.
$\{0,2,4,6\}$, on the other hand, is indeed a subgroup of $\mathbb{Z}_8$. It is obviously nonempty. We can check that combining any two elements of the set according to the binary operation lands us back in the set (adding two even numbers $\pmod 8$ always results in an even number again), and the inverse of each element is in the set ($0$ and $4$ are their own inverses, and $2$ and $6$ are inverses).
Any group is trivially a subgroup of itself (the “subset” in the definition does not have to be a strict subset). Also, the group with a single element is a subgroup of any group. So we have found three subgroups of $\mathbb{Z}_8$. There is one more—can you find it?
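The three-condition test is easy to mechanize. Here is an illustrative Python sketch that checks subsets of $\mathbb{Z}_8$:

```python
# The three-condition subgroup test, specialized to Z_n (addition mod n).
def is_subgroup(H, n=8):
    op = lambda a, b: (a + b) % n   # the group's binary operation
    inv = lambda a: (-a) % n        # inverses in Z_n
    return (len(H) > 0                                    # 1. nonempty
            and all(op(a, b) in H for a in H for b in H)  # 2. closed
            and all(inv(a) in H for a in H))              # 3. inverses

print(is_subgroup({0, 1, 2, 3}))  # → False (2 +_8 3 = 5 falls outside)
print(is_subgroup({0, 2, 4, 6}))  # → True
# Try it on other subsets to hunt for the one remaining subgroup.
```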
Now we can state Lagrange’s Theorem: if $H$ is a subgroup of $G$, then the order of $H$ evenly divides the order of $G$.
We’ll spend the rest of the month proving this. Here’s an outline for the rest of the proof, one blog post for each item below. (The proof will not actually take quite as long as I thought!) Supposing $H$ is a subgroup of $G$:
1. Define some subsets of $G$ called the left cosets of $H$, and show that they all have the same size as $H$.
2. Define (and prove) a certain equivalence relation on elements of $G$, defined in terms of $H$.
3. Show that the equivalence classes of any equivalence relation form a partition.
4. Show that the equivalence classes for the equivalence relation we defined are exactly the left cosets of $H$. Conclude that since every coset of $H$ is the same size, and they partition $G$, their common size (that is, the size of $H$) must evenly divide the size of $G$. This will conclude the proof of Lagrange’s Theorem.
5. In the final post, we will define the cyclic subgroup generated by an element $g \in G$ and show that the order of this subgroup is the same as the order of $g$—hence by Lagrange’s Theorem the order of $g$ must divide the order of $G$.
Onward!
1. This is usually called the two-step subgroup test (there are, of course, three conditions, but the condition that $H$ be nonempty is usually so trivial that it doesn’t count). There are other ways to check whether some subset is a subgroup, most notably the one-step subgroup test which, besides $H$ being nonempty, requires only that $ab^{-1} \in H$ for every $a, b \in H$. It is a nice exercise in basic group theory to prove that this is equivalent.
MaBloWriMo 24: Bezout’s identity
A few days ago we made use of Bézout’s Identity, which states that if $a$ and $b$ have a greatest common divisor $d$, then there exist integers $x$ and $y$ such that $ax + by = d$. For completeness, let’s prove it.
Consider the set of all linear combinations of $a$ and $b$, that is,
$\{ax + by \mid x, y \in \mathbb{Z} \}$,
and suppose $d = as + bt$ is the smallest positive integer in this set. For example, if $a = 10$ and $b = 6$, then you can check that, for example, $16 = a + b$, and $8 = 2a - 2b$, and $4 = a - b$ are all in this set, as is, for example, $-4 = -a + b$, but the smallest positive integer you can get is $2 = 2a - 3b$. We will prove that in fact, $d$ is the greatest common divisor of $a$ and $b$.
Consider dividing $a$ by $d$. This will result in some remainder $r$ such that $0 \leq r < d$. I claim the remainder is also of the form $ax + by$ for some integers $x$ and $y$: note that $a = a \cdot 1 + b \cdot 0$ is of this form, and $d$ is of this form by definition, and we get the remainder by subtracting some number of copies of $d$ from $a$. Subtracting two numbers of the form $ax + by$ works by subtracting coefficients, yielding another number of the same form again. But $d$ is supposed to be the smallest positive number of this form, and $r$ is less than $d$—which means $r$ has to be zero, that is, $d$ evenly divides $a$. The same argument shows that $d$ evenly divides $b$ as well. So $d$ is a common divisor of $a$ and $b$.
To see that $d$ is the greatest common divisor, suppose $c$ also divides $a$ and $b$. Then since $d = ax + by$, we can see that $c$ must divide $d$ as well—so it must be less than or equal to $d$.
Voila! This proof doesn’t show us how to actually compute some appropriate $x$ and $y$ given $a$ and $b$—that can be done using the extended Euclidean algorithm; perhaps I’ll write about that some other time. But this proof will do for today.
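Since the post mentions it, here is one way the extended Euclidean algorithm can be sketched in Python. (The coefficients it returns are one valid choice of $x$ and $y$, not the only one—for $a = 10$, $b = 6$ it finds $-a + 2b = 2$ rather than the $2a - 3b$ used above.)

```python
# Extended Euclidean algorithm: returns (d, x, y) with a*x + b*y = d = gcd(a, b).
def extended_gcd(a, b):
    if b == 0:
        return (a, 1, 0)
    d, x, y = extended_gcd(b, a % b)
    # We had b*x + (a % b)*y = d, and a % b = a - (a // b)*b,
    # so a*y + b*(x - (a // b)*y) = d.
    return (d, y, x - (a // b) * y)

d, x, y = extended_gcd(10, 6)
print(d, x, y)                   # → 2 -1 2
assert 10 * x + 6 * y == d == 2  # 10*(-1) + 6*2 = 2
```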
For the remainder of the month, as suggested by commenter janhrcek, we’ll prove the thing I hinted at in an earlier post: namely, that the order of any group element is always a divisor of the order of the group. This is a really cool proof that hints at some much deeper group theory.
So, where are we? We assumed that $s_{n-2}$ is divisible by $M_n$, but $M_n$ is not prime. We picked a divisor $q$ of $M_n$ and used it to define a group $X^*$, and yesterday we showed that $\omega$ has order $2^n$ in $X^*$. Today we’ll use this to derive a contradiction.
Recall that we picked $q$ so that $q^2 \leq M_n$—we can always pick a divisor of $M_n$ that is less than (or equal to) the square root of $M_n$. We then defined $X$ in terms of $q$ as
$X = \{ a + b\sqrt 3 \mid a, b \in \mathbb{Z}; 0 \leq a, b < q \}$.
So how many elements are in the set $X$? That’s not too hard: there are $q$ choices for the coefficient $a$, and $q$ choices for the coefficient $b$; each pair of choices gives a different element of $X$, which therefore contains $q^2$ elements.
So what about the order of $X^*$? We got $X^*$ by throwing away elements from $X$ without an inverse. At least we know that $0$ doesn’t have an inverse. There might be more, depending on $q$, but at least we can say that $|X^*| \leq q^2 - 1$.
$\omega$ is in the group $X^*$, and we showed that the order of an element cannot exceed the order of the group. But, check this out:
$|X^*| \leq q^2 - 1 \leq M_n - 1 < M_n + 1 = 2^n = |\omega|$
The order of the group $X^*$ is less than the order of $\omega$! This is a contradiction. So, our assumption that $M_n$ has a nontrivial divisor $q$ must be wrong—$M_n$ is prime!
It took 23 posts, but we have finally proved one direction of the Lucas-Lehmer test: if computing $s_{n-2}$ yields something divisible by $M_n$, then $M_n$ is definitely prime.
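For reference, the whole test is tiny to state in code. Here is a Python sketch of the Lucas-Lehmer test, reducing mod $M_n$ at each step so the numbers stay small; note we have only proved the "divisible means prime" direction so far.

```python
# Lucas-Lehmer: s_0 = 4, s_{k+1} = s_k^2 - 2; M_n = 2^n - 1 is prime
# exactly when M_n divides s_{n-2} (we work mod M_n throughout).
def lucas_lehmer(n):
    M = 2**n - 1
    s = 4
    for _ in range(n - 2):
        s = (s * s - 2) % M
    return s == 0

for n in [3, 5, 7, 11, 13]:
    print(n, 2**n - 1, lucas_lehmer(n))
# → only n = 11 prints False: 2047 = 23 * 89 is not prime
```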
And now I have to decide what to do with the remaining week. Of course, there is another direction to prove: we have only shown that the Lucas-Lehmer test correctly identifies primes (if $s_{n-2}$ is divisible by $M_n$, then $M_n$ is prime), but it is possible to also prove the converse, that the Lucas-Lehmer test identifies all the Mersenne primes (if $M_n$ is prime, then $s_{n-2}$ will be divisible by $M_n$). I am still trying to figure out how difficult this proof is. I don’t think we’ll be able to fit it in the last 7 days of the month, but it still might be worth starting it and finishing it on a more relaxed schedule.
Of course, I’m also open to questions, suggestions, etc.!
MaBloWriMo 22: the order of omega, part II
Yesterday, from the assumption that $s_{n-2}$ is divisible by $M_n$, we deduced the equations
$\omega^{2^{n-1}} = q-1$
and
$\omega^{2^n} = 1$
which hold in the group $X^*$. So what do these tell us about the order of $\omega$? Well, first of all, the second equation tells us that the order of $\omega$ must be a divisor of $2^n$, and the only divisors of $2^n$ are other powers of $2$. So the order of $\omega$ must be $2^k$ for some $k \leq n$.
Now suppose the order of $\omega$ is $2^k$, so $\omega^{2^k} = 1$. But then if we square both sides we get $\omega^{2^{k+1}} = 1$. Squaring again gives $\omega^{2^{k+2}} = 1$, and so on. So once we hit $1$, we are stuck there: raising $\omega$ to all bigger powers of $2$ will also yield $1$.
But now look at the first equation: $\omega^{2^{n-1}} = q-1$. Remember that the order of $\omega$ has to be a power of $2$. From this equation we can see that the order can’t be $2^{n-1}$. Could it be a smaller power of two? In fact, no, it can’t, by the argument in the previous paragraph: once you hit a power of two that yields $1$, all the higher powers also have to yield $1$. So if $\omega$ raised to any smaller power of $2$ were the identity, then $\omega^{2^{n-1}}$ would also have to be the identity—but it isn’t.
The inescapable conclusion is that the only possibility for the order of $\omega$ is exactly $2^n$.
So, how does that help? Hint: think about the order of the group $X^*$… the triumphant conclusion tomorrow!
https://docs.eyesopen.com/toolkits/csharp/oechemtk/OEChemFunctions/OEClearPartialCharges.html | # OEClearPartialCharges
void OEClearPartialCharges(OEMolBase &mol)
Clears the partial charge property of all atoms in a molecule, setting the partial charge on each atom to 0.0.
https://mran.revolutionanalytics.com/snapshot/2020-01-27/web/packages/hardhat/vignettes/forge.html | # Forging data for predictions
library(hardhat)
## Introduction
The counterpart to mold() (which you can read all about in vignette("mold", "hardhat")), is forge(). Where mold() is used to preprocess your training data, forge() is used to preprocess new data that you are going to use to generate predictions from your model.
Like mold(), forge() is not intended to be used interactively. Instead, it should be called from the predict() method for your model. To learn more about using forge() in a modeling package, see vignette("package", "hardhat"). The rest of this vignette will be focused on the many features that forge() offers.
## Connection with mold()
When mold() is used, one of the returned objects is a blueprint. This is the key to preprocessing new data with forge(). For instance, assume you’ve called mold() like so:
iris_train <- iris[1:100,]
iris_test <- iris[101:150,]
iris_form <- mold(
log(Sepal.Length) ~ Species + Petal.Width,
iris_train,
blueprint = default_formula_blueprint(indicators = FALSE)
)
formula_eng <- iris_form$blueprint
formula_eng
#> Formula blueprint:
#>
#> # Predictors: 2
#> # Outcomes: 1
#> Intercept: FALSE
#> Novel Levels: FALSE
#> Indicators: FALSE
A formula blueprint is returned here, which knows about the predictors and outcomes that were used at training time, and knows that you don’t want to expand Species into dummy variables by setting indicators = FALSE. When it is time to predict() on new data, that data is passed on to forge() along with the blueprint we just created.
forge(iris_test, formula_eng)
#> $predictors
#> # A tibble: 50 x 2
#> Petal.Width Species
#> <dbl> <fct>
#> 1 2.5 virginica
#> 2 1.9 virginica
#> 3 2.1 virginica
#> 4 1.8 virginica
#> 5 2.2 virginica
#> 6 2.1 virginica
#> 7 1.7 virginica
#> 8 1.8 virginica
#> 9 1.8 virginica
#> 10 2.5 virginica
#> # … with 40 more rows
#>
#> $outcomes
#> NULL
#>
#> $extras
#> $extras$offset
#> NULL
Note that in predictors, Species was not expanded because the blueprint knew about the preprocessing options that were set when mold() was called.
forge() always returns three things, and they should look familiar to you if you have used mold().
• predictors holds a tibble of the predictors.
• outcomes is returned as NULL by default, because most predict() methods assume you only have access to the new predictors. Alternatively, as you will read in a moment, this can contain a tibble of the new outcomes.
• extras varies per blueprint, but is a catch-all slot to hold the same kind of extra objects that were returned by the blueprint when mold() was called.
## Outcomes
Generally when generating predictions you only need to know about the new predictors. However, when performing resampling you will need the processed outcomes as well, so you can compute cross-validated performance statistics and decide between multiple models, or choose between hyperparameters.
You can easily request the outcomes as well with outcomes = TRUE. Just like with the predictors, these get processed using the same steps as done to the outcomes at fit time.
forge(iris_test, formula_eng, outcomes = TRUE)
#> $predictors
#> # A tibble: 50 x 2
#> Petal.Width Species
#> <dbl> <fct>
#> 1 2.5 virginica
#> 2 1.9 virginica
#> 3 2.1 virginica
#> 4 1.8 virginica
#> 5 2.2 virginica
#> 6 2.1 virginica
#> 7 1.7 virginica
#> 8 1.8 virginica
#> 9 1.8 virginica
#> 10 2.5 virginica
#> # … with 40 more rows
#>
#> $outcomes
#> # A tibble: 50 x 1
#> log(Sepal.Length)
#> <dbl>
#> 1 1.84
#> 2 1.76
#> 3 1.96
#> 4 1.84
#> 5 1.87
#> 6 2.03
#> 7 1.59
#> 8 1.99
#> 9 1.90
#> 10 1.97
#> # … with 40 more rows
#>
#> $extras
#> $extras$offset
#> NULL
## Validation
One of the most useful things about forge() is its robustness against malformed new data. It isn’t unreasonable to enforce that the new data a user provides at prediction time should have the same type as the data used at fit time. Type is defined in the vctrs sense, and for our uses essentially means that a number of checks on the test data have to pass, including:
• The column names of the testing data and training data must be the same.
• The type of each column of the testing data must be the same as the columns found in the training data. This means:
  • The classes must be the same (e.g. if it was a factor in training, it must be a factor in testing).
  • The attributes must be the same (e.g. the levels of the factors must also be the same).
Almost all of this validation is possible through the use of vctrs::vec_cast(), and is called for you by forge().
### Column existence
The easiest example to demonstrate is missing columns in the testing data. forge() won’t let you continue until all of the required predictors used at training are also present in the new data.
test_missing_column <- subset(iris_test, select = -Species)
forge(test_missing_column, formula_eng)
#> Error: The following required columns are missing: 'Species'.
### Column types
After an initial scan for the column names is done, a deeper scan of each column is performed, checking the type of that column. For instance, what happens if the new Species column was a double, not a factor?
test_species_double <- iris_test
test_species_double$Species <- as.double(test_species_double$Species)
forge(test_species_double, formula_eng)
#> Error: Can't cast x$Species <double> to to$Species <factor<12d60>>.
An error is thrown, indicating that a double can’t be cast to a factor.
### Lossless conversion
The error message above suggests that in some cases you can automatically cast from one type to another, and in fact that is true!
Rather than being a double, what if Species was just a character?
test_species_character <- iris_test
test_species_character$Species <- as.character(test_species_character$Species)
forged_char <- forge(test_species_character, formula_eng)
forged_char$predictors
#> # A tibble: 50 x 2
#> Petal.Width Species
#> <dbl> <fct>
#> 1 2.5 virginica
#> 2 1.9 virginica
#> 3 2.1 virginica
#> 4 1.8 virginica
#> 5 2.2 virginica
#> 6 2.1 virginica
#> 7 1.7 virginica
#> 8 1.8 virginica
#> 9 1.8 virginica
#> 10 2.5 virginica
#> # … with 40 more rows
class(forged_char$predictors$Species)
#> [1] "factor"
levels(forged_char$predictors$Species)
#> [1] "setosa" "versicolor" "virginica"
Interesting, so in this case we can actually convert to a factor, and the class and even the levels are all restored. The key here is that this was a lossless conversion. We lost no information when converting the character Species to a factor because the unique character values were a subset of the original levels.
An example of a conversion that would be lossy is if the character Species column had a value that was not a level in the training data.
test_species_lossy <- iris_test
test_species_lossy$Species <- as.character(test_species_lossy$Species)
test_species_lossy$Species[2] <- "im new!"
forged_lossy <- forge(test_species_lossy, formula_eng)
#> Warning: Novel levels found in column 'Species': 'im new!'. The levels have been
#> removed, and values have been coerced to 'NA'.
forged_lossy$predictors
#> # A tibble: 50 x 2
#> Petal.Width Species
#> <dbl> <fct>
#> 1 2.5 virginica
#> 2 1.9 <NA>
#> 3 2.1 virginica
#> 4 1.8 virginica
#> 5 2.2 virginica
#> 6 2.1 virginica
#> 7 1.7 virginica
#> 8 1.8 virginica
#> 9 1.8 virginica
#> 10 2.5 virginica
#> # … with 40 more rows
In this case:
• A lossy warning is thrown
• The Species column is still converted to a factor with the right levels
• The novel level is removed and its value is set to NA
## Recipes and forge()
Just like with the formula method, a recipe can be used as the preprocessor at fit and prediction time. hardhat handles calling prep(), juice(), and bake() for you at the right times. For instance, say we have a recipe that just creates dummy variables out of Species.
library(recipes)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
#>
#> Attaching package: 'recipes'
#> The following object is masked from 'package:stats':
#>
#> step
rec <- recipe(Sepal.Width ~ Sepal.Length + Species, iris_train) %>%
step_dummy(Species)
iris_recipe <- mold(rec, iris_train)
iris_recipe$predictors
#> # A tibble: 100 x 3
#> Sepal.Length Species_versicolor Species_virginica
#> <dbl> <dbl> <dbl>
#> 1 5.1 0 0
#> 2 4.9 0 0
#> 3 4.7 0 0
#> 4 4.6 0 0
#> 5 5 0 0
#> 6 5.4 0 0
#> 7 4.6 0 0
#> 8 5 0 0
#> 9 4.4 0 0
#> 10 4.9 0 0
#> # … with 90 more rows
The blueprint is a recipe blueprint.
recipe_eng <- iris_recipe$blueprint
recipe_eng
#> Recipe blueprint:
#>
#> # Predictors: 2
#> # Outcomes: 1
#> Intercept: FALSE
#> Novel Levels: FALSE
When we forge(), we can request outcomes to have the predictors and outcomes separated like with the formula method.
forge(iris_test, recipe_eng, outcomes = TRUE)
#> $predictors
#> # A tibble: 50 x 3
#> Sepal.Length Species_versicolor Species_virginica
#> <dbl> <dbl> <dbl>
#> 1 6.3 0 1
#> 2 5.8 0 1
#> 3 7.1 0 1
#> 4 6.3 0 1
#> 5 6.5 0 1
#> 6 7.6 0 1
#> 7 4.9 0 1
#> 8 7.3 0 1
#> 9 6.7 0 1
#> 10 7.2 0 1
#> # … with 40 more rows
#>
#> $outcomes
#> # A tibble: 50 x 1
#> Sepal.Width
#> <dbl>
#> 1 3.3
#> 2 2.7
#> 3 3
#> 4 2.9
#> 5 3
#> 6 3
#> 7 2.5
#> 8 2.9
#> 9 2.5
#> 10 3.6
#> # … with 40 more rows
#>
#> $extras
#> $extras$roles
#> NULL
### A note on recipes
One complication with recipes is that, in the bake() step, the processing happens to the predictors and the outcomes all together. This means that you might run into the situation where the outcomes seem to be required by forge(), even if you aren’t requesting them.
rec2 <- recipe(Sepal.Width ~ Sepal.Length + Species, iris_train) %>%
step_dummy(Species) %>%
step_center(Sepal.Width) # Here we modify the outcome
iris_recipe2 <- mold(rec2, iris_train)
recipe_eng_log_outcome <- iris_recipe2$blueprint
If our new_data doesn’t have the outcome, baking this recipe will fail even if we don’t request that the outcomes are returned by forge().
iris_test_no_outcome <- subset(iris_test, select = -Sepal.Width)
forge(iris_test_no_outcome, recipe_eng_log_outcome)
#> Error: Can't find column Sepal.Width in .data.
The way around this is to use the built-in recipe argument, skip, on the step containing the outcome. This skips the processing of that step at bake() time.
rec3 <- recipe(Sepal.Width ~ Sepal.Length + Species, iris_train) %>%
step_dummy(Species) %>%
step_center(Sepal.Width, skip = TRUE)
iris_recipe3 <- mold(rec3, iris_train)
recipe_eng_skip_outcome <- iris_recipe3$blueprint
forge(iris_test_no_outcome, recipe_eng_skip_outcome)
#> $predictors
#> # A tibble: 50 x 3
#> Sepal.Length Species_versicolor Species_virginica
#> <dbl> <dbl> <dbl>
#> 1 6.3 0 1
#> 2 5.8 0 1
#> 3 7.1 0 1
#> 4 6.3 0 1
#> 5 6.5 0 1
#> 6 7.6 0 1
#> 7 4.9 0 1
#> 8 7.3 0 1
#> 9 6.7 0 1
#> 10 7.2 0 1
#> # … with 40 more rows
#>
#> $outcomes
#> NULL
#>
#> $extras
#> $extras$roles
#> NULL
There is a tradeoff here that you need to be aware of.
• If you are just interested in generating predictions on completely new data, you can safely use skip = TRUE because you will almost never have access to the corresponding true outcomes to preprocess and compare against.
• If you know you need to do resampling, you will likely have access to the outcomes during the resampling step so you can cross-validate the performance. In this case, you can’t set skip = TRUE because then the outcomes won’t be processed, but since you have access to them, you shouldn’t need to.
For example, if we used iris_test with the above recipe (which has the outcome), Sepal.Width wouldn’t get centered when forge() is called. But we probably would not have skipped that step if we knew that our test data would have the outcome.
forge(iris_test, recipe_eng_skip_outcome, outcomes = TRUE)$outcomes
#> # A tibble: 50 x 1
#> Sepal.Width
#> <dbl>
#> 1 3.3
#> 2 2.7
#> 3 3
#> 4 2.9
#> 5 3
#> 6 3
#> 7 2.5
#> 8 2.9
#> 9 2.5
#> 10 3.6
#> # … with 40 more rows

# Notice that the outcome values haven't been centered
# and are the same as before
head(iris_test$Sepal.Width)
#> [1] 3.3 2.7 3.0 2.9 3.0 3.0
https://gamedev.meta.stackexchange.com/questions/2175/is-it-even-possible-to-improve-complex-questions

# Is it even possible to improve complex questions?
How do you improve game programming questions that need to take many aspects into account before being answered?
I asked How to program destructibles? and it has been (more or less fairly) put on hold due to "too many questions". It requires the expertise of someone experienced in the field of data structures who has worked with many of those "destructible" objects (or similar stuff). I implemented only two of those over the last month and mentioned some (because there were many) of the issues that arose during that time in hope of avoiding further mistakes. If I don't mention them, all I'll get are answers like this one, which is obviously not what I came for.
Based on the community reaction, I realized questions regarding complex programming problems are above the level of advancement this community is willing to analyze. Should I take such issues to SO even though they are game-development related?
As far as I can see, the main problem with your question (in the last edit) is the last line: "Is this the same as with a destructible terrain in other games?"
The original question was actually a combination of multiple smaller questions. I do agree that a comprehensive answer might (should) answer all those smaller questions, but then again, with the new format even answers that are not that detailed are welcome.
In general, as far as I can tell, you can ask complex questions, but you shouldn't force any formatting onto answers. But I usually am accused of being too permissive, and I'm not a mod, so we should wait for the higher-ups to respond.
Sure it is.
> Based on the community reaction, I realized questions regarding complex programming problems are above the level of advancement this community is willing to analyze. Should I take such issues to SO even though they are game-development related?
Complexity isn't an issue, but broadness is. StackExchange isn't a discussion forum, and overly broad questions don't tend to have single objective reasonably-concise answers. Instead, they tend to generate discussion, which is excellent, but not something SE wants to host (and indeed, not something the resulting site software is well-suited for).
Generally if one could write a book (or several chapters thereof) on the topic of the question, it's going to be too broad. The original revision of your question certainly fits that bill.
• You're asking if things are a "good choice." It is rare in game development for something to always be objectively the best option, and so a good answer to this kind of question usually involves back-and-forth discussion of a lot of the nit-picky details of one's projects, and/or an extensive breakdown of the list of pros and cons to a particular choice in a vacuum and long discussion of potential alternatives.
• You're asking multiple unrelated questions. Each of your original four main questions could be a question on its own, and in your subsequent paragraphs you often pose one or two more questions.
• You're asking us to see the future. "Will this bite me later?" for example, is not something we can objectively answer in a way that improves the site's index of knowledge for posterity (we can only really say "maybe, maybe not").
• You don't scope the questions well. You don't provide a lot of detail to narrow down the problem space; without doing so, answerers need to contend with any possible scenario.
Addressing these issues will improve the question without really reducing its complexity (destructible objects are still a complex topic no matter how you slice them). Focus on explaining what you tried, what you want, and what you got instead. The more specific and directed you can make your question, the better, as long as it isn't so specific as to be "here's all my code, fix the problem please."
You'd likely run into the same barriers on SO. You're certainly free to try to ask there if you don't want to improve the question here, but I suspect you'll be met with the same result. In the original form you posted it, your question just isn't well-suited to the SE model. A site like GDNet, though, would work well because it is a discussion forum.
• I guess I wanted to start a discussion because I find the subject interesting and was trying to get another perspective before I commit to a final design. So... yes, you're absolutely right. I still think questions like: "will this bite me?", "is this a good choice?" are always implied but then, since they are, they can be omitted. Is the final edit of the question more or less okay? – cprn May 31 '16 at 9:22
• @CyprianGuerra It's fine to imply those questions if there is also some other concrete question there. Questions like "here is my plan, I will do X Y and Z, will that hurt me in the future?" just aren't a great fit here. I think with the edits the question is more reasonably scoped, and it already had three reopen votes, so I reopened it. – Josh Jun 3 '16 at 16:36
https://ask.openstack.org/en/questions/79451/revisions/

# Revision history
### Glance with SSL: sslv3 alert handshake failure
Hi, I'm currently trying to reconfigure a working OpenStack test environment that I've set up using the OpenStack guide for Ubuntu 14.04 [1]. I want each service to use SSL so the traffic between the nodes is encrypted. Keystone already works using SSL (tested using keystone --insecure endpoint-list). I've used keystone-manage ssl_setup to generate the certs and keys. For now I want to use the same certs and keys for every service. Unfortunately I'm getting the following error with glance:
curl https://ControllerNode.sdn:9292 -k
glance --insecure --debug image-list
curl -i -X GET -H 'User-Agent: python-glanceclient' -H 'Content-Type: application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'X-Auth-Token: ***' -k --cert None --key None https://ControllerNode.sdn:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20
glance-api.conf
...
cert_file = /etc/glance/ssl/certs/keystone.pem
key_file = /etc/glance/ssl/private/keystonekey.pem
ca_file = /etc/glance/ssl/certs/ca.pem
...
registry_client_protocol = https
registry_client_key_file = /etc/glance/ssl/private/keystonekey.pem
registry_client_cert_file = /etc/glance/ssl/certs/keystone.pem
registry_client_ca_file = /etc/glance/ssl/certs/ca.pem
registry_client_insecure = True
...
glance-registry.conf
cert_file = /etc/glance/ssl/certs/keystone.pem
key_file = /etc/glance/ssl/private/keystonekey.pem
ca_file = /etc/glance/ssl/certs/ca.pem
Does anyone happen to know what the problem could be in this case? I'm assuming it is a Glance related problem because Keystone seems to work fine.
Python 2.7.6
curl 7.35.0
[1] http://docs.openstack.org/juno/install-guide/install/apt/content/ch_preface.html
https://webot.org/info/en/?search=Schwarz_lantern

# Schwarz lantern
https://en.wikipedia.org/wiki/Schwarz_lantern
Schwarz lantern on display in the German Museum of Technology, Berlin
In mathematics, the Schwarz lantern is a polyhedral approximation to a cylinder, used as a pathological example of the difficulty of defining the area of a smooth (curved) surface as the limit of the areas of polyhedra. It is formed by stacked rings of isosceles triangles, arranged within each ring in the same pattern as an antiprism. The resulting shape can be folded from paper, and is named after mathematician Hermann Schwarz and for its resemblance to a cylindrical paper lantern. [1] It is also known as Schwarz's boot, [2] Schwarz's polyhedron, [3] or the Chinese lantern. [4]
As Schwarz showed, for the surface area of a polyhedron to converge to the surface area of a curved surface, it is not sufficient to simply increase the number of rings and the number of isosceles triangles per ring. Depending on the relation of the number of rings to the number of triangles per ring, the area of the lantern can converge to the area of the cylinder, to a limit arbitrarily larger than the area of the cylinder, or to infinity—in other words, the area can diverge. The Schwarz lantern demonstrates that sampling a curved surface by close-together points and connecting them by small triangles is inadequate to ensure an accurate approximation of area, in contrast to the accurate approximation of arc length by inscribed polygonal chains.
The phenomenon that closely sampled points can lead to inaccurate approximations of area has been called the Schwarz paradox. [5] [6] The Schwarz lantern is an instructive example in calculus and highlights the need for care when choosing a triangulation for applications in computer graphics and the finite element method.
## History and motivation
The staircase paradox: polygonal chains of length $2$ converge in distance to a diagonal segment of length $\sqrt{2}$, without converging to the same length.
Archimedes approximated the circumference of circles by the lengths of inscribed or circumscribed regular polygons. [7] [8] More generally, the length of any smooth or rectifiable curve can be defined as the supremum of the lengths of polygonal chains inscribed in them. [1] However, for this to work correctly, the vertices of the polygonal chains must lie on the given curve, rather than merely near it. Otherwise, in a counterexample sometimes known as the staircase paradox, polygonal chains of vertical and horizontal line segments of total length $2$ can lie arbitrarily close to a diagonal line segment of length $\sqrt{2}$, converging in distance to the diagonal segment but not converging to the same length. The Schwarz lantern provides a counterexample for surface area rather than length, [9] and shows that for area, requiring vertices to lie on the approximated surface is not enough to ensure an accurate approximation. [1]
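The staircase paradox lends itself to a quick numerical illustration. The sketch below is my own (not from the article): it measures the total length of a $k$-step staircase inscribed near the unit square's diagonal, together with how far the staircase strays from the diagonal.

```python
import math

def staircase_length(k):
    # k horizontal runs and k vertical rises, each of length 1/k:
    # the total length is 2k * (1/k) = 2, independent of k.
    return 2 * k * (1.0 / k)

def max_distance_to_diagonal(k):
    # The staircase's corners sit at points like (i/k, (i-1)/k); their
    # distance to the line y = x is |x - y| / sqrt(2) = (1/k) / sqrt(2).
    return (1.0 / k) / math.sqrt(2)

for k in (1, 10, 1000):
    print(k, staircase_length(k), max_distance_to_diagonal(k))

# The length stays at 2 for every k, while the distance to the diagonal
# shrinks toward 0 -- yet the diagonal itself has length sqrt(2).
```

The Schwarz lantern exhibits the same failure mode, one dimension up, for area instead of length.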
Hermann Schwarz
German mathematician Hermann Schwarz (1843–1921) devised his construction in the late 19th century [a] as a counterexample to the erroneous definition in J. A. Serret's 1868 book Cours de calcul différentiel et intégral, [12] which incorrectly states that:

Soit une portion de surface courbe terminée par un contour $C$; nous nommerons aire de cette surface la limite $S$ vers laquelle tend l'aire d'une surface polyédrale inscrite formée de faces triangulaires et terminée par un contour polygonal $\Gamma$ ayant pour limite le contour $C$.

Il faut démontrer que la limite $S$ existe et qu'elle est indépendante de la loi suivant laquelle décroissent les faces de la surface polyédrale inscrite.

Let a portion of curved surface be bounded by a contour $C$; we will define the area of this surface to be the limit $S$ tended towards by the area of an inscribed polyhedral surface formed from triangular faces and bounded by a polygonal contour $\Gamma$ whose limit is the contour $C$.

It must be shown that the limit $S$ exists and that it is independent of the law according to which the faces of the inscribed polyhedral surface shrink.
Independently of Schwarz, Giuseppe Peano found the same counterexample. [10] At the time, Peano was a student of Angelo Genocchi, who, from communication with Schwarz, already knew about the difficulty of defining surface area. Genocchi informed Charles Hermite, who had been using Serret's erroneous definition in his course. Hermite asked Schwarz for details, revised his course, and published the example in the second edition of his lecture notes (1883). [11] The original note from Schwarz to Hermite was not published until the second edition of Schwarz's collected works in 1890. [13] [14]
An instructive example of the value of careful definitions in calculus, [5] the Schwarz lantern also highlights the need for care in choosing a triangulation for applications in computer graphics and for the finite element method for scientific and engineering simulations. [6] [15] In computer graphics, scenes are often described by triangulated surfaces, and accurate rendering of the illumination of those surfaces depends on the direction of the surface normals. A poor choice of triangulation, as in the Schwarz lantern, can produce an accordion-like surface whose normals are far from the normals of the approximated surface, and the closely-spaced sharp folds of this surface can also cause problems with aliasing. [6]
The failure of Schwarz lanterns to converge to the cylinder's area only happens when they include highly obtuse triangles, with angles close to 180°. In restricted classes of Schwarz lanterns using angles bounded away from 180°, the area converges to the same area as the cylinder as the number of triangles grows to infinity. The finite element method, in its most basic form, approximates a smooth function (often, the solution to a physical simulation problem in science or engineering) by a piecewise-linear function on a triangulation. The Schwarz lantern's example shows that, even for simple functions such as the height of a cylinder above a plane through its axis, and even when the function values are calculated accurately at the triangulation vertices, a triangulation with angles close to 180° can produce highly inaccurate simulation results. This motivates mesh generation methods for which all angles are bounded away from 180°, such as nonobtuse meshes. [15]
## Construction
Antiprism based on a regular 17-gon. Omitting the two 17-gon faces produces a Schwarz lantern with parameters $m=1$ and $n=17$. Other Schwarz lanterns with $n=17$ can be obtained by stacking $m$ copies of this antiprism.

The discrete polyhedral approximation considered by Schwarz can be described by two parameters: $m$, the number of rings of triangles in the Schwarz lantern; and $n$, half of the number of triangles per ring. [16] [b] For a single ring ($m=1$), the resulting surface consists of the triangular faces of an antiprism of order $n$. For larger values of $m$, the Schwarz lantern is formed by stacking $m$ of these antiprisms. [6] To construct a Schwarz lantern that approximates a given right circular cylinder, the cylinder is sliced by parallel planes into $m$ congruent cylindrical rings. These rings have $m+1$ circular boundaries—two at the ends of the given cylinder, and $m-1$ more where it was sliced. In each circle, $n$ vertices of the Schwarz lantern are spaced equally, forming a regular polygon. These polygons are rotated by an angle of $\pi/n$ from one circle to the next, so that each edge from a regular polygon and the nearest vertex on the next circle form the base and apex of an isosceles triangle. These triangles meet edge-to-edge to form the Schwarz lantern, a polyhedral surface that is topologically equivalent to the cylinder. [16]

Origami crease pattern for a Schwarz lantern with $m=16$ and $n=4$
Detail of a boot from the painting Saint Florian (1473) by Francesco del Cossa, showing Yoshimura buckling
Ignoring top and bottom vertices, each vertex touches two apex angles and four base angles of congruent isosceles triangles, just as it would in a tessellation of the plane by triangles of the same shape. As a consequence, the Schwarz lantern can be folded from a flat piece of paper, with this tessellation as its crease pattern. [18] This crease pattern has been called the Yoshimura pattern, [19] after the work of Y. Yoshimura on the Yoshimura buckling pattern of cylindrical surfaces under axial compression, which can be similar in shape to the Schwarz lantern. [20]
## Area
The area of the Schwarz lantern, for any cylinder and any particular choice of the parameters $m$ and $n$, can be calculated by a straightforward application of trigonometry. A cylinder of radius $r$ and length $\ell$ has area $2\pi r\ell$. For a Schwarz lantern with parameters $m$ and $n$, each band is a shorter cylinder of length $\ell/m$, approximated by $2n$ isosceles triangles. The length of the base of each triangle can be found from the formula for the edge length of a regular $n$-gon, namely [16]

$$2r\sin\frac{\pi}{n}.$$

The height $h$ of each triangle can be found by applying the Pythagorean theorem to a right triangle formed by the apex of the triangle, the midpoint of the base, and the midpoint of the arc of the circle bounded by the endpoints of the base. The two sides of this right triangle are the length $\ell/m$ of the cylindrical band, and the sagitta of the arc, [c] giving the formula [16]

$$h^{2}=\left(\frac{\ell}{m}\right)^{2}+\left(r\left(1-\cos\frac{\pi}{n}\right)\right)^{2}.$$

Combining the formula for the area of each triangle from its base and height, and the total number $2mn$ of the triangles, gives the Schwarz lantern a total area of [16]

$$A(m,n)=2mn\left(r\sin\frac{\pi}{n}\right)\sqrt{\left(\frac{\ell}{m}\right)^{2}+r^{2}\left(1-\cos\frac{\pi}{n}\right)^{2}}.$$
## Limits
Animation of Schwarz-lantern convergence (or lack thereof) for various relations between its two parameters
The Schwarz lanterns, for large values of both parameters, converge uniformly to the cylinder that they approximate. [21] However, because there are two free parameters $m$ and $n$, the limiting area of the Schwarz lantern, as both $m$ and $n$ become arbitrarily large, can be evaluated in different orders, with different results. If $m$ is fixed while $n$ grows, and the resulting limit is then evaluated for arbitrarily large choices of $m$, one obtains [16]

$$\lim_{m\to\infty}\lim_{n\to\infty}A(m,n)=2\pi r\ell,$$

the correct area for the cylinder. In this case, the inner limit already converges to the same value, and the outer limit is superfluous. Geometrically, substituting each cylindrical band by a band of very sharp isosceles triangles accurately approximates its area. [16]

On the other hand, reversing the ordering of the limits gives [16]

$$\lim_{n\to\infty}\lim_{m\to\infty}A(m,n)=\infty.$$

In this case, for a fixed choice of $n$, as $m$ grows and the length $\ell/m$ of each cylindrical band becomes arbitrarily small, each corresponding band of isosceles triangles becomes nearly planar. Each triangle approaches the triangle formed by two consecutive edges of a regular $2n$-gon, and the area of the whole band of triangles approaches $2n$ times the area of one of these planar triangles, a finite number. However, the number $m$ of these bands grows arbitrarily large; because the lantern's area grows in approximate proportion to $m$, it also becomes arbitrarily large. [16]

It is also possible to fix a functional relation between $m$ and $n$, and to examine the limit as both parameters grow large simultaneously, maintaining this relation. Different choices of this relation can lead to either of the two behaviors described above, convergence to the correct area or divergence to infinity. For instance, setting $m=cn$ (for an arbitrary constant $c$) and taking the limit for large $n$ leads to convergence to the correct area, while setting $m=cn^{3}$ leads to divergence. A third type of limiting behavior is obtained by setting $m=cn^{2}$. For this choice,

$$\lim_{n\to\infty}A(cn^{2},n)=2\pi r\sqrt{\ell^{2}+\frac{r^{2}\pi^{4}c^{2}}{4}}.$$

In this case, the area of the Schwarz lantern, parameterized in this way, converges, but to a larger value than the area of the cylinder. Any desired larger area can be obtained by making an appropriate choice of the constant $c$. [16]
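These limiting behaviors can be checked numerically from the closed-form area. The sketch below is an illustration of my own (with the arbitrary choices $c = 1$ and $r = \ell = 1$), not code from the article:

```python
import math

def schwarz_area(m, n, r=1.0, ell=1.0):
    # A(m, n) = 2mn * r*sin(pi/n) * sqrt((ell/m)^2 + r^2*(1 - cos(pi/n))^2)
    height = math.sqrt((ell / m) ** 2 + (r * (1 - math.cos(math.pi / n))) ** 2)
    return 2 * m * n * r * math.sin(math.pi / n) * height

n = 1000
print(schwarz_area(n, n))     # m = n:   close to the cylinder area 2*pi ≈ 6.2832
print(schwarz_area(n**2, n))  # m = n^2: close to 2*pi*sqrt(1 + pi^4/4) ≈ 31.64
print(schwarz_area(n**3, n))  # m = n^3: far larger than the cylinder area
```

Increasing `n` further pushes the first value toward $2\pi$ and the second toward the finite limit above, while the third keeps growing.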
## See also

• Kaleidocycle, a chain of tetrahedra linked edge-to-edge like a degenerate Schwarz lantern with $n=2$
• Runge's phenomenon, another example of failure of convergence
## Notes
1. ^ Gandon & Perrin (2009) place the timing more precisely as the early 1890s, [10] but this is contradicted by Hermite's use of this example in 1883. Kennedy (1980) dates Schwarz's communication to Genocchi on this topic to 1880, and Peano's rediscovery to 1882. [11]
2. ^ Other sources may use different parameterizations; for instance, Dubrovsky (1991) uses $k$ instead of $m$ to denote the number of cylinders. [17]
3. ^ The sagitta of a circular arc is the distance from the midpoint of the arc to the midpoint of its chord.
## References
1. ^ a b c Makarov, Boris; Podkorytov, Anatolii (2013). "Section 8.2.4". Real analysis: measures, integrals and applications. Universitext. Berlin: Springer-Verlag. pp. 415–416. doi: 10.1007/978-1-4471-5122-7. ISBN 978-1-4471-5121-0. MR 3089088.
2. ^ Bernshtein, D. (March–April 1991). "Toy store: Latin triangles and fashionable footwear" (PDF). Quantum: The Magazine of Math and Science. Vol. 1, no. 4. p. 64.
3. ^ Wells, David (1991). "Schwarz's polyhedron". The Penguin Dictionary of Curious and Interesting Geometry. New York: Penguin Books. pp. 225–226. ISBN 978-0-14-011813-1.
4. ^ Berger, Marcel (1987). Geometry I. Universitext. Berlin: Springer-Verlag. pp. 263–264. doi: 10.1007/978-3-540-93815-6. ISBN 978-3-540-11658-5. MR 2724360.
5. ^ a b Atneosen, Gail H. (March 1972). "The Schwarz paradox: An interesting problem for the first-year calculus student". The Mathematics Teacher. 65 (3): 281–284. doi: 10.5951/MT.65.3.0281. JSTOR 27958821.
6. ^ a b c d Glassner, A. (1997). "The perils of problematic parameterization". IEEE Computer Graphics and Applications. 17 (5): 78–83. doi: 10.1109/38.610212.
7. ^ Traub, Gilbert (1984). The Development of the Mathematical Analysis of Curve Length from Archimedes to Lebesgue (Doctoral dissertation). New York University. p. 470. MR 2633321. ProQuest 303305072.
8. ^ Brodie, Scott E. (1980). "Archimedes' axioms for arc-length and area". Mathematics Magazine. 53 (1): 36–39. doi: 10.1080/0025570X.1980.11976824. JSTOR 2690029. MR 0560018.
9. ^ Ogilvy, C. Stanley (1962). "Note to page 7". Tomorrow's Math: Unsolved Problems for the Amateur. Oxford University Press. pp. 155–161.
10. ^ a b Gandon, Sébastien; Perrin, Yvette (2009). "Le problème de la définition de l'aire d'une surface gauche: Peano et Lebesgue" (PDF). Archive for History of Exact Sciences (in French). 63 (6): 665–704. doi: 10.1007/s00407-009-0051-4. JSTOR 41134329. MR 2550748. S2CID 121535260.
11. ^ a b Kennedy, Hubert C. (1980). Peano: Life and works of Giuseppe Peano. Studies in the History of Modern Science. Vol. 4. Dordrecht & Boston: D. Reidel Publishing Co. pp. 9–10. ISBN 90-277-1067-8. MR 0580947.
12. ^ Serret, J. A. (1868). Cours de calcul différentiel et intégral, Tome second: Calcul intégral (in French). Paris: Gauthier-Villars. p. 296.
13. ^ Schwarz, H. A. (1890). "Sur une définition erronée de l'aire d'une surface courbe". Gesammelte Mathematische Abhandlungen von H. A. Schwarz (in French). Verlag von Julius Springer. pp. 309–311.
14. ^ Archibald, Thomas (2002). "Charles Hermite and German mathematics in France". In Parshall, Karen Hunger; Rice, Adrian C. (eds.). Mathematics unbound: the evolution of an international mathematical research community, 1800–1945. Papers from the International Symposium held at the University of Virginia, Charlottesville, VA, May 27–29, 1999. History of Mathematics. Vol. 23. Providence, Rhode Island: American Mathematical Society. pp. 123–137. MR 1907173. See footnote 60, p. 135.
15. ^ a b Bern, M.; Mitchell, S.; Ruppert, J. (1995). "Linear-size nonobtuse triangulation of polygons". Discrete & Computational Geometry. 14 (4): 411–428. doi: 10.1007/BF02570715. MR 1360945. S2CID 120526239.
16. Zames, Frieda (September 1977). "Surface area and the cylinder area paradox". The Two-Year College Mathematics Journal. 8 (4): 207–211. doi: 10.2307/3026930. JSTOR 3026930.
17. ^ Dubrovsky, Vladimir (March–April 1991). "In search of a definition of surface area" (PDF). Quantum: The Magazine of Math and Science. Vol. 1, no. 4. pp. 6–9.
18. ^ Lamb, Evelyn (30 November 2013). "Counterexamples in origami". Roots of unity. Scientific American.
19. ^ Miura, Koryo; Tachi, Tomohiro (2010). "Synthesis of rigid-foldable cylindrical polyhedra" (PDF). Symmetry: Art and Science, 8th Congress and Exhibition of ISIS. Gmünd.
20. ^ Yoshimura, Yoshimaru (July 1955). On the mechanism of buckling of a circular cylindrical shell under axial compression. Technical Memorandum 1390. National Advisory Committee for Aeronautics.
21. ^ Polthier, Konrad (2005). "Computational aspects of discrete minimal surfaces" (PDF). In Hoffman, David (ed.). Global theory of minimal surfaces: Proceedings of the Clay Mathematical Institute Summer School held in Berkeley, CA, June 25 – July 27, 2001. Clay Mathematics Proceedings. Vol. 2. Providence, Rhode Island: American Mathematical Society. pp. 65–111. doi: 10.1016/j.cagd.2005.06.010. MR 2167256.
https://breathmath.com/2017/06/10/area-of-parallelograms-and-triangles-exercise-9-1-class-ix/

# Area of Parallelograms and Triangles – Exercise 9.1 – Class IX
1. Which of the following figures lie on the same base and between the same parallels? In such a case, write the common base and the two parallels.
Solution:
(i) Yes. It is observed that trapezium ABCD and triangle PCD have the common base CD and lie between the same parallels AB and CD.
(ii) No. It is observed that parallelogram PQRS and trapezium MNRS have the common base RS, but they do not lie between the same parallels.
(iii) Yes. It is observed that parallelogram PQRS and triangle TQR have the common base QR and lie between the same parallels PS and QR.
(iv) No. It is observed that parallelogram ABCD and triangle PQR lie between the same parallel lines AD and BC, but they do not have a common base.
(v) Yes. It can be observed that parallelogram ABCD and parallelogram APQD have the common base AD and lie between the same parallel lines AD and BQ.
(vi) No. It can be observed that parallelograms PBCS and PQRS are lying on the same base PS, but they do not lie between the same parallels. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9652427434921265, "perplexity": 963.4400804912009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00131.warc.gz"}
http://aimsciences.org/article/doi/10.3934/ipi.2014.8.1151 | # American Institute of Mathematical Sciences
November 2014, 8(4): 1151-1167. doi: 10.3934/ipi.2014.8.1151
## The nonlinear Fourier transform for two-dimensional subcritical potentials
1 Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027, United States
Received: November 2013. Revised: June 2014. Published: November 2014.
The inverse scattering method for the Novikov-Veselov equation is studied for a larger class of Schrödinger potentials than could be handled previously. Previous work concerns so-called conductivity type potentials, which have a bounded positive solution at zero energy and are a nowhere dense set of potentials. We relax the conductivity type assumption to include logarithmically growing positive solutions at zero energy. These potentials are stable under perturbations. Assuming only that the potential is subcritical and has two weak derivatives in a weighted Sobolev space, we prove that the associated scattering transform can be inverted, and the original potential is recovered from the scattering data.
Citation: Michael Music. The nonlinear Fourier transform for two-dimensional subcritical potentials. Inverse Problems & Imaging, 2014, 8 (4) : 1151-1167. doi: 10.3934/ipi.2014.8.1151
2018 Impact Factor: 1.469 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7662803530693054, "perplexity": 4587.578763376398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524972.66/warc/CC-MAIN-20190716221441-20190717003441-00330.warc.gz"} |
http://tex.stackexchange.com/questions/134515/how-to-combine-latex-fonts-with-system-fonts-in-xelatex | # How to combine LaTeX fonts with system fonts in XeLaTeX
I am using URW Palladio L for my main text font. I do not have a URW Palladio L math font installed on my system, but I like the math font of Bitstream Charter. Alternatively, I could also use LaTeX's URW Palladio font.
This is how I set my main font:
\setmainfont{URW Palladio L}
What about the math font? Should I use \setmathfont, and if so, with what value? It's not a system font. If I simply load the package below, it looks like I get the behavior I want, except that the font for the bibliography heading, title, etc. becomes a font I have not seen before.
\usepackage[bitstream-charter]{mathdesign}
This seems really bogus; how do I solve this properly?
Welcome to TeX.SX! The TeX Gyre Pagella Math font for unicode-math is specifically designed for combining with Palladio (the TeX Gyre Pagella text font is based on URW Palladio). – egreg Sep 22 '13 at 10:42
@egreg Thanks for pointing out. So I would go to gust.org.pl/projects/e-foundry/tg-math/download/…, download the font, and add it through \setmathfont{TG Pagella Math}? – RevMoon Sep 22 '13 at 10:48
If you're running an update TeX Live distribution, you already have the font. – egreg Sep 22 '13 at 11:01
Asana Math (based on pxfonts) is also suitable. – Leo Liu Sep 22 '13 at 11:41
@RevMoon: You should use \setmathfont{TeX Gyre Pagella Math}, not \setmathfont{TeX Gyre Pagella}. – Leo Liu Sep 22 '13 at 12:22
There are many possible solutions, for example:
• A modern solution, use TeX Gyre Pagella (a Palatino clone) together with TeX Gyre Pagella Math or Asana Math.
\documentclass{article}
\usepackage{fontspec}
\setmainfont{TeX Gyre Pagella} % or URW Palladio L
\usepackage{unicode-math}
\setmathfont{TeX Gyre Pagella Math}
\begin{document}
Let $f$ be holomorphic on a closed disc $\overline{D}(z_0, R)$, $R>0$. Let $C_R$ be the circle bounding the disc. Then $f$ has a power series expansion
$f(z) = \sum_{n=0}^{\infty} \frac{(z-z_0)^n}{2\pi\mathrm{i}} \int_{C_R} \frac{f(\zeta)}{(\zeta-z_0)^{n+1}} \mathrm{d}\zeta.$
\end{document}
......
\setmainfont{TeX Gyre Pagella} % or URW Palladio L
\setmathfont{Asana Math}
......
• A traditional solution: use the pxfonts or newpxtext/newpxmath packages. Why not? We don't always need fontspec to select the main font. Note that you may sometimes need to change the font encoding between T1/OT1 and EU1 manually.
\documentclass{article}
\usepackage{fontspec}
\newfontfamily\inconsolata{Inconsolatazi4}
\usepackage{pxfonts}
\usepackage[T1]{fontenc}
\begin{document}
Let $f$ be holomorphic on a closed disc $\overline{D}(z_0, R)$, $R>0$. Let $C_R$ be the circle bounding the disc. Then $f$ has a power series expansion
$f(z) = \sum_{n=0}^{\infty} \frac{(z-z_0)^n}{2\piup\mathrm{i}} \int_{C_R} \frac{f(\zeta)}{(\zeta-z_0)^{n+1}} \mathrm{d}\zeta.$
{\inconsolata Something special. 01234}
\end{document}
• Exactly what you want: Palatino together with mathdesign. There is nothing magical about it:
......
\usepackage[charter]{mathdesign} % Use it before fontspec
\usepackage{fontspec}
\setmainfont{TeX Gyre Pagella} % or URW Palladio L
......
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9977099299430847, "perplexity": 6747.776969342506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860123840.94/warc/CC-MAIN-20160428161523-00197-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/equipotential-lines.243392/ | # Equipotential Lines
1. Jul 4, 2008
### conniechiwa
The figure shows the equipotential lines for another electric field. What is the approximate strength of the electric field at points A and B?
https://wug-s.physics.uiuc.edu/cgi/courses/shell/common/showme.pl?cc/DuPage/Phys1202/summer/homework/Ch-20-Potential/equipotential_lines/equ-lines-4.jpg
I used the equations:
Ex = -ΔV/Δx
= -(25V-40V)/(0.005m-0m)
= 3000 V/m
Ey = -ΔV/Δy
= -(25V-40V)/(0.004m-0m)
= 3750 V/m
EA = sqrt(Ex^2 + Ey^2)
= 4802 V/m
I'm not sure what else to do. Any pointers?
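The vector arithmetic above can be sketched in a few lines of Python. The potential and distance values are the ones read off the figure in the post, so treat them as assumptions:

```python
import math

# Field components from potential differences over the read-off distances
Ex = -(25 - 40) / (0.005 - 0)   # V/m
Ey = -(25 - 40) / (0.004 - 0)   # V/m

# Magnitude of the vector sum of the two components
E = math.hypot(Ex, Ey)
print(round(Ex), round(Ey), round(E))  # 3000 3750 4802
```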
2. Jul 4, 2008
### Kurdt
Staff Emeritus
It's mainly graph reading I believe, with no calculation involved. What do you think A and B will be according to the graph you were given?
3. Jul 4, 2008
### conniechiwa
I thought point A would be 4802 V/m. The hint the problem gave me was: Remember that although the electrical potential, V, is a scalar, the electric field, E, is a vector. You can break the vector E down into its components. That is Ex = ΔV/Δx and Ey = ΔV/Δy. Now just add up the components of the electric field to get the total strength. Remember you have to add them as you normally would any vector components.
I broke up the problem into x and y components like it told me to do, but I still can't get the right answer.
4. Jul 4, 2008
### Kurdt
Staff Emeritus
Sorry I thought it said potential.
I'm not sure why you've chosen those potentials as they're seemingly not related to the distances you have. Have a think about that again.
5. Jul 4, 2008
### conniechiwa
I'm not really sure what you're saying but... I chose 25V-40V because point A is in the middle of 30V and 20V and the graph starts at 40V. Point A also seems to be at (0.005m, 0.004m)
6. Jul 5, 2008
### Kurdt
Staff Emeritus
The graph doesn't seem to start at 40V from what I'm looking at.
7. Jul 5, 2008
### conniechiwa
The x-axis starts at 40V.
8. Jul 6, 2008
### G01
That doesn't make any sense. There is no equipotential line for that 40 V marker. Also, the x-axis can't be an equipotential line since multiple equipotentials of different values pass through it.
I think that 40V marker is a mistake since there is no equipotential line corresponding to it. Try to work out your values using numbers for which there are obvious equipotential lines. Avoid using that number 40V and see what answer you get.
9. Jul 6, 2008
### conniechiwa
I tried it again for point A and this is what I got....
Ex = -ΔV/Δx
= -(25V-0V)/(0.005m-0m)
= 5000 V/m
Ey = -ΔV/Δy
= -(25V-0V)/(0.004m-0m)
= 6250 V/m
EA = sqrt(Ex^2 + Ey^2)
= 8003.905 V/m
Am I picking the right values this time? I still can't get the right answer.
10. Jul 6, 2008
### Kurdt
Staff Emeritus
At x=0 and the same y value as the point, what is the potential? It isn't zero.
11. Jul 6, 2008
### conniechiwa
I'm not really sure what you're asking me....
12. Jul 7, 2008
### Kurdt
Staff Emeritus
I'm saying: if you move a little x-distance from the point, then what is the difference in potential?
13. Jul 7, 2008
### conniechiwa
Nevermind...I solved it. Thanks. =)
14. Jul 2, 2011
### violeta240
I have the same problem how do you know what numbers to use? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8383651971817017, "perplexity": 2421.170242852329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00250-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/100660/needing-a-nudge-with-a-commutative-algebra-question | # Needing a nudge with a Commutative Algebra Question
I have a commutative ring with identity $R$, and an $R$-module $M$. Next I have an $R$-submodule $N$ of $M$. Finally, I have a multiplicatively closed subset $S$ of $R$.
I am asked to show that $S^{-1}(M/N)$ is isomorphic to $(S^{-1}M)/(S^{-1}N)$.
I guessed at the following mapping:
Define $\phi:S^{-1}(M/N)\to (S^{-1}M)/(S^{-1}N)$ by $\phi(\frac{m+N}{s}) = \frac{m}{s} + S^{-1}N$.
I'm stuck trying to show that my map is well defined. Let's assume that $\frac{m_1+N}{s_1} = \frac{m_2+N}{s_2}$. Then I want to show that $\frac{m_1}{s_1} + S^{-1}N = \frac{m_2}{s_2} + S^{-1}N$
By the definition of the equivalence of elements of $S^{-1}(M/N)$, there is an $s\in S$ such that
$s[s_2(m_1 + N) - s_1(m_2 + N)] = N$ (since $N$ is the $0$ element of $M/N$).
Now I get from this that
$s[\{(s_2m_1) + N\} - \{(s_1m_2) + N\}] = N$
and thus
$s[(s_2m_1 - s_1m_2) + N] = N$
Then
$s(s_2m_1 - s_1m_2) + N = N$
which implies
$s(s_2m_1 - s_1m_2) \in N$
or
$ss_2m_1 - ss_1m_2 \in N$
which is the definition of
$ss_2m_1 + N = ss_1m_2 + N$.
Now somehow from here I need to get to the conclusion that $\frac{m_1}{s_1} + S^{-1}N = \frac{m_2}{s_2} + S^{-1}N$. That is, $\frac{m_1}{s_1} - \frac{m_2}{s_2} \in S^{-1}N$.
I've been stuck for a while now, any advice? or have I gone wrong somewhere?
-
Do you know that $S^{-1}A \otimes_A M \cong S^{-1}M$? The proof using that is nice too. – Dylan Moreland Jan 20 '12 at 4:13
Remember that $$\frac{m_1}{s_1}-\frac{m_2}{s_2} = \frac{s_2m_1-s_1m_2}{s_1s_2} = \frac{s(s_2m_1-s_1m_2)}{ss_1s_2}.$$ And you know something about $s(s_2m_1-s_1m_2)$, right?
You've clearly given me the hint I asked for. I'm embarrassed to say that I don't quite see the point, though. But yes, I know that the numerator is in $N$ at least. I'm a bit raw with this; I will give it some more thought. Thanks very much. :) – roo Jan 20 '12 at 4:19
Oh... since $S$ is multiplicatively closed, I have $ss_1s_2\in S$ and therefore the RHS is in $S^{-1}N$, right? – roo Jan 20 '12 at 4:21
@borninthe80s: Since $\frac{m_1}{s_1}-\frac{m_2}{s_2}$ can be written in the form $\frac{n}{t}$ with $n\in N$ and $t\in S$, then by definition it lies in $S^{-1}N$, so yes, this shows that $\frac{m_1}{s_1}-\frac{m_2}{s_2}\in S^{-1}N$, as needed. – Arturo Magidin Jan 20 '12 at 4:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9835757613182068, "perplexity": 127.71614932233365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929411.0/warc/CC-MAIN-20150521113209-00265-ip-10-180-206-219.ec2.internal.warc.gz"} |
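Putting the hint and the follow-up comments together, the well-definedness argument closes in one chain (a summary, in the notation of the question):

```latex
\[
\frac{m_1}{s_1}-\frac{m_2}{s_2}
  = \frac{s_2 m_1 - s_1 m_2}{s_1 s_2}
  = \frac{s\,(s_2 m_1 - s_1 m_2)}{s\, s_1 s_2} \in S^{-1}N,
\]
% since s(s_2 m_1 - s_1 m_2) lies in N by the computation in the question,
% and s s_1 s_2 lies in S because S is multiplicatively closed.
```

Hence $\frac{m_1}{s_1} + S^{-1}N = \frac{m_2}{s_2} + S^{-1}N$, so $\phi$ is well defined.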
http://journal.kenss.or.kr/journal/article.php?code=68865 | • For Contributors +
• Journal Search +
Journal Search Engine
ISSN : 1225-4517(Print)
ISSN : 2287-3503(Online)
Journal of Environmental Science International Vol.28 No.11 pp.1019-1025
DOI : https://doi.org/10.5322/JESI.2019.28.11.1019
# Analysis of Light Traits in a Solar Light-collector Device and its Effects on Lettuce Growth at an Early Growth Stage
Sanggyu Lee,Jaesu Lee,Jinho Won
## Abstract
The aim of this study was to analyze the light traits in a solar light-collector device and its effects on lettuce growth at an early growth stage. The three hyper parameters used were the reflector diameter (2 cm and 4 cm), coating inside the reflector (chrome-coated, non-coated) and distance from the light fiber (15 cm and 20 cm). The results showed that light efficiency, which is the ratio of light intensity inside the fiber to the solar intensity, improved by 41.1 % when using a 2 cm diameter chrome-coated reflector at a distance of 15 cm from the light fiber; whereas it only improved by 20.6% when a non-coated reflector was used. As the reflector size was increased to 4 cm, the light efficiency for the coated and non-coated reflectors increased by 28.5 % and 26.4 %, respectively, hence, no significant difference was observed. When the light fiber was placed at a distance of 20 cm, the increase in light efficiency with coating treatment was 8 % higher than without coating treatment. We also compared the efficiency of light-fiber treatment with that of LED treatment in our lettuce nursery, and observed that the plants exhibited better growth with light-fiber treatment. We observed an average increase of 1.7 cm in leaf height, 7 cm2/plant increase in leaf area, and 32 mm increase in root length upon light-fiber treatment as opposed to those observed with LED treatment. These findings indicate that the collector light-fiber is economically feasible and it improves lettuce growth compared with the LED treatment.
이상규,이재수,원진호 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150161743164062, "perplexity": 3237.2617491909145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540544696.93/warc/CC-MAIN-20191212153724-20191212181724-00043.warc.gz"} |
http://10000tb.org/python/blog-series-regex-Demystifying-regex-with-python.html | # Demystifying regex with python
Preface
When you want to extract a valid IP address from a string, or when you have a string with a known pattern and want to split it into a list, or when you simply want to check whether a string matches a pattern, how would you approach it?
I bet there are tons of creative solutions out there for each of the problems above, but here I would like to share my thoughts and exploration of the beauty of using the re regular expression module in Python to solve such problems.
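As a concrete taste of the first problem mentioned above, here is a hedged sketch that pulls IPv4-looking substrings out of a string. The pattern only checks the dotted-quad shape (it does not validate that each octet is in the 0–255 range), and the sample text is made up:

```python
import re

# Four dot-separated groups of 1-3 digits; a shape check only --
# it does not verify that each octet is <= 255.
ip_shape = re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b')

text = "host 10.0.0.7 talked to 192.168.1.300 and to nobody"
print(ip_shape.findall(text))  # ['10.0.0.7', '192.168.1.300']
```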
Introduction
It is said that regular expressions also form a language, but a relatively small and restricted one.
I have linked the official documentation on regular expressions from python.org, and there are exhaustive details there. This post, instead, will summarize regular expressions in bullet points and leave pointers to the official doc, where there are exhaustive details on everything.
1. Learn basic matching with metacharacters
To summarize regular expressions, one should first introduce metacharacters and how to do simple matching. From there, more syntax and advanced matching techniques come in naturally.
• Metacharacters are special characters used in regular expressions for special purposes. A complete list of metacharacters, and what they do in brief, is as follows:
• [ and ] - open and enclosing square brackets - They are used to specify a character class, which is a way to specify what character(s) to match in regular expression.
• Characters can be listed individually or by range with -: [abcs], [a-c]
• Metacharacters are not active inside [].
• Reverse matching with ^: putting ^ as first character inside a character class will match non-matching character(s): [^5](match any non-5 character), [^abc](match any character except a, b, and c)
• ^ - For now, let's say it does reverse matching when placed as the first character inside []. In addition, when placed outside a character class, usually at the beginning of a regular expression, it matches the start of a line.
• \ - It can be used to escape a metacharacter so that the metacharacter itself can be matched. In addition, some special sequences beginning with \ represent predefined sets of characters: \d, \D, \s, \S, \w, \W.
• . - Dot - It matches anything except a newline character.
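The character-class rules above can be tried directly in the interpreter; the sample strings below are made up for illustration:

```python
import re

# [abc] lists characters individually; [a-c] is the same class by range
print(re.findall(r'[abc]', 'a big cat'))   # ['a', 'b', 'c', 'a']

# ^ as the first character inside [] reverses the match
print(re.findall(r'[^5]', '3456'))         # ['3', '4', '6']

# \d is a predefined set (digits); . matches anything except a newline
print(re.findall(r'\d', 'room 101'))       # ['1', '0', '1']
print(re.findall(r'1.1', '101 1x1 1\n1'))  # ['101', '1x1']
```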
• Repeating.
With character classes, you can now match any single character. But that is not enough to solve practical problems, as there are scenarios where we need to match repetitions of characters. So a way to specify repetition in a pattern is needed. In regular expressions, the ways to specify repetition are:
* - zero or more.
+ - one or more.
? - zero or one.
{m,n} - at least m times and at most n times.
Important notes:
1. * matching is greedy. It will match as many repetitions as it can until it runs into the first mismatch; if a later portion of the pattern doesn't match, it will back up and try again with fewer repetitions.
2. {1,} is equivalent to +.
3. Omitting the upper bound results in an upper bound of infinity.
4. Omitting the lower bound is interpreted as a lower bound of zero.
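A quick sketch of the quantifiers above (Python 3 syntax; the sample strings are invented):

```python
import re

print(re.match(r'ab*', 'abbbc').group())     # 'abbb'  (* : zero or more)
print(re.match(r'ab+', 'ac'))                # None    (+ : needs at least one b)
print(re.match(r'ab?c', 'ac').group())       # 'ac'    (? : zero or one)
print(re.match(r'a{2,4}', 'aaaaa').group())  # 'aaaa'  (bounded repetition, greedy)
print(re.match(r'a{2,}', 'aaaaa').group())   # 'aaaaa' (omitted upper bound = infinity)
```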
• Use raw strings to represent regular expressions.
To avoid the backslash plague, use raw strings to write regular expressions concisely.
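For example, to match the literal text \section, the regex engine needs \\section, and without a raw string each backslash must be doubled again at the string-literal level (Python 3 sketch):

```python
import re

target = r'\section is here'

# Ordinary string literal: four backslashes typed for two in the pattern.
print(re.search('\\\\section', target).group())  # matches '\section'

# Raw string: what you type is what the regex engine sees.
print(re.search(r'\\section', target).group())   # same match, easier to read
```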
• Examples of using regular expressions.
One way is to first compile the regular expression into a pattern object, and then call matching methods on the pattern object against the target string.
import re
p = re.compile(r'^[a-z]+$')
p.match("IamAnExample")
• Functions available on a pattern object for performing matches. After we compile a regular expression, we can perform matching with the pattern's methods:
match() - determine if the RE matches at the beginning of the string.
search() - scan through the string, looking for a match at any location.
findall() - find all matching substring(s), returned as a list.
finditer() - find all matching substring(s), returned as an iterator of match objects.
In [155]: test = "I am a doctor hu a ha, meaning I am a real"
In [156]: import re
In [157]: p = re.compile(r'[a]{1,1}')
In [158]: p.findall(test)
Out[158]: ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a']
In [159]: it = p.finditer(test)
In [160]: for match in it:
    ...:     print match.span()
    ...:
(2, 3)
(5, 6)
(17, 18)
(20, 21)
(25, 26)
(33, 34)
(36, 37)
(40, 41)
• Functions available on a match object. After we get a match object, we can query info about the match:
group() - return the string matched.
start() - starting position of the match.
end() - ending position of the match.
span() - return a tuple containing (start, end).
• Compilation flags. We can specify compilation flags to modify the behavior of our regular expression when we compile it.
p = re.compile(r'JHadjad', re.IGNORECASE)  # ignore case when matching
Note that multiple flags are allowed, joined with |.
Flags available: Flags list
• Advanced/More Metacharacters.
| - alternation, low precedence. adad|TFSS is interpreted as adad or TFSS.
^ - match at the beginning of lines. In multiline mode (set with a compilation flag), it also matches immediately after each newline character.
$ - match at the end of a line -> either the end of the string or any location followed by a newline character.
\A - start of a string. \b - word boundary. \B - opposite of \b.
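A small runnable sketch of these anchors and flags (Python 3 syntax; the sample text is invented):

```python
import re

text = 'class A\nclass B'
# Without MULTILINE, ^ only matches at the very start of the string.
print(re.findall(r'^class \w', text))                # ['class A']
print(re.findall(r'^class \w', text, re.MULTILINE))  # ['class A', 'class B']

# \b matches at a word boundary; multiple flags are combined with |.
print(re.findall(r'\bcat\b', 'cat category CAT', re.IGNORECASE | re.MULTILINE))
# ['cat', 'CAT']  ('category' is skipped: no word boundary after 'cat')
```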
• Grouping.
marked by ( and ).
retrieved by m.groups(), or by m.group(1) (with the group number as argument)
Notes:
1. Identify a group by its opening parenthesis, since groups are numbered from left to right.
2. Group zero refers to the full string matched.
3. groups() returns all groups except group zero.
• Non-capturing and named groups.
Perl introduced (?...) as an extension syntax; because a ? right after ( was a syntax error in older regular expressions, the sequence was guaranteed not to be in use.
The Python re module adds a P after the question mark to denote extensions specific to Python.
For example:
(?P<name>...) represents a named group.
(?P=name) back reference to a named group.
non-capturing group
with (?:...) where ... can be any other regular expression.
Use it when we want to group part of a regular expression but aren't interested in retrieving the contents of that group.
In [1]: import re
In [2]: p = re.compile(r'[abc]+')
In [3]: m = p.match("aaaaaac")
In [4]: m.groups()
Out[4]: ()
In [5]: p = re.compile(r'([abc]+)')
In [6]: m = p.match("aaaaaac")
In [7]: m.groups()
Out[7]: ('aaaaaac',)
In [8]: m.group(0, 1)
Out[8]: ('aaaaaac', 'aaaaaac')
In [9]: p = re.compile(r'(?:[abc]+)')
In [10]: m = p.match("aaaaaac")
In [11]: m.groups()
Out[11]: ()
In [12]: m.group(0, 1)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-12-f65d9b02528f> in <module>()
----> 1 m.group(0, 1)
IndexError: no such group
In [13]: m.group(0)
Out[13]: 'aaaaaac'
Notes:
1. There is no difference between a capturing group and a non-capturing group except that a non-capturing group does not record the content it matched.
2. There is no performance difference between a capturing group and a non-capturing group.
3. With non-capturing groups we can add a new group without changing how the existing groups are numbered.
named group
In [21]: p = re.compile(r"(?P<alpha>[a-zA-Z]+)(?P<num>[0-9]+)(?P<other>[#$!!@#@]*)(?P=alpha)")
In [22]: m = p.match("iamdavid01234!iam")
In [23]: m
In [24]: print m
None
In [25]: m = p.match("iamdavid01234!iamdavid")
In [26]: print m
<_sre.SRE_Match object at 0x1032a8880>
In [27]: m.groups()
Out[27]: ('iamdavid', '01234', '!')
The (?P=name) syntax indicates that the content matched by the group called name should be matched again at the current location.
• Lookahead Assertions.
With lookahead assertions, we can tell the matching engine to continue matching only when the assertion succeeds (or, for negative lookahead, fails).
(?=...) - Positive lookahead. It succeeds if the contained regular expression ... matches at the current location. The matching engine does not advance past the assertion; it resumes matching from the current position. If the assertion fails, the whole match fails immediately. The assertion is really like an if condition telling the matching engine whether to move forward.
(?!...) - Negative lookahead.
For example, to match a file name while excluding suffixes like bat, exe, and csv, we can write the following expression:
.*[.](?!bat$|exe$|csv$)[^.]*$
So for this regular expression, when the match reaches the . (dot), the matching engine only moves forward when the suffix is not one of bat, exe, or csv.
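Here is a runnable check of that file-name pattern (Python 3 syntax; the file names are invented examples):

```python
import re

# Match file names whose suffix is not bat, exe, or csv.
p = re.compile(r'.*[.](?!bat$|exe$|csv$)[^.]*$')

for name in ['notes.txt', 'run.bat', 'data.csv', 'archive.tar.gz']:
    print(name, bool(p.match(name)))
# notes.txt True, run.bat False, data.csv False, archive.tar.gz True
```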
• Modify strings.
Split a string into a list, splitting wherever the RE matches.
pattern.split(string[, maxsplit=0])
Search and replace (leftmost non-overlapping occurrences of the RE).
pattern.sub(replacement, string[, count=0])
(subn() does the same thing, but returns a tuple of the new string and the total number of replacements.)
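For example (Python 3 sketch; the patterns and strings are invented for illustration):

```python
import re

p = re.compile(r'[\s,]+')
print(p.split('a, b,  c'))              # ['a', 'b', 'c']
print(p.split('a, b,  c', maxsplit=1))  # ['a', 'b,  c']

q = re.compile(r'blue|red')
print(q.sub('colour', 'blue socks and red shoes'))   # 'colour socks and colour shoes'
print(q.subn('colour', 'blue socks and red shoes'))  # ('colour socks and colour shoes', 2)
```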
references: https://docs.python.org/2.7/howto/regex.html#search-and-replace
• Common problems of regex.
Greedy vs Non-greedy:
Use *?, +?, ??, or {m,n}? as non-greedy qualifiers to match as little text as possible.
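A classic illustration of the difference (Python 3 sketch):

```python
import re

html = '<html><head></head></html>'
print(re.match(r'<.*>', html).group())       # whole string (greedy)
print(re.match(r'<.*?>', html).group())      # '<html>' (non-greedy)
print(re.match(r'a{2,4}?', 'aaaa').group())  # 'aa' (minimal repetitions)
```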
Use re.VERBOSE during compilation to enable a verbose way of writing regular expressions.
Regular expressions can be very compact because their syntax is built from metacharacters, parentheses, and brackets, so a regular expression of even moderate complexity can be hard to read.
With re.VERBOSE, we can write regex with comments.
In [5]: p = re.compile(r"""
...: ^
...: # match from beginning.
...:
...: (?P<name>[a-zA-Z0-9]+)
...: # name group: match more than one alphanumeric letter.
...:
...: (?=@)
...: # look ahead assert that there is one @ before continuing.
...:
...: @(?P<provider>[a-z]+)
...: # match email provide name.
...:
...: \.
...: # match a literal dot (escaped; an unescaped . would match any character).
...:
...: (?P<domainsuffix>com|net|cn|org|gov)
...: # match a TLD name.
...:
...: """, re.VERBOSE)
In [6]: m = p.match("xuehaohu@gmail.com")
In [7]: print m.groups()
('xuehaohu', 'gmail', 'com')
In [8]: m = p.match("xuehaohu#gmail.com")
In [9]: print m
None
reference: https://docs.python.org/2.7/howto/regex.html#common-problems
Reference:
1. https://docs.python.org/2.7/howto/regex.html | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2935265004634857, "perplexity": 4693.816708521599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038917413.71/warc/CC-MAIN-20210419204416-20210419234416-00508.warc.gz"} |
https://en.wikipedia.org/wiki/Pyrgeometer | # Pyrgeometer
Example of a pyrgeometer
A pyrgeometer is a device that measures the atmospheric infra-red radiation spectrum that extends approximately from 4.5 µm to 100 µm.
## Pyrgeometer components
Example of a pyrgeometer showing the principal components
A pyrgeometer consists of the following major components:
• A thermopile sensor which is sensitive to radiation in a broad range from 200 nm to 100 µm
• A silicon dome or window with a solar blind filter coating. It has a transmittance between 4.5 µm and 50 µm that eliminates solar shortwave radiation.
• A sun shield to minimize heating of the instrument due to solar radiation.
Typical combined window and solar blind filter transmittance for CGR 4 model pyrgeometer
## Measurement of long wave downward radiation
The atmosphere and the pyrgeometer (in effect its sensor surface) exchange long wave IR radiation. This results in a net radiation balance according to:
$E_{\mathrm{net}} = E_{\mathrm{in}} - E_{\mathrm{out}}$
Where:
$E_{\mathrm{net}}$ - net radiation at sensor surface [W/m²]
$E_{\mathrm{in}}$ - Long-wave radiation received from the atmosphere [W/m²]
$E_{\mathrm{out}}$ - Long-wave radiation emitted by the sensor surface [W/m²]
The pyrgeometer's thermopile detects the net radiation balance between the incoming and outgoing long wave radiation flux and converts it to a voltage according to the equation below.
$E_{\mathrm{net}} = \frac{U_{\mathrm{emf}}}{S}$
Where:
$E_{\mathrm{net}}$ - net radiation at sensor surface [W/m²]
$U_{\mathrm{emf}}$ - thermopile output voltage [V]
$S$ - sensitivity/calibration factor of instrument [V/W/m²]
The value for $S$ is determined during calibration of the instrument. The calibration is performed at the production factory with a reference instrument traceable to a regional calibration center.[1]
To derive the absolute downward long wave flux, the temperature of the pyrgeometer has to be taken into account. It is measured using a temperature sensor inside the instrument, near the cold junctions of the thermopile. The pyrgeometer is considered to approximate a black body. Due to this it emits long wave radiation according to:
$E_{\mathrm{out}} = \sigma T^4$
Where:
$E_{\mathrm{out}}$ - Long-wave radiation emitted by the sensor surface [W/m²]
$\sigma$ - Stefan-Boltzmann constant [W/(m²·K⁴)]
$T$ - Absolute temperature of pyrgeometer detector [kelvins]
From the calculations above the incoming long wave radiation can be derived. This is usually done by rearranging the equations above to yield the so-called pyrgeometer equation by Albrecht and Cox.
$E_{\mathrm{in}} = \frac{U_{\mathrm{emf}}}{S} + \sigma T^4$
Where all the variables have the same meaning as before.
As a result, the detected voltage and instrument temperature yield the total global long wave downward radiation.
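The rearranged pyrgeometer equation is a one-line computation; the sketch below uses invented example values for the thermopile voltage, sensitivity, and temperature (Python, for illustration only):

```python
# Sketch of the pyrgeometer equation E_in = U_emf / S + sigma * T^4.
# The voltage, sensitivity, and temperature below are invented example values.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W/(m^2*K^4)]

def incoming_longwave(u_emf, sensitivity, temp_kelvin):
    """Total downward long-wave flux [W/m^2] from voltage and body temperature."""
    e_net = u_emf / sensitivity        # net radiation seen by the thermopile
    e_out = SIGMA * temp_kelvin ** 4   # black-body emission of the sensor itself
    return e_net + e_out

print(incoming_longwave(u_emf=-0.5e-3, sensitivity=10e-6, temp_kelvin=293.0))
```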
## Usage
Pyrgeometers are frequently used in meteorology and climatology studies. The atmospheric long-wave downward radiation is of interest for research into long-term climate change.
The signals are generally detected using a data logging system, capable of taking high resolution samples in the millivolt range. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8097399473190308, "perplexity": 2894.544586619663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068098.37/warc/CC-MAIN-20150827025428-00104-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://vinafree.net/draf/ | # Draf
Consider the linear system
Suppose there exists a solution (x, y) of the system above. Then we have that 3x – 2y = 4 and 6x – 4y = 9, which gives us 2·(3x – 2y) – (6x – 4y) = 2·4 – 9. Since 2(3x – 2y) – (6x – 4y) = 6x – 4y – 6x + 4y = 0 and 2·4 – 9 = 8 – 9 = –1, we have that 0 = –1, which is a contradiction. Hence the system has no solution, and therefore the lines whose equations are 3x – 2y = 4 and 6x – 4y = 9 have no intersection. Here is a sketch of the lines 3x – 2y = 4 and 6x – 4y = 9.
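The contradiction can also be checked mechanically; the sketch below (Python, for illustration only) tests whether the two lines are parallel but not coincident:

```python
# Lines written as a*x + b*y = c: here 3x - 2y = 4 and 6x - 4y = 9.
a1, b1, c1 = 3, -2, 4
a2, b2, c2 = 6, -4, 9

det = a1 * b2 - a2 * b1            # 3*(-4) - 6*(-2) = 0 -> same direction
coincident = (c1 * a2 == c2 * a1)  # 4*6 == 9*3 is False -> not the same line
print(det, coincident)             # 0 False: parallel, distinct, no intersection
```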
https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Electro-Optics/Book%3A_Electromagnetics_II_(Ellingson)/01%3A_Preliminary_Concepts |
# 1: Preliminary Concepts
• 1.1: Units
The term “unit” refers to the measure used to express a physical quantity. For example, the mean radius of the Earth is about 6,371,000 meters; in this case, the unit is the meter.
• 1.2: Notation
The list in this section describes the notation used in this book.
• 1.3: Coordinate Systems
The coordinate systems most commonly used in engineering analysis are the Cartesian, cylindrical, and spherical systems.
• 1.4: Electromagnetic Field Theory- A Review
This section presents a summary of electromagnetic field theory concepts presented in the previous volume. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951793551445007, "perplexity": 1244.44194477004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178349708.2/warc/CC-MAIN-20210224223004-20210225013004-00124.warc.gz"} |
http://mathhelpforum.com/trigonometry/117486-multiple-angles.html | 1. ## multiple angles
$cos\frac {x}{4} = 0$
What do I do here?
And how do I know when to use $2n\pi$ or $n\pi$?
2. Originally Posted by >_<SHY_GUY>_<
$cos\frac {x}{4} = 0$
What do I do here?
And how do I know when to use $2n\pi$ or $n\pi$
you should know where $\cos(u) = 0$ from your knowledge of the unit circle ...
$\frac{x}{4} = \frac{\pi}{2} \, , \, \frac{3\pi}{2} \, , \, \frac{5\pi}{2} \, , \, ...$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4500496983528137, "perplexity": 321.173082221859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686465.34/warc/CC-MAIN-20170920052220-20170920072220-00675.warc.gz"} |
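Since $x/4 = \pi/2 + n\pi$ gives $x = 2\pi + 4n\pi$, a quick numerical check (Python sketch, for illustration):

```python
import math

# cos(x/4) = 0  when  x/4 = pi/2 + n*pi,  i.e.  x = 2*pi + 4*n*pi.
for n in range(-2, 3):
    x = 2 * math.pi + 4 * n * math.pi
    print(n, round(math.cos(x / 4), 12))  # all (close to) zero
```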
http://mathoverflow.net/questions/43201/high-dimensional-analogue-of-skew | # High-dimensional analogue of Skew
Suppose G and A are abelian groups. Suppose f is a 2-cocycle for the trivial group action of G on A. In other words, we have that:
$$f(g_1,g_2 + g_3) + f(g_2,g_3) = f(g_1 + g_2,g_3) + f(g_1,g_2)$$
for all $g_1,g_2,g_3 \in G$. Then we can show that the map:
$$\operatorname{Skew}(f) = (g_1,g_2) \mapsto f(g_1,g_2) - f(g_2,g_1)$$
is an alternating biadditive map from $G \times G$ to $A$. The proof is straightforward but doesn't seem entirely obvious. This is related to the fact that, in a group of nilpotency class two, the commutator of two elements is a homomorphism in each input.
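These claims can be brute-force checked on a small example (a Python sketch, for illustration only): take $G = (\mathbb{Z}/5)^2$, $A = \mathbb{Z}/5$, and the bilinear 2-cocycle $f((a_1,a_2),(b_1,b_2)) = a_1 b_2$.

```python
from itertools import product

N = 5
G = list(product(range(N), repeat=2))  # G = (Z/5)^2, written additively

def add(g, h):
    return ((g[0] + h[0]) % N, (g[1] + h[1]) % N)

def f(g, h):  # bilinear, hence a 2-cocycle for the trivial action
    return (g[0] * h[1]) % N

def skew(g, h):
    return (f(g, h) - f(h, g)) % N

# Cocycle identity: f(g1, g2+g3) + f(g2, g3) == f(g1+g2, g3) + f(g1, g2) in A.
assert all((f(g1, add(g2, g3)) + f(g2, g3)) % N == (f(add(g1, g2), g3) + f(g1, g2)) % N
           for g1, g2, g3 in product(G, G, G))

# Skew(f) is alternating and biadditive.
assert all(skew(g, g) == 0 for g in G)
assert all(skew(add(g, h), k) == (skew(g, k) + skew(h, k)) % N
           for g, h, k in product(G, G, G))
print('all checks pass')
```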
It is true that any function $G^n \to A$ that is additive in each of its inputs is an $n$-cocycle. Question: Is there some analogue of Skew for higher $n$ that starts with an arbitrary $n$-cocycle and outputs a map that's additive in each coordinate? Ideally, the map should have the additional property that, when applied a second time, it acts just like multiplication by $n$.
In the Skew case, all the outputs are additionally restricted to being alternating; a similar restriction in the general case is fine.
If such maps do not exist for higher n, is there an easy reason/explanation for the fact?
https://www.ideals.illinois.edu/handle/2142/79261 | ## Files in this item
303876.pdf (4MB) - Presentation - PDF (application/pdf)
828.pdf (17kB) - Abstract - PDF (application/pdf)
## Description
Title: DISPERSED FLUORESCENCE SPECTRA OF JET COOLED SiCN
Author(s): Fukushima, Masaru
Contributor(s): Ishiwata, Takashi
Subject(s): Small molecules
Abstract: The laser induced fluorescence (LIF) spectrum of the $\tilde{A}\,^2\Delta$ -- $\tilde{X}\,^2\Pi$ transition was obtained for SiCN generated by laser ablation under supersonic free jet expansion. The vibrational structure of the dispersed fluorescence (DF) spectra from single vibronic levels (SVLs) was analyzed with consideration of Renner-Teller (RT) interaction. The usual analysis, based on the perturbation approach [J. M. Brown and F. Jørgensen, Advances in Chemical Physics 52, 117 (1983)], indicated considerably different spin splittings for the $\mu$ and $\kappa$ levels of the $\tilde{X}\,^2\Pi$ state of SiCN, in contrast to the identical spin splitting obtained for general species from the usual RT analysis. Further analysis of the vibrational structure is being carried out via direct RT diagonalization.
Issue Date: 22-Jun-15
Publisher: International Symposium on Molecular Spectroscopy
Citation Info: ACS
Genre: Conference Paper / Presentation
Type: Text
Language: English
URI: http://hdl.handle.net/2142/79261
Date Available in IDEALS: 2016-01-05
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6412287354469299, "perplexity": 11827.75421415739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058450.44/warc/CC-MAIN-20210927120736-20210927150736-00253.warc.gz"} |
https://stacks.math.columbia.edu/tag/03DT | Lemma 35.8.1. Let $S$ be a scheme. Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_ S$-module. Let $\tau \in \{ Zariski, \linebreak[0] fpqc, \linebreak[0] fppf, \linebreak[0] {\acute{e}tale}, \linebreak[0] smooth, \linebreak[0] syntomic\}$. The functor defined in (35.8.0.1) satisfies the sheaf condition with respect to any $\tau$-covering $\{ T_ i \to T\} _{i \in I}$ of any scheme $T$ over $S$.
Proof. For $\tau \in \{ Zariski, \linebreak[0] fppf, \linebreak[0] {\acute{e}tale}, \linebreak[0] smooth, \linebreak[0] syntomic\}$ a $\tau$-covering is also a fpqc-covering, see the results in Topologies, Lemmas 34.4.2, 34.5.2, 34.6.2, 34.7.2, and 34.9.6. Hence it suffices to prove the theorem for a fpqc covering. Assume that $\{ f_ i : T_ i \to T\} _{i \in I}$ is an fpqc covering where $f : T \to S$ is given. Suppose that we have a family of sections $s_ i \in \Gamma (T_ i , f_ i^*f^*\mathcal{F})$ such that $s_ i|_{T_ i \times _ T T_ j} = s_ j|_{T_ i \times _ T T_ j}$. We have to find the corresponding section $s \in \Gamma (T, f^*\mathcal{F})$. We can reinterpret the $s_ i$ as a family of maps $\varphi _ i : f_ i^*\mathcal{O}_ T = \mathcal{O}_{T_ i} \to f_ i^*f^*\mathcal{F}$ compatible with the canonical descent data associated to the quasi-coherent sheaves $\mathcal{O}_ T$ and $f^*\mathcal{F}$ on $T$. Hence by Proposition 35.5.2 we see that we may (uniquely) descend these to a map $\mathcal{O}_ T \to f^*\mathcal{F}$ which gives us our section $s$. $\square$
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9939479827880859, "perplexity": 191.67475180085344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350846.9/warc/CC-MAIN-20210225065836-20210225095836-00026.warc.gz"} |
https://gmatclub.com/forum/if-x-and-y-are-integers-what-is-the-value-of-2x-6y-253313.html | GMAT Question of the Day - Daily to your Mailbox; hard ones only
# If x and y are integers, what is the value of 2x^(6y) - 4?
Math Expert
Joined: 02 Sep 2009
Posts: 51072
If x and y are integers, what is the value of 2x^(6y) - 4? [#permalink]
### Show Tags
11 Nov 2017, 07:01
Difficulty:
35% (medium)
Question Stats:
71% (01:23) correct 29% (02:00) wrong based on 94 sessions
If x and y are integers, what is the value of $$2x^{(6y)} - 4$$?
(1) $$x^{(2y)} = 16$$
(2) $$xy = 4$$
PS Forum Moderator
Joined: 25 Feb 2013
Posts: 1217
Location: India
GPA: 3.82
Re: If x and y are integers, what is the value of 2x^(6y) - 4? [#permalink]
### Show Tags
11 Nov 2017, 07:08
2
Bunuel wrote:
If x and y are integers, what is the value of $$2x^{(6y)} - 4$$?
(1) $$x^{(2y)} = 16$$
(2) $$xy = 4$$
Statement 1: $$x^{(2y)} = 16$$, cube both sides to get
$$x^{(6y)} = 16^3$$. we got the value of the variable. hence we can calculate the value of the equation. Sufficient
Statement 2: Two variables and one equation. Cannot be solved. Hence Insufficient
Option A
Intern
Joined: 30 Sep 2017
Posts: 38
Location: India
Concentration: Entrepreneurship, General Management
Schools: IIM Udaipur '17
GMAT 1: 700 Q50 V37
GPA: 3.7
WE: Engineering (Energy and Utilities)
Re: If x and y are integers, what is the value of 2x^(6y) - 4? [#permalink]
### Show Tags
14 Nov 2017, 20:13
Statement 1: x^(2y)=16, cube both sides to get
x^(6y)=16^3. Hence 2*x^(6y)-4 = 2*16^3-4. Sufficient
Statement 2: xy = 4; possible values are (2,2), (1,4), (4,1). Each gives different answers. So insufficient.
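A quick enumeration (a Python sketch, for illustration only) confirms both conclusions: statement 1 pins the value down, while statement 2 leaves several possibilities:

```python
from fractions import Fraction

# Statement 1: x^(2y) = 16  =>  x^(6y) = (x^(2y))^3 = 16^3, so the target is fixed.
print(2 * 16 ** 3 - 4)  # 8188

# Statement 2: enumerate integer pairs with x*y = 4 (x != 0 so x**(6y) is defined).
values = set()
for x in range(-4, 5):
    for y in range(-4, 5):
        if x != 0 and x * y == 4:
            values.add(2 * Fraction(x) ** (6 * y) - 4)
print(sorted(values))  # several distinct values -> statement 2 is insufficient
```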
Intern
Joined: 25 Mar 2016
Posts: 40
Location: India
Concentration: Finance, General Management
WE: Other (Other)
Re: If x and y are integers, what is the value of 2x^(6y) - 4? [#permalink]
### Show Tags
16 Nov 2017, 10:03
Bunuel wrote:
If x and y are integers, what is the value of $$2x^{(6y)} - 4$$?
(1) $$x^{(2y)} = 16$$
(2) $$xy = 4$$
The given expression can be rewritten as 2*(x^(2y))^3 - 4.
From statement 1 we can find the value of x^(2y), so the expression can be evaluated. Sufficient.
Math Revolution GMAT Instructor
Joined: 16 Aug 2015
Posts: 6614
GMAT 1: 760 Q51 V42
GPA: 3.82
Re: If x and y are integers, what is the value of 2x^(6y) - 4? [#permalink]
### Show Tags
18 Nov 2017, 15:33
Bunuel wrote:
If x and y are integers, what is the value of $$2x^{(6y)} - 4$$?
(1) $$x^{(2y)} = 16$$
(2) $$xy = 4$$
Forget conventional ways of solving math questions. For DS problems, the VA (Variable Approach) method is the quickest and easiest way to find the answer without actually solving the problem. Remember that equal numbers of variables and independent equations ensure a solution.
Since we have 2 variables and 0 equations, C is most likely to be the answer and so we should consider both conditions 1) & 2) together first.
By CMT(Common Mistake Type) 4(A), we need to consider A or B as an answer.
Condition 1)
Since $$x^{2y} = 16$$, $$x^{6y} = (x^{2y})^3 = 16^3$$ and so $$2x^{6y} - 4 = 2\cdot16^3-4$$.
This is sufficient.
Condition 2)
$$xy = 4$$.
From $$xy = 4$$, we have $$(2,2)$$, $$(-2,-2)$$, $$(1,4)$$, $$(-1,-4)$$, $$(4,1)$$ and $$(-4,-1)$$ as pairs of $$(x,y)$$.
$$x^{6y} = 2^{12} = 4096$$ for $$x=2$$, $$y=2$$
$$x^{6y} = 1^{24} = 1$$ for $$x=1$$,$$y=4$$.
Since we don't have a unique solution, this is not sufficient.
Normally, in problems which require 2 or more additional equations, such as those in which the original conditions include 2 variables, or 3 variables and 1 equation, or 4 variables and 2 equations, each of conditions 1) and 2) provide an additional equation. In these problems, the two key possibilities are that C is the answer (with probability 70%), and E is the answer (with probability 25%). Thus, there is only a 5% chance that A, B or D is the answer. This occurs in common mistake types 3 and 4. Since C (both conditions together are sufficient) is the most likely answer, we save time by first checking whether conditions 1) and 2) are sufficient, when taken together. Obviously, there may be cases in which the answer is A, B, D or E, but if conditions 1) and 2) are NOT sufficient when taken together, the answer must be E.
Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6464319825172424, "perplexity": 2898.308695208692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823442.17/warc/CC-MAIN-20181210191406-20181210212906-00584.warc.gz"} |
http://mathematica.stackexchange.com/users/69/brett-champion?tab=activity&sort=all&page=7 | Brett Champion
Reputation: 12,903 | Next privilege: 15,000 Rep. (Protect questions)
May 2: answered "Why do all plots open in new windows when I use JavaGraphics?"
May 2: revised "Why do all plots open in new windows when I use JavaGraphics?" (grammar)
Apr 27: comment on "How to obtain an approximate function for SmoothHistogram3D?": I appreciate that you include the bandwidth and kernel to show how they map from one function to the other.
Apr 26: comment on "FrameLabel not printing completely": What version and OS? Does it happen without the Style wrappers?
Apr 26: revised "FrameLabel not printing completely" (fix typo)
Apr 19: revised "Change position of mesh" (code formatting, grammar/capitalization)
Apr 7: comment on "How to find out if invisible notebooks are still open": @AlbertRetey Hmmm, you're right it isn't documented. I don't know why. WindowTitle is a good guess, but I can't say definitively whether that's correct.
Apr 6: comment on "How to find out if invisible notebooks are still open": @Rojo: Perhaps Complement[Notebooks[], Notebooks["Messages"]] would help.
Apr 5: revised "How to set PlotLegend Number Format" (formatting, mainly)
Apr 5: answered "How to set PlotLegend Number Format"
Mar 28: awarded Nice Answer
Mar 27: comment on "How to create a progress bar?": @Andrew I think for ParallelTable you'd want to do something different, since the fact that you're processing a particular value of i doesn't tell you as much because you won't have necessarily completed the calculations for lower values of i yet.
Mar 14: awarded Nice Answer
Mar 8: awarded Enlightened
Mar 8: awarded Nice Answer
Mar 7: answered "How can I create a table of sliders?"
Mar 5: comment on "How can I set the tick marks of the x axis in RectangleChart?": @KarstenW. Thanks for pointing that out.
Feb 19: revised "How to adjust the parameters on a Nyquist Plot?"
add images and tweak options and formatting Feb14 revised _?NumericQ equivalent for lists add some timings Jan30 revised Make the code shorter for solving Fizz buzz edited title | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4361710846424103, "perplexity": 5742.568785565539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988312.76/warc/CC-MAIN-20150728002308-00064-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://www.ams.org/mathscinet-getitem?mr=1813224 | MathSciNet bibliographic data MR1813224 (2002f:14029) 14F35 (19E08) Morel, Fabien; Voevodsky, Vladimir ${\bf A}^1$-homotopy theory of schemes. Inst. Hautes Études Sci. Publ. Math. No. 90 (1999), 45–143 (2001). Article
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews. | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9971598982810974, "perplexity": 7997.624188863523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042992543.60/warc/CC-MAIN-20150728002312-00196-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.slavorum.org/forum/reply/349776/ | #349776
Anonymous
Quote:
Brennus, would you say then that Macedonians (sorry if you don't like this term, but this is how they are known as, so this is what I will use) are in fact originally Thracians who accepted Slavic tongue, exactly like Bulgarians?
Would you call them Bulgarians?
Anyway, I suppose your geographical argument has some right:
[img width=700 height=669]http://www.emersonkent.com/images/ancient_thrace_map.jpg[/img]
BTW Weren't Paionian's Illyrian-speaking and not Tracian?
Anyway, but then again, there are other maps from other timelines:
So, I think it's safe to say that both sides have right depending on historical context, no?
The map that I posted was an ethnic map of the area (Paionians, Macedonians, Illyrians, Thracians, Epirots, other Greeks, etc.), nothing related to Macedonian or other states. Ancient Macedonia was a state that included many non-Greek nationalities (including Paionians as well as Persians); however, the only thing that is sure as hell is that these nationalities were not Macedonians.
Well, I think Paionians were Thracians, not Illyrians; however, it matters little, since both of them were very similar.
Anyway, I think that they are in fact Thraco-Illyrians, mixed with Slavs. And I personally consider Bulgarians of Bulgaria a little more native and a little less Slavic than them.
If they don't want to call themselves Bulgarians, OK.
And since this forum is Slavic, I have to point out that they used to call themselves Vardarski and their land Vardarska Banovina before the 20th century (some of them, especially aged people there, still do not even know the term Macedonian and use this term), a very Slavic term, and sure as hell not a Macedonian term. This is something that many Serbs have confirmed on YT.
• Piachu | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8240023851394653, "perplexity": 5993.831856312952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313747.38/warc/CC-MAIN-20190818083417-20190818105417-00341.warc.gz"} |
https://tutorme.com/tutors/47853/interview/ | TutorMe homepage
Yuria U.
MIT Student: Mathematics Major
SAT II Mathematics Level 2
TutorMe
Question:
The number n is doubled, then tripled, then reduced by 5, then cube rooted. The value is now -5. What was the value of n?
Yuria U.
Step 1: Rewrite the problem as an expression "doubled" -> $$2n$$ "then tripled" -> $$(3)(2n)$$ "then reduced by 5" -> $$(6n)-5$$ "then cube rooted" -> $$\sqrt[3]{6n-5}$$ "value is now -5" -> $$\sqrt[3]{6n-5}=-5$$ Step 2: Work backwards and solve $$\sqrt[3]{6n-5}=-5$$ $$6n-5 = (-5)^3$$ $$6n = -125+5$$ $$n = -120/6$$ $$n = -20$$ Answer: $$n = -20$$
Calculus
TutorMe
Question:
An open box is made by cutting congruent squares from the corners of a 12" by 10" cardboard and folding up the sides. What are the dimensions of the box such that volume is maximized?
Yuria U.
Step 1: Define variables $$x$$ = side length of square that is cut out, height of box $$12-2x$$ = length of box $$10-2x$$ = width of box Step 2: Set up equation $$Volume = (x)(12-2x)(10-2x)$$ $$Volume = 4x(x^2-11x+30)$$ Step 3: Maximize volume $$V' = (4)(x^2-11x+30)+(4x)(2x-11)$$ $$V' = 12x^2-88x+120 = 4(3x^2-22x+30)$$ $$0 = 3x^2-22x+30$$ $$x = 5.52, 1.81$$ (Note: 5.52 is outside the domain) Step 4: Calculate dimensions $$height = 1.81$$ $$length = 12-2(1.81) = 8.38$$ $$width = 10-2(1.81) = 6.38$$ Answer: $$1.81''\times 8.38''\times 6.38''$$
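The arithmetic in Steps 3 and 4 can be checked numerically; a minimal Python sketch (the variable names are mine, not part of the original solution):

```python
import math

# V(x) = x(12 - 2x)(10 - 2x) = 4x^3 - 44x^2 + 120x, so V'(x) = 12x^2 - 88x + 120.
a, b, c = 12.0, -88.0, 120.0
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

x_opt = roots[0]  # the larger root (~5.52) lies outside the domain 0 < x < 5
length, width = 12 - 2 * x_opt, 10 - 2 * x_opt
print(round(x_opt, 2), round(length, 2), round(width, 2))  # 1.81 8.38 6.38
```

The smaller critical point is the maximizer because V is zero at both ends of the domain and positive inside it.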
ACT
TutorMe
Question:
A rectangular box has a square base. The box's height is 3 inches more than twice its width. Which of the following gives the box's volume in terms of its width (w)?
Yuria U.
Step 1: Find the expression for the box's volume in terms of height ($$h$$) and width ($$w$$) Because the box has a "SQUARE base" we know its dimensions are $$h$$ by $$w$$ by $$w$$. $$Volume = h\cdot w\cdot w$$ Step 2: Find the relationship between the height ($$h$$) and width ($$w$$) "box's height is 3 inches MORE than TWICE its width" $$h = 3+2w$$ Step 3: Rewrite volume equation in terms of "w" $$Volume = (3+2w)\cdot w\cdot w$$ Answer: $$(3+2w)\cdot w\cdot w$$
Send a message explaining your
needs and Yuria will reply soon.
Contact Yuria | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6678900718688965, "perplexity": 1400.4677084872394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741569.29/warc/CC-MAIN-20181114000002-20181114022002-00126.warc.gz"} |
http://math.stackexchange.com/questions/527507/why-does-there-seem-to-be-so-much-error-in-the-laws-of-sines-and-cosines | # Why does there seem to be so much error in the laws of sines and cosines?
I've been computing the angles of a triangle with sides a = 17, b = 6 and c = 15 using the law of cosines to find the first angle and then the law of sines to find the other 2. I follow the convention of naming the angles opposite these sides A, B and C respectively. Here are my results:
$C = \arccos( \frac {6^2+17^2-15^2}{2(6)(17)}) = 60.647$ degrees to 3 d.p.
$B = \arcsin( \frac {6 \sin C}{15}) = 20.405$ degrees to 3 d.p.
$A = \arcsin( \frac {17 \sin B}{6}) = 81.051$ degrees to 3 d.p.
Clearly, adding these should give $180$ degrees, but it gives 162 degrees to 3 s.f. Assuming I haven't made any mistakes, the error seems quite high and I'm just wondering if anyone knows why this is? It seems high enough to challenge the validity of the laws.
Maybe it's because of the ambiguous case, that arises when using the Law of sines. Why don't you do the Law of Cosines two times and subtract from 180° to find the third angle? No need for Law of Sines here. – imranfat Oct 15 '13 at 19:07
Because if I do that, it's not testing the accuracy of the law. +1 because info and question useful. – George Tomlinson Oct 15 '13 at 19:14
The laws are accurate and so is your calculator for the trig terms, but the ambiguous case is the issue – imranfat Oct 15 '13 at 19:16
Note that the error is about $18$ degrees, which is just the difference between $81$ degrees and $180-81=99$ degrees. Those angles have the same sine. You are picking the wrong one. – Ross Millikan Oct 15 '13 at 19:19
@imranfat: The arcsin function is defined (so as to be single valued) to return values between $-90$ and $+90$ degrees. The error was made in going from $\sin A= ()$ to $A=\arcsin ()$. Those are not equivalent. It is the same as going from $x^2=2$ to $x=\sqrt 2$ and missing the $\pm$ sign. The calculator is useful and returned the correct answer to the question it was asked. – Ross Millikan Oct 15 '13 at 19:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8283270597457886, "perplexity": 230.06161580932917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246637445.19/warc/CC-MAIN-20150417045717-00155-ip-10-235-10-82.ec2.internal.warc.gz"} |
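The "wrong branch of arcsin" diagnosis in the comments is easy to verify numerically; a quick sketch (not from the original thread):

```python
import math

a, b, c = 17.0, 6.0, 15.0

# Law of cosines: unambiguous, since arccos covers the full range 0..180 degrees.
C = math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))  # ~60.647

# Law of sines: arcsin only returns values up to 90 degrees.
B = math.degrees(math.asin(b * math.sin(math.radians(C)) / c))   # ~20.405 (fine, B is acute)
A = math.degrees(math.asin(a * math.sin(math.radians(B)) / b))   # ~81.05  (wrong, A is obtuse)

print(round(A + B + C, 1))          # 162.1 -- the apparent "error" in the question
print(round((180 - A) + B + C, 1))  # 180.0 -- taking the supplementary angle fixes it
```

Since side a is the longest, angle A is the largest and the only one that can be obtuse, which is why computing A last with the law of sines triggers the ambiguous case here.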
https://ai.stackexchange.com/users/37627/tmaric?tab=questions | tmaric
# 11 Questions
### How do we express $q_\pi(s,a)$ as a function of $p(s',r|s,a)$ and $v_\pi(s)$?
Jun 8 at 23:18 nbro 21.1k
### How to express $v_\pi(s)$ in terms of $q_\pi(s,a)$?
Jun 17 at 9:30 nbro 21.1k
### Connection between the Bellman equation for the action value function $q_\pi(s,a)$ and expressing $q_\pi(s,a) = q_\pi(s, a,v_\pi(s'))$
Oct 4 at 10:54 nbro 21.1k
### Are policy-based methods better than value-based methods only for large action spaces?
Jun 23 at 10:45 nbro 21.1k
### How to choose an RL algorithm for a Gridworld that models a much more complex problem
Jun 23 at 9:58 tmaric 302
### Can reinforcement learning algorithms be applied on problems involving a very large number of possible actions?
Jun 17 at 7:50 tmaric 302
0 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8701109290122986, "perplexity": 2025.4951220108346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890586.57/warc/CC-MAIN-20201026061044-20201026091044-00336.warc.gz"} |
https://www.clinicaltherapeutics.com/article/S0149-2918(16)30843-8/fulltext | Original Research| Volume 39, ISSUE 4, P686-696, April 2017
# Confounding Variables and the Performance of Triggers in Detecting Unreported Adverse Drug Reactions
Published:November 29, 2016
## Abstract
### Purpose
This study explored the performance of triggers in detecting adverse drug reactions (ADRs), the confounding variables impairing the causal association of the ADRs, and the rate of underreporting by hospital health professionals.
### Methods
A 6-month cross-sectional study was conducted in a public general hospital. Data collection was conducted in 2 stages: (1) screening of patient hospitalizations to identify suspected ADRs with 9 triggers developed by the Institute of Healthcare Improvement; and (2) chart review to perform the causality assessment of the suspected ADRs identified, to describe the confounding variables associated with detection of suspected ADRs that were not drug induced, and to analyze the positive predictive value of triggers in recognizing ADRs. To estimate the underreporting rate, ADRs detected by using the tool were compared with ADRs reported by health professionals during the same period.
### Findings
During the study period, 3318 hospitalizations were analyzed. A total of 837 suspected ADRs were identified. However, after causality assessment, 356 were definite ADRs. Confounding variables associated with the detection of suspected ADRs were related to the clinical conditions of inpatients. The use of triggers contributed to a 10.5% increase in ADR detection. The performance ranged from 0.00 to 0.75, with an overall positive predictive value of 0.43. Six ADRs were spontaneously reported, of which just 1 was also detected by using the trigger tool. Only 1 of 356 potential ADRs was reported by health professionals.
### Implications
Findings show that the use of triggers contributes to detecting ADRs underreported by health professionals. However, confounding variables impaired the performance of the tool because they underestimated the causal association. Furthermore, both methods are complementary to early recognition of drug-induced harm and should be applied together in health institutions to contribute to policies of risk management, drug safety, and optimization of pharmacotherapy.
## Introduction
A systematic review found that only 6% of adverse drug reactions (ADRs) are spontaneously reported (Hazell L, Shakir SAW, Under-reporting of adverse drug reactions: a systematic review). This rate is a small percentage of the harm experienced by patients and is not representative of the total possibilities of occurrence of drug-induced harm (Kilbridge PM, Classen DC, Automated surveillance for adverse events in hospitalized patients: back to the future).
Spontaneous reporting depends on the motivation of the reporters (Pal SN, Duncombe C, Falzon D, Olsson S, WHO strategy for collecting safety data in public health programmes: complementing spontaneous reporting systems); however, poor information (Yun IS, Koo MJ, Park EH, et al., A comparison of active surveillance programs including a spontaneous reporting model for pharmacovigilance of adverse drug events in a hospital; Coleman JJ, McDowell SE, An agenda for UK clinical pharmacology; Gerritsen R, Dijkers F, et al., Effectiveness of pharmacovigilance training of general practitioners: a retrospective cohort study in the Netherlands comparing two methods) and the presence of confounding variables (Macedo AF, Marques FB, Ribeiro CF, Teixeira F, Causality assessment of adverse drug reactions: comparison of the results obtained from published decisional algorithms and from the evaluations of an expert panel, according to different levels of imputability) also hinder causality assessment. Thus, risk communication related to drug use is ineffective (Hooper AJ, Tibballs J, Comparison of a trigger tool and voluntary reporting to identify adverse events in a paediatric intensive care unit).
Several strategies have been developed to improve the detection of medication-related harm (Pal SN, Duncombe C, Falzon D, Olsson S, WHO strategy for collecting safety data in public health programmes: complementing spontaneous reporting systems; Call R, Burlison J, Robertson J, et al., Adverse drug event detection in pediatric oncology and hematology patients: using medication triggers to identify patient harm in a specialized pediatric patient population; Pagotto C, Varallo F, Mastroianni P, Impact of educational interventions on adverse drug events reporting; Rozich JD, Resar RK, Adverse drug event trigger tool: a practical methodology for measuring medication related harm). One approach, the use of triggers, has shown promise in improving the identification of drug safety problems. Classen et al (“Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured)
noted that the use of the tool increased ADR detection by 10-fold. However, varying performances have been observed (Rozenfeld S, Giordani F, Coelho S, [Adverse drug events in hospital: pilot study with trigger tool]; Giordani F, Rozenfeld S, de Oliveira DF, et al., Surveillance of adverse drug events in hospitals: implementation and performance of triggers; Roque KE, Melo EC, Adjustment of evaluation criteria of adverse drug events for use in a public hospital in the State of Rio de Janeiro; Rozenfeld S, Chaves SMC, Reis LGC, et al., Drug adverse effects in a public hospital in Rio de Janeiro: pilot study; Franklin BD, Birch S, Schachter M, Barber N, Testing a trigger tool as a method of detecting harm from medication errors in a UK hospital: a pilot study; Carnevali L, Krug B, Amant F, et al., Performance of the adverse drug event trigger tool and the global trigger tool for identifying adverse drug events: experience in a Belgian hospital; Nwulu U, Nirantharakumar K, Odesanya R, et al., Improvement in the detection of adverse drug events by the use of electronic health and prescription records: an evaluation of two trigger tools), as well as poor sensitivity compared with case note review for the identification of preventable ADRs (Franklin BD, Birch S, Schachter M, Barber N, Testing a trigger tool as a method of detecting harm from medication errors in a UK hospital: a pilot study).
The wide range of performance is not directly related to safety barriers, but is instead due to the characteristics of hospitals (Roque KE, Melo EC, Adjustment of evaluation criteria of adverse drug events for use in a public hospital in the State of Rio de Janeiro), the design and aims of studies, the sample enrolled, and the settings (Carnevali L, Krug B, Amant F, et al., Performance of the adverse drug event trigger tool and the global trigger tool for identifying adverse drug events: experience in a Belgian hospital), as well as the presence of confounding variables. Confounding occurs when the estimate of association between drug exposure and health status is distorted by the effect of one or several other variables that are also risk factors for the outcome of interest (Coppet J, Beivin J, Bias and Confounding in Pharmacoepidemiology, in: Textbook of Pharmacoepidemiology, 3rd ed. Chichester: John Wiley & Sons, Ltd.; 2000:261-275, http://dx.doi.org/10.1002/9781118707999.ch16). Because confounding variables are a source of bias (Greenland S, Morgenstern H, Confounding in health research), it is critical to consider confounding variables when designing, analyzing, and interpreting studies intended to estimate causal effects. Confounding variables associated with poor performance of triggers are still unknown.
The intent of the present study was to explore and describe the relevant confounding variables, aiming to optimize the risk management of drugs in hospitals, as well as to improve the safety of care. Therefore, the objectives of this study were as follows: to explore the performance of trigger tools in ADR detection; to identify the confounding variables associated with the detection of suspected ADRs that were not drug induced; and to estimate the rate of underreporting by hospital health professionals.
## Patients and Methods
### Study Design, Setting, and Population
A cross-sectional study was performed in a medium-complexity public, nonteaching hospital; the hospital has 30 clinical and surgical specialties and 94 beds. The study was conducted over a period of 6 months. The institution has an electronic charts system (prescription, clinical outcomes, and laboratory parameters), in which all health professionals register their assessments. In 2012, a risk management policy was implemented, including an institutional program of pharmacovigilance.
Inclusion criteria comprised all inpatients aged ≥18 years who had been hospitalized from November 2011 to January 2012 and from May to July 2012. The exclusion criteria included inpatients whose charts were incomplete or unavailable for consultation.
### Variables
The primary outcome was the sensitivity of each trigger for ADRs. This study aimed to evaluate the association between the ADRs identified according to the trigger tool and the demographic characteristics of the inpatients enrolled (age and sex); ADR causality assessment; seriousness of ADRs; and the presence of confounding variables related to the activation of triggers.
The number of definite ADRs detected by using the trigger tool was compared with the number of ADRs reported by health professionals to estimate the percentage of improvement in safety reporting.
### Data Sources/Measurement
Data were extracted from a local electronic system. Nine triggers from the list developed by the Institute of Healthcare Improvement were applied to perform the active search of ADRs (Figure) (Rozich JD, Resar RK, Adverse drug event trigger tool: a practical methodology for measuring medication related harm). Only 1 trigger ("rising serum creatinine") was adapted (to "serum creatinine >1.2 mg/dL").
Data collection occurred in 2 stages (Figure) and was performed with the aid of an instrument developed to guide the ADR evaluation process. The instrument had 6 sections with the following information: (1) reference ranges of laboratory parameters; (2) drugs associated with changes in laboratory parameters; (3) synonyms of triggers related to clinical conditions (rash, fall, lethargy, and somnolence); (4) brand names of sodium polystyrene; (5) clinical manifestations related to hyperkalemia, hypoglycemia, and renal failure; and (6) events that could have activated the triggers whose etiology was not related to drug use.
The first stage of data collection corresponded to the screening of patient charts with the 9 triggers. When at least 1 trigger was identified, the second phase of data collection (close chart review and analysis) was conducted to verify the causal association between the suspected ADR and the drug used (Figure).
The causality assessment was performed by using clinical judgment. The clinical pharmacist of the hospital supervised the team responsible for ADR imputation. The judges were pharmacy undergraduate students who were previously trained to standardize the analysis of causal association. The training took into account the classes of triggers, with emphasis on drugs and health conditions that could be associated with them, as well as practical classes about causality assessment, with analysis of fictitious ADRs to be imputed.
Clinical judgment of suspected ADRs considered: (1) the temporal relationship between the occurrence of the event and drug use; (2) ADRs previously described in the scientific literature (UpToDate and Micromedex databases); (3) the pharmacologic plausibility (whether the mechanism of action of the drug may produce the event); (4) exclusion of confounding variables that may explain the case (clinical condition of the patient and other drug-related problems); and (5) objective and subjective impressions of physicians about the case.
Chart review and periodic discussions among the judges, clinical pharmacists, and physicians were conducted until consensus was achieved on causal association. The confounding variables were described for each trigger related to suspected ADRs that were not considered drug related according to clinical judgment. The confounding variables were described when the causality association was not observed.
The ADRs identified were reported to the department of risk management because it was considered they had the potential to be reported by health professionals. The risk manager reported the ADRs to the Brazilian Sanitary Agency (ANVISA).
We used the World Health Organization definition of ADR (World Health Organization, International drug monitoring: the role of national centres. Report of a WHO meeting), which is any noxious, unintended, or undesired effect of a drug occurring at doses used in humans for prophylaxis, diagnosis, or therapy. The concept of harm used was a temporary or permanent disorder in the physical or psychological functioning or structure of the human body (Rozenfeld S, Chaves SMC, Reis LGC, et al., Drug adverse effects in a public hospital in Rio de Janeiro: pilot study).
An abrupt stop in medication was considered as the unexpected discontinuation of a drug, excluding: replacement of a drug for another of the same chemical group with similar pharmacokinetic or pharmacodynamic properties; prescriptions to take "if necessary"; and drugs not administered due to administrative reasons (Rozenfeld S, Chaves SMC, Reis LGC, et al., Drug adverse effects in a public hospital in Rio de Janeiro: pilot study), such as drugs not dispensed due to shortages or that were nonstandardized in the hospital.
Serious adverse effects were considered as any untoward medical occurrence that at any dose results in death, requires hospital admission or prolongation of existing hospital stay, results in persistent or significant disability/incapacity, or is life-threatening (Edwards IR, Aronson JK, Adverse drug reactions: definitions, diagnosis, and management).
### Bias
Causality assessment was performed by using a chart review. The possible lack of information in patient charts may impair determination of causal associations and underestimate the detection of definite ADRs.
### Study Size
All hospitalizations (100.0%) were analyzed during the study period to verify the contribution of triggers in enhancing detection of ADRs.
### Quantitative Variables
An inpatient may have been hospitalized more than once in the period of study. Therefore, chart review of all hospitalizations was conducted to estimate the prevalence of ADRs detected by using the trigger tools and to compare that with the number of ADRs reported by health professionals in the same period.
### Analytical Methods
Data obtained from the likelihood of causality assessment, the presence of confounding variables, and demographic characteristics (sex and age) of hospitalizations with and without ADRs identified were expressed as frequencies and subjected to an analysis of descriptive statistics. The χ2 test was applied to the dichotomous variables age (elderly or nonelderly) and sex, to reveal statistically significant differences between overall hospitalizations; hospitalizations with suspected ADRs; and hospitalizations with definite ADRs. Patients aged ≥65 years were considered “elderly.” Odds ratios were calculated to analyze the association between dichotomous variables and the occurrence of ADRs.
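As an illustration of this analysis, the χ2 statistic and odds ratio for a 2×2 table can be computed directly; the counts below are invented for the example, not taken from Table I:

```python
# Hypothetical 2x2 contingency table (illustrative counts only):
#                  ADR   no ADR
a, b = 40, 500   # elderly
c, d = 30, 900   # non-elderly

n = a + b + c + d

# Pearson chi-square statistic for a 2x2 table (no continuity correction)
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Odds ratio: odds of an ADR among the elderly vs. the non-elderly
odds_ratio = (a * d) / (b * c)

print(round(chi2, 2), round(odds_ratio, 2))  # 13.17 2.4
```

With SciPy available, `scipy.stats.chi2_contingency` (with `correction=False`) reproduces this statistic and adds a p-value; the closed form above keeps the sketch dependency-free.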
After detection of ADRs, the positive predictive value (PPV) of each trigger was evaluated. PPV is the proportion of positive results in statistics and diagnostic tests that are true positive results (Fletcher RH, Fletcher SH, Clinical Epidemiology: The Essentials).
ADR prevalence was detected by close review and analysis. Calculations were performed according to the following formulas:
$\mathrm{PPV}_{\text{trigger}} = \dfrac{\text{no. of ADRs detected by triggers in the period}}{\text{no. of triggers detected in the period}}$
$\text{Prevalence}_{\text{ADR-trigger}} = \dfrac{\text{no. of ADRs detected by triggers in the period}}{\text{no. of hospitalizations in the period}} \times 100\%$
Moreover, the ADR prevalence detected by spontaneous reporting was also estimated, taking into account all ADRs reported by health professionals in the same period of data collection. The calculation was performed by using the following expression:
$\text{Prevalence}_{\text{ADR reporting}} = \dfrac{\text{no. of ADRs reported in the period}}{\text{no. of hospitalizations in the period}} \times 100\%$
A comparison between the ADR prevalence estimated by using the trigger tools and that estimated by spontaneous reporting was performed to determine the ADR underreporting rate, according to the following expression:
$\text{ADR underreporting rate} = \dfrac{\text{no. of ADRs detected by triggers} - \text{no. of ADR reports}}{\text{no. of hospitalizations in the period}} \times 100\%$
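These three expressions can be checked against the totals reported in the Results section (356 definite ADRs detected by 837 triggers, 3318 hospitalizations, and 6 spontaneous reports). The short Python sketch below is only an illustration of the arithmetic, not part of the original analysis.

```python
# Totals reported in the Results section
adrs_by_triggers = 356    # definite ADRs detected by triggers
triggers_detected = 837   # triggers activated in the period
hospitalizations = 3318   # hospitalizations in the study period
adrs_reported = 6         # ADRs spontaneously reported

ppv = adrs_by_triggers / triggers_detected
prevalence_trigger = adrs_by_triggers / hospitalizations * 100
prevalence_reporting = adrs_reported / hospitalizations * 100
underreporting_rate = (adrs_by_triggers - adrs_reported) / hospitalizations * 100

print(f"overall PPV = {ppv:.2f}")                             # 0.43
print(f"trigger prevalence = {prevalence_trigger:.1f}%")      # 10.7%
print(f"reporting prevalence = {prevalence_reporting:.1f}%")  # 0.2%
print(f"underreporting rate = {underreporting_rate:.1f}%")    # 10.5%
```

The results match the overall PPV of 0.43 (Table II), the prevalences of 10.7% and 0.2%, and the 10.5% increase in detection reported in the text.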
### Approval by Ethics Committee Research
The study (protocol E-015/10) was approved by the Ethics Committee in Research of the Instituto Lauro de Souza Lima.
## Results
### Participants
In the study period, there were 3318 hospitalizations, which corresponded to 2464 inpatients (the same patient could have been hospitalized more than once).
### Descriptive Data
According to the demographic characteristics, most overall hospitalizations involved nonelderly people (aged <65 years; 67.9%) and women (59.1%) (P < 0.0001) (Table I), who were hospitalized mainly due to infectious diseases (pneumonia) and received polypharmacy. At the first stage of data collection (screening of patient charts), 837 triggers were activated. Suspected ADRs were detected mostly in hospitalizations of men (59.3%) and elderly subjects (55.4%), although the differences were not statistically significant.
Table I. Demographic characteristics of overall hospitalizations, according to sex, age, presence of suspected adverse drug reactions (ADRs), and definite ADRs.

| Hospitalizations | Female Patients, No. (%) | Male Patients, No. (%) | Total, No. (%) | OR (95% CI) | P |
|---|---|---|---|---|---|
| **Overall** | | | | | |
| Elderly | 557 (28.4) | 509 (37.6) | 1066 (32.1) | 1.5 (1.3–1.7) | <0.0001 |
| Nonelderly | 1406 (71.6) | 846 (62.4) | 2252 (67.9) | | |
| Total | 1963 (100.0) | 1355 (100.0) | 3318 (100.0) | | |
| **With suspected ADRs** | | | | | |
| Elderly | 177 (51.9) | 287 (57.9) | 464 (55.4) | 1.2 (0.9–1.6) | 0.09 |
| Nonelderly | 164 (48.1) | 209 (42.1) | 373 (44.6) | | |
| Total | 341 (100.0) | 496 (100.0) | 837 (100.0) | | |
| **With definite ADRs** | | | | | |
| Elderly | 68 (43.3) | 103 (51.8) | 171 (48.0) | 1.4 (0.9–2.1) | 0.11 |
| Nonelderly | 89 (56.7) | 96 (48.2) | 185 (52.0) | | |
| Total | 157 (100.0) | 199 (100.0) | 356 (100.0) | | |

OR = odds ratio.
There was no difference in ADR detection with triggers between the periods evaluated. Furthermore, the ADRs identified were considered expected.
### Outcome Data
After causality assessment, of the 837 suspected ADRs, 356 (42.5%) were classified as definite, 328 (39.2%) as possible or probable, and 153 (18.3%) as improbable. None of the definite ADRs met the criteria for seriousness. Findings showed that nonelderly people (52.0%) and men (55.9%) were the most susceptible to the occurrence of ADRs (Table I); however, the differences were not statistically significant. A total of 220 inpatients developed the 356 definite ADRs identified: 128 had 1 occurrence; 63 had 2 occurrences; 19 had 3 occurrences; and 10 had >3 occurrences.
The prevalence of ADRs estimated according to triggers was 10.7% (356 of 3318). The overall performance of the triggers was 0.43, as the PPV of each trigger ranged widely from 0.00 to 0.75 (Table II).
Table II. Positive predictive value (PPV) (effectiveness) of the triggers applied to identify adverse drug reactions (ADRs) in patient records of hospitalized patients.

| Trigger | No. of Times the Trigger Was Detected | No. of Times the Trigger Was Associated With an ADR | PPV |
|---|---|---|---|
| INR >6 | 12 | 9 | 0.75 |
| Abrupt medication stop | 271 | 201 | 0.74 |
| Serum glucose <50 mg/dL | 29 | 21 | 0.72 |
| Sodium polystyrene | 28 | 18 | 0.64 |
| Rash | 33 | 18 | 0.55 |
| Fall, lethargy, somnolence | 97 | 50 | 0.52 |
| WBC count <3000 × 10⁶/L | 21 | 10 | 0.48 |
| Creatinine >1.2 mg/dL | 291 | 29 | 0.10 |
| Transfer to higher level of care | 55 | 0 | 0.00 |
| Total | 837 | 356 | 0.43 |

INR = international normalized ratio; WBC = white blood cell.
The drug classes and confounding variables mainly associated with the 9 triggers are described in Table III.
Table III. Drug classes and main confounding variables associated with each of the 9 triggers used in the study.

| Trigger | Drug Classes Mainly Associated | Confounding Variables |
|---|---|---|
| INR >6 | Anticoagulants: warfarin, enoxaparin, heparin | Clinical conditions: hepatic impairment (n = 2); laboratory test errors (n = 1) |
| Sodium polystyrene | ACE inhibitors: enalapril; anticoagulants: heparin, enoxaparin, warfarin; potassium-sparing diuretics: spironolactone; cardiotonics: digoxin | Clinical condition: hyperkalemia secondary to renal impairment (n = 10) |
| Fall, lethargy, somnolence | Drugs acting on the nervous system, with sedation severely enhanced when used in combination — neuroleptics: haloperidol, risperidone; anxiolytics: lorazepam, diazepam; hypno-analgesics: morphine | Secondary to worsening patient conditions (n = 40); sedation scheme (n = 7) |
| Transfer to a higher level of care | — | Worsening of clinical conditions (n = 55) |
| Rash | Antibiotics: amoxicillin, azithromycin, clindamycin, pyrazinamide, ciprofloxacin | Mechanical or bacterial phlebitis (n = 3); clinical conditions (n = 8); skin infections (n = 4) |
| Abrupt medication stop | Psychotropic medication; insulin; diuretics: furosemide and spironolactone; ACE inhibitors: enalapril; anticoagulants: heparin, enoxaparin, warfarin; antibiotics | Improvement of clinical condition/clinical observation (n = 52); absence of benefit (n = 18) |
| WBC <3000 × 10⁶/L | Antivirals: aciclovir | Clinical conditions (n = 11) |
| Serum glucose <50 mg/dL | Insulin: intermediate-acting, fast-acting, regular | Prolonged fasting (n = 5); clinical conditions (n = 3) |
| Serum creatinine >1.2 mg/dL | ACE inhibitors: captopril, enalapril; diuretics: furosemide | Acute renal failure with a prerenal component (n = 253); acute renal failure with a postrenal component (n = 9) |

ACE = angiotensin-converting enzyme; INR = international normalized ratio.
### Other Analysis
Regarding the spontaneous reporting, 6 ADRs were reported by the risk management of the hospital in the period analyzed. Only 1 ADR was also detected by using the triggers; this ADR was related to cutaneous rash and was possibly caused by clindamycin or ciprofloxacin. The 5 other ADRs reported were phlebitis (n = 3), whose causal relationship was mainly associated with the use of antibiotics (azithromycin, ceftriaxone, and meropenem), and dyspnea (n = 2), arising from azithromycin (unlikely) and formoterol. The ADR prevalence estimated by the spontaneous reporting was therefore 0.2% (6 of 3318).
Considering the ADR underreporting rate, it was noted that only 1 of 356 ADRs with the potential to be reported by health professionals was actually reported. Therefore, the use of triggers contributed to increasing ADR detection by 10.5% in the hospital under study.
## Discussion
### Key Results
Our study is the first to illustrate the confounding variables associated with the different performances of triggers in detecting ADRs. Most of these variables are related to the clinical conditions of inpatients. Nevertheless, triggers enhanced the detection of definite ADRs that were not reported by health professionals. The improvement observed may support health care risk management indicators, contributing to patient safety policies, communication of drug-induced harm, and safer pharmacotherapy.
### Limitations
Data collection was conducted in a general, public, nonteaching hospital. The data may therefore not be generalizable to other types of institutions. Moreover, differences between the definitions of ADR used while developing the triggers and those used in our study may have contributed to the wide range of sensitivity found.
Data may also be underestimated because causality assessments were conducted by using chart review and clinical judgment. This approach could hinder the identification of potential confounding variables that were not described in patient charts, and results may change according to the complexity of the hospital, the judges who perform the causal association, and the design of the study (prospective or retrospective).
### Interpretation
According to a meta-analysis by Miguel et al,
• Miguel A.
• Azevedo L.F.
• Araújo M.
• Pereira A.C.
Frequency of adverse drug reactions in hospitalized patients: a systematic review and meta-analysis.
ADRs could occur in 16.8% of patients during hospitalization. Methods applied to recognition of ADRs were: spontaneous reporting, solicited reporting, close review and analysis, prospective monitoring, computerized system with investigation of every alert to validate ADRs, codification/codes, and chart review. Angamo et al
• Angamo M.T.
• Chalmers L.
• Curtain C.M.
• Bereznicki L.R.
Adverse-Drug-Reaction-Related Hospitalisations in Developed and Developing Countries: A Review of Prevalence and Contributing Factors.
observed that the prevalence of ADR-related hospitalizations in developed and developing countries was 6.3% and 5.5%, respectively. Consequently, detecting and managing drug-induced harm arising from primary health care remains a challenge for hospitals.
The recognition of ADRs according to triggers revealed a prevalence of 13.2%
• Karpov A.
• Parcero C.
• Mok C.P.Y.
• et al.
Performance of trigger tools in identifying adverse drug events in emergency department patients: a validation study.
and 14.6%
• Giordani F.
• Rozenfeld S.
• de Oliveira D.F.
• et al.
Surveillance of adverse drug events in hospitals: implementation and performance of triggers.
in hospitals of North and South America, respectively. Our data corroborate these findings. Furthermore,
• Aagaard L.
• Strandell J.
• Melskens L.
• et al.
Global patterns of adverse drug reactions over a decade: analyses of spontaneous reports to VigiBaseTM.
Classen et al
• Classen D.C.
• Resar R.
• Griffin F.
• et al.
“Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured.
suggest that triggers increase the identification of adverse events in hospitals by 10 times. Because the development of pharmacovigilance is recent in Latin America, many challenges need to be addressed
• Olsson S.
Overview of pharmacovigilance in resource limited settings: challenges and opportunities.
to increase the recognition of ADRs, as South American countries have the lowest ADR reporting rates.
• Aagaard L.
• Strandell J.
• Melskens L.
• et al.
Global patterns of adverse drug reactions over a decade: analyses of spontaneous reports to VigiBaseTM.
Therefore, close review and analysis with triggers is a feasible way to address this obstacle, as it is an efficient, robust method.
• Hooper A.J.
• Tibballs J.
Comparison of a trigger tool and voluntary reporting to identify adverse events in a paediatric intensive care unit.
In addition, this method is practical and less laborious
• Rozich J.D.
• Resar R.K.
Adverse drug event trigger tool: a practical methodology for measuring medication related harm.
• Resar R.K.
• Rozich J.D.
• Simmonds T.
A trigger tool to identify adverse events in the intensive care unit.
compared with retrospective analysis of patient charts.
• Giordani F.
• Rozenfeld S.
• de Oliveira D.F.
• et al.
Surveillance of adverse drug events in hospitals: implementation and performance of triggers.
Regarding risk factors, studies show that nonelderly subjects and men often tend to be affected by ADRs detected by using triggers.
• Giordani F.
• Rozenfeld S.
• de Oliveira D.F.
• et al.
Surveillance of adverse drug events in hospitals: implementation and performance of triggers.
• Karpov A.
• Parcero C.
• Mok C.P.Y.
• et al.
Performance of trigger tools in identifying adverse drug events in emergency department patients: a validation study.
Varallo et al
• Varallo F.R.
• Costa M.A.
• Mastroianni P.C.
Potenciais interações medicamentosas responsáveis por internações hospitalares.
found that age is a protective factor for the occurrence of drug-induced harm. The investigators suggest that older people now receive greater care and health assistance, due to the physiological changes related to the aging process, which may favor the development of adverse effects. This scenario may explain why elderly patients had a lower frequency of ADRs in the present study.
A systematic review has shown that there may be differences between men and women in the occurrence of ADRs, depending on the therapeutic regimen used.
• Yu Y.
• Chen J.
• Li D.
• et al.
Systematic analysis of adverse event reports for sex differences in adverse drug events.
However, when using triggers for the detection of ADRs, no statistical differences were observed for these variables,
• Giordani F.
• Rozenfeld S.
• de Oliveira D.F.
• et al.
Surveillance of adverse drug events in hospitals: implementation and performance of triggers.
• Karpov A.
• Parcero C.
• Mok C.P.Y.
• et al.
Performance of trigger tools in identifying adverse drug events in emergency department patients: a validation study.
as confirmed in the present study.
Concerning causality assessment, Sam et al
• Sam A.T.
• Lian Jessica L.L.
• Parasuraman S.
A retrospective study on the incidences of adverse drug events and analysis of the contributing trigger factors.
conducted a screening of charts with triggers and found that 61% of the ADRs detected were classified as possible or probable after being imputed with the World Health Organization–Uppsala Monitoring Centre algorithm. In our study, the active participation of risk management in the selection of triggers and the assessment according to clinical judgment might explain the higher frequency of ADRs obtained as definite. We suggest therefore that choosing triggers in accordance with the epidemiologic/nosologic profile is an effective strategy to increase signal detection, improve risk communication, and contribute to patient safety.
Another advantage of the application of triggers arises from their ability to recognize multiple ADRs in a single patient. These tools can therefore be used to prevent iatrogenic events. However, it is necessary to know the confounding variables that can activate these triggers and that hinder causal association, to enhance and improve their performance in the early identification of ADRs.
We observed that confounding variables are generally related to the clinical condition of inpatients, which comprise the same limitations described for causal assessment related to spontaneous reporting.
• Macedo A.F.
• Marques F.B.
• Ribeiro C.F.
• Teixeira F.
Causality assessment of adverse drug reactions: comparison of the results obtained from published decisional algorithms and from the evaluations of an expert panel, according to different levels of imputability.
Poor-quality information in patient charts then decreases the benefit of trigger tools and hinders the recognition of confounding variables. As a consequence, safety report and causality assessment will be impaired.
• Macedo A.F.
• Marques F.B.
• Ribeiro C.F.
• Teixeira F.
Causality assessment of adverse drug reactions: comparison of the results obtained from published decisional algorithms and from the evaluations of an expert panel, according to different levels of imputability.
Health professionals should be encouraged to report ADRs to increase the detection of harm associated with drug use, as well as to identify risk factors to prevent them.
The wide PPV range of triggers demonstrated in several studies
• Rozenfeld S.
• Giordani F.
• Coelho S.
[Adverse drug events in hospital: pilot study with trigger tool].
• Giordani F.
• Rozenfeld S.
• de Oliveira D.F.
• et al.
Surveillance of adverse drug events in hospitals: implementation and performance of triggers.
• Roque K.E.
• Melo E.C.
Adjustment of evaluation criteria of adverse drug events for use in a public hospital in the State of Rio de Janeiro.
• Rozenfeld S.
• Chaves S.M.C.
• Reis LGC
• et al.
Drug adverse effects in a public hospital in Rio de Janeiro: pilot study.
• Franklin B.D.
• Birch S.
• Schachter M.
• Barber N.
Testing a trigger tool as a method of detecting harm from medication errors in a UK hospital: a pilot study.
• Carnevali L.
• Krug B.
• Amant F.
• et al.
Performance of the adverse drug event trigger tool and the global trigger tool for identifying adverse drug events: experience in a Belgian hospital.
• Nwulu U.
• Nirantharakumar K.
• Odesanya R.
• et al.
Improvement in the detection of adverse drug events by the use of electronic health and prescription records: an evaluation of two trigger tools.
also might be explained by retrospective chart review in addition to the epidemiology profile of the institution, as well as the patient’s characteristics, the specialty of the wards, the drugs standardized in the hospital, and the method applied to ADR detection.
• Franklin B.D.
• Birch S.
• Schachter M.
• Barber N.
Testing a trigger tool as a method of detecting harm from medication errors in a UK hospital: a pilot study.
• Carnevali L.
• Krug B.
• Amant F.
• et al.
Performance of the adverse drug event trigger tool and the global trigger tool for identifying adverse drug events: experience in a Belgian hospital.
Therefore, knowing confounding variables may improve strategies to conduct prospective follow-up of inpatients, preventing negative clinical outcomes in real time, and contributing to patient safety and institutional policies of risk management, as well as optimizing the effectiveness and safety of pharmacotherapy.
Guidance on safety indicators associated with triggers states that they should not be used as a benchmarking tool at the tertiary health care level.
• Rozich J.D.
• Resar R.K.
Adverse drug event trigger tool: a practical methodology for measuring medication related harm.
We suggest that each health institution should select the most appropriate triggers to identify drug-induced harm. For example, our data showed that patient transfers to higher health care levels or institutions were ineffective triggers for identifying ADRs in the hospital under study. This fact can be explained by the complexity of the institution (medium complexity), which does not provide clinical care for serious conditions that require the most advanced health technologies.
Regarding the events related to creatinine levels >1.2 mg/dL, an important limitation should be considered: acute kidney failure was considered when an increase of 0.5 mg/dL was observed in 2 subsequent measurements of creatinine levels.
• Bellomo R.
• Ronco C.
• Kellum J.A.
• et al.
Acute Dialysis Quality Initiative workgroup
Acute renal failure—definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group.
Therefore, rising creatinine levels, rather than creatinine levels >1.2 mg/dL, should be considered as a trigger when evaluating acute renal failure associated with drugs, even when a patient's creatinine level is <1.2 mg/dL. In fact, the original trigger tool, which was adapted in our study, listed a trigger of rising creatinine level.
• Rozich J.D.
• Resar R.K.
Adverse drug event trigger tool: a practical methodology for measuring medication related harm.
Furthermore, although the creatinine level >1.2 mg/dL trigger could be activated by several confounding factors, as observed in our study, it remains an important indicator for clinical assessment.
An international normalized ratio (INR) >6 demonstrated the best PPV. However, our finding may be underestimated because, in the sample analyzed, side effects related to anticoagulant drugs appeared at INR measurements <6. In several cases, physicians discontinued treatment when the INR rose above the therapeutic range of warfarin (2.0–4.0), due to the higher risk of bleeding and to avoid drug-related problems. Therefore, customization of the trigger list should be considered, mainly taking into account the nosology profile of the institution and the characteristics of the patient population in the hospital. In the hospital under study, a better parameter for the recognition of ADRs related to anticoagulant drugs could be INR >3.5.
Abrupt medication stops (PPV, 0.74) were usually detected in association with other triggers. The most frequent cases identified during the study were: (1) somnolence (discontinuation of psychotropic medications or insulin treatment); (2) worsening of kidney function (discontinuing diuretics and angiotensin-converting enzyme inhibitors); (3) changes in INR and clinical conditions related to bleeding (discontinuing heparin, enoxaparin, and warfarin); and (4) rash, which could characterize allergic cutaneous reactions (discontinuation of antibiotics, which were replaced by another different therapeutic class).
We noted that for patients with leukopenia (white blood cell count <3000 × 10⁶/L), the drugs responsible for the decrease in white blood cell count were not suspended, such as: acyclovir (n = 5), antiretroviral therapy (n = 2), prednisone (n = 1), azathioprine (n = 1), and azithromycin (n = 1). The discontinuation of antiretroviral therapy would only be justified by considering the CD4 lymphocyte count, which requires the association of antibiotic prophylaxis with sulfamethoxazole and trimethoprim.
Because 5 ADRs reported by health professionals were not detected with the trigger tool screening, spontaneous reporting and close review and analysis with triggers are complementary. Therefore, both should be used in association, as recommended by the World Health Organization,
• Pal S.N.
• Duncombe C.
• Falzon D.
• Olsson S.
WHO strategy for collecting safety data in public health programmes: complementing spontaneous reporting systems.
to improve risk communication. According to del Campo et al,
• del Campo C.B.
• Jimenez C.R.
• Colomer M.G.S.
• et al.
the active search for ADRs encourages interaction with other hospital services and promotes a habit of reporting among health professionals. Furthermore, spontaneous reporting is a more specific method for detecting ADRs because it enables imputation of a high degree of causality.
Strategies to encourage the reporting of drug-related problems by health professionals are needed to change their attitudes regarding postmarketing surveillance.
• Pagotto C.
• Varallo F.
• Mastroianni P.
Impact of educational interventions on adverse drug events reporting.
Furthermore, it is important to know the confounding variables that may decrease the performance of trigger tools, so that the search strategy for drug-related problems can be optimized according to the needs of health care, contributing to patient safety and to policies of risk management. Advanced multiprofessional collaboration, effective communication, adequate skills, and more systematic medication processes to increase medication safety should then be addressed in health care institutions.
• Härkänen M.
• Turunen H.
• Vehviläinen-Julkunen K.
Differences between methods of detecting medication errors: a secondary analysis of medication administration errors using incident reports, the global trigger tool method, and observations.
## Conclusions
Close review and analysis with triggers improved risk communication in pharmacovigilance, even when confounding variables for trigger detection were excluded: only 1 of the 356 definite ADRs had been spontaneously reported. The data suggest that assessment of the performance of each trigger should be conducted to: (1) determine the confounding variables related to the method; (2) select the most effective triggers for ADR detection within different institutions; and (3) establish new parameters (customization of the trigger list) to optimize the use of this tool in detecting and preventing drug-related problems.
## Conflicts of Interest
The authors have indicated that they have no conflicts of interest regarding the content of this article.
## AUTHOR CONTRIBUTION
Fabiana Rossi Varallo contributed to data collection, figure creation, literature search, data interpretation, and writing. Caroline Pagotto contributed to data collection and literature search. Tales Rubens de Nadai contributed to data interpretation. Carolina Dagli-Hernandez contributed to data interpretation, figure creation, and writing. Maria Teresa Herdeiro contributed to data interpretation and writing. Patricia de Carvalho Mastroianni contributed to study design, literature search, data interpretation, and writing.
## Acknowledgements
The authors thank the CAPES Foundation, Ministry of Education of Brazil, for the scholarship (PDSE grant no. 014301/2013-00). The authors would also like to thank the São Paulo Research Foundation (FAPESP) for financial support of this project (grant no. 2013/12681-2) and the Programa de Apoio ao Desenvolvimento Científico da Faculdade de Ciências Farmacêuticas da UNESP-PADC. We are also thankful to the Hospital Estadual Américo Brasiliense, which allowed its data to be collected.
## References
• Hazell L.
• Shakir S.A.W.
Under-reporting of adverse drug reactions : a systematic review.
Drug Saf. 2006; 29: 385-396
• Kilbridge P.M.
• Classen D.C.
Automated surveillance for adverse events in hospitalized patients: back to the future.
Qual Saf Health Care. 2006; 15: 148-149https://doi.org/10.1136/qshc.2006.018218
• Pal S.N.
• Duncombe C.
• Falzon D.
• Olsson S.
WHO strategy for collecting safety data in public health programmes: complementing spontaneous reporting systems.
Drug Saf. 2013; 36: 75-81https://doi.org/10.1007/s40264-012-0014-6
• Yun I.S.
• Koo M.J.
• Park E.H.
• et al.
A comparison of active surveillance programs including a spontaneous reporting model for phamacovigilance of adverse drug events in a hospital.
Korean J Intern Med. 2012; 27: 443-450https://doi.org/10.3904/kjim.2012.27.4.443
• Coleman J.J.
• McDowell S.E.
An agenda for UK clinical pharmacology.
Br J Clin Pharmacol. 2012; 73: 953-958https://doi.org/10.1111/j.1365-2125.2012.04245.x
• Gerritsen R.
• Dijkers F.
• et al.
Effectiveness of pharmacovigilance training of general practitioners: a retrospective cohort study in the Netherlands comparing two methods.
Drug Saf. 2011; 34: 755-762https://doi.org/10.2165/11592800-000000000-00000
• Macedo A.F.
• Marques F.B.
• Ribeiro C.F.
• Teixeira F.
Causality assessment of adverse drug reactions: comparison of the results obtained from published decisional algorithms and from the evaluations of an expert panel, according to different levels of imputability.
J Clin Pharm Ther. 2003; 28: 137-143
• Hooper A.J.
• Tibballs J.
Comparison of a trigger tool and voluntary reporting to identify adverse events in a paediatric intensive care unit.
Anaesth Intensive Care. 2014; 42: 199-206
• Call R.
• Burlison J.
• Robertson J.
• et al.
Adverse drug event detection in pediatric oncology and hematology patients: using medication triggers to identify patient harm in a specialized pediatric patient population.
J Pediatr. 2014; 165: 447-452https://doi.org/10.1016/j.jpeds.2014.03.033
• Pagotto C.
• Varallo F.
• Mastroianni P.
Impact of educational interventions on adverse drug events reporting.
Int J Technol Assess Health Care. 2013; 29: 410-417https://doi.org/10.1017/S0266462313000457
• Rozich J.D.
• Resar R.K.
Adverse drug event trigger tool: a practical methodology for measuring medication related harm.
Qual Saf Health Care. 2003; 12: 194-200
• Classen D.C.
• Resar R.
• Griffin F.
• et al.
“Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured.
Health Aff (Millwood). 2011; 30: 581-589https://doi.org/10.1377/hlthaff.2011.0190
• Rozenfeld S.
• Giordani F.
• Coelho S.
[Adverse drug events in hospital: pilot study with trigger tool].
Rev saúde pública. 2013; 47: 1102-1111
• Giordani F.
• Rozenfeld S.
• de Oliveira D.F.
• et al.
Surveillance of adverse drug events in hospitals: implementation and performance of triggers.
Rev Bras Epidemiol. 2012; 15: 455-467https://doi.org/10.1590/S1415-790X2012000300002
• Roque K.E.
• Melo E.C.
Adjustment of evaluation criteria of adverse drug events for use in a public hospital in the State of Rio de Janeiro.
Rev Bras Epidemiol. 2010; 13: 607-619https://doi.org/10.1590/S1415-790X2010000400006
• Rozenfeld S.
• Chaves S.M.C.
• Reis LGC
• et al.
Drug adverse effects in a public hospital in Rio de Janeiro: pilot study.
Rev Saude Publica. 2009; 43: 887-890https://doi.org/10.1590/S0034-89102009005000051
• Franklin B.D.
• Birch S.
• Schachter M.
• Barber N.
Testing a trigger tool as a method of detecting harm from medication errors in a UK hospital: a pilot study.
Int J Pharm Pract. 2010; 18: 305-311https://doi.org/10.1111/j.2042-7174.2010.00058.x
• Carnevali L.
• Krug B.
• Amant F.
• et al.
Performance of the adverse drug event trigger tool and the global trigger tool for identifying adverse drug events: experience in a Belgian hospital.
Ann Pharmacother. 2013; 47: 1414-1419https://doi.org/10.1177/1060028013500939
• Nwulu U.
• Nirantharakumar K.
• Odesanya R.
• et al.
Improvement in the detection of adverse drug events by the use of electronic health and prescription records: an evaluation of two trigger tools.
Eur J Clin Pharmacol. 2013; 69: 255-259https://doi.org/10.1007/s00228-012-1327-1
• Coppet J.
• Beivin J.
Bias and confounding in pharmacoepidemiology.
in: Textbook of Pharmacoepidemiology. 3rd ed. John Wiley & Sons, Chichester, 2000: 261-275https://doi.org/10.1002/9781118707999.ch16
• Greenland S.
• Morgenstern H.
Confounding in health research.
Annu Rev Public Health. 2001; 22: 189-212https://doi.org/10.1146/annurev.publhealth.22.1.189
• World Health Organization
International drug monitoring: the role of national centres. Report of a WHO meeting.
World Health Organ Tech Rep Ser. 1972; 498: 1-25
• Edwards I.R.
• Aronson J.K.
Adverse drug reactions: definitions, diagnosis, and management.
Lancet. 2000; 356: 1255-1259
• Fletcher R.H.
• Fletcher S.H.
Clinical Epidemiology: The Essentials.
Lippincott Williams & Wilkins, 2005
• Miguel A.
• Azevedo L.F.
• Araújo M.
• Pereira A.C.
Frequency of adverse drug reactions in hospitalized patients: a systematic review and meta-analysis.
Pharmacoepidemiol Drug Saf. 2012; 21: 1139-1154
• Angamo M.T.
• Chalmers L.
• Curtain C.M.
• Bereznicki L.R.
Adverse-Drug-Reaction-Related Hospitalisations in Developed and Developing Countries: A Review of Prevalence and Contributing Factors.
Drug Saf. 2016; 39: 847-857
• Karpov A.
• Parcero C.
• Mok C.P.Y.
• et al.
Performance of trigger tools in identifying adverse drug events in emergency department patients: a validation study.
Br J Clin Pharmacol. 2016; 82: 1048-1057https://doi.org/10.1111/bcp.13032
• Olsson S.
Overview of pharmacovigilance in resource limited settings: challenges and opportunities.
Clin Ther. 2013; 35: e122-e123https://doi.org/10.1016/j.clinthera.2013.07.379
• Aagaard L.
• Strandell J.
• Melskens L.
• et al.
Global patterns of adverse drug reactions over a decade: analyses of spontaneous reports to VigiBaseTM.
Drug Saf. 2012; 35: 1171-1182https://doi.org/10.2165/11631940-000000000-00000
• Resar R.K.
• Rozich J.D.
• Simmonds T.
A trigger tool to identify adverse events in the intensive care unit.
Jt Comm J Qual Patient Saf. 2006; 32: 585-590
• Varallo F.R.
• Costa M.A.
• Mastroianni P.C.
Potenciais interações medicamentosas responsáveis por internações hospitalares.
Rev Ciências Farm Básica e Apl. 2013; 34: 79-85
• Yu Y.
• Chen J.
• Li D.
• et al.
Systematic analysis of adverse event reports for sex differences in adverse drug events.
Sci Rep. 2016; 6: 24955https://doi.org/10.1038/srep24955
• Sam A.T.
• Lian Jessica L.L.
• Parasuraman S.
A retrospective study on the incidences of adverse drug events and analysis of the contributing trigger factors.
J Basic Clin Pharm. 2015; 6: 64-68https://doi.org/10.4103/0976-0105.152095
• Bellomo R.
• Ronco C.
• Kellum J.A.
• et al.
• Acute Dialysis Quality Initiative workgroup
Acute renal failure—definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group.
Crit Care. 2004; 8: R204-R212https://doi.org/10.1186/cc2872
• del Campo C.B.
• Jimenez C.R.
• Colomer M.G.S.
• et al. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21911829710006714, "perplexity": 20092.971803582044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00826.warc.gz"} |
https://physics.meta.stackexchange.com/questions/7299/ethics-redirecting-homework-solution-seeking-questions-to-a-website-built-for/7311

# Ethics - Redirecting homework solution seeking questions to a website built for that purpose
My question is: Is it acceptable to notify users who ask "solve this for me" questions of a site built specifically for that purpose?
What I would like to do is post a comment on "solve this problem" questions referencing how to get help with homework on PSE the right way, and referencing the site where they may find and are encouraged to ask for a solution.
I believe it could be good for PSE by redirecting those questions from the site, and it would be good for the site by giving it traffic it would not otherwise see. I wanted to ask and make sure before doing so however, because I have built the site and do not believe it would be ethical to promote it here without the permission of the PSE community.
Here is some background about the project:
I'm a physics and computer science major from Indiana that created a website structured a bit like a stack exchange site (Question answer format, votes, comments) with a sole focus on generating and providing solutions to STEM major problems you might find in textbooks or in homework.
I spent a lot of time scrounging the web for solutions to problems for a variety of reasons over my year of physics courses. I am a believer that seeing a problem worked can be a great way to learn efficiently, to check your work on problems without answers in the book, and to catch details you may have missed, such as spending an hour on a problem only to realize you forgot that cm^2 = 10^-4 m^2, not 10^-2 m^2.
The website is ToughSTEM.com
What do you think?
• Yes, that seems quite reasonable to guide those who seek answers to their homework or check-my-work problems to such a website which is built specifically for that purpose. However, just guiding to one website would be a little discrimination. I would deem it apt enough that there must be a link provided in the reason cited for closing the question to this meta question:[contd.] – user36790 Dec 12 '15 at 4:32
• My question was closed on Phys.SE. Can you recommend me another internet site where my question might be on-topic? & then let OP decide what site he deems best to post his query. – user36790 Dec 12 '15 at 4:33
• @user36790 that should probably be an answer – David Z Dec 12 '15 at 9:30
• @user36790: Note that Vlad's posted an answer on that thread and you commented on it. – Kyle Kanos Dec 12 '15 at 11:27
• @Kyle Kanos: So,....? – user36790 Dec 12 '15 at 11:34
• @user36790: Just seems odd to me that you've directed him to a thread where he's already posted an answer that you have also commented on. – Kyle Kanos Dec 12 '15 at 11:42
• @Kyle Kanos: So, what? Have you got the point I'm referring to? I just said that since it contains lists of sites beneficial to OPs for their homework questions, it might be worth linking that meta question along with the conventional closing reason homework-like questions should ask about a specific physics concept ..... How is it even related to my comments there? – user36790 Dec 12 '15 at 11:52
My observations are that it is actually rare for someone to post more than the copy & paste question & rebuttal that "It's not homework!" or "I just need the answer!" before they leave for good1.
In the event that someone does post more than that, I think leaving a comment containing a link to the Meta post My question was closed on Phys.SE. Can you recommend me another internet site where my question might be on-topic?, where you've already posted a link to your site, would suffice for those users who are really begging for help. A message I've saved on my Auto-comment (but almost never used) is something along the lines of,
Physics Stack Exchange isn't a homework help site, but if you do want that kind of help, you can take a look at this thread for a list of free online homework help resources.
As an aside, I would say that if you constantly promoted just your site on HW questions, it probably would end up being flagged as promotional content which can get the comment deleted and, if you're persistent enough, give you a suspension. I'm sure you would love traffic to your site, but whether risking a suspension here for that is worth it is up to you.
1 Whether this is a good or bad thing is a different story and I'd rather not derail this question on that.
• I think that might have been the fastest downvote on a Meta post I've received. – Kyle Kanos Dec 12 '15 at 11:45
• I don't think it deserves downvote. Probably he might be Barry Allen:P Cancelled the negative. – user36790 Dec 12 '15 at 11:47
• @user36790 well we would prefer you upvote things because they are good, not to cancel out other people's votes ;-) – David Z Dec 12 '15 at 11:54
• @David Z: I read, I liked, I voted, it cancelled. – user36790 Dec 12 '15 at 11:55
I think that for the occasional homework question, if it remains without hints in the answers or has no answers at all, it would be OK to refer to another site. The policy for homework here is that the site will help if effort is shown, but not to the point of solving the problem for the asker. But I am afraid adding your site to every homework-tagged question will not be acceptable.
This referencing of another site is a problem for me too, because comments of mine have been deleted when referencing a theoretical site where I know many high-level theorists are involved. I try to do that when, after some time, there are no answers or no adequate answers, as it is a pity not to point somebody to a resource that exists.
• Thanks for your input. I agree that if the asker is finding substantial response on PSE it would be inappropriate to refer them to an alternative resource. It should be left for questions that really do not have a place on PSE. – Ulad Kasach Dec 19 '15 at 0:35
• Seems to me that the first sentence of the HW FAQ summary is in contradiction with your second sentence. – Kyle Kanos Dec 19 '15 at 14:19
• @KyleKanos I am actually talking from observation. After all, your link is one moderator's summary. If effort is shown, HW is not deleted. – anna v Dec 19 '15 at 15:42
• Policies are what are written down, not what people do. Note also that it's not "one moderators summary" but the summary of what was decided by the community that was written up by a moderator. – Kyle Kanos Dec 19 '15 at 15:49
• @KyleKanos What community? I was not asked – anna v Dec 19 '15 at 16:12
• The Physics Stack Exchange community. And yes you were asked, as was every other member on this site, as you can clearly see from the list of questions involving HW on Meta. Perhaps you simply ignored those discussions because you weren't called out by name explicitly, but you most certainly were asked alongside everyone else. – Kyle Kanos Dec 19 '15 at 16:16
• @KyleKanos I am sorry, that is not being asked. We are asked to vote for moderators, for example, and that is well advertised. – anna v Dec 19 '15 at 16:31
• @annav: Yes it is being asked, as it is the purpose of Meta. Questions that are popular on Meta (and most homework policy related questions become popular), they get posted on the front page as you can clearly see in this screenshot. That you ignored it does not mean it isn't well advertised or not asking you. It simply means you weren't paying attention to it. – Kyle Kanos Dec 19 '15 at 16:36
The other solution is to use Area 51.
This may help to move a question to another site with another policy, one accepting both homework questions and answers of another kind.
https://qiskit.org/documentation/locale/de_DE/stubs/qiskit.circuit.library.IntegerComparator.clbits.html

IntegerComparator.clbits
property IntegerComparator.clbits
Returns a list of classical bits in the order that the registers were added. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3512243926525116, "perplexity": 2596.4814450346644}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107884755.46/warc/CC-MAIN-20201024194049-20201024224049-00225.warc.gz"} |
http://www.jzus.zju.edu.cn/article.php?doi=10.1631/jzus.A1300267
CLC number: TV125
On-line Access: 2014-03-04
Revision Accepted: 2014-01-03
Crosschecked: 2014-02-21
Cited: 2
Journal of Zhejiang University SCIENCE A 2014 Vol.15 No.3 P.219-230 http://doi.org/10.1631/jzus.A1300267
Evaluation of a multi-site weather generator in simulating precipitation in the Qiantang River Basin, East China*
Author(s): Yue-ping Xu, Chong Ma, Su-li Pan, Qian Zhu, Qi-hua Ran
Affiliation(s): Institute of Hydrology and Water Resources, Civil Engineering, Zhejiang University, Hangzhou 310058, China
Corresponding email(s): ranqihua@zju.edu.cn
Key Words: Climate change, Change factor method (CFM), Multi-site weather generator, Qiantang River Basin
Yue-ping Xu, Chong Ma, Su-li Pan, Qian Zhu, Qi-hua Ran. Evaluation of a multi-site weather generator in simulating precipitation in the Qiantang River Basin, East China[J]. Journal of Zhejiang University Science A, 2014, 15(3): 219-230.
Abstract:
Recent years have seen a surge in assessments of the potential impacts of climate change. As one of the most important tools for generating synthetic hydrological model inputs, weather generators play an important role in climate change impact analysis for water management. However, most weather generators, such as the statistical downscaling model (SDSM) and the Long Ashton Research Station weather generator (LARS-WG), are designed for single-site data generation. Considering the significance of spatial correlations in hydro-meteorological data, multi-site weather data generation becomes a necessity. In this study we evaluate the performance of a new multi-site stochastic model, the geo-spatial temporal weather generator (GiST), in simulating precipitation in the Qiantang River Basin, East China. The correlation matrix, precipitation amounts, and occurrences of observed and GiST-generated data are first compared for the evaluation. We then use the GiST model combined with the change factor method (CFM) to investigate future changes of precipitation (2071–2100) in the study area using one global climate model, Hadgem2_ES, and an extreme emission scenario, RCP 8.5. The results show that the precipitation amounts and occurrences simulated by GiST match their historical counterparts reasonably well. The correlation coefficients between simulated and historical precipitation also show good consistency. Compared with the baseline period (1961–1990), precipitation in the future period (2071–2100) will probably increase at high-elevation stations and decrease at the other stations. This study implies the potential of the GiST stochastic model for investigating the impact of climate change on hydrology and water resources.
## 1. Introduction
Global warming caused by increasing greenhouse gas emissions has become evident. The global average surface temperature increased by 0.74 °C over the 100-year period from 1906 to 2005, and global average precipitation increased by 2% within the same period (Ju et al., ). Because of its heavy dependence on agriculture, the rapid development of its economy, and urbanization, China is very sensitive to climate change. As a result, assessment of the impacts of climate change will play an important role in making robust decisions. Many researchers have worked on investigating the impact of climate change on hydrology and water resources in China (Zhai et al., ; Wang et al., ; Xu et al., ).
General circulation models (GCMs) are common tools for projecting future climate change, but there are scale gaps between the outputs of GCMs and basin-scale meteorological data. Much work has therefore been done to develop various downscaling approaches (Wilby and Wigley, ; Fowler et al., ; Salvi et al., ). Dynamic downscaling and statistical downscaling are the two main approaches for downscaling GCM outputs to basin- or site-scale data. Dynamic downscaling preserves spatial correlation as well as physically plausible relationships between variables, but it is often computationally intensive (Xu et al., ). Statistical downscaling approaches, including regression-based methods (Schoof and Pryor, ), weather classification methods (Khan et al., ), and stochastic weather generators (Semenov and Barrow, ), have been developed to simulate quantitative relationships between large-scale atmospheric variables and local surface variables. Weather generators, one of the statistical downscaling approaches, can create synthetic daily weather data for long periods of time and are important tools for creating synthetic inputs for hydrological models.
Nowadays, weather generators play an important role in the analysis of the impact of climate change (Semenov and Barrow, ; Kilsby et al., ). There are some well-known weather generators, such as the Long Ashton Research Station weather generator (LARS-WG) (Racsko et al., ), the climate generator (CLIGEN) (Nicks and Gander, ), and the statistical downscaling model (SDSM) (Wilby and Dawson, ). They all have advantages, but are single-site based. When hydrological models need multi-site inputs, or when spatial correlation is important for hydrologic simulation, a multi-site weather generator is a necessity. Several multi-site weather simulation methods have been developed. Wilks () was the first to present a method that reproduced the main statistics of multi-site precipitation data; this approach drives the stochastic weather generator with serially independent and spatially correlated random numbers, and uses the correlation matrices of both precipitation occurrences and amounts to generate precipitation data (Brissette et al., ). The main defect of this method is that the correlations of the simulated synthetic data are often weaker than their observed counterparts. A multi-site hidden Markov chain model is one of the leading methods for generating synthetic precipitation data with spatial correlations. In this model the climate is divided into two states (wet or dry) and the persistence of a state is decided by transition probabilities (Thyer and Kuczera, ). The multi-site precipitation occurrence is decided by the weather state, and the precipitation amount is based on both exogenous predictors and weather states. However, this model requires a rather complicated parameter-calibration process, and its accuracy is hard to evaluate. Alternatively, the spatial moving average process approach (Khalili et al., ), the K-nearest-neighbor approach (Mehrotra et al., ), and the Schaake Shuffle approach (Clark et al., ) are often used in weather generation.
All of the above methods have their own advantages and disadvantages. Their processes are normally very complex and hard to implement.
In this paper, a relatively new method, the geo-spatial temporal weather generator (GiST) stochastic model, is used to generate synthetic weather data, and its performance is evaluated in the Qiantang River Basin, East China. The GiST model was developed by Baigorria and Jones (). It not only reproduces synthetic precipitation data with an appropriate temporal and spatial structure of the observed precipitation, but is also easy to implement. The GiST model is then combined with the change factor method (CFM) to downscale precipitation from one GCM model, Hadgem2_ES, and one emission scenario, RCP 8.5, to demonstrate the usefulness of the GiST model.
## 2. Data and methods
### 2.1. Study area
The Qiantang River Basin, located in the east of China, covers an area of 55 600 km2, including 48 000 km2 in Zhejiang Province. The catchment lies between longitudes 117°E and 122°E and latitudes 28°N and 31°N. The main stream, the Qiantang River, with a length of 688 km, is the longest river in Zhejiang Province. The river rises at Xiuning in Anhui Province and empties into the Donghai Sea through Hangzhou Bay. Most of the basin is dominated by a sub-tropical humid monsoon climate. The mean annual precipitation in the Qiantang River Basin is close to 1600 mm, and the mean annual temperature is 17 °C (Xu et al., ). Fig. 1 shows the location of the study area and the meteorological stations of the sub-basins. Table 1 shows details of the relevant meteorological stations.
Fig.1
Location of the study area and meteorological stations used in the study
#### Table 1
Information on the stations in the Qiantang River Basin from 1961 to 1990
| Station | Longitude (°E) | Latitude (°N) |
| --- | --- | --- |
| Hangzhou | 120°10′ | 30°14′ |
| Huangshan | 118°09′ | 30°08′ |
| Jinhua | 119°39′ | 29°07′ |
| Quzhou | 118°54′ | 29°00′ |
| Shengxian | 120°49′ | 29°36′ |
| Tianmushan | 119°25′ | 30°21′ |
| Tunxi | 118°17′ | 29°43′ |
### 2.2.1. Multi-site weather generator
GiST, a stochastic model developed by Baigorria and Jones (), is used to generate spatially and temporally correlated daily weather data in this study. The main difference between GiST and other traditional weather generators is that the spatial structure of the weather data is considered. The spatial structure is represented by Pearson’s correlation (ρij), which is calculated by $${\rho _{ij}} = \frac{1}{\eta }\frac{{\sum\limits_{t = 1}^\eta {({\chi _{it}} - {\mu _i})({\chi _{jt}} - {\mu _j})} }}{{{\sigma _i}{\sigma _j}}}$$, where η is the total number of pair-wise daily observations, χit and χjt are the pair-wise observations on day t at locations i and j, μi and μj are the means of the daily values at locations i and j, respectively, and σi and σj are the standard deviations of the daily observations.
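For all station pairs at once, this pairwise Pearson correlation can be sketched in a few lines of NumPy; the synthetic daily series below (gamma parameters, station count) are only placeholders for the observed station records:

```python
import numpy as np

# Hypothetical daily precipitation matrix: rows = days, columns = stations.
rng = np.random.default_rng(42)
daily = rng.gamma(shape=0.6, scale=8.0, size=(365, 7))

# Pairwise Pearson correlations between station series, matching the
# definition above: mean-centred cross-products scaled by the std. devs.
centred = daily - daily.mean(axis=0)
cov = centred.T @ centred / daily.shape[0]
std = daily.std(axis=0)
rho = cov / np.outer(std, std)

# Cross-check against NumPy's built-in correlation of columns.
assert np.allclose(rho, np.corrcoef(daily, rowvar=False))
```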
Two steps are included to generate precipitation occurrences in the GiST model. The first step is to calculate parameters and initial conditions, including Pearson’s correlation matrix, the Euclidean N-correlation distance, two-state orthogonal Markov transitional probabilities, and the spatially correlated total number of monthly precipitation occurrences at each location. The second step is to generate the spatially and temporally correlated precipitation occurrences, including resampling, iteratively ordering the total block of daily generated values in a month for the two most associated locations and using the two-state orthogonal Markov transitional probabilities to generate precipitation occurrences for the other locations (Baigorria and Jones, ).
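As a rough illustration of the temporal part of the occurrence process, a single site can be sketched as a two-state (wet/dry) Markov chain; the transition probabilities below are made-up values, and the spatial coupling through correlated random numbers and resampling that GiST adds is omitted:

```python
import numpy as np

def markov_occurrence(p_wet_given_dry, p_wet_given_wet, n_days, rng):
    """Generate a wet(1)/dry(0) daily sequence from a two-state Markov chain.

    Single-site step only; GiST additionally couples sites through
    spatially correlated random numbers and iterative resampling.
    """
    occ = np.zeros(n_days, dtype=int)  # day 0 starts dry
    for t in range(1, n_days):
        p_wet = p_wet_given_wet if occ[t - 1] == 1 else p_wet_given_dry
        occ[t] = int(rng.random() < p_wet)
    return occ

rng = np.random.default_rng(0)
seq = markov_occurrence(0.3, 0.6, 365, rng)
```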
Precipitation amounts are generated at the locations where precipitation occurs. The equation for generating precipitation amounts is given as follows (Baigorria and Jones, ): $${R_{\text{m}}} = r_{gam}^*{\beta _{\text{m}}}{\text{ln}}\left[ {\Gamma ({\alpha _{\text{m}}})} \right]$$, where $$r_{gam}^*$$ is a vector of spatially correlated random numbers following a gamma distribution, and α and β are the shape and scale parameters of the gamma function (Γ).
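A minimal sketch of gamma-distributed amount generation on wet days (the shape/scale values and the wet-day mask are illustrative, and the spatial correlation of the random draws is again omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.7, 9.0        # illustrative shape/scale for daily rainfall (mm)
wet = rng.random(365) < 0.4   # hypothetical wet-day mask for one site

# On wet days, draw amounts from a Gamma(alpha, beta) distribution,
# the same family GiST uses for precipitation amounts; dry days get 0.
amounts = np.where(wet, rng.gamma(alpha, beta, size=365), 0.0)
```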
### 2.2.2. Change factor method (CFM)
The CFM, or delta change factor method, is combined with GiST to obtain climate change projections in the future period 2071–2100 and to evaluate the future performance of GiST. CFM is a very common method for estimating climate change (Anandhi et al., ). Although more complicated methods exist, CFM is still widely applied in many studies.
CFMs are usually categorized by temporal scale, mathematical formulation, and number of change factors. The temporal scale means the method can be applied at different time scales, e.g., daily, monthly, seasonal, or yearly; higher frequencies usually give results of lower reliability. In this study, the monthly scale is adopted. There are two mathematical formulations of the CFM, additive and multiplicative, and different formulations are used for different variables. For example, an additive change factor (CF) is often used for temperature and a multiplicative CF for precipitation (Anandhi et al., ); a multiplicative CF is also used for the standard deviation of temperature.
In this study, the following equation is used to calculate the CFs of precipitation: $${\text{C}}{{\text{F}}_{{\text{mul}}}}{\text{ = }}\frac{{\overline {{\text{GC}}{{\text{M}}_{\text{f}}}} }}{{\overline {{\text{GC}}{{\text{M}}_{\text{b}}}} }}$$, where $$\overline {{\text{GC}}{{\text{M}}_{\text{b}}}}$$ and $$\overline {{\text{GC}}{{\text{M}}_{\text{f}}}}$$ represent the mean values of GCM outputs from the baseline and for future periods, respectively, which can be calculated by $$\overline {{\text{GC}}{{\text{M}}_{\text{b}}}} {\text{ = }}\frac{{\sum\limits_{i = 1}^{{N_{\text{b}}}} {{\text{GC}}{{\text{M}}_{{{\text{b}}_i}}}} }}{{{N_{\text{b}}}}}$$, $$\overline {{\text{GC}}{{\text{M}}_{\text{f}}}} {\text{ = }}\frac{{\sum\limits_{i = 1}^{{N_{\text{f}}}} {{\text{GC}}{{\text{M}}_{{{\text{f}}_i}}}} }}{{{N_{\text{f}}}}}$$, where N b and N f are the numbers of values during the baseline period and the future period, respectively.
To obtain climate change projections for the future period 2071–2100, the parameters of the GiST are then adjusted using CFs calculated by Eq. (3). Details of adjustments can be found in (Qian et al., ).
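A minimal sketch of the multiplicative, monthly-scale CFM described above, with made-up GCM series (here the "future" is constructed to be exactly 10% wetter than the baseline, so every monthly CF is 1.1):

```python
import numpy as np

def monthly_change_factors(gcm_baseline, gcm_future, months):
    """Multiplicative CFs per calendar month: mean(future) / mean(baseline).

    `months` gives the calendar month (1-12) of each entry; both GCM
    series are assumed aligned with it. All inputs are illustrative.
    """
    cf = np.ones(12)
    for m in range(1, 13):
        sel = months == m
        cf[m - 1] = gcm_future[sel].mean() / gcm_baseline[sel].mean()
    return cf

rng = np.random.default_rng(2)
months = np.tile(np.arange(1, 13), 30)          # 30 years of monthly values
base = rng.gamma(2.0, 60.0, size=months.size)   # baseline monthly precip (mm)
future = base * 1.1                             # hypothetical wetter future
cf = monthly_change_factors(base, future, months)
scaled = base * cf[months - 1]                  # apply the CFs to the baseline
```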
### 2.3. Future climate change scenario and GCM
The newest representative concentration pathways (RCPs) emission scenarios are used in this study. Each RCP is named after the level of radiative forcing, or overall warming power of human activities, expected in 2100. Table 2 shows the details of four RCPs.
#### Table 2
Information of four RCPs
| RCP | Radiative forcing in 2100 (W/m2) | CO2 concentration in 2100 |
| --- | --- | --- |
| RCP 2.6 | 3.0 | 490×10−6 |
| RCP 4.5 | 4.5 | 650×10−6 |
| RCP 6.0 | 6.0 | 850×10−6 |
| RCP 8.5 | 8.5 | 1370×10−6 |
Future climate data come from one of the Coupled Model Intercomparison Project phase 5 (CMIP5) models, Hadgem2-ES. Hadgem2-ES is short for the Hadley Centre Global Environmental Model, version 2, with an added Earth-system component, so it represents more physical elements of the climate system than air, sunlight, and water alone. Hadgem2-ES has a grid of 192 evenly spaced points from east to west and 144 evenly spaced points from north to south; a box defined by about 1° of latitude and 1° of longitude represents an area of roughly 100 km by 200 km. The atmosphere is divided into 38 unequal levels, and the level nearest the land and ocean surface is about 20 m deep (Jones et al., ).
For evaluation purposes, one extreme emission scenario, RCP 8.5, is used in this study for illustration. The RCP 8.5 scenario describes a future world with the largest population, a low rate of technological innovation, slow energy development, high emissions, and no policy of adaptation to climate change. Its radiative forcing can be found in Table 2.
## 3. Results
### 3.1. Evaluation results of multi-site weather generator
To evaluate the performance of the GiST model in the Qiantang River Basin, Pearson’s correlations and the precipitation amounts and occurrences of the observed and GiST-generated data are computed. Pearson’s correlation is used in this study to represent the spatial correlations among the sites in the basin. Fig. 2 shows Pearson’s correlations of observed and generated daily and monthly precipitation among all pairs of locations in the baseline period (1961–1990). The solid line is the 45° line. Fig. 2a shows that the correlations of daily precipitation are underestimated. For monthly precipitation (Fig. 2b), Pearson’s correlations of the generated data are close to those of the observed data at most stations. This underlines the fact that GiST reproduces the spatial correlations of the region well on a monthly scale. On a daily scale, although underestimated, the correlations are still reasonably simulated.
Fig.2
Pearson’s correlations of daily precipitation (a) and monthly precipitation (b) in the baseline period (1961–1990)
Fig. 3 shows the observed and generated annual precipitation amounts in the baseline period. From Fig. 3, it can be concluded that at Shengxian, Tianmushan, and Tunxi the annual precipitation amounts are very well simulated, with all errors smaller than 5%, while at Hangzhou, Huangshan, and Jinhua the precipitation amounts are underestimated and less well simulated, with the largest error at Hangzhou.
Fig.3
Observed and generated annual precipitation amount in the baseline period (1961–1990)
A comparison of observed and generated monthly precipitation amounts in the baseline period (1961–1990) is presented in Fig. 4. Precipitation amounts at Quzhou, Shengxian, Tianmushan, and Tunxi are well simulated. Precipitation amounts are slightly underestimated only in summer. At Hangzhou, Huangshan and Jinhua stations, however, the precipitation amounts are less well simulated. Particularly at Hangzhou, large errors can be found. Such results may imply that the Gamma distribution fails to model the precipitation amounts at these three stations. In general, it can be concluded that the precipitation amounts at these stations are slightly underestimated.
Fig.4
Comparison of observed and generated monthly precipitation amounts in the baseline period (1961–1990) at Hangzhou (a), Huangshan (b), Jinhua (c), Quzhou (d), Shengxian (e), Tianmushan (f), and Tunxi (g)
Fig. 5 shows the comparison between observed and generated monthly precipitation occurrences in the baseline period. It can be observed that the monthly precipitation occurrences are well simulated at stations Huangshan, Jinhua, and Shengxian. At the other stations, the occurrences are also reasonably simulated except at Hangzhou in winter. At all stations, the precipitation occurrences are slightly underestimated.
Fig.5
Comparison of simulated and observed monthly precipitation occurrences at Hangzhou (a), Huangshan (b), Jinhua (c), Quzhou (d), Shengxian (e), Tianmushan (f), and Tunxi (g)
Compared with precipitation occurrences, the simulation of precipitation amounts is better using the GiST model.
### 3.2. Future precipitation in 2071–2100
Future precipitation from one GCM Hadgem2-ES and one emission scenario RCP 8.5 is downscaled using the GiST stochastic model combined with the CFM. Fig. 6 shows Pearson’s correlations of the baseline generated and future generated daily and monthly precipitation among all pairs of locations. It can be found that the Pearson’s correlations are well preserved on a daily scale. However, the correlations on a monthly scale are somewhat underestimated in the future period. The changes of monthly spatial correlations are mainly caused by the CFM which is implemented on a monthly scale.
Fig.6
Pearson’s correlations of daily precipitation (a) and monthly precipitation (b) in the future period
Fig. 7 shows the annual precipitation amounts in the baseline period (generated) and the future period. It is found that there are slight increases at Quzhou, Tianmushan, and Tunxi and decreases at Hangzhou, Huangshan, Jinhua, and Shengxian. However, such changes are not significant. The largest decrease occurs at Hangzhou, reaching 11%.
Fig.7
Future and baseline generated annual precipitation amounts
Fig. 8 shows the comparison of baseline and future generated monthly precipitation amounts. It can be found that, except at Hangzhou, the changes in monthly precipitation amounts are relatively small. At Hangzhou, large changes can be found in May and September. Slight decreases of precipitation can be found at Huangshan, Jinhua and Shengxian. Increases can be found at Quzhou, Tianmushan and Tunxi.
Fig.8
Comparison of baseline and future generated monthly precipitation amounts at Hangzhou (a), Huangshan (b), Jinhua (c), Quzhou (d), Shengxian (e), Tianmushan (f), and Tunxi (g)
Fig. 9 (p.227) shows the comparison of baseline and future generated precipitation occurrences at seven stations. It is found that, compared to the baseline period, the precipitation occurrences at all stations remain more or less stable. This indicates that the downscaling approach (GiST combined with the change factor model) preserves the precipitation occurrences very well. This is understandable since the CFM was used on a monthly scale and only affected the precipitation amounts at seven stations.
Fig.9
Comparison of baseline and future generated precipitation occurrences at Hangzhou (a), Huangshan (b), Jinhua (c), Quzhou (d), Shengxian (e), Tianmushan (f), and Tunxi (g)
To examine the future changes of precipitation in 2071–2100 more closely, Fig. 10 (p.228) shows the relative changes of monthly precipitation for Hadgem2_ES under RCP 8.5 at the seven stations. Fig. 10 shows more clearly than Fig. 8 that monthly precipitation decreases in most months at Hangzhou, Huangshan, Jinhua, and Shengxian. The largest decrease is found at Hangzhou in May and approaches 30%. The decreases at the other three stations are much smaller than at Hangzhou; most are within 12%. At Quzhou, Tianmushan, and Tunxi, increases can be found in many months; in March in particular, the increase reaches 20% at Tunxi. This figure implies that stations at relatively high elevations, like Tunxi and Tianmushan, often experience increases in precipitation, while stations on the plains often experience decreases. Among these stations, Hangzhou (city) has the largest population and the most economic development. A large fall in precipitation in the future may add more stress to the water shortage problem that already exists.
Fig.10
Relative changes of monthly precipitation in 2071–2100 for Hadgem2_ES under RCP 8.5 at Hangzhou (a), Huangshan (b), Jinhua (c), Quzhou (d), Shengxian (e), Tianmushan (f), and Tunxi (g)
## 4. Conclusions and discussion
The main purpose of this paper is to evaluate the performance of a new multi-site weather generator GiST in the Qiantang River Basin, East China. Future changes in precipitation under RCP 8.5 and Hadgem2_ES for the future period 2071–2100 are also projected to illustrate the usefulness of the multi-site weather generator. The spatial structure of weather information for the given region was considered in this study. The final results indicate that the multi-site weather generator can model the spatial correlations of precipitation appropriately although slight underestimation for the daily correlations and slight overestimation for the monthly correlations can still be found. The results of this study can be used for hydrological modeling or providing implications for water resources management and extreme event risk assessment. This study also illustrates the potential of GiST applied to investigation of the impact of climate change on hydrology and water resources.
At some stations, like Hangzhou, the GiST multi-site weather generator is found to underestimate the monthly precipitation amount. This is probably because the Gamma distribution adopted in the multi-site weather generator may fail to model the precipitation amount at Hangzhou. It is therefore proposed that a more suitable probability distribution be adopted to further improve the accuracy of precipitation generated by the GiST model. Moreover, the number of stations used in the orthogonal Markov chains may affect the final results, although this is probably constrained by the number of parallel observations at all stations.
Only one GCM and one emission scenario were used in this study to obtain future precipitation projections in 2071–2100, for illustration of the usefulness of the multi-site weather generator. It must be kept in mind that results based on a single GCM and emission scenario carry considerable uncertainty. The uncertainty in climate change impact analysis often originates from GCM structures, emission scenarios, downscaling approaches, and impact analysis models (Wilby and Harris, ; Teng et al., ; Xu et al., ). A formal uncertainty analysis is therefore necessary before climate change impact analysis results are finally used in water management.
## Acknowledgements
We also thank Dr. Guillermo A. BAIGORRIA of the University of Florida, USA for providing the GiST model and the National Climate Center of China Meteorological Administration for providing meteorological data for the Qiantang River Basin.
* Project supported by the International Science & Technology Cooperation Program of China (No. 2010DFA24320), and the National Natural Science Foundation of China (Nos. 51379183 and 50809058)
## References
[1] Anandhi, A., Frei, A., Pierson, D.C., 2011. Examination of change factor methodologies for climate change impact assessment. Water Resources Research, 47(3):W03501
[2] Baigorria, G.A., Jones, J.W., 2010. GiST: A stochastic model for generating spatially and temporally correlated daily rainfall data. Journal of Climate, 23(22):5990-6008.
[3] Brissette, F.P., Khalili, M., Leconte, R., 2007. Efficient stochastic generation of multi-site synthetic precipitation data. Journal of Hydrology, 345(3-4):121-133.
[4] Clark, M., Gangopadhyay, S., Hay, L., 2004. The Schaake shuffle: a method for reconstructing space-time variability in forecasted precipitation and temperature fields. Journal of Hydrometeorology, 5(1):243-262.
[5] Fowler, H.J., Blenkinsop, S., Tebaldi, C., 2007. Linking climate change modelling to impacts studies: recent advances in downscaling techniques for hydrological modelling. International Journal of Climatology, 27(12):1547-1578.
[6] Jones, C.D., Hughes, J.K., Bellouin, N., 2011. The HadGEM2-ES implementation of CMIP5 centennial simulations. Geoscientific Model Development, 4(3):543-570.
[7] Ju, H., Lin, E.D., Wheeler, T., 2013. Climate change modelling and its roles to Chinese crops yield. Journal of Integrative Agriculture, 12(5):892-902.
[8] Khalili, M., Leconte, R., Brissette, F., 2007. Stochastic multisite generation of daily precipitation data using spatial autocorrelation. Journal of hydrometeorology, 8(3):396-412.
[9] Khan, M.S., Coulibaly, P., Dibike, Y., 2006. Uncertainty analysis of statistical downscaling methods. Journal of Hydrology, 319(1-4):357-382.
[10] Kilsby, C.G., Jones, P.D., Burton, A., 2007. A daily weather generator for use in climate change studies. Environmental Modeling & Software, 22(12):1705-1719.
[11] Mehrotra, R., Srikanthan, R., Sharma, A., 2006. A comparison of three stochastic multi-site precipitation occurrence generators. Journal of Hydrology, 331(1-2):280-292.
[12] Nicks, A.D., Gander, G.A., 1994. CLIGEN: A weather generator for climate inputs to water resource and other models. Proceedings of Fifth International Conference on Computers in Agriculture, 903-909.
[13] Qian, B., Hayhoe, H., Gameda, S., 2005. Evaluation of the stochastic weather generators LARS-WG and AAFC-WG for climate change impact studies. Climate Research, 29:3-21.
[14] Racsko, P., Szeidl, L., Semenov, M., 1991. A serial approach to local stochastic weather models. Ecological Modelling, 57(1-2):27-41.
[15] Salvi, K., Kannan, S., Ghosh, S., 2013. High-resolution multisite daily rainfall projections in India with statistical downscaling for climate change impacts assessment. Journal of Geophysical Research: Atmospheres, 118(9):3557-3578.
[16] Schoof, J.T., Pryor, S.C., 2001. Downscaling temperature and precipitation: a comparison of regression-based methods and artificial neural networks. International Journal of Climatology, 21(7):773-790.
[17] Semenov, M.A., Barrow, E.M., 1997. Use of a stochastic weather generator in the development of climate change scenarios. Climatic Change, 35(4):397-414.
[18] Teng, J., Jai, V., Chiew, F.H.S., 2012. Estimating the relative uncertainties sourced from GCMs and hydrological models in modeling climate change impact on runoff. Journal of Hydrometeorology, 13(1):122-139.
[19] Thyer, M., Kuczera, G., 2003. A hidden Markov model for modelling long-term persistence in multi-site rainfall time series. 2. Real data analysis. Journal of Hydrology, 275(1):27-48.
[20] Wang, G.Q., Zhang, J.Y., Jin, J.L., 2012. Assessing water resources in China using PRECIS projections and a VIC model. Hydrology and Earth System Sciences, 16(1):231-240.
[21] Wilby, R.L., Wigley, T.M.L., 1997. Downscaling general circulation model output: a review of methods and limitations. Progress in Physical Geography, 21(1-4):530-548.
[22] Wilby, R.L., Harris, I., 2006. A framework for assessing uncertainties in climate change impacts: low-flow scenarios for the River Thames, UK. Water Resources Research, 42(2):W02419
[23] Wilby, R.L., Dawson, C.W., 2013. The statistical downscaling model: insights from one decade of application. International Journal of Climatology, 33(7):1707-1719.
[24] Wilks, D.S., 1998. Multisite generalization of a daily stochastic precipitation generation model. Journal of Hydrology, 210(1):178-191.
[25] Xu, Y.P., Zhang, X.J., Tian, Y., 2012. Impact of climate change on 24-h design rainfall depth estimation in Qiantang River Basin, East China. Hydrological Processes, 26(26):4067-4077.
[26] Xu, Y.P., Zhang, X.J., Ran, Q.H., 2013. Impact of climate change on hydrology of upper reaches of Qiangtang River Basin, East China. Journal of Hydrology, 483:51-60.
[27] Zhai, P., Zhang, X., Wan, H., 2005. Trends in total precipitation and frequency of daily precipitation extremes over China. Journal of Climate, 18(7):1096-1108.
https://chemistry.stackexchange.com/questions/32686/is-it-possible-to-have-a-diatomic-molecule-of-sodium-in-gaseous-state | # Is it possible to have a diatomic molecule of sodium in gaseous state?
I already know that hydrogen, all the halogens, nitrogen, and oxygen form diatomic molecules. But I am unsure about $\ce{Na}$, so I would like to know whether it forms one as well.
• I haven't seen any cases confirming $\ce{Na2}$'s existence, but $\ce{Li2}$ exists for sure. en.wikipedia.org/wiki/Dilithium Jun 10 '15 at 8:16
• The molecule $\ce{Na2}$ is well known in the gas phase (as are $\ce{Li2, K2}$), and has been studied since approx 1929! The ground state has a (long) bond length of $0.3078$ nm, vibrational frequency of $159 \pu{cm^{-1}}$, and rotational constant $0.1547 \pu{cm^{-1}}$. At least four excited states are known. Feb 9 '17 at 16:14
According to molecular orbital theory, disodium should be stable in the gas phase, with a bond order of one. The molecular orbital diagram is the same for all the alkali metals since they all have one valence electron in an $s$ orbital.
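As a trivial worked check of that MO argument (numbers only, not a quantum calculation): each alkali atom contributes one valence s electron, both of which occupy the bonding sigma orbital, leaving the antibonding sigma* orbital empty.

```python
# Bond order from the minimal MO picture of an alkali dimer such as Na2:
# two valence s electrons fill the bonding sigma orbital; the antibonding
# sigma* orbital stays empty.
bonding_electrons = 2
antibonding_electrons = 0
bond_order = (bonding_electrons - antibonding_electrons) / 2
```

The same arithmetic applies to Li2 and K2, which is why the MO diagram predicts a stable gas-phase dimer for every alkali metal.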
NIST chemistry webbook has a small page on disodium and quotes an enthalpy change of formation of $142.07~\mathrm{kJ~mol^{-1}}$ which is moderately endothermic.
The disodium molecule $$\ce{Na2}$$ was first observed by M. Polanyi and collaborators in the dilute flame of sodium vapor and chlorine $$\ce{Cl2}$$. When sodium metal is heated in a vacuum, it vaporizes. If a tiny amount of $$\ce{Cl2}$$ gas is then sent into this vapor, the famous yellow flame of $$\ce{Na}$$ is produced where the two gases come into contact. This diffusion flame is due to three consecutive reactions. First: $$\ce{Na + Cl2 -> NaCl + Cl}$$ The chlorine atom cannot then react with atomic sodium $$\ce{Na}$$, since there is no third body to carry away the reaction energy. Instead, the $$\ce{Cl}$$ atom reacts with the dimer $$\ce{Na2}$$: $$\ce{Cl + Na2 -> NaCl + Na^*}$$ This reaction is exothermic enough to produce an electronically excited $$\ce{Na^*}$$ atom. The excitation energy is re-emitted as the yellow D-line of sodium ($$\pu{\sim 589 nm}$$): $$\ce{Na^* -> Na + h\nu}$$ This was the first proof that the dimer $$\ce{Na2}$$ exists in sodium vapor.
https://proxieslive.com/tag/better/ | ## SVG animation, LottieFiles, or GIF: which is better?
Please compare 3 method for animation:
1. SVG animation
2. LottieFiles
3. Gif
Which is better in:
1. SEO
2. Browser support
3. File size
4. Better working features for the web
## Is placing the folder after the page in the URL better for SEO?
I read on https://moz.com/learn/seo/url that Google prioritizes the folder over the path/page, which would mean that a URL such as:

    coffeeshop.example/home-brew-coffee/types

would perform better than

    coffeeshop.example/types/home-brew-coffee
This is something that could easily be done via a .htaccess URL redirect, but I’m wondering if it’s even worth the effort?
## Which solution is better for translating some data?
I have a little mobile app that needs some translation. Currently I just want to translate the app into one language; I don't think I'll need more. So I'm facing a problem: should I translate the content in the database, or in translation files on the client side?
The user will be able to select his language, but first I need to determine it from the locale variable for registration (because he must select his country at registration).
First I thought about creating one translation table for each table that needs it.
But that doesn't seem to be a good solution: if I need to add more languages, I'll have to alter all the translation tables.
So my second solution is to create a language table and keep a translation table for each table, but as a one-to-many relationship.
I think this solution is better, but I’m not sure about performance. I know it’s a little app and the question does not really arise, but we never know.
Now I'm stuck, because it means that every time I need a translation, I have to query the database. There are tons of countries, and I'd like to have auto-completion. Fruits will be displayed on several pages and used in other tables.
So I thought about keeping translations on the client side within JSON files, for example translations/fr_FR.json:
    {
      "fruits": [
        { "id": 1, "original": "banana", "translation": "banane" },
        { "id": 2, "original": "apple", "translation": "pomme" }
      ],
      "countries": [
        { "id": "ES", "original": "spain", "translation": "espagne" }
      ]
    }
Every time I need a translation, I can use a function to get it from the file.
It means more computation, and the user will have to update the app if a translation has changed.
Which solution is better?
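For reference, the file-based lookup described above can be sketched in a few lines. The schema below mirrors the fr_FR.json sample in the question; the function names are hypothetical, not from any particular framework:

```python
import json

# Minimal sketch of the client-side lookup described in the question.
SAMPLE = """
{
  "fruits": [
    {"id": 1, "original": "banana", "translation": "banane"},
    {"id": 2, "original": "apple", "translation": "pomme"}
  ],
  "countries": [
    {"id": "ES", "original": "spain", "translation": "espagne"}
  ]
}
"""

def build_index(raw: str) -> dict:
    # Flatten every category into one {original: translation} map,
    # built once at startup so lookups are O(1) afterwards.
    data = json.loads(raw)
    return {e["original"]: e["translation"]
            for entries in data.values() for e in entries}

def translate(index: dict, word: str) -> str:
    # Fall back to the original word when no translation exists.
    return index.get(word, word)

index = build_index(SAMPLE)
```

Building the index once per app launch avoids repeated file parsing, which addresses the "more computation" concern for small datasets like country and fruit names.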
## Robust PCA (better Mathematica style)
I've parsed (almost verbatim) a Python RPCA implementation into WM. Can it be rewritten in better WM style? In particular, I'm not happy with Clip[..., {0, Infinity}] and the While[] loop (which could be replaced with Do[]).
    ClearAll[shrink];
    shrink[matrix_, tau_] := Sign[matrix]*Clip[Abs[matrix] - tau, {0, Infinity}]

    ClearAll[threshold];
    threshold[matrix_, tau_] := Block[{u, s, v},
      {u, s, v} = SingularValueDecomposition[matrix];
      Dot[u, Dot[shrink[s, tau], Transpose[v]]]
    ];

    ClearAll[rpca];
    rpca[matrix_?MatrixQ, mu_Real, lambda_Real, tolerance_Real, limit_Integer] :=
      Block[{inverse, count, error, sk, yk, lk},
        inverse = 1.0/mu;
        count = 0;
        sk = yk = lk = ConstantArray[0.0, Dimensions[matrix]];
        While[count < limit,
          lk = threshold[matrix - sk + inverse*yk, inverse];
          sk = shrink[matrix - lk + inverse*yk, inverse*lambda];
          error = matrix - lk - sk;
          yk = yk + mu*error;
          error = Norm[error, "Frobenius"];
          count++;
          If[error < tolerance, Break[]];
        ];
        {lk, sk, {count, error}}
      ];
Example:
    (* https://github.com/dganguli/robust-pca *)
    (* "12.1.1 for Linux x86 (64-bit) (June 19, 2020)" *)

    (* generate test matrix *)
    n = 100;
    num$groups = 3;
    num$values = 40;
    matrix = N[ConstantArray[
        Flatten[Transpose[ConstantArray[10*Range[num$groups], num$values]]], n]];
    {n, m} = Dimensions[matrix]

    (* set selected elements to zero *)
    SeedRandom[1];
    ln = RandomInteger[{1, n}, 20];
    lm = RandomInteger[{1, m}, 20];
    Table[matrix[[ln[[i]], lm[[i]]]] = 0.0, {i, 1, 20}];
    matrix = Developer`ToPackedArray[matrix];

    (* -- python zeros
    ln = [81, 15, 1, 68, 4, 66, 24, 98, 69, 75, 16, 25, 5, 91, 84, 71, 2, 31, 49, 26]
    lm = [45, 74, 107, 70, 57, 48, 29, 69, 27, 69, 11, 87, 77, 44, 34, 45, 87, 94, 19, 39]
    for x, y in zip(ln, lm):
        D[x, y] = 0
    *)

    (* set parameters *)
    mu = 1/4*1/Norm[matrix, 1]*Apply[Times, Dimensions[matrix]];
    lambda = 1/Sqrt[N[Max[Dimensions[matrix]]]];
    tolerance = 10.0^-7*Norm[matrix, "Frobenius"];
    limit = 1000;

    (* rpca *)
    result = rpca[matrix, mu, lambda, tolerance, limit];

    (* # of iterations and error *)
    Last[result]

    (* low rank *)
    Table[result[[1]][[ln[[i]], lm[[i]]]], {i, 1, 20}]

    (* sparse *)
    Table[result[[2]][[ln[[i]], lm[[i]]]], {i, 1, 20}]

    (* {100, 120} *)
    (* {39, 0.000167548} *)
    (* {20., 20., 30., 20., 20., 20., 10., 20., 10., 20., 10., 23.0449, 20., 20., 10., 20., 23.0449, 30., 10., 10.} *)
    (* {20., 20., 30., 20., 20., 20., 10., 20., 10., 20., 10., 23.0449, 20., 20., 10., 20., 23.0449, 30., 10., 10.} *)
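For comparison, the Clip-based shrink above is just the soft-thresholding operator that a NumPy RPCA implementation would use (a sketch of the operator only, not the exact robust-pca source):

```python
import numpy as np

def shrink(matrix, tau):
    # Soft-thresholding: move every entry toward zero by tau, clamping at zero.
    # Equivalent to the Mathematica Sign[m]*Clip[Abs[m] - tau, {0, Infinity}].
    return np.sign(matrix) * np.maximum(np.abs(matrix) - tau, 0.0)

result = shrink(np.array([3.0, -2.0, 0.5]), 1.0)
```

Seen this way, the Clip[..., {0, Infinity}] that the question complains about is simply the max(x, 0) clamp of soft-thresholding, so it is arguably already idiomatic.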
## NSolve and NIntegrate, or a better approach
I need to define and plot the following function
$$a(t) := \exp\left(\int Z(t)\; dt\right)$$
where $$Z(t)$$ is the solution to the equation
$$0 = t – 2 \int^Z_1 F(x)\; dx$$
with $$F = F(x)$$ being a known (but complicated and non-integrable) function.
How do I define and plot the function $$a(t)$$ in Mathematica?
Here is my attempt with a particular function $$F(x)$$ that I need to work with:
    A = 0;
    F[x_] = -((4*A*x^(9/2) + 64*x^6 - 4*Sqrt[A*x^9*(32*x^(3/2) + A)])^(1/3)/
        (16*x^4 - 4*x^2*(4*A*x^(9/2) + 64*x^6 - 4*Sqrt[A*x^9*(32*x^(3/2) + A)])^(1/3) -
         (4*A*x^(9/2) + 64*x^6 - 4*Sqrt[A*x^9*(32*x^(3/2) + A)])^(2/3)));
    A = 0;
    Int[Z_?NumericQ] := NIntegrate[F[x], {x, 1, Z}]
    S[t_?NumericQ] := NSolve[t - 2*Int[Z] == 0, Z]
    a[t_] := Exp[Integrate[S[t], t]]
However, when trying to evaluate for example $$a(2)$$ I get the following error:
## How can I better show players that enemies are not friends?
I’m a new GM playing Pathfinder with other newbies.
The problem I’m facing is that the group tries to be friends with almost every humanoid that they meet. It’s not that they dislike combat, it’s just that they sometimes are able to talk their way out of situations.
Obviously, if they're having fun and it doesn't ruin the game, I should let them talk their way out of as much as they like. But it's become a bit cumbersome, and having to create full dialogue for every single goblin or orc they come across is slowing the game down.
Any advice on how I can help suggest to the players that the encounter is a fight and not a negotiation?
## Better way of handling incorrect date format in a column with “char(10)” data type / TRY_CONVERT equivalent in PLSQL
I have a source table with below structure:
create table customer_info (customer_num number, birth_date char(10))
Unfortunately the birth_date column data type is char(10) instead of date. Some example data for this table would be like below:
    customer_num | birth_date
    -------------+---------------------------
    1            | 2001/01/01
    1            | 2010/01/01
    1            | 2021/01/01
    1            | 12hhhhuu6   --> Incorrect date
    1            | 2001/01/01
    1            | 2001/23/01  --> Incorrect date
What I've done is write a function that evaluates every single record and returns a valid date, but as you know, calling a function for every single record is not a good idea and it somewhat kills performance. I was wondering if you could suggest a better way to do this.
    create or replace function new_to_date_en(d varchar2) return date is
      v_date date;
    begin
      select to_date(d, 'yyyy/mm/dd') into v_date from dual;
      return v_date;
    exception
      when others then
        -- fall back to a fixed default when the string is not a valid date
        return to_date('2021/03/07', 'yyyy/mm/dd');
    end;
Using the function:
select customer_num, new_to_date_en(birth_date) from customer_info;
There is a way in T-SQL: COALESCE(TRY_CONVERT(date, @date, 111), '2012-01-01'). Is there a similar way in Oracle PL/SQL?
## Do Genasi tolerate low temperatures better than humans?
One of my D&D players just created a new character, a fire genasi warrior. Our story will probably be set in Icewind Dale. I'm new to D&D and, you know, still learning new things. My questions are:
Do Genasi tolerate low temperatures better than humans? And does their body give off any heat?
## (New to DND/Roleplaying) How to better RP A character struggling for power from an evil entity?
As stated I am fairly new to the game and have roughly six sessions under my belt in our campaign and two one shots and a guest appearance on another campaign. Mechanically I understand the game but I am fairly new to roleplaying….and clearly didn’t consider that before designing my first character. I won’t lie and say I made the character after understanding how the game worked…however our DM was very open to homebrewing something fun for me.
My character is a female half-elf Hexblade warlock. Brief backstory summary: her family and lineage are cursed for using magic in search of power. Saya (my character) was forbidden to use magic from birth for fear of the curse; she naturally uses magic and becomes interested in it. Her parents are killed by the fiend that cursed them for trying to stop her from using the magic that makes it more powerful, blah blah blah flavortown. At some point in her search for this fiend, her sword became imbued with his power unbeknownst to her. During a hard fight, as she went down, a voice appeared in her head asking if she wanted to live and have the power to do so. Subconsciously she agreed (making her pact with the fiend and becoming a hexblade). She awoke with small horns (homebrew for later); frightened by them, she hides them with a witch's hat she finds in town before meeting the rest of the party, keeping them hidden.
Saya is NOT an evil character. The FIEND however IS. Her homebrew properties are her horns, they are designed around a custom wild magic table our DM made for her. There are 10 levels to the table increasing every time she rolls the level number or lower on cantrips or spells. Anytime this happens a random wild magic event tailored around her happens….as well as her horns growing an additional inch. Each time the fiend slowly gains power as Saya has to fight it off, eventually losing control temporarily or possibly permanently becoming feral and attacking anything in sight (including the party)
Now my question is: being new to roleplaying, how should I go about this journey? It is a struggle for power between her and the fiend; the closer she gets to 10, the more evil I feel she will become before snapping. It is already a challenge for me to have picked a female character (I am male), let alone a character with seemingly split-personality issues, as my first character.
Note: the rest of the group is VERY talented at roleplaying, so I am trying extremely hard to do my best at this and want to be as interesting as possible, given how amazing all of the other players are.
https://www.physicsforums.com/threads/comprehending-stated-logical-progression-in-thermodynamics.793009/ | Comprehending stated logical progression in thermodynamics
Homework Statement
The idea is to describe the work done on a magnetized body within a solenoid. The energy of the field $H$ is $\frac{1}{8\pi}\int H^2\,dV$, where $\mathbf{H} = \mathbf{h}i$ and $\mathbf{h}$ is a vector function of position.

Then, if the work changes, $\frac{dW}{dt} = \frac{d}{dt}\left[\frac{1}{8\pi}\int H^2\,dV\right]$.

There is also work in the creation of an elementary dipole $d\mathbf{m}$ within the body. The dipole is an electron loop with current $i'$ and area $\mathbf{a}$, and the solenoid produces field $\mathbf{h}i$ at the loop. If $i'$ changes, the e.m.f. generated in the solenoid is $(\mathbf{h}\cdot\mathbf{a})\,di'/dt$, so the battery must work at the rate $i(\mathbf{h}\cdot\mathbf{a})\,di'/dt$.

Since the solenoid field at the loop is $\mathbf{h}i$ and, by Ampère's theorem, $\mathbf{a}i'$ is the magnetic moment:

$$\frac{dW}{dt} = \mathbf{H}\cdot\frac{d\mathbf{m}}{dt}$$

Now here is where I start to get lost, but it kind of holds a little. Removing time derivatives and integrating over all space:

$$dW = d\left[\frac{1}{8\pi}\int H^2\,dV\right] + \int (\mathbf{H}\cdot d\mathbf{J})\,dV$$

and here is where I get REALLY lost: where $\mathbf{J}$ is the intensity of magnetization (the magnetic moment $d\mathbf{m}$ of an element $dV$ is $\mathbf{J}\,dV$).

Can someone please explain that last step to me?
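One way to read that last step (my gloss, not from the textbook): the single-dipole result $dW/dt = \mathbf{H}\cdot d\mathbf{m}/dt$ holds for every volume element of the body, and writing $d\mathbf{m} = \mathbf{J}\,dV$ turns the sum over elements into a volume integral:

```latex
% Work done on the dipoles, summed over volume elements with dm = J dV:
\frac{dW_{\text{dipoles}}}{dt}
  = \sum_{\text{elements}} \mathbf{H}\cdot\frac{d\mathbf{m}}{dt}
  = \int \mathbf{H}\cdot\frac{\partial \mathbf{J}}{\partial t}\, dV .
% Multiplying by dt and adding the change in field energy gives
dW = d\!\left[\frac{1}{8\pi}\int H^{2}\, dV\right]
     + \int \mathbf{H}\cdot d\mathbf{J}\, dV .
```

So the second term is just the dipole-creation work of the earlier paragraph, re-expressed per unit volume and integrated over the body.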
https://en.wikipedia.org/wiki/Equal_transit-time_fallacy | # Lift (force)
Boeing 747-8F landing
A fluid flowing past the surface of a body exerts a force on it. Lift is the component of this force that is perpendicular to the oncoming flow direction.[1] It contrasts with the drag force, which is the component of the surface force parallel to the flow direction. If the fluid is air, the force is called an aerodynamic force. In water, it is called a hydrodynamic force.
## Overview
Lift is defined as the component of the total aerodynamic force perpendicular to the flow direction, and drag is the component parallel to the flow direction
Lift is most commonly associated with the wing of a fixed-wing aircraft, although lift is also generated by propellers, kites, helicopter rotors, rudders, sails and keels on sailboats, hydrofoils, wings on auto racing cars, wind turbines, and other streamlined objects.
Lift is also exploited in the animal world, and even in the plant world by the seeds of certain trees.[2] While the common meaning of the word "lift" assumes that lift opposes weight, lift in the technical sense used in this article can be in any direction with respect to gravity, since it is defined with respect to the direction of flow rather than to the direction of gravity. When an aircraft is flying straight and level (cruise) most of the lift opposes gravity.[3] However, when an aircraft is climbing, descending, or banking in a turn the lift is tilted with respect to the vertical.[4] Lift may also be entirely downwards in some aerobatic manoeuvres, or on the wing on a racing car. In this last case, the term downforce is often used. Lift may also be largely horizontal, for instance on a sail on a sailboat.
Aerodynamic/hydrodynamic lift is distinguished from other kinds of lift in fluids. It requires relative motion of the fluid which distinguishes it from aerostatic lift or buoyancy lift as used by balloons, blimps, dirigibles, boats and submarines. It also usually refers to situations in which the body is completely immersed in the fluid, and is thus distinguished from planing lift as used by motorboats, surfboards, and water-skis, in which only a lower portion of the body is immersed in the lifting fluid flow.
Lift is explained here in relation to airfoils, but hydrofoils and marine propellers share essentially the same physics and work the same way as airfoils, even though differences between air and water (e.g. density, compressibility, viscosity) do have effects.
## Simplified physical explanations of lift on an airfoil
A cross-section of a wing defines an airfoil shape
An airfoil is a streamlined shape that is capable of generating significantly more lift than drag.[5] A flat plate can generate lift, but not as much as a streamlined airfoil, and with somewhat higher drag.
There are several ways to explain how an airfoil generates lift. Some are more complicated or more mathematically rigorous than others; some have been shown to be incorrect.[6][7][8][9][10] For example, there are explanations based directly on Newton’s laws of motion and explanations based on Bernoulli’s principle. Either can be used to explain lift.[11][12]
### Flow deflection and Newton's laws
Newton's third law says that for every action there is an equal and opposite re-action. When an airfoil deflects air downwards, the air exerts an upward force on the airfoil.
An airfoil generates lift by exerting a downward force on the air as it flows past. According to Newton's third law, the air must exert an equal and opposite (upward) force on the airfoil, which is the lift.[13][14][15][16]
The air flow changes direction as it passes the airfoil and follows a path that is curved downward. According to Newton's second law, this change in flow direction requires a downward force applied to the air by the airfoil. Then, according to Newton's third law, the air must exert an upward force on the airfoil. The overall result is that a reaction force, the lift, is generated opposite to the directional change. In the case of an airplane wing, the wing exerts a downward force on the air and the air exerts an upward force on the wing.[17][18][19][20][21][22][23][25]
The downward turning of the flow is not produced solely by the lower surface of the airfoil, and the air flow above the foil accounts for much of the downward-turning action.
This simple application of Newton's laws does not provide a complete theory of lift. In particular, it does not explain how the downforce on the air arises.[26]
### Increased flow speed and Bernoulli's principle
Bernoulli's principle states that within a steady airflow of constant energy, when the air flows through a region of lower pressure it speeds up and vice versa.[27] Thus, there is a direct mathematical relationship between the pressure and the speed, so if one knows the speed at all points within the airflow one can calculate the pressure, and vice versa. For any airfoil generating lift, there must be a pressure imbalance, i.e. lower average air pressure on the top than on the bottom. Bernoulli's principle states that this pressure difference must be accompanied by a speed difference.
#### Conservation of mass
Streamlines and streamtubes around a NACA 0012 airfoil at moderate angle of attack. Note the overall downward deflection of the air, as well as narrower streamtubes above and wider streamtubes below the foil.
Starting with the flow pattern observed in both theory and experiments, the increased flow speed over the upper surface can be explained in terms of streamtube pinching and conservation of mass.[24]
The streamlines divide the flow around the airfoil into streamtubes as depicted by the spaces between the streamlines in the adjacent diagram. By definition, fluid never crosses a streamline in a steady flow. Assuming that the air is incompressible, the rate of volume flow (e.g. liters or gallons per minute) must be constant within each streamtube since matter is not created or destroyed. If a streamtube becomes narrower, the flow speed must increase in the narrower region to maintain the constant flow rate. This is an application of the principle of conservation of mass.[28]
The picture shows that the upper stream tubes constrict as they flow up and around the airfoil. Conservation of mass says that the flow speed must increase as the stream tube area decreases.[24] Similarly, the lower stream tubes expand and the flow slows down.
From Bernoulli's principle, the pressure on the upper surface where the flow is moving faster is lower than the pressure on the lower surface where it is moving slower. This pressure difference creates a net aerodynamic force, pointing upward.
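As a rough numerical sketch of this argument (all values are assumed for illustration, not taken from any particular airfoil), conservation of mass fixes the speed increase in a pinched streamtube, and Bernoulli's principle then gives the accompanying pressure drop:

```python
# Streamtube pinching + Bernoulli, with assumed illustrative numbers
# (not taken from any particular airfoil).
rho = 1.225   # air density at sea level, kg/m^3
v1 = 50.0     # flow speed entering the streamtube, m/s
A1 = 1.0      # streamtube cross-section far upstream, m^2
A2 = 0.8      # pinched cross-section over the upper surface, m^2

# Conservation of mass for incompressible flow: A1*v1 = A2*v2
v2 = v1 * A1 / A2

# Bernoulli: p + 0.5*rho*v^2 is constant along the streamtube, so the
# pressure where the flow is faster is lower by
dp = 0.5 * rho * (v2**2 - v1**2)

print(f"speed in the pinched streamtube: {v2:.1f} m/s")
print(f"pressure drop there: {dp:.0f} Pa")
```

Because the Bernoulli term is quadratic in speed, doubling the flow speed would quadruple the pressure drop for the same area ratio.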
#### Limitations of explanations based on Bernoulli's principle
• The explanation above does not explain why the streamtubes change size. To see why the air flows the way it does requires more sophisticated analysis.[29][30][31]
• Sometimes a geometrical argument is offered to demonstrate why the streamtubes change size: it is asserted that the top "obstructs" or "constricts" the air more than the bottom, hence narrower streamtubes. For conventional wings that are flat on the bottom and curved on top this makes some intuitive sense. But it does not explain how flat plates, symmetric airfoils, sailboat sails, or conventional airfoils flying upside down can generate lift, and attempts to calculate lift based on the amount of constriction do not predict experimental results.[32][33][34][35]
• A common explanation using Bernoulli's principle asserts that the air must traverse both the top and bottom in the same amount of time and that this explains the increased speed on the (longer) top side of the wing. But this assertion is false; it is typically the case that the air parcels traveling over the upper surface will reach the trailing edge before those traveling over the bottom.[36]
## Basic attributes of lift
Lift is a result of pressure differences and depends on angle of attack, airfoil shape, air density, and airspeed.
### Pressure differences
Pressure is the normal force per unit area exerted by the air on itself and on surfaces that it touches. The lift force is transmitted through the pressure, which acts perpendicular to the surface of the airfoil. The air maintains physical contact at all points. Thus, the net force manifests itself as pressure differences. The direction of the net force implies that the average pressure on the upper surface of the airfoil is lower than the average pressure on the underside.[37]
These pressure differences arise in conjunction with the curved air flow. Whenever a fluid follows a curved path, there is a pressure gradient perpendicular to the flow direction with higher pressure on the outside of the curve and lower pressure on the inside.[38] This direct relationship between curved streamlines and pressure differences was derived from Newton's second law by Leonhard Euler in 1754:
${\displaystyle {\frac {\operatorname {d} p}{\operatorname {d} R}}=\rho {\frac {v^{2}}{R}}}$
The left hand side of this equation represents the pressure difference perpendicular to the fluid flow. On the right hand side ρ is the density, v is the velocity, and R is the radius of curvature. This formula shows that higher velocities and tighter curvatures create larger pressure differentials and that for straight flow (R → ∞) the pressure difference is zero.[39]
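A quick numerical check of this relation (with assumed values) shows how the cross-stream pressure gradient grows with speed and curvature, and vanishes as the flow straightens out:

```python
# Euler's curved-flow relation: dp/dR = rho * v^2 / R.
# Assumed illustrative values, not measurements:
rho = 1.225   # air density, kg/m^3
v = 60.0      # flow speed along the curved streamline, m/s
R = 2.0       # radius of curvature of the streamline, m

dp_dR = rho * v**2 / R
print(f"cross-stream pressure gradient: {dp_dR:.0f} Pa/m")

# For straight flow (R -> infinity) the gradient goes to zero:
print(rho * v**2 / 1e9)   # effectively zero
```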
### Angle of attack
Angle of attack of an airfoil
The angle of attack is the angle between the chord line of an airfoil and the oncoming air. A symmetrical airfoil will generate zero lift at zero angle of attack. But as the angle of attack increases, the air is deflected through a larger angle and the vertical component of the airstream velocity increases, resulting in more lift. For small angles a symmetrical airfoil will generate a lift force roughly proportional to the angle of attack.[40][41]
As the angle of attack grows larger, the lift reaches a maximum at some angle; increasing the angle of attack beyond this critical angle of attack causes the upper-surface flow to separate from the wing; there is less deflection downward so the airfoil generates less lift. The airfoil is said to be stalled.[42]
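The proportionality for small angles can be made concrete with the classical thin-airfoil result CL ≈ 2πα (α in radians) for a symmetric airfoil. A minimal sketch, valid only well below the stall angle:

```python
import math

# Thin-airfoil theory: for a symmetric airfoil at small angles of
# attack, the lift coefficient is approximately CL = 2*pi*alpha
# (alpha in radians). This linear model says nothing about stall.
def lift_coefficient(alpha_deg):
    return 2.0 * math.pi * math.radians(alpha_deg)

for alpha in (0, 2, 4, 8):
    print(f"alpha = {alpha:2d} deg  ->  CL = {lift_coefficient(alpha):.3f}")
```

The printed values double as the angle doubles, reflecting the proportionality; a real airfoil departs from this line as the critical angle of attack is approached.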
### Airfoil shape
An airfoil with camber compared to a symmetrical airfoil
The lift force depends on the shape of the airfoil, especially the amount of camber (curvature such that the upper surface is more convex than the lower surface, as illustrated at right). Increasing the camber generally increases lift.[43][44]
Cambered airfoils will generate lift at zero angle of attack. When the chord line is horizontal, the trailing edge points downward, and since the air follows the trailing edge it is deflected downward.[45] When a cambered airfoil is upside down, the angle of attack can be adjusted so that the lift force is upwards. This explains how a plane can fly upside down.[46][47]
The wings of birds and most subsonic aircraft have spans much larger than their chords. For wings of this general shape (often referred to as having a high aspect-ratio), the most important features of the lifting flow can be explained in terms of the two-dimensional flow around an airfoil, which is just the shape of a cross-section of the wing, as illustrated in the drawing at right.[48] Most of the discussion in this article concentrates on two-dimensional airfoil flow. However, the flow around a three-dimensional wing involves significant additional issues, and these are discussed below under Lift of three dimensional wings. For a wing of low aspect ratio, such as a delta wing, two-dimensional airfoil flow is not relevant, and three-dimensional flow effects dominate.[49]
### Air speed and density
The flow conditions also affect lift. Lift is proportional to the density of the air and approximately proportional to the square of the flow speed. Lift also depends on the size of the wing, being generally proportional to the wing's area projected in the lift direction. In aerodynamic theory and engineering calculations it is often convenient to quantify lift in terms of a "Lift coefficient" defined in a way that makes use of these proportionalities.
### Lift coefficient
If the lift coefficient for a wing at a specified angle of attack is known (or estimated using a method such as thin airfoil theory), then the lift produced for specific flow conditions can be determined using the following equation:[50]
${\displaystyle L={\tfrac {1}{2}}\rho v^{2}SC_{L}}$
where
• L is the lift force,
• ρ is the air density,
• v is the flow speed,
• S is the wing's planform area, and
• CL is the lift coefficient at the given angle of attack.
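Evaluating the equation is straightforward; the numbers below are assumed, light-aircraft-scale values chosen purely for illustration:

```python
# L = 0.5 * rho * v^2 * S * CL with assumed, light-aircraft-scale
# values (for illustration only, not data for any real aircraft):
rho = 1.225   # air density, kg/m^3 (sea level)
v = 55.0      # airspeed, m/s
S = 16.2      # wing area, m^2
CL = 0.4      # lift coefficient at the chosen angle of attack

L = 0.5 * rho * v**2 * S * CL
print(f"lift: {L:.0f} N  (~{L / 9.81:.0f} kg supported)")
```

The quadratic dependence on v is visible directly: halving the airspeed would cut the lift to a quarter unless CL is increased.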
### Pressure integration
When the pressure distribution on the airfoil surface is known, determining the total lift requires adding up the contributions to the pressure force from local elements of the surface, each with its own local value of pressure. The total lift is thus the integral of the pressure, in the direction perpendicular to the farfield flow, over the entire surface of the airfoil or wing.[52]
${\displaystyle L=\oint p\mathbf {n} \cdot \mathbf {k} \;\mathrm {d} S,}$
where:
• L is the lift,
• S is the wing surface area
• p is the value of the pressure,
• n is the normal unit vector pointing into the wing, and
• k is the vertical unit vector, normal to the freestream direction.
The above lift equation neglects the skin friction forces, which typically have a negligible contribution to the lift compared to the pressure forces. By using the streamwise vector i parallel to the freestream in place of k in the integral, we obtain an expression for the pressure drag Dp (which includes the pressure portion of the profile drag and, if the wing is three-dimensional, the induced drag). If we use the spanwise vector j, we obtain the side force Y.
{\displaystyle {\begin{aligned}D_{p}&=\oint p\mathbf {n} \cdot \mathbf {i} \;\mathrm {d} S,\\[1.2ex]Y&=\oint p\mathbf {n} \cdot \mathbf {j} \;\mathrm {d} S.\end{aligned}}}
The validity of this integration generally requires the airfoil shape to be a closed curve that is piecewise smooth.
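The surface integral can be evaluated numerically. The sketch below uses, as an assumed check case rather than a real airfoil, the classical potential-flow solution for a circular cylinder with circulation: its surface pressure follows from Bernoulli's equation, and its exact lift per unit span is known to be ρUΓ, so the quadrature can be compared against a closed-form answer.

```python
import math

# Numerical evaluation of L' = ∮ p (n·k) ds around a closed contour.
# Check case (illustrative assumption): potential flow past a circular
# cylinder of radius a with clockwise circulation Gamma. Its surface
# speed is q = -2*U*sin(th) - Gamma/(2*pi*a) and its exact lift per
# unit span is rho*U*Gamma (Kutta-Joukowski).
rho, U, a, Gamma = 1.225, 30.0, 0.5, 20.0
p_inf = 101325.0   # ambient pressure; it drops out of the lift integral

n = 100_000
dth = 2.0 * math.pi / n
lift = 0.0
for i in range(n):
    th = (i + 0.5) * dth
    q = -2.0 * U * math.sin(th) - Gamma / (2.0 * math.pi * a)
    p = p_inf + 0.5 * rho * (U**2 - q**2)   # Bernoulli on the surface
    # Outward normal is (cos th, sin th); n in the formula points
    # *into* the body, so n·k = -sin(th); the arc element is ds = a*dth.
    lift += p * (-math.sin(th)) * a * dth

print(f"integrated lift:  {lift:.3f} N/m")
print(f"rho * U * Gamma:  {rho * U * Gamma:.3f} N/m")
```

Setting Gamma to zero makes the integrated lift vanish, recovering the non-lifting symmetric flow pattern.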
## A more comprehensive physical explanation
As described above, there are two main popular explanations of lift, one based on downward deflection of the flow combined with Newton's laws, and one based on changes in flow speed combined with Bernoulli's principle. Either of these, by itself, correctly identifies some aspects of the lifting flow but leaves other important aspects of the phenomenon unexplained. A more comprehensive explanation involves both downward deflection and changes in flow speed, and requires looking at the flow in more detail.[53]
### Lift involves action and reaction at the airfoil surface and is felt as a pressure difference
The airfoil shape and angle of attack work together so that the airfoil exerts a downward force on the air as it flows past. According to Newton's third law, the air must then exert an equal and opposite (upward) force on the airfoil, which is the lift.[14][15]
The force is exerted by the air as a pressure difference on the airfoil's surfaces.[54] Pressure in a fluid is always positive in an absolute sense,[55] so that pressure must always be thought of as pushing, and never as pulling. The pressure thus pushes inward on the airfoil everywhere on both the upper and lower surfaces. The flowing air reacts to the presence of the wing by reducing the pressure on the wing's upper surface and increasing the pressure on the lower surface. The pressure on the lower surface pushes up harder than the reduced pressure on the upper surface pushes down, and the net result is upward lift.[54]
The pressure difference that exerts lift acts directly on the airfoil surfaces. But understanding how the pressure difference is produced requires understanding what the flow does over a wider area.
### The airfoil affects the flow over a wide area around it
Flow around an airfoil: the dots move with the flow. Note that the velocities are much higher at the upper surface than at the lower surface. The black dots are on timelines, which split into two – an upper and lower part – at the leading edge. Seeing the speed difference in the animation correctly requires keeping track of corresponding columns of markers on the upper- and lower-surface streamlines. Over the length of the airfoil the upper markers nearly catch up with the lower markers one column ahead, showing that the air columns do not rejoin as they were before separation. Colors of the dots indicate streamlines. The airfoil is a Kármán–Trefftz airfoil, with parameters μx = −0.08, μy = +0.08 and n = 1.94. The angle of attack is 8°, and the flow is a potential flow.
Pressure distribution with isobars around a lifting airfoil. The plus sign indicates pressure higher than ambient, and the minus sign indicates pressure lower than ambient (not negative pressure in the absolute sense). The block arrows indicate the directions of net forces on fluid parcels in different parts of the flowfield.
An airfoil affects the speed and direction of the flow over a wide area. When an airfoil produces lift, the flow ahead of the airfoil is deflected upward, the flow above and below the airfoil is deflected downward, and the flow behind the airfoil is deflected upward again, leaving the air far behind the airfoil flowing in the same direction as the oncoming flow far ahead. The flow above the upper surface is always sped up, and the flow below the airfoil is usually slowed down. The downward deflection and the changes in flow speed are pronounced and extend over a wide area, as can be seen in the flow animation on the right. These differences in the direction and speed of the flow are greatest close to the airfoil and decrease gradually far above and below. All of these features of the velocity field also appear in theoretical models for lifting flows.[56][57]
The pressure is also affected over a wide area. When an airfoil produces lift, there is always a diffuse region of low pressure above the airfoil, and there is usually a diffuse region of high pressure below, as illustrated by the isobars (curves of constant pressure) in the drawing. The pressure difference that acts on the surface is just part of this spread-out pattern of non-uniform pressure.[53]
### The pressure differences and the changes in flow speed and direction support each other in a mutual interaction
The non-uniform pressure exerts forces on the air in the direction from higher pressure to lower pressure. The direction of the force is different at different locations around the airfoil, as indicated by the block arrows in the pressure distribution with isobars figure. Air above the airfoil is pushed toward the center of the low-pressure region, and air below the airfoil is pushed outward from the center of the high-pressure region.
According to Newton's second law, a force causes air to accelerate in the direction of the force. Thus the vertical arrows in the pressure distribution with isobars figure indicate that air above and below the airfoil is accelerated, or turned downward, and that the non-uniform pressure is thus the cause of the downward deflection of the flow visible in the flow animation. To produce this downward turning, the airfoil must have a positive angle of attack or have its rear portion curved downward as on an airfoil with camber. Note that the downward turning of the flow over the upper surface is the result of the air being pushed downward by higher pressure above it than below it.
The arrows ahead of the airfoil indicate that the flow ahead of the airfoil is deflected upward, and the arrows behind the airfoil indicate that the flow behind is deflected upward again, after being deflected downward over the airfoil. These deflections are also visible in the flow animation.
The arrows ahead of the airfoil and behind also indicate that air passing through the low-pressure region above the airfoil is sped up as it enters, and slowed back down as it leaves. Air passing through the high-pressure region below the airfoil sees the opposite: It is slowed down and then sped back up. Thus the non-uniform pressure is also the cause of the changes in flow speed visible in the flow animation. The changes in flow speed are consistent with Bernoulli's principle, which states that in a steady flow without viscosity, lower pressure means higher speed, and higher pressure means lower speed.
Thus changes in flow direction and speed are directly caused by the non-uniform pressure. But this cause-and-effect relationship is not just one-way; it works in both directions simultaneously. The air's motion is affected by the pressure differences, but the existence of the pressure differences depends on the air's motion. The relationship is thus a mutual, or reciprocal, interaction: Air flow changes speed or direction in response to pressure differences, and the pressure differences are sustained by the air's resistance to changing speed or direction.[58] A pressure difference can exist only if something is there for it to push against. In the case of an aerodynamic flow, what a pressure difference pushes against is the inertia of the air, as the air is accelerated by the pressure difference.[53] And this is why the mass of the air is important, and why lift depends on air density.
In summary, sustaining the pressure difference that exerts the lift force on the airfoil surfaces requires sustaining a pattern of non-uniform pressure spread over a wide area around the airfoil. This requires maintaining pressure differences in both the vertical and horizontal directions, and thus requires both downward turning of the flow and changes in flow speed according to Bernoulli's principle. The pressure differences and the changes in flow direction and speed sustain each other in a mutual interaction. The pressure differences follow naturally from Newton's second law and from the fact that the flow along the surface naturally follows the predominantly downward-sloping contours of the airfoil. And the fact that the air has mass is crucial to the interaction.[53]
## The understanding of lift as a physical phenomenon
The scientific understanding of lift is based on mathematical theories of continuum fluid mechanics[59][60][61] that are in turn based on established principles of physics, as discussed below under "Mathematical theories of lift". The applications of these theories to aerodynamic flows have been agreed upon by the scientific and engineering communities since the early 20th century.[62][63] Starting with just the shape of the lifting surface and the general flow conditions (airspeed, density, and angle of attack), the existence of lift, the amount of lift, and all of the important details of the lifting flow have been predicted successfully by the theories. Thus lift can be considered to be thoroughly understood in a scientific sense. Furthermore, the quantitative theories provide useful quantitative information for engineering purposes.
Simplified physical explanations of lift, without mathematics, are also useful for purposes of understanding, especially by non-technical audiences. These qualitative explanations are by their nature less rigorous and are thus not as well established as the mathematical theories, and they cannot provide quantitative information for engineering. A difficulty in devising such explanations is finding a satisfactory balance between completeness on one hand, and simplicity and brevity on the other. Fluid flows in general are complex phenomena, and simplified explanations are seldom completely satisfactory. Many different explanations of lift have been proposed, reflecting different choices of what aspect of the flow to emphasize. In many cases, oversimplification has led to incompleteness and/or outright errors.[6][8][9][10] There has been a long history of disagreement and controversy, even into recent years, but only regarding the qualitative explanations, not the science itself.[64][65][66]
## Mathematical theories of lift
The mathematical theories are based on continuum fluid mechanics, in which it is assumed that air flows as if it were a continuous fluid. Lift is generated in accordance with the fundamental principles of physics, the most relevant being the following three principles:[67]
• conservation of momentum (Newton's laws of motion),
• conservation of mass, and
• conservation of energy.
Because an airfoil affects the flow in a wide area around it, these physical principles must be enforced at all points throughout an extended region. To do this requires expressing the conservation principles in the form of partial-differential equations combined with a set of boundary conditions (requirements the flow has to satisfy at the airfoil surface and far away from the airfoil).[68]
To predict lift requires solving the equations for a particular airfoil shape and flow condition, which generally requires calculations that are so voluminous that they are practical only on a computer, through the methods of Computational Fluid Dynamics (CFD). Determining the net aerodynamic pressure from a CFD solution requires "adding up" (integrating) the pressures determined by the CFD over the surface of the airfoil as described under "Pressure integration".
The Navier-Stokes equations (NS) potentially provide the most accurate theory of lift, but in practice, capturing the effects of turbulence in the boundary layer on the airfoil surface requires sacrificing some accuracy and using the Reynolds-Averaged Navier-Stokes equations (RANS), as explained below. Simpler but less accurate theories have also been developed and are described below.
### Navier-Stokes (NS) equations
These equations represent conservation of mass, Newton's second law (conservation of momentum), conservation of energy, the Newtonian law for the action of viscosity, the Fourier heat conduction law, an equation of state relating density, temperature, and pressure, and formulas for the viscosity and thermal conductivity of the fluid.[69] [70]
In principle, the NS equations, combined with boundary conditions of no through-flow and no slip at the airfoil surface, could be used to predict lift in any situation in ordinary atmospheric flight with high accuracy. However, lifting flows in practical situations always involve turbulence in the boundary layer next to the airfoil surface, at least over the aft portion of the airfoil. Predicting lift by solving the NS equations in their raw form would require the calculations to resolve the details of the turbulence, down to the smallest eddy. This is not yet possible, even on the most powerful current computer.[71] So in principle the NS equations provide a complete and very accurate theory of lift, but practical prediction of lift requires that the effects of turbulence be modeled in the RANS equations rather than computed directly.
### Reynolds-Averaged Navier-Stokes (RANS) equations
These are the NS equations with the turbulence motions averaged over time, and the effects of the turbulence on the time-averaged flow represented by turbulence modeling (an additional set of equations based on a combination of dimensional analysis and empirical information on how turbulence affects a boundary layer in a time-averaged sense).[72][73] A RANS solution consists of the time-averaged velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil.
The amount of computation required is a minuscule fraction (billionths)[71] of what would be required to resolve all of the turbulence motions in a raw NS calculation, and with large computers available it is now practical to carry out RANS calculations for complete airplanes in three dimensions. Because turbulence models are not perfect, the accuracy of RANS calculations is imperfect, but it is good enough to be very helpful to airplane designers. Lift predicted by RANS is usually within a few percent of the actual lift.
### Inviscid-flow equations (Euler or potential)
The Euler equations are the NS equations with the viscosity, heat conduction, and turbulence effects deleted.[74] As with a RANS solution, an Euler solution consists of the velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil. While the Euler equations are simpler than the NS equations, they still do not lend themselves to exact analytic solutions. Further simplification is available through potential flow theory, which reduces the number of unknowns that must be solved for and makes analytic solutions possible in some cases, as described below.
Either Euler or potential-flow calculations predict the pressure distribution on the airfoil surfaces roughly correctly for angles of attack below stall, where they might miss the total lift by as much as 10-20%. At angles of attack above stall, inviscid calculations do not predict that stall has happened, and as a result they grossly overestimate the lift.
In potential-flow theory, the flow is assumed to be irrotational, i.e. that small fluid parcels have no net rate of rotation. Mathematically, this is expressed by the statement that the curl of the velocity vector field is everywhere equal to zero. Irrotational flows have the convenient property that the velocity can be expressed as the gradient of a scalar function called a potential. A flow represented in this way is called potential flow.[75][76][77][78]
In potential-flow theory, the flow is usually further assumed to be incompressible. Incompressible potential-flow theory has the advantage that the equation (Laplace's equation) to be solved for the potential is linear, which allows solutions to be constructed by superposition of other known solutions. The incompressible-potential-flow equation can also be solved by conformal mapping, a method based on the theory of functions of a complex variable. In the early 20th century, before computers were available, conformal mapping was used to generate solutions to the incompressible potential-flow equation for a class of idealized airfoil shapes, providing some of the first practical theoretical predictions of the pressure distribution on a lifting airfoil.
A solution of the potential equation directly determines only the velocity field. The pressure field is deduced from the velocity field through Bernoulli's equation.
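Superposition can be demonstrated directly. In this sketch (parameters assumed for illustration), the potential of a uniform stream plus a doublet, a standard textbook construction, satisfies Laplace's equation away from the singularity and produces zero through-flow on the circle r = a, i.e. a valid solid boundary:

```python
import math

# Superposition in incompressible potential flow: a uniform stream
# plus a doublet gives phi = U*x*(1 + a^2/(x^2 + y^2)), the classical
# solution for flow around a circular cylinder of radius a (used here
# as a stand-in for the airfoil shapes produced by conformal mapping).
U, a = 10.0, 1.0

def phi(x, y):
    return U * x * (1.0 + a**2 / (x**2 + y**2))

def velocity(x, y, h=1e-5):
    # The velocity is the gradient of the potential (central differences).
    u = (phi(x + h, y) - phi(x - h, y)) / (2 * h)
    v = (phi(x, y + h) - phi(x, y - h)) / (2 * h)
    return u, v

def laplacian(x, y, h=1e-3):
    # Five-point stencil; should be ~0 away from the doublet's singularity.
    return (phi(x + h, y) + phi(x - h, y) + phi(x, y + h) + phi(x, y - h)
            - 4 * phi(x, y)) / h**2

print("laplacian at (2.0, 1.5):", laplacian(2.0, 1.5))   # ~0

# No flow through the cylinder surface r = a:
theta = 1.0
x, y = a * math.cos(theta), a * math.sin(theta)
u, v = velocity(x, y)
v_radial = u * math.cos(theta) + v * math.sin(theta)
print("radial velocity on the surface:", v_radial)        # ~0
```

Because Laplace's equation is linear, adding a vortex potential (with the branch cut described below) leaves both properties intact while introducing circulation.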
Comparison of a non-lifting flow pattern around an airfoil and a lifting flow pattern consistent with the Kutta condition, in which the flow leaves the trailing edge smoothly.
Applying potential-flow theory to a lifting flow requires special treatment and an additional assumption. The problem arises because lift on an airfoil in inviscid flow requires circulation in the flow around the airfoil (See "Circulation and the Kutta-Joukowski theorem" below), but a single potential function that is continuous throughout the domain around the airfoil cannot represent a flow with nonzero circulation. The solution to this problem is to introduce a branch cut, a curve or line from some point on the airfoil surface out to infinite distance, and to allow a jump in the value of the potential across the cut. The jump in the potential imposes circulation in the flow equal to the potential jump and thus allows nonzero circulation to be represented. However, the potential jump is a free parameter that is not determined by the potential equation or the other boundary conditions, and the solution is thus indeterminate. A potential-flow solution exists for any value of the circulation and any value of the lift. One way to resolve this indeterminacy is to impose the Kutta condition,[79][80] which is that, of all the possible solutions, the physically reasonable solution is the one in which the flow leaves the trailing edge smoothly. The streamline sketches illustrate one flow pattern with zero lift, in which the flow goes around the trailing edge and leaves the upper surface ahead of the trailing edge, and another flow pattern with positive lift, in which the flow leaves smoothly at the trailing edge in accordance with the Kutta condition.
### Linearized potential flow
This is potential-flow theory with the further assumptions that the airfoil is very thin and the angle of attack is small.[81] The linearized theory predicts the general character of the airfoil pressure distribution and how it is influenced by airfoil shape and angle of attack, but is not accurate enough for design work. For a 2D airfoil, such calculations can be done in a fraction of a second in a spreadsheet on a PC.
### Circulation and the Kutta-Joukowski theorem
Circulation component of the flow around a moving airfoil.
The Kutta-Joukowski theorem relates the lift on an airfoil to a circulatory component (circulation) of the flow around the airfoil.[56][82][83] Kutta-Joukowski is not a complete theory of lift in the same sense as those listed above because it does not predict how much circulation or lift a given airfoil will produce. Calculating the lift from Kutta-Joukowski requires a known value for the circulation.
The circulation ${\displaystyle \Gamma }$ is the contour integral of the tangential velocity of the air on a closed loop (also called a 'circuit') around the boundary of an airfoil. It can be understood as the total amount of "spinning" (or vorticity) of air around the airfoil. The section lift/span ${\displaystyle L'}$ can be calculated using the Kutta–Joukowski theorem:[18]
${\displaystyle L'=\rho v\Gamma \,}$
where ${\displaystyle \rho }$ is the air density and ${\displaystyle v}$ is the free-stream airspeed.
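The contour-integral definition can be checked numerically. In the sketch below (values assumed for illustration), the bound vorticity is modeled as an ideal point vortex superposed on a free stream; the measured circulation is independent of the loop chosen, and the Kutta–Joukowski formula then converts it to lift per unit span:

```python
import math

# Circulation as a contour integral of the tangential velocity around
# a closed loop. The field is a free stream U plus an ideal point
# vortex of clockwise circulation Gamma0 (an idealization of the
# bound vorticity, assumed for illustration).
rho, U, Gamma0 = 1.225, 30.0, 20.0

def velocity(x, y):
    r2 = x**2 + y**2
    # clockwise point vortex at the origin, superposed on the stream
    return (U + Gamma0 * y / (2 * math.pi * r2),
            -Gamma0 * x / (2 * math.pi * r2))

def circulation(R, n=20_000):
    # ∮ V · dl around a circle of radius R, taken clockwise to match
    # the sign convention of the vortex
    total = 0.0
    for i in range(n):
        th = 2 * math.pi * (i + 0.5) / n
        u, v = velocity(R * math.cos(th), R * math.sin(th))
        # clockwise unit tangent: (sin th, -cos th)
        total += (u * math.sin(th) - v * math.cos(th)) * (2 * math.pi * R / n)
    return total

Gamma = circulation(R=3.0)
print(f"measured circulation: {Gamma:.4f}  (same for any enclosing loop)")
print(f"lift per unit span, L' = rho*v*Gamma = {rho * U * Gamma:.2f} N/m")
```

Repeating the integral on a larger loop returns the same circulation, since all the vorticity is enclosed either way.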
Several components of the overall velocity field contribute to the circulation: the upward flow ahead of the airfoil, the accelerated flow above the airfoil, the decelerated flow below the airfoil, and the downward flow behind the airfoil. One derivation of the Kutta-Joukowski theorem involves integrating the fluxes of vertical momentum ahead of the airfoil and behind (on vertical planes extending to large distances above and below), taking the difference, and showing that the result is related to both the lift and the circulation. The flux of upward momentum ahead of the airfoil is found to account for half the lift, and the flux of downward momentum behind the airfoil is found to account for the other half, a result that also applies to three-dimensional wings.[56][82][83]
The Kutta-Joukowski theorem is a key element in an explanation of lift that follows the development of the flow around an airfoil as the airfoil starts its motion from rest and a starting vortex is formed and left behind, leading to the formation of circulation around the airfoil.[84][85][86] Lift is then inferred from the Kutta-Joukowski theorem. This explanation is largely mathematical, and its general progression is based on logical inference, not physical cause-and-effect.[87]
The circulation around a conventional airfoil, and hence the lift it generates, is dictated by both its design and the flight conditions, such as forward velocity and angle of attack. Lift can be increased by artificially increasing the circulation, for example by boundary-layer blowing or the use of blown flaps. In the Flettner rotor the entire airfoil is circular and spins about a spanwise axis to create the circulation.
### Momentum balance in lifting flows
Control volumes of different shapes that have been used in analyzing the momentum balance in the 2D flow around a lifting airfoil. The airfoil is assumed to exert a downward force -L' per unit span on the air, and the proportions in which that force is manifested as momentum fluxes and pressure differences at the outer boundary are indicated for each different shape of control volume
Illustration of the distribution of higher-than-ambient pressure on the ground under an airplane in flight
The flow around a lifting airfoil must satisfy Newton's second law, or conservation of momentum, both locally at every point in the flow field, and in an integrated sense over any extended region of the flow. For an extended region, Newton's second law takes the form of the momentum theorem for a control volume, where a control volume can be any region of the flow chosen for analysis. The momentum theorem states that the integrated force exerted at the boundaries of the control volume (a surface integral) is equal to the integrated time rate of change (material derivative) of the momentum of fluid parcels passing through the interior of the control volume (a volume integral). For a steady flow, the volume integral can be replaced by the net surface integral of the flux of momentum through the boundary via the divergence theorem.[88]
The lifting flow around a 2D airfoil is usually analyzed in a control volume that completely surrounds the airfoil, so that the inner boundary of the control volume is the airfoil surface, where the downward force per unit span -L' is exerted on the fluid by the airfoil. The outer boundary is usually either a large circle or a large rectangle. At this outer boundary distant from the airfoil, the velocity and pressure are well represented by the velocity and pressure associated with a uniform flow plus a vortex, and viscous stress is negligible, so that the only force that must be integrated over the outer boundary is the pressure.[89][90][91] The free-stream velocity is usually assumed to be horizontal, with lift vertically upward, so that the vertical momentum is the component of interest.
For the free-air case (no ground plane), it is found that the force -L' exerted by the airfoil on the fluid is manifested partly as momentum fluxes and partly as pressure differences at the outer boundary, in proportions that depend on the shape of the outer boundary, as shown in the diagram at right. For a flat horizontal rectangle that is much longer than it is tall, the fluxes of vertical momentum through the front and back are negligible, and the lift is accounted for entirely by the integrated pressure differences on the top and bottom.[89] For a square or circle, the momentum fluxes and pressure differences account for half the lift each.[89][90][91] For a vertical rectangle that is much taller than it is wide, the unbalanced pressure forces on the top and bottom are negligible, and lift is accounted for entirely by momentum fluxes, with a flux of upward momentum that enters the control volume through the front accounting for half the lift, and a flux of downward momentum that exits the control volume through the back accounting for the other half.[89]
The results of all of the control-volume analyses described above are consistent with the Kutta-Joukowski theorem described in the previous subsection. Both the tall rectangle and circle control volumes have been used in derivations of the theorem.[90][91]
When a ground plane is present, there is a pattern of higher-than-ambient pressure on the ground below an airplane in flight, as shown on the right.[92] For steady, level flight, the integrated pressure force associated with this pattern is equal to the total aerodynamic lift of the airplane and to the airplane's weight. According to Newton's third law, this pressure force exerted on the ground by the air is matched by an equal-and-opposite upward force exerted on the air by the ground, which offsets all of the downward force exerted on the air by the airplane. The net force due to the lift, acting on the atmosphere as a whole, is therefore zero, and there is thus no integrated accumulation of vertical momentum in the atmosphere.[93]
## Lift of three-dimensional wings
Cross-section of an airplane wing-body combination showing the isobars of the three-dimensional lifting flow.
Cross-section of an airplane wing-body combination showing velocity vectors of the three-dimensional lifting flow.
For wings of moderate-to-high aspect ratio, the flow at any station along the span except close to the tips behaves much like flow around a two-dimensional airfoil, and most explanations of lift, like those above, concentrate on two-dimensional flow. However, even for wings of high aspect ratio, the three-dimensional effects associated with finite span are significant across the whole span, not just close to the tips.
The lift tends to decrease in the spanwise direction from root to tip, and the pressure distributions around the airfoil sections change accordingly in the spanwise direction. Pressure distributions in planes perpendicular to the flight direction tend to look like the illustration at right.[94] This spanwise-varying pressure distribution is sustained by a mutual interaction with the velocity field. Flow below the wing is accelerated outboard, flow outboard of the tips is accelerated upward, and flow above the wing is accelerated inboard, which results in the flow pattern illustrated at right.[95]
There is more downward turning of the flow than there would be in a two-dimensional flow with the same airfoil shape and sectional lift, and a higher sectional angle of attack is required to achieve the same lift compared to a two-dimensional flow.[96] The wing is effectively flying in a downdraft of its own making, as if the freestream flow were tilted downward, with the result that the total aerodynamic force vector is tilted backward slightly compared to what it would be in two dimensions. The additional backward component of the force vector is called lift-induced drag.
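Classical lifting-line theory quantifies this effect: for a wing with an elliptic lift distribution, the induced angle of attack is αᵢ = C_L/(πAR) and the induced-drag coefficient is C_Di = C_L²/(πAR·e), where AR is the aspect ratio and e the span efficiency factor. A sketch using these textbook formulas (the C_L and AR values are assumed for illustration):

```python
import math

# Lifting-line results for a finite wing (classical theory, elliptic
# loading). Illustrative values; not specific to any wing in the article.

def induced_angle_of_attack(cl, aspect_ratio):
    """Induced angle of attack in radians for an elliptic lift distribution."""
    return cl / (math.pi * aspect_ratio)

def induced_drag_coefficient(cl, aspect_ratio, e=1.0):
    """Induced-drag coefficient; e is the span efficiency factor
    (e = 1 for the ideal elliptic distribution)."""
    return cl ** 2 / (math.pi * aspect_ratio * e)

cl, ar = 0.5, 8.0  # assumed lift coefficient and aspect ratio
print(math.degrees(induced_angle_of_attack(cl, ar)))  # extra angle needed, ~1.1 deg
print(induced_drag_coefficient(cl, ar))               # lift-induced drag, ~0.01
```

The formulas show why high-aspect-ratio wings (sailplanes, airliners) pay a smaller induced-drag penalty for a given lift coefficient.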
Euler computation of a tip vortex rolling up from the trailed vorticity sheet.
The difference in the spanwise component of velocity above and below the wing (between being in the inboard direction above and in the outboard direction below) persists at the trailing edge and into the wake downstream. After the flow leaves the trailing edge, this difference in velocity takes place across a relatively thin shear layer called a vortex sheet. As the vortex sheet is convected downstream from the trailing edge, it rolls up at its outer edges, eventually forming distinct wingtip vortices. The combination of the wingtip vortices and the vortex sheets feeding them is called the vortex wake.
Planview of a wing showing the horseshoe vortex system.
In addition to the vorticity in the trailing vortex wake there is vorticity in the wing's boundary layer, which is often called the bound vorticity and which connects the trailing sheets from the two sides of the wing into a vortex system in the general form of a horseshoe. The horseshoe form of the vortex system was recognized by the British aeronautical pioneer Lanchester in 1907.[97]
Given the distribution of bound vorticity and the vorticity in the wake, the Biot-Savart law (a vector-calculus relation) can be used to calculate the velocity perturbation anywhere in the field, caused by the lift on the wing. Approximate theories for the lift distribution and lift-induced drag of three-dimensional wings are based on such analysis applied to the wing's horseshoe vortex system.[98][99] In these theories, the bound vorticity is usually idealized and assumed to reside at the camber surface inside the wing.
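In its simplest form, the Biot-Savart law gives the speed induced at perpendicular distance r from an infinite straight vortex filament of circulation Γ as V = Γ/(2πr). A minimal sketch of this relation (illustrative values; real horseshoe-vortex models sum contributions from several finite segments):

```python
import math

# Biot-Savart law for an infinite straight vortex filament:
# induced speed V = Gamma / (2 * pi * r) at perpendicular distance r.
# Illustrative only; a horseshoe-vortex model uses finite segments.

def vortex_induced_speed(gamma, r):
    """Speed (m/s) induced at distance r (m) by circulation gamma (m^2/s)."""
    return gamma / (2.0 * math.pi * r)

gamma = 20.0  # circulation, m^2/s (assumed)
for r in (1.0, 2.0, 4.0):
    print(r, vortex_induced_speed(gamma, r))  # speed halves as r doubles
```

The 1/r falloff is why the downwash from the trailing vortex system is felt across the whole span, not just near the tips.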
Because the velocity is deduced from the vorticity in such theories, there is a tendency for some authors to describe the situation in terms that imply that the vorticity is the cause of the velocity perturbations, using terms such as "the velocity induced by the vortex," for example.[100] But attributing causation to the vorticity in this way is not consistent with the physics. The real cause of the velocity perturbations is the pressure field.[101][102][103]
## Viscous effects: Profile drag and stalling
Airflow separating from a wing at a high angle of attack
No matter how smooth the surface of an airfoil seems, any real surface is rough on the scale of air molecules. Air molecules flying into the surface bounce off the rough surface in random directions not related to their incoming directions. The result is that when the air is viewed as if it were a continuous material, it is seen to be unable to slide along the surface, and the air's tangential velocity at the surface goes to practically zero, something known as the no-slip condition.[104] Because the air at the surface has near-zero velocity, and air away from the surface is moving, there is a thin boundary layer in which the air close to the surface is subjected to a shearing motion.[105][106] The air's viscosity resists the shearing, giving rise to a shear stress at the airfoil's surface called skin-friction drag. Over most of the surface of most airfoils, the boundary layer is naturally turbulent, which increases skin-friction drag.[106][107]
Under usual flight conditions, the boundary layer remains attached to both the upper and lower surfaces all the way to the trailing edge, and its effect on the rest of the flow is modest. Compared to the predictions of inviscid-flow theory, in which there is no boundary layer, the attached boundary layer reduces the lift by a modest amount and modifies the pressure distribution somewhat, which results in a viscosity-related pressure drag over and above the skin-friction drag. The total of the skin-friction drag and the viscosity-related pressure drag is usually called the profile drag.[107][108]
The maximum lift an airfoil can produce at a given airspeed is limited by boundary-layer separation. As the angle of attack is increased, a point is reached where the boundary layer can no longer remain attached to the upper surface. When the boundary layer separates, it leaves a region of recirculating flow above the upper surface, as illustrated in the flow-visualization photo at right. This is known as the stall, or stalling. At angles of attack above the stall, lift is significantly reduced, though it is not zero. The maximum lift that can be achieved before stall, in terms of the lift coefficient, is generally less than 2.0 for single-element airfoils and can be more than 3.0 for airfoils with high-lift slotted flaps deployed.[109]
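The maximum lift coefficient directly sets the minimum speed for level flight: from L = ½ρV²S·C_L with L equal to the weight, V_stall = √(2W/(ρS·C_Lmax)). A sketch with assumed weight and wing-area values (the C_Lmax figures of 1.5 and 3.0 echo the single-element and slotted-flap ranges mentioned above):

```python
import math

# Stall speed from the maximum lift coefficient:
# W = 0.5 * rho * V^2 * S * CL_max  =>  V_stall = sqrt(2W / (rho * S * CL_max)).
# Aircraft weight and wing area below are assumed for illustration.

def stall_speed(weight, rho, wing_area, cl_max):
    """Minimum level-flight speed (m/s) at the given maximum lift coefficient."""
    return math.sqrt(2.0 * weight / (rho * wing_area * cl_max))

w, rho, s = 10000.0, 1.225, 16.0   # weight (N), air density (kg/m^3), area (m^2)
print(stall_speed(w, rho, s, 1.5))  # single-element airfoil, ~26 m/s
print(stall_speed(w, rho, s, 3.0))  # slotted flaps deployed, ~18 m/s
```

Deploying high-lift flaps roughly doubles C_Lmax here and so cuts the stall speed by a factor of √2, which is exactly why flaps are used for takeoff and landing.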
## Lift forces on bluff bodies
The flow around bluff bodies – i.e. bodies without a streamlined shape, or stalled airfoils – may also generate lift, in addition to a strong drag force. This lift may be steady, or it may oscillate due to vortex shedding. Interaction of the object's flexibility with the vortex shedding may enhance the effects of fluctuating lift and cause vortex-induced vibrations.[110] For instance, the flow around a circular cylinder generates a Kármán vortex street: vortices being shed in an alternating fashion from each side of the cylinder. The oscillatory nature of the flow is reflected in the fluctuating lift force on the cylinder, whereas the mean lift force is negligible. The lift force frequency is characterised by the dimensionless Strouhal number, which depends (among other factors) on the Reynolds number of the flow.[111][112]
For a flexible structure, this oscillatory lift force may induce vortex-induced vibrations. Under certain conditions – for instance resonance or strong spanwise correlation of the lift force – the resulting motion of the structure due to the lift fluctuations may be strongly enhanced. Such vibrations may pose problems and threaten collapse in tall man-made structures like industrial chimneys.[110]
In the Magnus effect, a lift force is generated by a spinning cylinder in a freestream. Here the mechanical rotation acts on the boundary layer, causing it to separate at different locations on the two sides of the cylinder. The asymmetric separation changes the effective shape of the cylinder as far as the flow is concerned such that the cylinder acts like a lifting airfoil with circulation in the outer flow.[113]
## Alternative explanations, misconceptions, and controversies
Many other alternative explanations for the generation of lift by an airfoil have been put forward, a few of which are presented here. Most of them are intended to explain the phenomenon of lift to a general audience. Although the explanations may share features with the explanations above, additional assumptions and simplifications may be introduced. This can reduce the validity of an alternative explanation to a limited sub-class of lift-generating conditions, or may not allow a quantitative analysis. Several theories introduce assumptions that have proved to be wrong, such as the equal transit-time theory.
### False explanation based on equal transit-time
An illustration of the incorrect equal transit-time explanation of airfoil lift.
Basic or popular sources often describe the "equal transit-time" theory of lift, which incorrectly assumes that the parcels of air that divide at the leading edge of an airfoil must rejoin at the trailing edge, forcing the air traveling along the longer upper surface to go faster. Bernoulli's Principle is then cited to conclude that since the air moves slower along the bottom of the wing, the air pressure must be higher, pushing the wing up.[114]
However, there is no physical principle that requires equal transit time and experimental results show that this assumption is false.[115][116][117][118][119][120] In fact, the air moving over the top of an airfoil generating lift moves much faster than the equal transit theory predicts.[121] Further, the theory violates Newton's third law of motion, since it describes a force on the wing with no opposite force.[122]
The assertion that the air must arrive simultaneously at the trailing edge is sometimes referred to as the "Equal Transit-Time Fallacy".[123][124][125][126][127]
### Controversy regarding the Coandă effect
In its original sense, the Coandă effect refers to the tendency of a fluid jet to stay attached to an adjacent surface that curves away from the flow, and the resultant entrainment of ambient air into the flow. The effect is named for Henri Coandă, the Romanian aerodynamicist who exploited it in many of his patents.
More broadly, some consider the effect to include the tendency of any fluid boundary layer to adhere to a curved surface, not just the boundary layer accompanying a fluid jet. It is in this broader sense that the Coandă effect is used by some to explain why the air flow remains attached to the top side of an airfoil.[128] Jef Raskin,[129] for example, describes a simple demonstration, using a straw to blow over the upper surface of a wing. The wing deflects upwards, which is taken to show that the Coandă effect creates lift. This demonstration correctly shows the Coandă effect as a fluid jet (the exhaust from a straw) adhering to a curved surface (the wing). However, the upper surface in this flow is a complicated, vortex-laden mixing layer, while on the lower surface the flow is quiescent. The physics of this demonstration are very different from that of the general flow over the wing.[130] The usage in this sense is encountered in some popular references on aerodynamics.[128][129] This is a controversial use of the term "Coandă effect". The more established view in the aerodynamics field is that the Coandă effect is defined in the more limited sense above,[130][131][132] and the flow following the upper surface simply reflects an absence of boundary-layer separation and is not an example of the Coandă effect.[133][134][135][136]
### Misconception regarding the role of viscosity
Explanations that use the term "Coandă effect" sometimes further assert that the viscosity of the flow in the boundary layer is responsible for the ability of the flow to follow the convex upper surface.[137][138] However, the idea that viscosity plays a significant role in flow turning is not consistent with the physics of curved boundary-layer flows. Analysis of the momentum balance in the flow in the boundary layer shows that the flow curvature is caused almost exclusively by the pressure gradient and that viscosity plays practically no direct role in the ability of the flow to follow a curved surface.[139]
### Misconception regarding "pulling down" of the flow
Explanations that refer to the Coandă effect sometimes also refer to the flow over the upper surface as "sticking" to the airfoil and being "pulled down" to follow the surface.[137] Taken literally, this description is not consistent with the physics of gases. For air to be pulled in the literal sense, it would have to be put in tension (negative pressure). The kinetic theory of gases shows that in a gas at a positive absolute temperature the pressure cannot be negative.[55] Thus for the flow to curve downward over the upper surface, it must be pushed down by higher pressure above than below.[140] The difference in pressure between the flow at the upper surface itself and the flow far above the airfoil is generally small compared with the background atmospheric pressure, so that the lowest pressure on the airfoil upper surface is still strongly positive in an absolute sense.[141]
## Footnotes
1. ^ "What is Lift?". NASA Glenn Research Center. Retrieved March 4, 2009.
2. ^ Kulfan (2010)
3. ^ The amount of lift will be (usually slightly) more or less than gravity depending on the thrust level and vertical alignment of the thrust line. A side thrust line will result in some lift opposing side thrust as well.
4. ^ Clancy, L. J., Aerodynamics, Section 14.6
5. ^ Clancy, L. J., Aerodynamics, Section 5.2
6. ^ a b "There are many theories of how lift is generated. Unfortunately, many of the theories found in encyclopedias, on web sites, and even in some textbooks are incorrect, causing unnecessary confusion for students." NASA http://www.grc.nasa.gov/WWW/K-12/airplane/wrong1.html
7. ^ "Most of the texts present the Bernoulli formula without derivation, but also with very little explanation. When applied to the lift of an airfoil, the explanation and diagrams are almost always wrong. At least for an introductory course, lift on an airfoil should be explained simply in terms of Newton’s Third Law, with the thrust up being equal to the time rate of change of momentum of the air downwards." Cliff Swartz et al. Quibbles, Misunderstandings, and Egregious Mistakes - Survey of High-School Physics Texts THE PHYSICS TEACHER Vol. 37, May 1999 pg 300 http://aapt.scitation.org/doi/abs/10.1119/1.880274
8. ^ a b "One explanation of how a wing of an airplane gives lift is that as a result of the shape of the airfoil, the air flows faster over the top than it does over the bottom because it has farther to travel. Of course, with our thin-airfoil sails, the distance along the top is the same as along the bottom so this explanation of lift fails." The Aerodynamics of Sail Interaction by Arvel Gentry Proceedings of the Third AIAA Symposium on the Aero/Hydronautics of Sailing 1971 http://www.arvelgentry.com/techs/The%20Aerodynamics%20of%20Sail%20Interaction.pdf
9. ^ a b "An explanation frequently given is that the path along the upper side of the aerofoil is longer and the air thus has to be faster. This explanation is wrong." A comparison of explanations of the aerodynamic lifting force Klaus Weltner Am. J. Phys. Vol. 55, No. 1, January 1987
10. ^ a b "The lift on the body is simple...it's the re-action of the solid body to the turning of a moving fluid...Now why does the fluid turn the way that it does? That's where the complexity enters in because we are dealing with a fluid. ...The cause for the flow turning is the simultaneous conservation of mass, momentum (both linear and angular), and energy by the fluid. And it's confusing for a fluid because the mass can move and redistribute itself (unlike a solid), but can only do so in ways that conserve momentum (mass times velocity) and energy (mass times velocity squared)... A change in velocity in one direction can cause a change in velocity in a perpendicular direction in a fluid, which doesn't occur in solid mechanics... So exactly describing how the flow turns is a complex problem; too complex for most people to visualize. So we make up simplified "models". And when we simplify, we leave something out. So the model is flawed. Most of the arguments about lift generation come down to people finding the flaws in the various models, and so the arguments are usually very legitimate." Tom Benson of NASA's Glenn Research Center in an interview with AlphaTrainer.Com http://www.alphatrainer.com/pages/corner.htm
11. ^ "Both approaches are equally valid and equally correct, a concept that is central to the conclusion of this article." Charles N. Eastlake An Aerodynamicist’s View of Lift, Bernoulli, and Newton THE PHYSICS TEACHER Vol. 40, March 2002 http://www.df.uba.ar/users/sgil/physics_paper_doc/papers_phys/fluids/Bernoulli_Newton_lift.pdf
12. ^ Ison, David, "Bernoulli Or Newton: Who's Right About Lift?", Plane & Pilot, retrieved January 14, 2011
13. ^ "...the effect of the wing is to give the air stream a downward velocity component. The reaction force of the deflected air mass must then act on the wing to give it an equal and opposite upward component." In: Halliday, David; Resnick, Robert, Fundamentals of Physics 3rd Edition, John Wiley & Sons, p. 378
14. ^ a b Anderson and Eberhardt (2001)
15. ^ a b Langewiesche (1944)
16. ^ "When air flows over and under an airfoil inclined at a small angle to its direction, the air is turned from its course. Now, when a body is moving in a uniform speed in a straight line, it requires force to alter either its direction or speed. Therefore, the sails exert a force on the wind and, since action and reaction are equal and opposite, the wind exerts a force on the sails." In: Morwood, John, Sailing Aerodynamics, Adlard Coles Limited, p. 17
17. ^ "Lift is a force generated by turning a moving fluid... If the body is shaped, moved, or inclined in such a way as to produce a net deflection or turning of the flow, the local velocity is changed in magnitude, direction, or both. Changing the velocity creates a net force on the body." "Lift from Flow Turning". NASA Glenn Research Center. Retrieved July 7, 2009.
18. ^ a b Landau, L. D.; Lifshitz, E. M. (1987), Fluid mechanics, Course of Theoretical Physics, 6 (2nd revised ed.), Pergamon Press, pp. 68–69, 153–155, ISBN 0-08-033932-8, OCLC 15017127
19. ^ "Essentially, due to the presence of the wing (its shape and inclination to the incoming flow, the so-called angle of attack), the flow is given a downward deflection, as shown in Fig. 2. It is Newton’s third law at work here, with the flow then exerting a reaction force on the wing in an upward direction, thus generating lift." Vassilis Spathopoulos Flight Physics for Beginners: Simple Examples of Applying Newton’s Laws The Physics Teacher Vol. 49, September 2011 pg 373 http://tpt.aapt.org/resource/1/phteah/v49/i6/p373_s1
20. ^ "The main fact of all heavier-than-air flight is this: the wing keeps the airplane up by pushing the air down." In: Langewiesche, Wolfgang (1990), Stick and Rudder: An Explanation of the Art of Flying, McGraw-Hill, pp. 6–10, ISBN 0-07-036240-8
21. ^ "Birds and aircraft fly because they are constantly pushing air downwards: L = dp/dt Here L is the lift force and dp/dt is the rate at which downward momentum is imparted to the airflow." Flight without Bernoulli Chris Waltham THE PHYSICS TEACHER Vol. 36, Nov. 1998 http://www.df.uba.ar/users/sgil/physics_paper_doc/papers_phys/fluids/fly_no_bernoulli.pdf
22. ^ Clancy, L. J.; Aerodynamics, Pitman 1975, page 76: "This lift force has its reaction in the downward momentum which is imparted to the air as it flows over the wing. Thus the lift of the wing is equal to the rate of transport of downward momentum of this air."
23. ^ "...if the air is to produce an upward force on the wing, the wing must produce a downward force on the air. Because under these circumstances air cannot sustain a force, it is deflected, or accelerated, downward. Newton's second law gives us the means for quantifying the lift force: Flift = m∆v/∆t = ∆(mv)/∆t. The lift force is equal to the time rate of change of momentum of the air." Norman F. Smith "Bernoulli and Newton in Fluid Mechanics" The Physics Teacher 10, 451 (1972); doi:10.1119/1.2352317
24. ^ a b c Anderson (2004).
25. ^ "L = time rate of change of momentum of airflow in the downward direction"[24]
26. ^ Doug McLean Understanding Aerodynamics: Arguing from the Real Physics Section 7.3.3 Wiley http://onlinelibrary.wiley.com/doi/10.1002/9781118454190.ch3/pdf
27. ^ "A complete statement of Bernoulli's Theorem is as follows: "In a flow where no energy is being added or taken away, the sum of its various energies is a constant: consequently where the velocity increases the pressure decreases and vice versa." Smith, Norman F. "Bernoulli, Newton and Dynamic Lift Part I". School Science and Mathematics. 73 (3): 181–186. doi:10.1111/j.1949-8594.1973.tb08998.x.
28. ^ "The effect of squeezing streamlines together as they divert around the front of an airfoil shape is that the velocity must increase to keep the mass flow constant since the area between the streamlines has become smaller." Charles N. Eastlake An Aerodynamicist’s View of Lift, Bernoulli, and Newton THE PHYSICS TEACHER Vol. 40, March 2002 http://www.df.uba.ar/users/sgil/physics_paper_doc/papers_phys/fluids/Bernoulli_Newton_lift.pdf
29. ^ "There is no way to predict, from Bernoulli's equation alone, what the pattern of streamlines will be for a particular wing." Halliday and Resnick Fundamentals of Physics 3rd Ed. Extended pg 378
30. ^ "The generation of lift may be explained by starting from the shape of streamtubes above and below an airfoil. With a constriction above and an expansion below, it is easy to demonstrate lift, again via the Bernoulli equation. However, the reason for the shape of the streamtubes remains obscure..." Jaakko Hoffren Quest for an Improved Explanation of Lift American Institute of Aeronautics and Astronautics 2001 pg 3 http://corsair.flugmodellbau.de/files/area2/LIFT.PDF
31. ^ "There is nothing wrong with the Bernoulli principle, or with the statement that the air goes faster over the top of the wing. But, as the above discussion suggests, our understanding is not complete with this explanation. The problem is that we are missing a vital piece when we apply Bernoulli’s principle. We can calculate the pressures around the wing if we know the speed of the air over and under the wing, but how do we determine the speed?" How Airplanes Fly: A Physical Description of Lift David Anderson and Scott Eberhardt http://www.allstar.fiu.edu/aero/airflylvl3.htm
32. ^ "The problem with the "Venturi" theory is that it attempts to provide us with the velocity based on an incorrect assumption (the constriction of the flow produces the velocity field). We can calculate a velocity based on this assumption, and use Bernoulli's equation to compute the pressure, and perform the pressure-area calculation and the answer we get does not agree with the lift that we measure for a given airfoil." NASA Glenn Research Center http://www.grc.nasa.gov/WWW/K-12/airplane/wrong3.html
33. ^ "A concept...uses a symmetrical convergent-divergent channel, like a longitudinal section of a Venturi tube, as the starting point. It is widely known that, when such a device is put in a flow, the static pressure in the tube decreases. When the upper half of the tube is removed, a geometry resembling the airfoil is left, and suction is still maintained on top of it. Of course, this explanation is flawed too, because the geometry change affects the whole flowfield and there is no physics involved in the description." Jaakko Hoffren Quest for an Improved Explanation of Lift Section 4.3 American Institute of Aeronautics and Astronautics 2001 http://corsair.flugmodellbau.de/files/area2/LIFT.PDF
34. ^ "This answers the apparent mystery of how a symmetric airfoil can produce lift. ... This is also true of a flat plate at non-zero angle of attack." Charles N. Eastlake An Aerodynamicist’s View of Lift, Bernoulli, and Newton http://www.df.uba.ar/users/sgil/physics_paper_doc/papers_phys/fluids/Bernoulli_Newton_lift.pdf
35. ^ "This classic explanation is based on the difference of streaming velocities caused by the airfoil. There remains, however, a question: How does the airfoil cause the difference in streaming velocities? Some books don't give any answer, while others just stress the picture of the streamlines, saying the airfoil reduces the separations of the streamlines at the upper side (Fig. 1). They do not say how the airfoil manages to do this. Thus this is not a sufficient answer." Klaus Weltner Bernoulli's Law and Aerodynamic Lifting Force The Physics Teacher February 1990 p. 84. http://scitation.aip.org/getpdf/servlet/GetPDFServlet?filetype=pdf&id=PHTEAH000028000002000084000001&idtype=cvips&prog=normal
36. ^ "...there is nothing in aerodynamics requiring the top and bottom flows having to reach the trailing edge at the same time. This idea is a completely erroneous explanation for lift. The flow on top gets to the trailing edge long before the flow on the bottom because of the circulation flow field." Arvel Gentry Origins of Lift http://www.arvelgentry.com/techs/origins_of_lift.pdf
37. ^ A uniform pressure surrounding a body does not create a net force. (See buoyancy). Therefore pressure differences are needed to exert a force on a body immersed in a fluid. For example, see: Batchelor, G.K. (1967), An Introduction to Fluid Dynamics, Cambridge University Press, pp. 14–15, ISBN 0-521-66396-2
38. ^ "...if a streamline is curved, there must be a pressure gradient across the streamline..."Babinsky, Holger (November 2003), "How do wings work?" (PDF), Physics Education
39. ^ "Thus a distribution of the pressure is created which is given in Euler's equation. The physical reason is the aerofoil which forces the streamline to follow its curved surface. The low pressure at the upper side of the aerofoil is a consequence of the curved surface." A comparison of explanations of the aerodynamic lifting force Klaus Weltner Am. J. Phys. Vol. 55, No. 1, January 1987 pg 53 http://aapt.scitation.org/doi/pdf/10.1119/1.14960
40. ^ "You can argue that the main lift comes from the fact that the wing is angled slightly upward so that air striking the underside of the wing is forced downward. The Newton's 3rd law reaction force upward on the wing provides the lift. Increasing the angle of attack can increase the lift, but it also increases drag so that you have to provide more thrust with the aircraft engines" hyperphysics Georgia State University Department of Physics and Astronomy http://hyperphysics.phy-astr.gsu.edu/hbase/fluids/angatt.html
41. ^ "If we enlarge the angle of attack we enlarge the deflection of the airstream by the airfoil. This results in the enlargement of the vertical component of the velocity of the airstream... we may expect that the lifting force depends linearly on the angle of attack. This dependency is in complete agreement with the results of experiments..." Klaus Weltner A comparison of explanations of the aerodynamic lifting force Am. J. Phys. 55(1), January 1987 pg 52
42. ^ "The decrease of angles exceeding 25° is plausible. For large angles of attack we get turbulence and thus less deflection downward." Klaus Weltner A comparison of explanations of the aerodynamic lifting force Am. J. Phys. 55(1), January 1987 pg 52
43. ^ Clancy (1975), Section 5.2
44. ^ Abbott, and von Doenhoff (1958), Section 4.2
45. ^ "With an angle of attack of 0°, we can explain why we already have a lifting force. The air stream behind the aerofoil follows the trailing edge. The trailing edge already has a downward direction, if the chord to the middle line of the profile is horizontal." Klaus Weltner A comparison of explanations of the aerodynamic lifting force Am. J. Phys. 55(1), January 1987 p. 52
46. ^ "...the important thing about an aerofoil (say an aircraft wing) is not so much that its upper surface is humped and its lower surface is nearly flat, but simply that it moves through the air at an angle. This also avoids the otherwise difficult paradox that an aircraft can fly upside down!" N. H. Fletcher Mechanics of Flight Physics Education July 1975 http://iopscience.iop.org/0031-9120/10/5/009/pdf/0031-9120_10_5_009.pdf
47. ^ "It requires adjustment of the angle of attack, but as clearly demonstrated in almost every air show, it can be done." hyperphysics Georgia State University Department of Physics and Astronomy http://hyperphysics.phy-astr.gsu.edu/hbase/fluids/airfoil.html#c2
48. ^ Abbott and von Doenhoff (1958), Section 1.2
49. ^ Milne-Thomson (1966), Section 12.3
50. ^ Anderson, John D. (2004), Introduction to Flight (5th ed.), McGraw-Hill, pp. 257–261, ISBN 0-07-282569-3
51. ^ Yoon, Joe (2003-12-28), Mach Number & Similarity Parameters, Aerospaceweb.org, retrieved 2009-02-11
52. ^ Anderson (2008), Section 5.7
53. ^ a b c d McLean (2012), Section 7.3.3
54. ^ a b Milne-Thomson (1966), Section 1.41
55. ^ a b Jeans (1967), Section 33.
56. ^ a b c Clancy (1975), Section 4.5
57. ^ Milne-Thomson (1966), Section 5.31
58. ^ McLean (2012), Section 3.5
59. ^ Batchelor (1967), Section 1.2
60. ^ Thwaites (1958), Section I.2
61. ^ von Mises (1959), Section I.1
62. ^ Durand (1932), Section D, Historical Sketch by R. Giacomelli
63. ^ Anderson (1997)
64. ^ McLean (2012), Sections 7.3.1.1 and 7.3.2
65. ^ Weltner (1987), pg 53
66. ^ "What is Lift?". NASA Glenn Research Center. Retrieved March 4, 2009.
67. ^ "Analysis of fluid flow is typically presented to engineering students in terms of three fundamental principles: conservation of mass, conservation of momentum, and conservation of energy." Charles N. Eastlake An Aerodynamicist’s View of Lift, Bernoulli, and Newton THE PHYSICS TEACHER Vol. 40, March 2002 http://www.df.uba.ar/users/sgil/physics_paper_doc/papers_phys/fluids/Bernoulli_Newton_lift.pdf
68. ^ White (1991), Chapter 1
69. ^ Batchelor (1967), Chapter 3
70. ^ Aris (1989)
71. ^ a b Spalart(2000) Amsterdam, The Netherlands. Elsevier Science Publishers.
72. ^ White (1991), Section 6-2
73. ^ Schlichting (1979), Chapter XVIII
74. ^ Anderson (1995)
75. ^ "...whenever the velocity field is irrotational, it can be expressed as the gradient of a scalar function we call a velocity potential φ: V = ∇φ. The existence of a velocity potential can greatly simplify the analysis of inviscid flows by way of potential-flow theory..." Doug McLean Understanding Aerodynamics: Arguing from the Real Physics p 26 Wiley http://onlinelibrary.wiley.com/doi/10.1002/9781118454190.ch3/pdf
76. ^ Elements of Potential Flow California State University Los Angeles http://instructional1.calstatela.edu/cwu/me408/Slides/PotentialFlow/PotentialFlow.htm
77. ^ Batchelor (1967), Section 2.7
78. ^ Milne-Thomson (1966), Section 3.31
79. ^ Clancy (1975), Section 4.8
80. ^ Anderson (1991), Section 4.5
81. ^ Clancy (1975), Sections 8.1-8
82. ^ a b von Mises (1959), Section VIII.2
83. ^ a b Anderson (1991), Section 3.15
84. ^ Prandtl and Tietjens (1934)
85. ^ Batchelor (1967), Section 6.7
86. ^ Gentry (2006)
87. ^ McLean (2012), Section 7.2.1
88. ^ Shapiro (1953), Section 1.5
89. ^ a b c d Lissaman (1996), Section titled "Lift in thin slices: the two dimensional case"
90. ^ a b c Durand (1932), Sections B. V. 6 and B. V. 7
91. ^ a b c Batchelor (1967), Section 6.4, p. 407
92. ^ Prandtl and Tietjens (1934), Figure 150
93. ^ Lanchester (1907), Sections 1-6
94. ^ McLean (2012), Section 8.1.3
95. ^ McLean (2012), Section 8.1.1
96. ^ Hurt, H. H. (1965) Aerodynamics for Naval Aviators, Figure 1.30, NAVWEPS 00-80T-80
97. ^ Lanchester (1907)
98. ^ Milne-Thomson (1966), Section 10.1
99. ^ Clancy (1975), Section 8.9
100. ^ Anderson (1991), Section 5.2
101. ^ Batchelor (1967), Section 2.4
102. ^ Milne-Thomson (1966), Section 9.3
103. ^ Durand (1932), Section III.2
104. ^ White (1991), Section 1-4
105. ^ White (1991), Section 1-2
106. ^ a b Anderson (1991), Chapter 17
107. ^ a b Abbott and von Doenhoff (1958), Chapter 5
108. ^ Schlichting (1979), Chapter XXIV
109. ^ Abbott and Doenhoff (1958), Chapter 8
110. ^ a b Williamson, C.H.K.; Govardhan, R. (2004), "Vortex-induced vibrations", Annual Review of Fluid Mechanics, 36: 413–455, Bibcode:2004AnRFM..36..413W, doi:10.1146/annurev.fluid.36.050802.122128
111. ^ Sumer, B. Mutlu; Fredsøe, Jørgen (2006), Hydrodynamics around cylindrical structures (revised ed.), World Scientific, pp. 6–13, 42–45 & 50–52, ISBN 981-270-039-0
112. ^ Zdravkovich, M.M. (2003), Flow around circular cylinders, 2, Oxford University Press, pp. 850–855, ISBN 0-19-856561-5
113. ^ Clancy, L. J., Aerodynamics, Sections 4.5 and 4.6
114. ^ "The airfoil of the airplane wing, according to the textbook explanation that is more or less standard in the United States, has a special shape with more curvature on top than on the bottom; consequently, the air must travel farther over the top surface than over the bottom surface. Because the air must make the trip over the top and bottom surfaces in the same elapsed time ..., the velocity over the top surface will be greater than over the bottom. According to Bernoulli's theorem, this velocity difference produces a pressure difference which is lift." Bernoulli and Newton in Fluid Mechanics Norman F. Smith The Physics Teacher November 1972 Volume 10, Issue 8, pp. 451 http://scitation.aip.org/getpdf/servlet/GetPDFServlet?filetype=pdf&id=PHTEAH000010000008000451000001&idtype=cvips&doi=10.1119/1.2352317&prog=normal
115. ^ "Unfortunately, this explanation falls to earth on three counts. First, an airfoil need not have more curvature on its top than on its bottom. Airplanes can and do fly with perfectly symmetrical airfoils; that is with airfoils that have the same curvature top and bottom. Second, even if a humped-up (cambered) shape is used, the claim that the air must traverse the curved top surface in the same time as it does the flat bottom surface...is fictional. We can quote no physical law that tells us this. Third—and this is the most serious—the common textbook explanation, and the diagrams that accompany it, describe a force on the wing with no net disturbance to the airstream. This constitutes a violation of Newton's third law." Bernoulli and Newton in Fluid Mechanics Norman F. Smith The Physics Teacher November 1972 Volume 10, Issue 8, pp. 451 http://tpt.aapt.org/resource/1/phteah/v10/i8
116. ^ Anderson, David (2001), Understanding Flight, New York: McGraw-Hill, pp. 15–16, ISBN 0-07-136377-7, The first thing that is wrong is that the principle of equal transit times is not true for a wing with lift.
117. ^ Anderson, John (2005). Introduction to Flight. Boston: McGraw-Hill Higher Education. p. 355. ISBN 0072825693. It is then assumed that these two elements must meet up at the trailing edge, and because the running distance over the top surface of the airfoil is longer than that over the bottom surface, the element over the top surface must move faster. This is simply not true
118. ^ http://www.telegraph.co.uk/science/science-news/9035708/Cambridge-scientist-debunks-flying-myth.html Cambridge scientist debunks flying myth UK Telegraph 24 Jan 2012
119. ^ Flow Visualization. National Committee for Fluid Mechanics Films/Educational Development Center. Retrieved January 21, 2009. A visualization of the typical retarded flow over the lower surface of the wing and the accelerated flow over the upper surface starts at 5:29 in the video.
120. ^ "...do you remember hearing that troubling business about the particles moving over the curved top surface having to go faster than the particles that went underneath, because they have a longer path to travel but must still get there at the same time? This is simply not true. It does not happen." Charles N. Eastlake An Aerodynamicist’s View of Lift, Bernoulli, and Newton THE PHYSICS TEACHER Vol. 40, March 2002 PDF
121. ^ "The actual velocity over the top of an airfoil is much faster than that predicted by the "Longer Path" theory and particles moving over the top arrive at the trailing edge before particles moving under the airfoil."Glenn Research Center (March 15, 2006). "Incorrect Lift Theory". NASA. Retrieved August 12, 2010.
122. ^ "...the air is described as producing a force on the object without the object having any opposite effect on the air. Such a condition, we should quickly recognize, embodies an action without a reaction, which is, according to Newton’s Third Law, impossible." Norman F. Smith Bernoulli, Newton,and Dynamic Lift Part I School Science and Mathematics, 73, 3, Mar 1973 http://eric.ed.gov/?id=EJ075197
123. ^ "A false explanation for lift has been put forward in mainstream books, and even in scientific exhibitions. Known as the "equal transit-time" explanation, it states that the parcels of air which are divided by an airfoil must rejoin again; because of the greater curvature (and hence longer path) of the upper surface of an aerofoil, the air going over the top must go faster in order to "catch up" with the air flowing around the bottom. Therefore, because of its higher speed the pressure of the air above the airfoil must be lower. Despite the fact that this "explanation" is probably the most common of all, it is false. It has recently been dubbed the "Equal transit-time fallacy"." "Fixed wing aircraft facts and how aircraft fly". Retrieved July 7, 2009.
124. ^ ...it leaves the impression that Professor Bernoulli is somehow to blame for the "equal transit time" fallacy... John S. Denker (1999). "Critique of "How Airplanes Fly"". Retrieved July 7, 2009.
125. ^ The fallacy of equal transit time can be deduced from consideration of a flat plate, which will indeed produce lift, as anyone who has handled a sheet of plywood in the wind can testify. Gale M. Craig. "Physical principles of winged flight". Retrieved July 7, 2009.
126. ^ Fallacy 1: Air takes the same time to move across the top of an aerofoil as across the bottom. Peter Eastwell (2007), "Bernoulli? Perhaps, but What About Viscosity?" (PDF), The Science Education Review, 6 (1), retrieved July 14, 2009.
127. ^ "There is a popular fallacy called the equal transit-time fallacy that claims the two halves rejoin at the trailing edge of the aerofoil." Ethirajan Rathakrishnan Theoretical Aerodynamics John Wiley & sons 2013 section 4.10.1
128. ^ a b Anderson, David; Eberhart, Scott (1999), How Airplanes Fly: A Physical Description of Lift, retrieved June 4, 2008
129. ^ a b Raskin, Jef (1994), Coanda Effect: Understanding Why Wings Work, archived from the original on September 28, 2007
130. ^ a b Auerbach, David (2000), "Why Aircraft Fly", Eur. J. Phys., 21 (4): 289–296, Bibcode:2000EJPh...21..289A, doi:10.1088/0143-0807/21/4/302
131. ^ Denker, JS, Fallacious Model of Lift Production, retrieved 2008-08-18
132. ^ Wille, R.; Fernholz, H. (1965), "Report on the first European Mechanics Colloquium, on the Coanda effect", J. Fluid Mech., 23 (4): 801–819, Bibcode:1965JFM....23..801W, doi:10.1017/S0022112065001702
133. ^ Auerbach (2000)
134. ^ Denker (1996)
135. ^ Wille and Fernholz (1965)
136. ^ White, Frank M. (2002), Fluid Mechanics (5th ed.), McGraw Hill
137. ^ a b Raskin (1994)
138. ^ Anderson, D. F., Eberhardt, S., 2001, states that "differences in speed in adjacent layers cause shear forces, which cause the flow of the fluid to want to bend in the direction of the slower layer." This assertion is not consistent with the actual momentum balance in a curved boundary-layer flow. See equation 4c in Van Dyke (1969), for example.
139. ^ Van Dyke (1969). The derivation of equation 4c shows that the contribution of viscous stress to flow turning is negligible.
140. ^ Babinsky (2003)
141. ^ McLean (2012), Sections 7.3.3.9 and 7.3.3.13
## References
• Abbott, I. H.; von Doenhoff, A. E. (1958), Theory of Wing Sections, Dover Publications
• Anderson, D. F.; Eberhardt, S. (2001), Understanding Flight, McGraw-Hill.
• Anderson, J. D. (1991), Fundamentals of Aerodynamics, 2nd edition, McGraw-Hill
• Anderson, J. D. (1995), Computational Fluid Dynamics, The Basics With Applications, ISBN 0-07-113210-4
• Anderson, J. D. (1997), A History of Aerodynamics, Cambridge University Press.
• Anderson, John D. (2004), Introduction to Flight (5th ed.), McGraw-Hill, pp. 352–361, §5.19, ISBN 0-07-282569-3
• Anderson, J. D. (2008), Introduction to Flight, 6th edition, McGraw Hill.
• Aris, R. (1989), Vectors, Tensors, and the basic Equations of Fluid Mechanics, Dover Publications
• Auerbach, D. (2000), Why Aircraft Fly, Eur. J. Phys. 21 (4): 289–296
• Babinsky, H. (2003), How do wings work?, Phys. Educ., vol. 38, p. 497.
• Batchelor, G. K. (1967), An Introduction to Fluid Dynamics, Cambridge University Press
• Clancy, L. J. (1975), Aerodynamics, Longman Scientific and Technical
• Craig, G. M. (1997), Stop Abusing Bernoulli, Anderson, Indiana: Regenerative Press
• Durand, W. F., ed. (1932), Aerodynamic Theory, vol. 1, Dover Publications
• Eastlake, C. N. (2002), An Aerodynamicist’s View of Lift, Bernoulli, and Newton, THE PHYSICS TEACHER Vol. 40
• Jeans, J. (1967), An Introduction to the Kinetic Theory of Gases, Cambridge at the University Press
• Kulfan, B. M. (2010), Paleoaerodynamic Explorations Part I: Evolution of Biological and Technical Flight, AIAA 2010-154.
• Lanchester, F. W. (1907), Aerodynamics, A. Constable and Company
• Langewiesche, W. (1944), Stick and Rudder - An Explanation of the Art of Flying, McGraw-Hill.
• Lissaman, P. B. S. (1996), The facts of lift, AIAA 1996-161.
• Marchai, C. A. (1985), Sailing Theory and Practice, Putnam
• McBeath, S. (2006), Competition Car Aerodynamics, Sparkford, Haynes
• McLean, D. (2012), Understanding Aerodynamics - Arguing from the Real Physics, Wiley
• Milne-Thomson, L. M. (1966), Theoretical Aerodynamics, 4th edition., Dover Publications
• Prandtl, L.; Tietjens, O. G. (1934), Applied Hydro- and Aeromechanics, Dover Publications.
• Raskin, J. (1994), Coanda Effect: Understanding Why Wings Work, archived from the original on September 28, 2007
• Schlichting, H. (1979), Boundary-Layer Theory, Seventh Edition, McGraw-Hill
• Shapiro, A. H. (1953), The Dynamics and Thermodynamics of Compressible Fluid Flow, Ronald Press Company.
• Smith, N. F. (1972), Bernoulli and Newton in Fluid Mechanics, The Physics Teacher November 1972 Volume 10, Issue 8, pp. 451
• Spalart, P. R. (2000), Strategies for turbulence modeling and simulations, International Journal of Heat and Fluid Flow, vol. 21, iss. 3. June 2000, p. 252. Amsterdam, The Netherlands. Elsevier Science Publishers.
• Sumer, B. Mutlu; Fredsøe, Jørgen (2006), Hydrodynamics around cylindrical structures (revised ed.)
• Thwaites, B., ed. (1958), Incompressible Aerodynamics, Dover Publications
• Tritton, D. J. (1980), Physical Fluid Dynamics, Van Nostrand Reinhold
• Van Dyke, M. (1969), Higher-Order Boundary-Layer Theory, Annual Review of Fluid Mechanics
• von Mises, R. (1959), Theory of Flight, Dover Publications, Inc
• Waltham, C. (1998), Flight without Bernoulli, The Physics Teacher, vol. 36, Nov. 1998
• Weltner, K. (1987), A comparison of explanations of the aerodynamic lifting force, Am. J. Phys. 55 (1), January 1987, pg 53
• White, F. M. (1991), Viscous Fluid Flow, 2nd edition, McGraw-Hill
• Wille, R; Fernholz, H. (1965), Report on the first European Mechanics Colloquium, on the Coanda effect, J. Fluid Mech. 23 (4): 801–819
• Williamson, C. H. K.; Govardhan, R (2004), Vortex-induced vibrations, Annual Review of Fluid Mechanics 36: 413–455, Bibcode:2004AnRFM..36..413W, doi:10.1146/annurev.fluid.36.050802.122128
• Zdravkovich, M. M. (2003), Flow around circular cylinders 2, Oxford University Press, pp. 850–855, ISBN 0-19-856561-5
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2012_v49n4_829 | ERROR ESTIMATES OF SEMIDISCRETE DISCONTINUOUS GALERKIN APPROXIMATIONS FOR THE VISCOELASTICITY-TYPE EQUATION
Title & Authors
Ohm, Mi-Ray; Lee, Hyun-Young; Shin, Jun-Yong
Abstract
In this paper, we adopt symmetric interior penalty discontinuous Galerkin (SIPG) methods to approximate the solution of nonlinear viscoelasticity-type equations. We construct a finite element space which consists of piecewise continuous polynomials. We introduce an appropriate elliptic-type projection and prove its approximation properties. We construct semidiscrete discontinuous Galerkin approximations and prove optimal convergence in the $\small{L^2}$ norm.
Keywords
viscoelasticity-type equation; discontinuous Galerkin methods; semidiscrete approximations; $\small{L^2}$ optimal convergence
Language
English
References
1. D. N. Arnold, An interior penalty finite element method with discontinuous elements, SIAM J. Numer. Anal. 19 (1982), no. 2, 724-760.
2. I. Babuska and M. Suri, The h-p version of the finite element method with quasi-uniform meshes, RAIRO Model. Math. Anal. Numer. 21 (1987), no. 2, 199-238.
3. I. Babuska and M. Suri, The optimal convergence rates of the p-version of the finite element method, SIAM J. Numer. Anal. 24 (1987), no. 4, 750-776.
4. B. Darlow, R. Ewing, and M. F. Wheeler, Mixed finite element methods for miscible displacement problems in porous media, Soc. Pet. Eng. Report SPE 10500, 1982.
5. J. Douglas and T. Dupont, Interior penalty procedures for elliptic and parabolic Galerkin methods, Computing methods in applied sciences (Second Internat. Sympos., Versailles, 1975), pp. 207-216, Lecture Notes in Phys., Vol. 58, Springer, Berlin, 1976.
6. J. Douglas, M. F. Wheeler, B. L. Darlow, and R. P. Kendall, Self-adaptive finite element simulation of miscible displacement in porous media, Computer Methods in Applied Mechanics and Engineering 47 (1984), 131-159.
7. R. E. Ewing, Time-stepping Galerkin methods for nonlinear Sobolev partial differential equations, SIAM J. Numer. Anal. 15 (1978), no. 6, 1125-1150.
8. H. Koch and S. Antman, Stability and Hopf bifurcation for fully nonlinear parabolic-hyperbolic equations, SIAM J. Math. Anal. 32 (2000), no. 2, 360-384.
9. A. Lasis and E. Suri, hp-version discontinuous Galerkin finite element method for semi-linear parabolic problems, SIAM J. Math. Anal. 45 (2007), no. 4, 1544-1569.
10. Q. Lin and S. Zhang, A direct global superconvergence analysis for Sobolev and viscoelasticity type equations, Appl. Math. 42 (1997), no. 1, 23-34.
11. J. A. Nitsche, Über ein Variationsprinzip zur Lösung von Dirichlet-Problemen bei Verwendung von Teilräumen, die keinen Randbedingungen unterworfen sind, Abh. Math. Sem. Univ. Hamburg 36 (1971), 9-15.
12. J. T. Oden, I. Babuska, and C. E. Baumann, A discontinuous hp finite element method for diffusion problems, J. Comput. Phys. 146 (1998), no. 2, 491-519.
13. M. R. Ohm, H. Y. Lee, and J. Y. Shin, Error estimates for discontinuous Galerkin method for nonlinear parabolic equations, J. Math. Anal. Appl. 315 (2006), no. 1, 132-143.
14. B. Riviere and S. Shaw, Discontinuous Galerkin finite element approximation of nonlinear non-Fickian diffusion in viscoelastic polymers, SIAM J. Numer. Anal. 44 (2006), no. 6, 2650-2670.
15. B. Riviere and M. F. Wheeler, A discontinuous Galerkin method applied to nonlinear parabolic equations, Discontinuous Galerkin methods (Newport, RI, 1999), 231-44, Lect. Notes Comput. Sci. Eng., 11, Springer, Berlin, 2000.
16. S. Shaw and J. Whiteman, Numerical solution of linear quasistatic hereditary viscoelasticity problems, SIAM J. Numer. Anal. 38 (2000), no. 1, 80-97.
17. M. F. Wheeler, An elliptic collocation finite element method with interior penalties, SIAM J. Numer. Anal. 15 (1978), no. 1, 152-161.
http://artint.info/2e/html/ArtInt2e.Ch3.S6.html | # 3.6 Heuristic Search
The search methods in the preceding section are uninformed (or blind) in that they do not take the goal into account until they expand a path that leads to a node that satisfies the goal. Heuristic information about which nodes are most promising can guide the search by changing which node is selected in line 13 of the generic search algorithm in Figure 3.4.
A heuristic function $h(n)$ takes a node $n$ and returns a non-negative real number that is an estimate of the cost of the least-cost path from node $n$ to a goal node. The function $h(n)$ is an admissible heuristic if $h(n)$ is always less than or equal to the actual cost of a lowest-cost path from node $n$ to a goal.
There is nothing magical about a heuristic function. It must use only information that can be readily obtained about a node. Typically there is a trade-off between the amount of work it takes to compute a heuristic value for a node and the accuracy of the heuristic value.
A standard way to derive a heuristic function is to solve a simpler problem and to use the cost to the goal in the simplified problem as the heuristic function of the original problem [see Section 3.6.2].
###### Example 3.13.
For the graph of Figure 3.2, if the cost is the distance traveled, the straight-line distance between the node and its closest goal can be used as the heuristic function.
The examples that follow assume the following heuristic function:
$\begin{array}[]{rclrclrcl}h(\mbox{mail})&=&26&h(\mbox{ts})&=&23&h(o103)&=&21\\ h(o109)&=&24&h(o111)&=&27&h(o119)&=&11\\ h(o123)&=&4&h(o125)&=&6&h(r123)&=&0\\ h(b1)&=&13&h(b2)&=&15&h(b3)&=&17\\ h(b4)&=&18&h(c1)&=&6&h(c2)&=&10\\ h(c3)&=&12&h(\mbox{storage})&=&12\end{array}$
This $h$ function is an admissible heuristic because the $h$ value is less than or equal to the exact cost of a lowest-cost path from the node to a goal. It is the exact cost for node $o123$. It is very much an underestimate of the cost to the goal for node $b1$, which seems to be close, but there is only a long route to the goal. It is very misleading for $c1$, which also seems close to the goal, but it has no path to the goal.
The $h$ function can be extended to be applicable to paths by making the heuristic value of a path equal to the heuristic value of the node at the end of the path. That is:
$h(\left\langle n_{0},\dots,n_{k}\right\rangle)=h(n_{k})$
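As a small illustration (not from the text), the heuristic function of Example 3.13 can be stored as a Python dictionary and extended to paths exactly as above, by taking the $h$ value of the node at the end of the path:

```python
# Heuristic values from Example 3.13 (estimates of the cost to the goal r123).
h = {
    "mail": 26, "ts": 23, "o103": 21, "o109": 24, "o111": 27,
    "o119": 11, "o123": 4, "o125": 6, "r123": 0,
    "b1": 13, "b2": 15, "b3": 17, "b4": 18,
    "c1": 6, "c2": 10, "c3": 12, "storage": 12,
}

def path_h(path):
    """Heuristic value of a path <n0, ..., nk> is h(nk),
    the h value of the node at the end of the path."""
    return h[path[-1]]

print(path_h(["o103", "o109", "o119"]))   # -> 11
```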
A simple use of a heuristic function in depth-first search is to order the neighbors that are added to the stack representing the frontier. The neighbors can be added to the frontier so that the best neighbor is selected first. This is known as heuristic depth-first search. This search selects the locally best path, but it explores all paths from the selected path before it selects another path. Although it is often used, it suffers from the problems of depth-first search: it is not guaranteed to find a solution and may not find an optimal solution.
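The ordering idea can be sketched in Python (a minimal illustration, not the book's code; the function name, toy graph, and $h$ values below are hypothetical):

```python
def heuristic_dfs(graph, h, start, goal):
    """Depth-first search where, at each expansion, the neighbors are
    pushed onto the stack so that the one with the lowest h value is
    explored first.  Like plain depth-first search, this is not
    guaranteed to find an optimal solution, and on infinite graphs it
    may not find a solution at all."""
    frontier = [[start]]                      # stack of paths
    while frontier:
        path = frontier.pop()                 # take the top of the stack
        node = path[-1]
        if node == goal:
            return path
        # Sort descending so the best (lowest-h) neighbor ends up on top.
        for nbr in sorted(graph.get(node, []), key=h.get, reverse=True):
            if nbr not in path:               # avoid cycling
                frontier.append(path + [nbr])
    return None

# A small hypothetical graph (not the one in Figure 3.2):
graph = {"s": ["a", "b"], "a": ["g"], "b": ["g"]}
h = {"s": 3, "a": 1, "b": 2, "g": 0}
print(heuristic_dfs(graph, h, "s", "g"))   # -> ['s', 'a', 'g']
```

Note that the heuristic only reorders the neighbors of the node being expanded; the search still commits to one subtree at a time, which is why it inherits depth-first search's weaknesses.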
Another way to use a heuristic function is to always select a path on the frontier with the lowest heuristic value. This is called greedy best-first search. This method sometimes works well. However, it can follow paths that look promising because they appear (according to the heuristic function) close to the goal, but the path explored may keep getting longer.
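Greedy best-first search can be sketched with a priority queue keyed on the heuristic value of the path's end node (again a hypothetical minimal sketch, with a toy graph and $h$ values of my own):

```python
import heapq
from itertools import count

def greedy_best_first(graph, h, start, goal):
    """Always expand the frontier path whose end node has the lowest
    heuristic value.  The tie-breaking counter keeps heapq from ever
    comparing two paths directly.  As the text warns, a misleading
    heuristic can keep this search following ever-longer paths."""
    tie = count()
    frontier = [(h[start], next(tie), [start])]
    while frontier:
        _, _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in path:
                heapq.heappush(frontier, (h[nbr], next(tie), path + [nbr]))
    return None

graph = {"s": ["a", "b"], "a": ["g"], "b": ["g"]}
h = {"s": 3, "a": 2, "b": 1, "g": 0}
print(greedy_best_first(graph, h, "s", "g"))   # -> ['s', 'b', 'g']
```

Unlike heuristic depth-first search, this method compares all frontier paths on every step, but it still ignores the cost already paid to reach a node, which is the source of the failure described in Example 3.14.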
###### Example 3.14.
Consider the graph shown in Figure 3.9, drawn to scale, where the cost of an arc is its length. The aim is to find the shortest path from $s$ to $g$. Suppose the Euclidean straight line distance to the goal $g$ is used as the heuristic function. A heuristic depth-first search will select the node below $s$ and will never terminate. Similarly, because all of the nodes below $s$ look good, a greedy best-first search will cycle between them, never trying an alternate route from $s$.
https://www.lessonplanet.com/teachers/express-it-enrichment | # Express It: Enrichment
In this algebraic expressions worksheet, students work with a partner to write algebraic expressions for each of the word expressions in the box. Students then answer the questions after they have used all the expressions.
http://noobstarter.com/nootropic-stacks-for-beginners-nootropics-alpha-brain.html | It wasn't always helpful, but it does work sometimes. The first two days gave me stomach and head pain, so I began to test with taking before or after food, and with or without food. The bottle says to take before food, but I preferred taking this with food, more food is better. This doesn't go well in the stomach with something like chocolate, so take this with something like bread or a meal. More importantly, stay very hydrated unless you want a headache, these pills are very hydro-demanding. The pills also work better if you get your blood moving, just a short walk is fine. Energy drinks and coffee go very well with these, as I had a very clear minded experience when taking these with a Monster Java, it was like a cool breeze blowing away the mental fog.
Surgeries – Here's another unpleasant surprise. You're probably thinking we're referring to brain surgery, but that's not the only surgery that can influence the blood flow to your brain in a bad way. For example, heart surgery can cause hypoperfusion. How? Fat globules, which are released during these kinds of procedures, can find their way to your brain and disrupt the optimal blood flow.
Took pill 1:27 PM. At 2 my hunger gets the best of me (despite my usual tea drinking and caffeine+piracetam pills) and I eat a large lunch. This makes me suspicious it was placebo - on the previous days I had noted a considerable appetite-suppressant effect. 5:25 PM: I don’t feel unusually tired, but nothing special about my productivity. 8 PM; no longer so sure. Read and excerpted a fair bit of research I had been putting off since the morning. After putting away all the laundry at 10, still feeling active, I check. It was Adderall. I can’t claim this one either way. By 9 or 10 I had begun to wonder whether it was really Adderall, but I didn’t feel confident saying it was; my feeling could be fairly described as 50%.
L-Glutamine- One Of The 13 Essential Ingredients In Brain Fuel Plus… Perhaps the best fitting ingredient in our product’s name, L-Glutamine is the only compound besides blood sugar that can both cross the blood brain barrier AND be used by the brain for energy, which is why it is commonly called “brain fuel.” In fact L-Glutamine is involved in more metabolic processes than any other amino acid in the entire body. It is shown to promote mental alertness, improve mood and memory, and help with depression and irritability. It has even been shown to improve IQ.
Zack explained that he didn't really like the term enhancement: "We're not talking about superhuman intelligence. No one's saying we're coming out with a pill that's going to make you smarter than Einstein! What we're really talking about is enabling people." He sketched a bell curve on the back of a napkin. "Almost every drug in development is something that will take someone who's working at, like, 40% or 50%, and take them up to 80," he said.
Drugs such as Adderall can cause nervousness, headaches, sleeplessness and decreased appetite, among other side-effects. An FDA warning on Adderall's label notes that "amphetamines have a high potential for abuse" and can lead to dependence. (The label also mentions that adults using Adderall have reported serious cardiac problems, though the role of the drug in those cases is unknown.) Yet college students tend to consider Adderall and Ritalin as benign, in part because they are likely to know peers who have taken the drugs since childhood for ADHD. Indeed, McCabe reports, most students who use stimulants for cognitive enhancement obtain them from an acquaintance with a prescription. Usually the pills are given away, but some students sell them.
Directions: As a dietary supplement, take two (2) veggie capsules once a day. For best results, take 20-30 min before a meal with an 8 oz. glass of water, or as directed by your healthcare professional. Suggested Use: As a dietary supplement, adults take one (1) capsule per day. Do not exceed 2 capsules per day. Take 1 capsule at a time with or after a meal.
For now, instead of reaching for a designer supplement, you're better off taking a multivitamin, according to some experts. It's well known that antioxidants like vitamins C and E protect cells from damage by disarming free radicals. Brain cells are especially vulnerable to these troublemakers because the brain generates more free radicals per gram of tissue than any other organ. Antioxidants also protect neurons by keeping blood vessels supple and open, ensuring the flow of nutrients to the brain.
Please take care when you’re out there on the web or in the world shopping for something to help that in progress novel or craft project of yours along. Take all care when planning on taking anything, be it a nootropic, smart drug, or brain enhancer, and do your research before buying. Make sure your so-called ‘best brain pill’ really is the best brain pill for you.
After many years recruiting teens from across the city to join us for a year of culinary adventures, we’re relying on the city’s network of talented youth service providers to fill the gap and cultivate the next generation of smart, resilient youth leaders. While this isn’t where we wanted to be, we’re reaching for gratitude and sharing KUDOS one last time.
So where did the idea of Blue Monday come from? The concept of Blue Monday was originally coined by Dr Cliff Arnall in 2005 and distributed by the PR company Sky Travel. It has now become an annual event and can fall on either the third or the fourth Monday of January, using Dr Cliff Arnall’s original mathematical equation that measures a combination of factors such as weather, potential debt post-Christmas, the amount of time since Christmas, potential failure of New Year resolutions and motivation levels, that apparently conspire to make the date the gloomiest of the year.
This product is a miracle! I have purchased it TWICE because it is so helpful with my memory and cognition. I bought this product because I needed to strengthen my memory and focus, and I wanted to be awake when I did it! I had just switched to a job that is second shift (2PM-11PM) and it was very difficult to adjust to those hours AND learn all of the new technical systems required for my new job. But after taking this supplement, I noticed a HUGE difference in a few days! I was awake and alert like it was 11AM everyday. But it wasn’t like the jolt you sometimes get from caffeine, more like an alertness after a good night’s sleep. No jitters, no headaches, no stomach upset. Just energy and the feeling of being AWAKE. I am now telling all of my co-workers about it!
Many people prefer the privacy and convenience of ordering brain boosting supplements online and having them delivered right to the front door. At Smart Pill Guide, we have made the process easier, so you can place your order directly through our website with your major credit card or PayPal. Our website is secure, so your personal information is protected and all orders are completely confidential.
Following up on the promising but unrandomized pilot, I began randomizing my LLLT usage since I worried that more productive days were causing use rather than vice-versa. I began on 2 August 2014, and the last day was 3 March 2015 (n=167); this was twice the sample size I thought I needed, and I stopped, as before, as part of cleaning up (I wanted to know whether to get rid of it or not). The procedure was simple: by noon, I flipped a bit and either did or did not use my LED device; if I was distracted or didn’t get around to randomization by noon, I skipped the day. This was an unblinded experiment because finding a randomized on/off switch is tricky/expensive and it was easier to just start the experiment already. The question is simple too: controlling for the simultaneous blind magnesium experiment & my rare nicotine use (I did not use modafinil during this period or anything else I expect to have major influence), is the pilot correlation of d=0.455 on my daily self-ratings borne out by the experiment?
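The analysis step behind that question can be sketched in a few lines. This is a hypothetical illustration, not the experiment's actual data or code: the daily self-ratings below are made up, and Cohen's d is computed here as a simple standardized mean difference.

```python
import random
import statistics

def use_led_today():
    # Unblinded randomization: by noon, flip a bit to decide LED use
    return random.random() < 0.5

def cohens_d(on, off):
    """Standardized mean difference between LLLT-on and LLLT-off day
    ratings, using the combined-sample stdev as a rough pooling."""
    sd = statistics.stdev(on + off)
    return (statistics.mean(on) - statistics.mean(off)) / sd

# Made-up daily self-ratings (1-5 scale), for illustration only
on_days = [4, 3, 4, 5, 3]
off_days = [3, 3, 2, 4, 3]
print(round(cohens_d(on_days, off_days), 2))  # → 0.95
```

With real data one would also control for the concurrent magnesium and nicotine variables, e.g. in a multiple regression rather than a bare two-group comparison.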
Herbs and plants have been used for cognitive enhancement for at least 5,000 years in Indian and Chinese medicine, long before the first synthetic nootropic was created. The practice of Indian Ayurvedic medicine includes the use of a group of nootropic plants known as Medhya Rasayana, the four primary plants of which are Mandukaparni, Yastimadhu, Duduchi and Shankhapushpi, though other lesser-known plants are also used. One of the most common supplements in Ayurvedic medicine is Brahmi, known scientifically as “Bacopa monnieri” or “B. monnieri” and more commonly as water hyssop, thyme-leaved gratiola, herb of grace or Indian pennywort. It is named after Lord Brahma, the creator god and originator of Ayurveda, and has been used for centuries to treat disorders ranging from pain and epilepsy to inflammation and memory dysfunction. The exact mechanism behind its action is not fully understood, but it is believed to promote antioxidant activity as well as protect neurons in the prefrontal cortex, hippocampus and corpus striatum against cytotoxicity and DNA damage associated with Alzheimer’s. The prefrontal cortex is critical in rational, social and personality behavior; the hippocampus is believed to be the seat of memory; and the autonomic nervous system and the striatum play a role in the reward system of action, so the protection Brahmi provides is extremely helpful in preventing the degeneration of many important cognitive faculties. An effective dose ranges from 300 to 450 mg per day. Winter cherry (ashwagandha) is another well-known Ayurvedic supplement that can promote improved cognitive development, memory and intelligence and reduce the effects of neurodegenerative diseases such as Parkinson’s, Huntington’s and Alzheimer’s. The optimal dose is 6,000 mg per day divided into three 2,000 mg doses. Aloeweed (shankhpushpi) is also used in Ayurvedic medicine to improve memory and intellect as well as treat hypertension, epilepsy and diabetes.
Effective doses for most neuroenhancing benefits range as high as 40 g per day.
Armodafinil is sort of a purified modafinil which Cephalon sells under the brand-name Nuvigil (and Sun under Waklert). Armodafinil acts much the same way (see the ADS Drug Profile) but the modafinil variants filtered out are the faster-acting molecules. Hence, it is supposed to last longer, as studies like Pharmacodynamic effects on alertness of single doses of armodafinil in healthy subjects during a nocturnal period of acute sleep loss seem to bear out; anecdotally, it’s also more powerful, with Cephalon offering pills with doses as low as 50mg. (To be technical, modafinil is racemic: it comes in two forms which are rotations, mirror-images of each other. The rotation usually doesn’t matter, but sometimes it matters tremendously - for example, one form of thalidomide stops morning sickness, and the other rotation causes hideous birth defects.)
These days, nootropics are beginning to take their rightful place as a particularly powerful tool in the Neurohacker’s toolbox. After all, biochemistry is deeply foundational to neural function. Whether you are trying to fix the damage that is done to your nervous system by a stressful and toxic environment or support and enhance your neural functioning, getting the chemistry right is table-stakes. And we are starting to get good at getting it right. What’s changed?
A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro’s Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom’s apparently chewed, but the powders are brewed as a tea.
Thanks to the many years of research in the field, we know now that what we eat can have a strong impact on our mental health. Not only can it protect us from developing Alzheimer's, but it's an act of self-care on its own. "Biology is all about harmony, about finding equilibrium and homeostasis," says Dr. Lisa, which is why her approach differs from food restrictions and focuses on minimizing intake of those foods that don't help us feel better.
Our #5 pick is BriteSmart which has a long list of ingredients, which look good on the bottle, but when we actually visited each one, we were left wondering about why some of them had been included. We did like the fact that it contained Vinpocetine and Huperzine A. We felt that this was a good product, but missing some key ingredients such as a supportive vitamin blend.
I took the first pill at 12:48 pm. 1:18, still nothing really - head is a little foggy if anything. later noticed a steady sort of mental energy lasting for hours (got a good deal of reading and programming done) until my midnight walk, when I still felt alert, and had trouble sleeping. (Zeo reported a ZQ of 100, but a full 18 minutes awake, 2 or 3 times the usual amount.)
With just 16 predictions, I can’t simply bin the predictions and say yep, that looks good. Instead, we can treat each prediction as equivalent to a bet and see what my winnings (or losses) were; the standard such proper scoring rule is the logarithmic rule, which is pretty simple: you earn the logarithm of the probability if you were right, and the logarithm of the negation if you were wrong; he who racks up the fewest negative points wins. We feed in a list and get back a number:
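The scoring code itself isn’t reproduced in this excerpt; a minimal Python sketch of the logarithmic scoring rule as described (the example bets are illustrative, not the actual 16 predictions) might look like:

```python
import math

def log_score(predictions):
    """Sum log(p) for each correct prediction and log(1 - p) for each
    incorrect one; the score closest to zero (fewest negative points) wins."""
    return sum(math.log(p) if correct else math.log(1 - p)
               for p, correct in predictions)

# Illustrative bets: (assigned probability, whether it came true)
bets = [(0.9, True), (0.7, True), (0.6, False), (0.8, True)]
print(round(log_score(bets), 4))  # → -1.6015
```

Note that the rule punishes overconfidence severely: a wrong prediction made at p = 0.99 alone contributes log(0.01) ≈ -4.6, worse than the four modest bets above combined.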
12:18 PM. (There are/were just 2 Adderall left now.) I manage to spend almost the entire afternoon single-mindedly concentrating on transcribing two parts of a 1996 Toshio Okada interview (it was very long, and the formatting more challenging than expected), which is strong evidence for Adderall, although I did feel fairly hungry while doing it. I don’t go to bed until midnight and & sleep very poorly - despite taking triple my usual melatonin! Inasmuch as I’m already fairly sure that Adderall damages my sleep, this makes me even more confident (>80%). When I grumpily crawl out of bed and check: it’s Adderall. (One Adderall left.)
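The “>80%” figure can be illustrated with a Bayes-rule update. The prior and likelihoods below are assumptions chosen purely for illustration; they are not numbers the author states:

```python
# All three numbers are hypothetical, for illustration only
prior = 0.7                # prior belief that Adderall damages sleep
p_poor_given_damage = 0.8  # chance of a bad night on Adderall, if it does
p_poor_given_not = 0.4     # chance of a bad night regardless

# Bayes' rule after observing one bad night of sleep on (blinded) Adderall
posterior = (prior * p_poor_given_damage) / (
    prior * p_poor_given_damage + (1 - prior) * p_poor_given_not)
print(round(posterior, 3))  # → 0.824
```

With these assumed inputs, one additional bad night moves a 70% prior past the 80% mark, which is the qualitative shape of the update described in the entry.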
Your co-worker in the cubicle next to you could now very likely be achieving his or her hyperfocus via a touch of microdosed LSD, a hit of huperzine or a nicotine-infused arm patch. The fact is, concepts such as microdosing, along with words like “nootropic” and “smart drug” (yes, there’s a difference between the two, as you’re about to discover) are quickly becoming household terms, especially due to all the recent media hype that has disclosed how popular compounds such as smart drugs and psychedelics are among Silicon Valley CEOs and college students, along with the smart drug movies “Limitless” and “Lucy“ and popular TV shows like “Limitless”, “Wormwood” and “Hamilton’s Pharmacopeia”.
I find this very troubling. The magnesium supplementation was harmful enough to do a lot of cumulative damage over the months involved (I could have done a lot of writing September 2013 - June 2014), but not so blatantly harmful as to be noticeable without a randomized blind self-experiment or at least systematic data collection - neither of which is common among people who supplement magnesium. I would much prefer it if my magnesium overdose had come with visible harm (such as waking up in the middle of the night after a nightmare soaked in sweat), since then I’d know quickly and surely, as would anyone else taking magnesium. But the harm I observed in my data? For all I know, that could be affecting every user of magnesium supplements! How would we know otherwise?
Sometimes called smart drugs, brain boosters, or memory-enhancing drugs, the term "nootropics" was coined by scientist Dr. Corneliu E. Giurgea, who developed the compound piracetam as a brain enhancer, according to The Atlantic. The word is derived from the Greek noo, meaning mind, and trope, meaning a turning or change. In essence, all nootropics aim to change your mind by enhancing functions like memory or attention.
This is why it was so refreshing to stumble across Dr. Lisa Mosconi's new book "Brain Food: The Surprising Power of Eating for Cognitive Power" . "Our brains aren't keeping up with the historical change in dietary consumptions", says Dr. Lisa. And it's quite evident in her book when she does a historical overview and draws an important relationship between what our ancestors were eating and the concept of longevity. Her contribution to the fascinating new world of "neuro-nutrition" differs drastically from the diet culture we are all so used to and can help us understand why including (and excluding) certain foods, will actually boost our brain health.
Nootropics can also show signs of neuro-preservation and neuro-protection. These compounds directly affect the levels of brain chemicals associated with slowing down the aging process. Some nootropics could result in an increase in the production of Nerve Growth Factor and Brain-Derived Neurotrophic Factor to stimulate the growth of neurons and neurites while slowing down the rate of damage as well.
Some nootropics users are hopeful that the drugs could be permanently “neuroprotective”—in other words, that the compounds could slow down the neuronal aging process, and help avoid cognitive deterioration later in life. (For what it's worth, most of the users I spoke to said that didn't matter much to them. “I doubt anything I’ve tried has made me smarter in a long-term way,” Baker says. “That’s still science fiction.”)
Seltzer's decision to take piracetam was based on his own online reading, which included medical-journal abstracts. He hadn't consulted a doctor. Since settling on a daily regime of supplements, he had sensed an improvement in his intellectual work and his ability to engage in stimulating conversation. He continued: "I feel I'm better able to articulate my thoughts. I'm sure you've been in the zone - you're having a really exciting debate with somebody, your brain feels alive. I feel that more. But I don't want to say that it's this profound change."
Finally, it’s not clear that caffeine results in performance gains after long-term use; homeostasis/tolerance is a concern for all stimulants, but especially for caffeine. It is plausible that all caffeine consumption does for the long-term chronic user is restore performance to baseline. (Imagine someone waking up and drinking coffee, and their performance improves - well, so would the performance of a non-addict who is also slowly waking up!) See for example, James & Rogers 2005, Sigmon et al 2009, and Rogers et al 2010. A cross-section of thousands of participants in the Cambridge brain-training study found caffeine intake showed negligible effect sizes for mean and component scores (participants were not told to use caffeine, but the training was recreational & difficult, so one expects some difference).
Green tea is widely drunk in many cultures, especially in Asia, and is known to have potent health benefits. These benefits are attributed to its polyphenol content (particularly the flavanols and flavonols). In cell cultures and animal studies, the polyphenols have been proven to prevent neurotoxin-induced cell injury. Green tea also has anti-inflammatory properties and, according to a study performed on aged mice, may delay memory regression. It’s safe to drink several cups of green tea per day, though it may be more efficacious to take a green tea extract supplement to reach a daily dose of 400 to 500 mg of EGCG, one of the main active components of green tea.
Mosconi uses a pragmatic approach to improve your diet for brain health. The book is divided into three parts. The first provides information regarding the brain's nutritional requirements. The second teaches you how to eat better. And the third tests you to find out where you are in terms of feeding yourself well. This includes an 80-question test that grades you as Beginner, Intermediate, or Advanced. “Beginner” means you have little food awareness: you eat a lot of processed food. “Advanced” means you eat very healthily, mainly organic foods. And “Intermediate” falls in between.
Nothing happened until I was falling asleep, when I became distinctly aware that I was falling asleep. I monitored the entire process and remained lucid, with a measure of free will, as I dreamed, and woke up surprisingly refreshed. While I remembered many of my dreams, some of which were quite long, I couldn't recall how my underpants ended up around my ankles.
There are many books about nutrition and cognitive function. The authors ground their nutrition protocols on what humans ate during the paleolithic era. Often these authors contradict each other. For some, we were better hunters than gatherers, so we ate mostly meat. For others, we were better gatherers and ate primarily nuts, plants and fruits. Others argue that our digestive system can't tolerate grains because they are a modern invention of the first agricultural revolution (about 10,000 years ago).
…Four subjects correctly stated when they received nicotine, five subjects were unsure, and the remaining two stated incorrectly which treatment they received on each occasion of testing. These numbers are sufficiently close to chance expectation that even the four subjects whose statements corresponded to the treatments received may have been guessing.
Nootropics—the name given to a broad class of so-called "cognitive-enhancing" drugs—are all the rage in Silicon Valley these days. Programmers like nootropics because they’re said to increase productivity and sharpen focus without the intensity or side effects of a prescription drug like Adderall or modafinil. Some users mix their own nootropics using big bins of powders, purchased off the Internet or in supplement stores. And some take pre-made "stacks" that are designed to produce specific effects.
-Water [is also important]. Over 80% of the brain’s content is water. Every chemical reaction that takes place in the brain needs water, especially energy production. The brain is so sensitive to dehydration that even a minimal loss of water can cause symptoms like brain fog, fatigue, dizziness, confusion and, more importantly, brain shrinkage. The longevity and well-being of your brain are critically dependent upon consuming hard water. This refers to plain water that is high in minerals and natural electrolytes. Most people don’t realize that the water they’re drinking is not actually “water”.
There’s been a lot of talk about the ketogenic diet recently—proponents say that minimizing the carbohydrates you eat and ingesting lots of fat can train your body to burn fat more effectively. It’s meant to help you both lose weight and keep your energy levels constant. The diet was first studied and used in patients with epilepsy, who suffered fewer seizures when their bodies were in a state of ketosis. Because seizures originate in the brain, this discovery showed researchers that a ketogenic diet can definitely affect the way the brain works. Brain hackers naturally started experimenting with diets to enhance their cognitive abilities, and now a company called HVMN even sells ketone esters in a bottle; to achieve these compounds naturally, you’d have to avoid bread and cake.
Avocados are almost as good as blueberries in promoting brain health, Dr. Pratt told WebMD.com. These buttery fruits are rich in monounsaturated fat, which contributes to healthy blood flow in the brain, according to Ann Kulze, MD, author of Dr. Ann’s 10-Step Diet: A Simple Plan for Permanent Weight Loss & Lifelong Vitality. This helps every organ in your body—particularly the brain and heart. Avocados also lower blood pressure, thanks to their potassium. Because high blood pressure can impair cognitive abilities, lower blood pressure helps to keep the brain in top form and reduce your risks for hypertension or a stroke. The fiber in avocados also reduces the risk of heart disease and bad cholesterol. These foods are good for your brain later in life.
…It is without activity in man! Certainly not for the lack of trying, as some of the dosage trials that are tucked away in the literature (as abstracted in the Qualitative Comments given above) are pretty heavy duty. Actually, I truly doubt that all of the experimenters used exactly that phrase, No effects, but it is patently obvious that no effects were found. It happened to be the phrase I had used in my own notes.
Interesting, however, that there’s no mention of the power of cocoa (chocolate extract) or green tea. I’ve reviewed dozens of studies from Harvard Science as well as international publications that discuss cocoa in particular. We already know the value of antioxidants in green tea, but chocolate seems to be up and coming. I’ve been taking a product called vavalert that combines cocoa and green tea and it’s been working like a miracle.
Recently I spoke on the phone with Barbara Sahakian, a clinical neuropsychologist at Cambridge University and the co-author of a 2007 article in Nature entitled "Professor's Little Helper". Sahakian, who also consults for several pharmaceutical companies, and her co-author, Sharon Morein-Zamir, reported that a number of their colleagues were using prescription drugs like Adderall and Provigil. Because the drugs are easy to buy online, they wrote, it would be difficult to stop their spread: "The drive for self-enhancement of cognition is likely to be as strong if not stronger than in the realms of 'enhancement' of beauty and sexual function." (In places like Cambridge, at least.)
Barbara Sahakian, a neuroscientist at Cambridge University, doesn’t dismiss the possibility of nootropics to enhance cognitive function in healthy people. She would like to see society think about what might be considered acceptable use and where it draws the line – for example, young people whose brains are still developing. But she also points out a big problem: long-term safety studies in healthy people have never been done. Most efficacy studies have only been short-term. “Proving safety and efficacy is needed,” she says.
On the other hand, sometimes you’ll feel a great cognitive boost as soon as you take a pill. That can be a good thing or a bad thing. I find, for example, that modafinil makes you more of what you already are. That means if you are already kind of a dick and you take modafinil, you might act like a really big dick and regret it. It certainly happened to me! I like to think that I’ve done enough hacking of my brain that I’ve gotten over that programming… and that when I use nootropics they help me help people.
Brain consumption can result in contracting fatal transmissible spongiform encephalopathies such as Variant Creutzfeldt–Jakob disease and other prion diseases in humans and mad cow disease in cattle.[10] Another prion disease called kuru has been traced to a funerary ritual among the Fore people of Papua New Guinea in which those close to the dead would eat the brain of the deceased to create a sense of immortality.[11]
Using neuroenhancers, Seltzer said, "is like customising yourself - customising your brain". For some people, he added, it was important to enhance their mood, so they took antidepressants; but for people like him it was more important "to increase mental horsepower". He said: "It's fundamentally a choice you're making about how you want to experience consciousness." Whereas the 1990s had been about "the personalisation of technology", this decade was about the personalisation of the brain - what some enthusiasts have begun to call "mind hacking".
One thing to notice is that the default case matters a lot. This asymmetry is because you switch decisions in different possible worlds - when you would take Adderall but stop you’re in the world where Adderall doesn’t work, and when you wouldn’t take Adderall but do you’re in the world where Adderall does work (in the perfect information case, at least). One of the ways you can visualize this is that you don’t penalize tests for giving you true negative information, and you reward them for giving you true positive information. (This might be worth a post by itself, and is very Litany of Gendlin.)
In August 2011, after winning the spaced repetition contest and finishing up the Adderall double-blind testing, I decided the time was right to try nicotine again. I had since learned that e-cigarettes use nicotine dissolved in water, and that nicotine-water was a vastly cheaper source of nicotine than either gum or patches. So I ordered 250ml of water at 12mg/ml (total cost: \$18.20). A cigarette apparently delivers around 1mg of nicotine, so half a ml would be a solid dose of nicotine, making that ~500 doses. Plenty to experiment with. The question is, besides the stimulant effect, nicotine also causes habit formation; what habits should I reinforce with nicotine? Exercise, and spaced repetition seem like 2 good targets.
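The dose arithmetic in that paragraph can be checked directly; all figures below are taken from the text itself:

```python
volume_ml = 250    # bottle of nicotine-water ordered
mg_per_ml = 12     # stated concentration
dose_ml = 0.5      # "half a ml would be a solid dose"
total_cost = 18.20

doses = volume_ml / dose_ml
print(doses)                         # → 500.0 doses ("~500 doses")
print(mg_per_ml * dose_ml)           # → 6.0 mg of nicotine per dose
print(round(total_cost / doses, 4))  # → 0.0364 dollars per dose
```

At 6 mg per dose this is several times the ~1 mg a cigarette delivers, which is the text's own point about it being a "solid dose".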
Terms and Conditions: The content and products found at feedabrain.com, adventuresinbraininjury.com, the Adventures in Brain Injury Podcast, or provided by Cavin Balaster or others on the Feed a Brain team is intended for informational purposes only and is not provided by medical professionals. The information on this website has not been evaluated by the food & drug administration or any other medical body. We do not aim to diagnose, treat, cure or prevent any illness or disease. Information is shared for educational purposes only. Readers/listeners/viewers should not act upon any information provided on this website or affiliated websites without seeking advice from a licensed physician, especially if pregnant, nursing, taking medication, or suffering from a medical condition. This website is not intended to create a physician-patient relationship.
Similar delicacies from around the world include Mexican tacos de sesos.[1] The Anyang tribe of Cameroon practiced a tradition in which a new tribal chief would consume the brain of a hunted gorilla, while another senior member of the tribe would eat the heart.[2] Indonesian cuisine specialty in Minangkabau cuisine also served beef brain in a coconut-milk gravy named gulai otak (beef brain curry).[3][4] In Cuban cuisine, "brain fritters" are made by coating pieces of brain with bread crumbs and then frying them.[5]
Do you start your day with a cup (or two, or three) of coffee? It tastes delicious, but it’s also jump-starting your brain because of its caffeine content. Caffeine is definitely a nootropic substance—it’s a mild stimulant that can alleviate fatigue and improve concentration, according to the Mayo Clinic. Current research shows that coffee drinkers don’t suffer any ill effects from drinking up to about four cups of coffee per day. Caffeine is also found in tea, soda, and energy drinks. Not too surprisingly, it’s also in many of the nootropic supplements that are being marketed to people looking for a mental boost.
The nootropics community is surprisingly large and involved. When I wade into forums and the nootropics subreddit, I find members trading stack recipes and notifying each other of newly synthesized compounds. Some of these “psychonauts” seem like they’ve studied neuroscience; others appear to be novices dipping their toes into the world of cognitive enhancement. But all of them have the same goal: amplifying the brain’s existing capabilities without screwing anything up too badly. It’s the same impulse that grips bodybuilders—the feeling that with small chemical tweaks and some training, we can squeeze more utility out of the body parts we have. As Taylor Hatmaker of the Daily Dot recently wrote, “Together, these faceless armchair scientists seek a common truth—a clean, unharmful way to make their brains better—enforcing their own self-imposed safety parameters and painstakingly precise methods, all while publishing their knowledge for free, in plain text, to relatively crude, shared databases."
Tuesday: I went to bed at 1am, and first woke up at 6am, and I wrote down a dream; the lucid dreaming book I was reading advised that waking up in the morning and then going back for a short nap often causes lucid dreams, so I tried that - and wound up waking up at 10am with no dreams at all. Oops. I take a pill, but the whole day I don’t feel so hot, although my conversation and arguments seem as cogent as ever. I’m also having a terrible time focusing on any actual work. At 8 I take another; I’m behind on too many things, and it looks like I need an all-nighter to catch up. The dose is no good; at 11, I still feel like at 8, possibly worse, and I take another along with the choline+piracetam (which makes a total of 600mg for the day). Come 12:30, and I disconsolately note that I don’t seem any better, although I still seem to understand the IQ essays I am reading. I wonder if this is tolerance to modafinil, or perhaps sleep catching up to me? Possibly it’s just that I don’t remember what the quasi-light-headedness of modafinil felt like. I feel this sort of zombie-like state without change to 4am, so it must be doing something, when I give up and go to bed, getting up at 7:30 without too much trouble. Some N-backing at 9am gives me some low scores but also some pretty high scores (38/43/66/40/24/67/60/71/54 or ▂▂▆▂▁▆▅▇▄), which suggests I can perform normally if I concentrate. I take another pill and am fine the rest of the day, going to bed at 1am as usual.
The purpose of this research study titled ‘Nootropics Market – Growth, Future Prospects, and Competitive Analysis, 2016 – 2024’ is to provide investors, developers, company executives and industry participants with in-depth analysis to allow them to take strategic initiatives and decisions related to the prospects in the global nootropics products market.
http://www.aimsciences.org/article/doi/10.3934/krm.2017048 | American Institute of Mathematical Sciences
December 2017, 10(4): 1255-1257. doi: 10.3934/krm.2017048
L∞ resolvent bounds for steady Boltzmann's Equation
Indiana University Bloomington, IN 47405, USA
Received November 2016 Revised January 2017 Published March 2017
Fund Project: The author is supported by NSF grant DMS-0300487
We derive lower bounds on the resolvent operator for the linearized steady Boltzmann equation over weighted $L^\infty$ Banach spaces in velocity, comparable to those derived by Pogan & Zumbrun in an analogous weighted $L^2$ Hilbert space setting. These show in particular that the operator norm of the resolvent kernel is unbounded in $L^p(\mathbb{R})$ for all $1 < p \leq \infty$, resolving an apparent discrepancy in behavior between the two settings suggested by previous work.
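For readers unfamiliar with the notation, the two weighted norms being compared have, schematically, the following form (the particular weight $w$ used in the paper is not specified in this abstract, so this is only a generic illustration of the notation):

```latex
\|f\|_{L^\infty_w} = \operatorname*{ess\,sup}_{v}\, w(v)\,|f(v)|,
\qquad
\|f\|_{L^2_w} = \Big(\int w(v)^2\,|f(v)|^2\,dv\Big)^{1/2}.
```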
Citation: Kevin Zumbrun. L∞ resolvent bounds for steady Boltzmann's Equation. Kinetic & Related Models, 2017, 10 (4): 1255-1257. doi: 10.3934/krm.2017048
References:
[1] H. Grad, Asymptotic theory of the Boltzmann equation. Ⅱ, in Rarefied Gas Dynamics (Proc. 3rd Internat. Sympos., Palais de l'UNESCO, Paris, 1962), Ⅰ, Academic Press, (1963), 26–59.
[2] T.-P. Liu and S.-H. Yu, Invariant manifolds for steady Boltzmann flows and applications, Arch. Rational Mech. Anal., 209 (2013), 869-997. doi: 10.1007/s00205-013-0640-x.
[3] G. Métivier and K. Zumbrun, Existence and sharp localization in velocity of small-amplitude Boltzmann shocks, Kinet. Relat. Models, 2 (2009), 667-705. doi: 10.3934/krm.2009.2.667.
[4] A. Pogan and K. Zumbrun, Stable manifolds for a class of degenerate evolution equations and exponential decay of kinetic shocks, preprint, https://arxiv.org/pdf/1607.03028.pdf.
https://www.open.edu/openlearncreate/mod/oucontent/view.php?id=52747§ion=4.4 | # 4.4 Entering fractions with the combined question type
In a combined question it is possible to arrange two input boxes such that they look like the numerator and denominator in a fraction using the following HTML.
<div><span style="border-bottom: 1px solid black; padding-bottom: 4px;">[[2:numeric:__2__]]</span></div>
<!-- A second input box in a following div acts as the denominator (its input number is assumed here); the border above it then reads as the fraction bar, e.g.: -->
<div>[[3:numeric:__2__]]</div>
https://forum.allaboutcircuits.com/threads/simplifying-my-usart-code-char-string.154147/ | # Simplifying my USART code [ char string]
#### HappyC4mper
Joined Oct 13, 2017
49
Hey guys,
so I created this code
Code:
int main(void)
{
    // PLL_Config();
    SystemCoreClockUpdate();
    char string[] = "Hello World";
    int i = 0;
    while (!(USART_MODULE->SR & USART_SR_RXNE))
        for (i = 0; i < 11; i++)
        {
            init_USART();
            send_usart(string[i]);
        }
    while (string[i] != 0);
Ignoring the while loop and the other functions, I would like to simplify my char string and somehow turn it into a function. I'm using PuTTY to read the USART output.
I want it so that I can write as many characters as I want without having to change the loop bound [(i)(i<11)]. E.g. send out "Hello world, blah blah." without having to type in how many characters there are (have it calculated automatically). Hopefully that makes sense.
Thanks!
#### AlbertHall
Joined Jun 4, 2014
8,810
Assuming your string is properly nul terminated then you can use strlen(string) to get the length of the string.
#### John P
Joined Oct 14, 2008
1,782
Well, that's easy. Write the for() loop as:
Code:
for (i=0; ; i++)
{
init_USART();
send_usart(string[i]);
if (string[i] == '\0')
break;
}
or maybe
Code:
i = 0;
do
{
init_USART();
send_usart(string[i]);
} while (string[i++] != '\0');
#### AlbertHall
Joined Jun 4, 2014
8,810
Well, that's easy. Write the for() loop as:
Code:
for (i=0; ; i++)
{
init_USART();
send_usart(string[i]);
if (string[i] == '\0')
break;
}
or maybe
Code:
i = 0;
do
{
init_USART();
send_usart(string[i]);
} while (string[i++] != '\0');
These will also transmit the '\0' - that might be desired behaviour but TS code doesn't do this.
#### John P
Joined Oct 14, 2008
1,782
Yes, that was deliberate. If the terminating \0 isn't sent, how will the computer know that the string has ended? If the string has a variable length, counting characters won't do it--there has to be a marker of some kind.
#### shteii01
Joined Feb 19, 2010
4,647
Assuming your string is properly nul terminated then you can use strlen(string) to get the length of the string.
Basically to add to what Albert said.
Your string is defined in several different places.
The first element of string is defined at: int i=0
The last element of string is defined in the for loop: i<11
Those two are the limits of your string. They are user defined, meaning "you" put them there because you knew how long your string is ahead of time.
You want to be more flexible. You may or may not know the length of the string ahead of time.
So.
You define first element.
Then you find the length of the string. This can be done with a count, where you write code to count the number of characters in the string. Or you do what Albert told you. Smart people have been dealing with this situation for 30? or 40? years, so they wrote a library function that finds the length of the string. Use it.
You would have:
int i = 0;              // first element
int j = strlen(string); // length of the string
for (i = 0; i <= j; i++) {} // your for loop
#### HappyC4mper
Joined Oct 13, 2017
49
Thanks a lot!!
http://clay6.com/qa/29915/the-average-energy-flux-of-sunlight-is-1-kw-m-2-this-energy-is-falling-norm | # The average energy flux of sunlight is $1\: kW/m^2$. This energy is falling normally on the plate surface of area $10\: cm^2$ which completely absorbs the energy. How much force is exerted on the plate if it is exposed to sunlight for 20 minutes?
$\begin{array}{ll} (a)\;33 \times 10^{-8}\,\mathrm{N} & \quad (b)\;3.3 \times 10^{-8}\,\mathrm{N} \\ (c)\;33 \times 10^{-9}\,\mathrm{N} & \quad (d)\;3.3 \times 10^{-9}\,\mathrm{N} \end{array}$
Total energy falling on the plate, $U =$ energy flux $\times$ Area $\times$ time
$= 1000 \times 10 \times 10^{-4} \times 20 \times 60 = 1200 J$
Momentum, $p = \frac{U}{c} = \frac{1200}{3 \times 10^{8}} = 400 \times 10^{-8}\ \mathrm{kg\,m/s}$
Force, $F = \frac{p}{t} = \frac{400 \times 10^{-8}}{1200} = 3.3 \times 10^{-9}\ \mathrm{N}$
Ans : (d)